
2D point matching (a classical baseline sketch follows this list)
  • TILDE
    • https://cvlab.epfl.ch/research/tilde
  • Covariant feature detector [17]
    • http://dvmmweb.cs.columbia.edu/files/3129.pdf
    • https://github.com/ColumbiaDVMM/Transform_Covariant_Detector
  • DeepDesc
    • http://icwww.epfl.ch/~trulls/pdf/iccv-2015-deepdesc.pdf
    • https://github.com/etrulls/deepdesc-release
  • LIFT
    • https://arxiv.org/pdf/1603.09114.pdf
    • https://github.com/cvlab-epfl/LIFT
  • Quad-networks
    • https://arxiv.org/pdf/1611.07571.pdf
  • GMS
    • http://jwbian.net/gms
  • VFC
    • http://www.escience.cn/people/jiayima/cxdm.html
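
The learned detectors and descriptors above are drop-in replacements for the detect/describe stages of the classical sparse-matching pipeline. As a point of reference only, here is a minimal sketch of that classical pipeline with OpenCV (ORB features, Lowe ratio test, RANSAC homography); the image file names are placeholders, and this is not the pipeline of any specific method listed.

```python
# Minimal classical 2D point-matching pipeline (ORB + ratio test + RANSAC).
# Only a baseline that learned methods such as TILDE, DeepDesc or LIFT improve on;
# "left.png"/"right.png" are placeholder inputs.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)              # detector + binary descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):    # two nearest neighbours per query
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])                      # Lowe ratio test

# Geometric verification: the step that mismatch-removal methods such as GMS/VFC refine.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if mask is not None:
        print(f"{int(mask.sum())} RANSAC inliers out of {len(good)} ratio-test matches")
```
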
3D point matching (a descriptor-matching sketch follows this list)
  • PPFNet
    • http://tbirdal.me/downloads/tolga-birdal-cvpr-2018-ppfnet.pdf
  • Ref. [51]
    • http://cn.arxiv.org/pdf/1802.07869
  • Ref. [49]
    • http://cn.arxiv.org/pdf/1807.05653
  • Ref. [50]
    • http://openaccess.thecvf.com/content_ECCV_2018/papers/Hanyu_Wang_Learning_3D_Keypoint_ECCV_2018_paper.pdf
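
Whatever network produces the 3D descriptors (PPFNet or the methods in [49-51]), the matching step itself is typically a nearest-neighbour search in descriptor space. The sketch below shows only that generic mutual-nearest-neighbour filter; the random arrays are placeholders standing in for per-point descriptors produced by such a network.

```python
# Mutual nearest-neighbour matching of per-point 3D descriptors.
# The random arrays stand in for learned descriptors (e.g. PPFNet-style outputs).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
desc_src = rng.normal(size=(500, 64)).astype(np.float32)   # source-cloud descriptors
desc_dst = rng.normal(size=(480, 64)).astype(np.float32)   # target-cloud descriptors

_, fwd = cKDTree(desc_dst).query(desc_src, k=1)            # source -> target neighbours
_, bwd = cKDTree(desc_src).query(desc_dst, k=1)            # target -> source neighbours

# Keep only reciprocal matches; these correspondences would then feed a robust
# pose estimator (e.g. RANSAC) on the associated 3D coordinates.
mutual = [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
print(f"{len(mutual)} mutual correspondences")
```
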
Semantic matching (a deep-feature correlation sketch follows this list)
  • Exemplar-LDA classifiers
    • http://ci2cv.net/media/papers/2015_ICCV_Hilton.pdf
    • https://github.com/hbristow/epic
  • AnchorNet
    • http://openaccess.thecvf.com/content_cvpr_2017/papers/Novotny_AnchorNet_A_Weakly_CVPR_2017_paper.pdf
  • Ref. [28]
    • http://cn.arxiv.org/pdf/1711.07641
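
Most semantic-matching methods in this group compare deep convolutional features across images of different instances of a category. The sketch below shows only that generic skeleton: extract mid-level CNN feature maps and take, for each cell of one image, the most similar cell of the other by cosine similarity. The ResNet-18 backbone, its truncation point and the random input tensors are placeholder choices, not the architecture of any listed method.

```python
# Generic "deep features + correlation" skeleton behind many semantic-matching methods.
import torch
import torch.nn.functional as F
import torchvision

backbone = torchvision.models.resnet18(weights=None)   # load pretrained weights in practice
extractor = torch.nn.Sequential(*list(backbone.children())[:7])  # keep layers up to layer3
extractor.eval()

img_a = torch.rand(1, 3, 224, 224)                     # placeholder image tensors
img_b = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    fa = F.normalize(extractor(img_a), dim=1)          # (1, C, H, W), unit-norm channels
    fb = F.normalize(extractor(img_b), dim=1)

corr = torch.einsum("nchw,ncij->nhwij", fa, fb)        # cosine similarity of all cell pairs
best = corr.flatten(3).argmax(dim=3)                   # best cell in B for every cell in A
print(best.shape)                                      # (1, H, W)
```
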
Line matching (a naive geometric baseline sketch follows this list)
  • LBD
    • http://www.docin.com/p-1395717977.html
    • https://github.com/mtamburrano/LBD_Descriptor
  • Novel line-point projective invariants [61]
    • https://github.com/dlut-dimt/LineMatching
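
For contrast with LBD and the line-point invariants above, the sketch below is a deliberately naive geometric baseline: given segment endpoints from two views, build a cost from angle and length differences and solve a one-to-one assignment. The synthetic segments and the cost weighting are placeholders; real line matchers rely on appearance descriptors and projective invariants instead.

```python
# Naive line-segment matching from geometry only (not LBD): angle/length cost + assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def seg_features(segs):
    """segs: (N, 4) array of x1, y1, x2, y2 endpoints -> per-segment angle and length."""
    d = segs[:, 2:] - segs[:, :2]
    angle = np.arctan2(d[:, 1], d[:, 0]) % np.pi       # orientation-agnostic direction
    length = np.linalg.norm(d, axis=1)
    return angle, length

rng = np.random.default_rng(1)
segs_a = rng.uniform(0, 500, size=(20, 4))             # synthetic segments, view A
segs_b = segs_a + rng.normal(scale=3.0, size=(20, 4))  # perturbed copies, view B

ang_a, len_a = seg_features(segs_a)
ang_b, len_b = seg_features(segs_b)

dang = np.abs(ang_a[:, None] - ang_b[None, :])
dang = np.minimum(dang, np.pi - dang)                  # wrap angular difference
dlen = np.abs(len_a[:, None] - len_b[None, :]) / (len_a[:, None] + 1e-6)
cost = dang + dlen                                     # crude combined cost

rows, cols = linear_sum_assignment(cost)               # optimal one-to-one matching
print(list(zip(rows[:5].tolist(), cols[:5].tolist())))
```
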
Template matching (a cross-correlation baseline sketch follows this list)
  • FAST-Match
    • http://www.eng.tau.ac.il/~simonk/FastMatch/
  • CFAST-Match
    • https://wenku.baidu.com/view/3d96bf9127fff705cc1755270722192e453658a5.html
  • DDIS
    • https://arxiv.org/abs/1612.02190
    • https://github.com/roimehrez/DDIS
  • DIWU
    • http://liortalker.wixsite.com/liortalker/code
  • CoTM
    • http://openaccess.thecvf.com/content_cvpr_2018/CameraReady/2450.pdf
  • OATM
    • http://cn.arxiv.org/pdf/1804.02638
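
The template-matching methods above can be read as answers to the weaknesses of the classic sliding-window baseline, normalized cross-correlation, which breaks down under affine deformation, occlusion and background clutter. A minimal sketch of that baseline with OpenCV follows; the image and template paths are placeholders.

```python
# Classic normalized cross-correlation template matching (the baseline that
# FAST-Match, DDIS, CoTM and OATM improve on). Paths are placeholders.
import cv2

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
templ = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(image, templ, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)          # best score and its location

h, w = templ.shape
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print(f"best score {max_val:.3f}, box {top_left} -> {bottom_right}")
```
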
Patch matching (a Siamese descriptor sketch follows this list)
  • MatchNet
    • http://www.cs.unc.edu/~xufeng/cs/papers/cvpr15-matchnet.pdf
    • https://github.com/hanxf/matchnet
  • DeepCompare
    • http://imagine.enpc.fr/~zagoruys/publication/deepcompare/
  • PN-Net
    • https://arxiv.org/abs/1601.05030
    • https://github.com/vbalnt/pnnet
  • L2-Net
    • http://www.nlpr.ia.ac.cn/fanbin/pub/L2-Net_CVPR17.pdf
    • https://github.com/yuruntian/L2-Net
  • DeepCD
    • https://www.csie.ntu.edu.tw/~cyy/publications/papers/Yang2017DLD.pdf
    • https://github.com/shamangary/DeepCD
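
The patch-matching networks above all map a small image patch to a descriptor (or a similarity score) that is then compared across images. The sketch below shows that usage pattern with a toy Siamese embedding compared by Euclidean distance, in the spirit of L2-Net-style descriptors; the architecture, the random patches and the omitted training loss are all placeholders, not any of the published networks.

```python
# Toy Siamese patch embedding compared by Euclidean distance (placeholder architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 4x4
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-norm descriptor per patch

embed = PatchEmbed().eval()
patches_a = torch.rand(8, 1, 32, 32)             # placeholder 32x32 grayscale patches
patches_b = torch.rand(8, 1, 32, 32)

with torch.no_grad():
    da, db = embed(patches_a), embed(patches_b)

dist = torch.cdist(da, db)                       # pairwise Euclidean distances
print(dist.argmin(dim=1))                        # nearest patch in B for each patch in A
```
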

References:

[1] Harris C, Stephens M. A combined corner and edge detector[C]//Proceedings of the 4th Alvey Vision Conference. Manchester: AVC, 1988: 147-151. [DOI: 10.5244/C.2.23]

[2] Rosten E, Drummond T. Machine learning for high-speed corner detection[C]//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer, 2006: 430-443. [DOI: 10.1007/11744023_34]

[3] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110. [DOI: 10.1023/B:VISI.0000029664.99615.94]

[4] Liu L, Zhan Y Y, Luo Y, et al. Summarization of the scale invariant feature transform[J]. Journal of Image and Graphics, 2013, 18(8): 885-892. [DOI: 10.11834/jig.20130801]

[5] Xu Y X, Chen F. Recent advances in local image descriptor[J]. Journal of Image and Graphics, 2015, 20(9): 1133-1150. [DOI: 10.11834/jig.20150901]

[6] Zhang X H, Li B, Yang D. A novel Harris multi-scale corner detection algorithm[J]. Journal of Electronics and Information Technology, 2007, 29(7): 1735-1738. [DOI: 10.3724/SP.J.1146.2005.01332]

[7] He H Q, Huang S X. Improved algorithm for Harris rapid sub-pixel corners detection[J]. Journal of Image and Graphics, 2012, 17(7): 853-857. [DOI: 10.11834/jig.20120715]

[8] Zhang L T, Huang X L, Lu L L, et al. Fast Harris corner detection based on gray difference and template[J]. Chinese Journal of Scientific Instrument, 2018, 39(2): 218-224.

[9] Ke Y, Sukthankar R. PCA-SIFT: a more distinctive representation for local image descriptors[C]//Proceedings of 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE, 2004: 506-513. [DOI: 10.1109/CVPR.2004.1315206]

[10] Bay H, Tuytelaars T, Gool L. SURF: speeded up robust features[C]//Proceedings of the 9th European Conference on Computer Vision. Graz, Austria: Springer, 2006: 404-417. [DOI: 10.1007/11744023_32]

[11] Liu L, Peng F Y, Zhao K, et al. Simplified SIFT algorithm for fast image matching[J]. Infrared and Laser Engineering, 2008, 37(1): 181-184. [DOI: 10.3969/j.issn.1007-2276.2008.01.042]

[12] Abdel-Hakim A E, Farag A A. CSIFT: a SIFT descriptor with color invariant characteristics[C]//Proceedings of 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, NY: IEEE, 2006: 1978-1983. [DOI: 10.1109/CVPR.2006.95]

[13] Mikolajczyk K, Schmid C. A performance evaluation of local descriptors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10): 1615-1630. [DOI: 10.1109/TPAMI.2005.188]

[14] Morel J M, Yu G S. ASIFT: a new framework for fully affine invariant image comparison[J]. SIAM Journal on Imaging Sciences, 2009, 2(2): 438-469. [DOI: 10.1137/080732730]

[15] Rosten E, Porter R, Drummond T. Faster and better: a machine learning approach to corner detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(1): 105-119. [DOI: 10.1109/TPAMI.2008.275]

[16] Verdie Y, Yi K M, Fua P, et al. TILDE: a temporally invariant learned DEtector[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 5279-5288. [DOI: 10.1109/CVPR.2015.7299165]

[17] Zhang X, Yu F X, Karaman S, et al. Learning discriminative and transformation covariant local feature detectors[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 4923-4931. [DOI: 10.1109/CVPR.2017.523]

[18] Savinov N, Seki A, Ladicky L, et al. Quad-networks: unsupervised learning to rank for interest point detection[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 3929-3937. [DOI: 10.1109/CVPR.2017.418]

[19] Simo-Serra E, Trulls E, Ferraz L, et al. Discriminative learning of deep convolutional feature point descriptors[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 118-126. [DOI: 10.1109/ICCV.2015.22]

[20] Yi K M, Trulls E, Lepetit V, et al. LIFT: learned invariant feature transform[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 467-483. [DOI: 10.1007/978-3-319-46466-4_28]

[21] Jaderberg M, Simonyan K, Zisserman A, et al. Spatial transformer networks[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: ACM, 2015: 2017-2025.

[22] Yi K M, Verdie Y, Fua P, et al. Learning to assign orientations to feature points[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV: IEEE, 2016: 107-116. [DOI: 10.1109/CVPR.2016.19]

[23] Liu C, Yuen J, Torralba A. SIFT flow: dense correspondence across scenes and its applications[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(5): 978-994. [DOI: 10.1109/TPAMI.2010.147]

[24] Bristow H, Valmadre J, Lucey S. Dense semantic correspondence where every pixel is a classifier[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015: 4024-4031. [DOI: 10.1109/ICCV.2015.458]

[25] Novotny D, Larlus D, Vedaldi A. AnchorNet: a weakly supervised network to learn geometry-sensitive features for semantic matching[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 2867-2876. [DOI: 10.1109/CVPR.2017.306]

[26] Kar A, Tulsiani S, Carreira J, et al. Category-specific object reconstruction from a single image[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 1966-1974. [DOI: 10.1109/CVPR.2015.7298807]

[27] Thewlis J, Bilen H, Vedaldi A. Unsupervised learning of object landmarks by factorized spatial embeddings[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 3229-3238. [DOI: 10.1109/ICCV.2017.348]

[28] Wang Q Q, Zhou X W, Daniilidis K. Multi-image semantic matching by mining consistent features[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018: 685-694. [DOI: 10.1109/CVPR.2018.00078]

[29] Yu D D, Yang F, Yang C Y, et al. Fast rotation-free feature-based image registration using improved N-SIFT and GMM-based parallel optimization[J]. IEEE Transactions on Biomedical Engineering, 2016, 63(8): 1653-1664. [DOI: 10.1109/TBME.2015.2465855]

[30] Pock T, Urschler M, Zach C, et al. A duality based algorithm for TV-L1 optical-flow image registration[C]//Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention. Brisbane, Australia: Springer, 2007: 511-518. [DOI: 10.1007/978-3-540-75759-7_62]

[31] Zhang G M, Sun X X, Liu J X, et al. Research on TV-L1 optical flow model for image registration based on fractional-order differentiation[J]. Acta Automatica Sinica, 2017, 43(12): 2213-2224. [DOI: 10.16383/j.aas.2017.c160367]

[32] Lu X S, Tu S X, Zhang S. A metric method using multidimensional features for nonrigid registration of medical images[J]. Acta Automatica Sinica, 2016, 42(9): 1413-1420. [DOI: 10.16383/j.aas.2016.c150608]

[33] Yang W, Zhong L M, Chen Y, et al. Predicting CT image from MRI data through feature matching with learned nonlinear local descriptors[J]. IEEE Transactions on Medical Imaging, 2018, 37(4): 977-987. [DOI: 10.1109/TMI.2018.2790962]

[34] Cao X H, Yang J H, Gao Y Z, et al. Region-adaptive deformable registration of CT/MRI pelvic images via learning-based image synthesis[J]. IEEE Transactions on Image Processing, 2018, 27(7): 3500-3512. [DOI: 10.1109/TIP.2018.2820424]

[35] He M M, Guo Q, Li A, et al. Automatic fast feature-level image registration for high-resolution remote sensing images[J]. Journal of Remote Sensing, 2018, 22(2): 277-292. [DOI: 10.11834/jrs.20186420]

[36] Fischler M A, Bolles R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395. [DOI: 10.1145/358669.358692]

[37] Torr P H S, Zisserman A. MLESAC: a new robust estimator with application to estimating image geometry[J]. Computer Vision and Image Understanding, 2000, 78(1): 138-156. [DOI: 10.1006/cviu.1999.0832]

[38] Li X R, Hu Z Y. Rejecting mismatches by correspondence function[J]. International Journal of Computer Vision, 2010, 89(1): 1-17. [DOI: 10.1007/s11263-010-0318-x]

[39] Liu H R, Yan S C. Common visual pattern discovery via spatially coherent correspondences[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 1609-1616. [DOI: 10.1109/CVPR.2010.5539780]

[40] Liu H R, Yan S C. Robust graph mode seeking by graph shift[C]//Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel: ACM, 2010: 671-678.

[41] Lin W Y D, Cheng M M, Lu J B, et al. Bilateral functions for global motion modeling[C]//Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014: 341-356. [DOI: 10.1007/978-3-319-10593-2_23]

[42] Bian J W, Lin W Y, Matsushita Y, et al. GMS: grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 2828-2837. [DOI: 10.1109/CVPR.2017.302]

[43] Chen F J, Han J, Wang Z W, et al. Image registration algorithm based on improved GMS and weighted projection transformation[J]. Laser & Optoelectronics Progress, 2018, 55(11): 111006.

[44] Ma J Y, Zhao J, Tian J W, et al. Robust point matching via vector field consensus[J]. IEEE Transactions on Image Processing, 2014, 23(4): 1706-1721. [DOI: 10.1109/TIP.2014.2307478]

[45] Aronszajn N. Theory of reproducing kernels[J]. Transactions of the American Mathematical Society, 1950, 68(3): 337-404. [DOI: 10.2307/1990404]

[46] Charles R Q, Su H, Mo K, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 77-85. [DOI: 10.1109/CVPR.2017.16]

[47] Qi C R, Yi L, Su H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]//Proceedings of the 31st Conference on Neural Information Processing Systems. Long Beach, CA: ACM, 2017.

[48] Deng H W, Birdal T, Ilic S. PPFNet: global context aware local features for robust 3D point matching[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018. [DOI: 10.1109/CVPR.2018.00028]

[49] Zhou L, Zhu S Y, Luo Z X, et al. Learning and matching multi-view descriptors for registration of point clouds[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. [DOI: 10.1007/978-3-030-01267-0_31]

[50] Wang H Y, Guo J W, Yan D M, et al. Learning 3D keypoint descriptors for non-rigid shape matching[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. [DOI: 10.1007/978-3-030-01237-3_1]

[51] Georgakis G, Karanam S, Wu Z Y, et al. End-to-end learning of keypoint detector and descriptor for pose invariant 3D matching[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018. [DOI: 10.1109/CVPR.2018.00210]

[52] Ren S Q, He K M, Girshick R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. [DOI: 10.1109/TPAMI.2016.2577031]

[53] Wang Z H, Wu F C, Hu Z Y. MSLD: a robust descriptor for line matching[J]. Pattern Recognition, 2009, 42(5): 941-953. [DOI: 10.1016/j.patcog.2008.08.035]

[54] Wang J X, Zhang X, Zhu H, et al. MSLD descriptor combined regional affine transformation and straight line matching[J]. Journal of Signal Processing, 2018, 34(2): 183-191. [DOI: 10.16798/j.issn.1003-0530.2018.02.008]

[55] Zhang L L, Koch R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency[J]. Journal of Visual Communication and Image Representation, 2013, 24(7): 794-805. [DOI: 10.1016/j.jvcir.2013.05.006]

[56] Wang L, Neumann U, You S Y. Wide-baseline image matching using line signatures[C]//Proceedings of the 12th International Conference on Computer Vision. Kyoto, Japan: IEEE, 2009: 1311-1318. [DOI: 10.1109/ICCV.2009.5459316]

[57] López J, Santos R, Fdez-Vidal X R, et al. Two-view line matching algorithm based on context and appearance in low-textured images[J]. Pattern Recognition, 2015, 48(7): 2164-2184. [DOI: 10.1016/j.patcog.2014.11.018]

[58] Fan B, Wu F C, Hu Z Y. Line matching leveraged by point correspondences[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, CA: IEEE, 2010: 390-397. [DOI: 10.1109/CVPR.2010.5540186]

[59] Fan B, Wu F C, Hu Z Y. Robust line matching through line-point invariants[J]. Pattern Recognition, 2012, 45(2): 794-805. [DOI: 10.1016/j.patcog.2011.08.004]

[60] Lourakis M I A, Halkidis S T, Orphanoudakis S C. Matching disparate views of planar surfaces using projective invariants[J]. Image and Vision Computing, 2000, 18(9): 673-683. [DOI: 10.1016/S0262-8856(99)00071-2]

[61] Jia Q, Gao X K, Fan X, et al. Novel coplanar line-points invariants for robust line matching across views[C]//Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 599-611. [DOI: 10.1007/978-3-319-46484-8_36]

[62] Luo Z X, Zhou X C, Gu D X. From a projective invariant to some new properties of algebraic hypersurfaces[J]. Science China Mathematics, 2014, 57(11): 2273-2284. [DOI: 10.1007/s11425-014-4877-0]

[63] Ouyang H, Fan D Z, Ji S, et al. Line matching based on discrete description and conjugate point constraint[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(10): 1363-1371. [DOI: 10.11947/j.AGCS.2018.20170231]

[64] Matas J, Chum O, Urban M, et al. Robust wide baseline stereo from maximally stable extremal regions[C]//Proceedings of the 13th British Machine Vision Conference. Cardiff: BMVC, 2002: 1041-1044.

[65] Nistér D, Stewénius H. Linear time maximally stable extremal regions[C]//Proceedings of the 10th European Conference on Computer Vision. Marseille, France: Springer, 2008: 183-196. [DOI: 10.1007/978-3-540-88688-4_14]

[66] Elnemr H A. Combining SURF and MSER along with color features for image retrieval system based on bag of visual words[J]. Journal of Computer Science, 2016, 12(4): 213-222. [DOI: 10.3844/jcssp.2016.213.222]

[67] Mo H Y, Wang Z P. A feature detection method combined MSER and SIFT[J]. Journal of Donghua University: Natural Science, 2011, 37(5): 624-628. [DOI: 10.3969/j.issn.1671-0444.2011.05.017]

[68] Xu Y C, Monasse P, Géraud T, et al. Tree-based Morse regions: a topological approach to local feature detection[J]. IEEE Transactions on Image Processing, 2014, 23(12): 5612-5625. [DOI: 10.1109/TIP.2014.2364127]

[69] Korman S, Reichman D, Tsur G, et al. FasT-Match: fast affine template matching[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR: IEEE, 2013: 2331-2338. [DOI: 10.1109/CVPR.2013.302]

[70] Jia D, Cao J, Song W D, et al. Colour FAST (CFAST) match: fast affine template matching for colour images[J]. Electronics Letters, 2016, 52(14): 1220-1221. [DOI: 10.1049/el.2016.1331]

[71] Jia D, Yang N H, Sun J G. Template selection and matching algorithm for image matching[J]. Journal of Image and Graphics, 2017, 22(11): 1512-1520. [DOI: 10.11834/jig.170156]

[72] Dekel T, Oron S, Rubinstein M, et al. Best-buddies similarity for robust template matching[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 2021-2029. [DOI: 10.1109/CVPR.2015.7298813]

[73] Oron S, Dekel T, Xue T F, et al. Best-buddies similarity - robust template matching using mutual nearest neighbors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(8): 1799-1813. [DOI: 10.1109/TPAMI.2017.2737424]

[74] Wang G, Sun X L, Shang Y, et al. A robust template matching algorithm based on best-buddies similarity[J]. Acta Optica Sinica, 2017, 37(3): 274-280. [DOI: 10.3788/aos201737.0315003]

[75] Talmi I, Mechrez R, Zelnik-Manor L. Template matching with deformable diversity similarity[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 1311-1319. [DOI: 10.1109/CVPR.2017.144]

[76] Talker L, Moses Y, Shimshoni I. Efficient sliding window computation for NN-based template matching[C]//Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018: 409-424. [DOI: 10.1007/978-3-030-01249-6_25]

[77] Korman S, Soatto S, Milam M. OATM: occlusion aware template matching by consensus set maximization[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018. [DOI: 10.1109/CVPR.2018.00283]

[78] Kat R, Jevnisek R J, Avidan S. Matching pixels using co-occurrence statistics[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018. [DOI: 10.1109/CVPR.2018.00188]

[79] Han X F, Leung T, Jia Y Q, et al. MatchNet: unifying feature and metric learning for patch-based matching[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 3279-3286. [DOI: 10.1109/CVPR.2015.7298948]

[80] Zagoruyko S, Komodakis N. Learning to compare image patches via convolutional neural networks[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 4353-4361. [DOI: 10.1109/CVPR.2015.7299064]

[81] Fan D Z, Dong Y, Zhang Y S. Satellite image matching method based on deep convolution neural network[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(6): 844-853. [DOI: 10.11947/j.AGCS.2018.20170627]

[82] Balntas V, Johns E, Tang L L, et al. PN-Net: conjoined triple deep network for learning local image descriptors[EB/OL]. [2018-08-09]. https://arxiv.org/pdf/1601.05030.pdf

[83] Yang T Y, Hsu J H, Lin Y Y, et al. DeepCD: learning deep complementary descriptors for patch representations[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017: 3334-3342. [DOI: 10.1109/ICCV.2017.359]

[84] Tian Y R, Fan B, Wu F C. L2-Net: deep learning of discriminative patch descriptor in Euclidean space[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 6128-6136. [DOI: 10.1109/CVPR.2017.649]
