中国科技论文在线 (Sciencepaper Online)  http://www.paper.edu.cn

Visual Word Based Similar Image Retrieval Optimization by Hamming Distance

ZHUANG Huang, WEI Yi-Fei, SONG Mei
School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876

Abstract: In this paper we present a new method for visual-word-based similar image retrieval, which compares the content of a query image with the images stored in a database. The retrieval consists of three main steps: feature extraction, indexing and query optimization. The feature extraction step is based on the SURF algorithm. For indexing, we use the K-Means algorithm and the Bag-of-Visual-Words model. The last step is very significant: we associate TF-IDF with Hamming Distance for querying. Our method is tested on a highly diverse open image dataset and, based on the experimental results, achieves better retrieval accuracy.

Keywords: Content-Based Image Retrieval (CBIR), Speeded-Up Robust Features (SURF), Bag-of-Visual-Words (BoVW), K-Means, Hamming Distance Code

CLC number: TP302.1
Foundation item: the National Natural Science Foundation of China (No. 61571059)
Author introduction: ZHUANG Huang, male, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, P. R. China. Correspondence author: ZHUANG Huang, e-mail: 18523919519@163.com. WEI Yi-Fei, male, associate professor, major research direction: Green Communication, e-mail: weiyifei@bupt.edu.cn. SONG Mei, female, professor, major research direction: Mobile Internet, e-mail: songm@bupt.edu.cn.

0 Introduction

Images are omnipresent, and image data has grown explosively on the Internet, but retrieving a desired image from a large-scale collection containing thousands of images is a demanding task. Computers lack an inherent ability to deal with visual data, which makes human-like matching and classification hard to achieve: they have to rely on some kind of global visual feature (e.g. color) or local feature (e.g. local interest points). Apart from the choice of features, content-based retrieval systems are implemented with many different algorithms at different stages. Early CBIR systems made use of low-level features such as color and texture. Early work includes that of M. J. Swain and D. H. Ballard [1], who proposed the concept of the color histogram and introduced the histogram-intersection distance metric to measure the distance between image histograms. However, low-level features are sensitive to factors such as rotation and illumination, and a "semantic gap" [2] still exists between low-level features and rich human semantics [3], owing to the difference between computers and the human brain, despite the many efforts made to close it. With the appearance in recent years of algorithms that extract local interest features (e.g. SIFT and SURF), combined with machine-learning algorithms, CBIR systems can retrieve images from big data at a higher semantic level, where features are expressed closer to human semantics.

In our research, to make it possible to search efficiently and accurately for particular visual content, we use a sophisticated pipeline of image feature extraction and indexing based on the SURF algorithm [4] and the Bag-of-Visual-Words (BoVW) model [5], which are further explained in Chapter 1.

This paper is organized as follows: Chapter 1 describes the main algorithms used. Chapter 2 describes how the proposed system is built and how it works. Chapter 3 presents the experimental steps and discusses the results. Finally, Chapter 4 draws conclusions from the experiments.

1 ALGORITHMS

1.1 Speeded-Up Robust Features (SURF)

Speeded-Up Robust Features was first proposed by Herbert Bay as a novel scale-invariant and rotation-invariant interest point detector and descriptor, and it is widely used in the field of image processing and recognition. SURF detects local interest points in each image and provides stable invariance to scale and rotation.

Local features detected by SURF are calculated mainly from the Hessian matrix constructed with the Difference-of-Gaussian (DoG, a feature detection operator based on the Laplacian operator). To accelerate the computation, SURF processes the image with the integral image and box wavelet filters, and locates interest points from the Hessian-matrix response combined with 3D non-maximal suppression. The descriptor uses a distribution of Haar-wavelet responses around the interest point's neighborhood.

The SURF algorithm is similar to the SIFT algorithm (Scale-Invariant Feature Transform, put forward by David G. Lowe) [6], but the main difference is the implementation of the scale space. SURF builds the scale space the other way around: it keeps the size of the input image constant and increases the Gaussian kernel size. Because SURF computes each sampling layer of a scale only once, rather than repeatedly as SIFT does, its speed improves greatly. The SURF descriptor has only 64 dimensions, which makes storage and computation simpler. Comparative studies such as [7] have shown that SURF performs well in terms of both results and computation time, so we chose it as our feature extractor.

SURF has 4 major steps, as explained in [4] and [8]:
(1) computing the integral image;
(2) the Fast-Hessian detector: the Hessian, constructing the scale space, accurate interest point localization;
(3) the interest point descriptor: orientation assignment, descriptor components;
(4) generating vectors describing the interest points.

1.2 Bag-of-Visual-Words (BoVW) Model

The Bag-of-Visual-Words (BoVW) model is an improvement on the BoW model (originally proposed as a text-document retrieval algorithm). In [6] it was first used to cluster SIFT features for object recognition, and Ke Gao et al. [9] used the model to build an efficient index for extracted features. Specifically, a SURF feature is a point in a 64-dimensional real space, and an image contains an indeterminate number of SURF features. The model treats a SURF feature as a word serialized by number, so an image can be considered a document containing many words, and clustering the words of many images forms a limited word dictionary. This is the basic idea of visual-word-based similar image retrieval. In general, the model has the following 3 major steps:
(1) Extract features from a large number of images and generate clustering centroids.
(2) Map those centroids to serial-number words, then collect and store them to create a global dictionary.
(3) Create an inverted index for the images to be stored in the database, and find the occurrences of each visual word of the query in the indexed database.

1.3 K-Means Clustering

K-Means is an unsupervised heuristic clustering algorithm, first proposed in 1967 by MacQueen [10] and improved in 1979 by Hartigan [11]. In general, the idea of the basic K-Means algorithm is to divide N points in D-dimensional space into K clusters. The algorithm requires only one input parameter, namely the number of clusters K. K-Means is an easy-to-implement and fast clustering algorithm that is widely used in the field of computer vision. The basic steps are presented below:

Algorithm 1  K-Means Algorithm
 1: INPUT: X = {x1, x2, ..., xn} ⊂ R^D, k
 2: OUTPUT: CEN = {cen1, cen2, ..., cenk} ⊂ R^D
 3: L = {l1, l2, ..., ln}
 4: CEN := Rand();
 5: for xi ∈ X do
 6:     li := ArgMinDist(xi, cenj), j ∈ {1, ..., k};
 7: end for
 8: ch := true;
 9: while ch = true do
10:     ch := false;
11:     for cenj ∈ CEN do
12:         UpdateCluster(cenj);
13:     end for
14:     for xi ∈ X do
15:         minDistance := ArgMinDist(xi, cenj), j ∈ {1, ..., k};
16:         if minDistance ≠ li then
17:             li := minDistance;
18:             ch := true;
19:         end if
20:     end for
21: end while
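Algorithm 1 can be sketched compactly in code. The authors' system is written in Java with OpenCV; the sketch below is a plain-Python rendering of the same Lloyd-style iteration, with illustrative toy data (the function names and the convergence-by-label-change test mirror Algorithm 1, not the authors' actual implementation):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100, seed=0):
    """K-Means as in Algorithm 1: random initial centroids (CEN := Rand()),
    assign each point to its nearest centroid, recompute centroids as the
    mean of their members, and stop when no label changes (ch = false)."""
    rng = random.Random(seed)
    cen = [list(p) for p in rng.sample(points, k)]
    labels = [min(range(k), key=lambda j: dist2(p, cen[j])) for p in points]
    for _ in range(iters):
        changed = False
        # UpdateCluster: move each centroid to the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                cen[j] = [sum(c) / len(members) for c in zip(*members)]
        # reassign every point; record whether any label changed
        for i, p in enumerate(points):
            j = min(range(k), key=lambda j: dist2(p, cen[j]))
            if j != labels[i]:
                labels[i] = j
                changed = True
        if not changed:
            break
    return cen, labels
```

In the retrieval system of Chapter 2, this clustering would be run separately on each 32-dimensional half of the SURF descriptors to build the code table.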
2 PROPOSED SYSTEM FRAMEWORK

In this section we introduce our system. Most retrieval systems are divided into two parts, namely feature extraction and indexing; the main purpose of a retrieval system is to build an index that allows similar images to be retrieved. There are many approaches to retrieval systems, but they all use a common schema, so it seems reasonable to add an additional optimization component. Our proposed method extends the common system framework and is based on 3 main steps: image feature extraction (the SURF algorithm), indexing (performed by K-Means) and query (optimized by Hamming Distance). The presented method is based on a common CBIR approach extended with Hamming Distance computation. This computing component, associated with embedded codes, is important and is a novel approach to improving accuracy. Fig. 1 describes the main schema.

Fig. 1: Retrieval System Framework

2.1 Feature Extraction

In this subsection we describe the feature extraction process. As introduced in Chapter 1, we extract only the 200 most distinctive SURF features of each image. This stage includes two operations: we input large quantities of images to build the dictionary, and we process the test images in the same way as the training images.

2.2 Indexing

We create a visual dictionary before indexing. This stage is based on the K-Means algorithm, and by clustering we create a limited dictionary. We improve the algorithm by reducing dimensions, as shown in Fig. 2.

Fig. 2: Clustering

A SURF feature is a 64-dimensional vector, which we divide into two parts. Next we cluster the keypoints of each part separately, creating √k centroids in the first part and √k centroids in the second part (k is the total number of clustering centroids); by permutation and combination this is equivalent to k centroids. We then map these centroids to visual words and store them. Tab. 1 shows the storage form.

Tab. 1: Form of Code Table
Serial Number | The First 32-dimension Centroid | The Second 32-dimension Centroid
0             | 0.433, 0.555, …                 | 0.444, 0.543, …
1             | 0.043, 0.324, …                 | 0.082, 0.672, …
…             | …                               | …

Based on the dictionary, we process the images to be stored in the database in the same way. At the same time, we maintain the embedded codes by comparing components between SURF points and their corresponding centroids. We then map the query image to a collection of visual words: the keypoints of the query are clustered to the nearest centroids in Tab. 1. The number of a visual word is i·√k + j. Each SURF feature is characterized as the digit of a visual word, and the query image is mapped to a collection consisting of many visual words. By comparing the visual words of the query with the image information in the database, we build an inverted index based on visual words, and thus we can retrieve similar images.

2.3 Query

The query process is based on the TF-IDF algorithm. After indexing, we construct the query phase and quickly retrieve images in the database that contain the same visual words. However, the words of the retrieved images differ in weight. We therefore apply the TF-IDF method to the query, which gives high scores to images containing more matching words. Using threshold segmentation based on the maximum-entropy theory, the system returns the similar images whose TF-IDF scores are above a computed threshold.

To compare similarity at a finer granularity and improve accuracy, we introduce Hamming Distance together with the maintained embedded codes. The code describes the spatial-topology relation between a point and its cluster center. In the 2-dimensional case, assume the center is (x1, y1) and a keypoint is (x2, y2). A two-bit binary code is used: when x2 > x1 the first bit is 1, otherwise 0; when y2 > y1 the second bit is 1, otherwise 0. The embedded code may therefore be 00, 01, 10 or 11, describing in which of the four quadrants the point approaches the clustering center. We extend this idea to 64-dimensional space. When comparing visual words, we let the code similarity, computed by Hamming Distance, represent the spatial-topology relation between points and centroids: the smaller the distance, the closer the point is to the center. Because the space is 64-dimensional, the distance ranges only from 0 to 64, and we set 7 distance intervals: s0: [0, 10), s1: [10, 20), s2: [20, 30), s3: [30, 40), s4: [40, 50), s5: [50, 60), s6: [60, 64].

We compute the Hamming Distance d for every point. We define count_i = 0, i ∈ {0, 1, ..., 6}, as the number of occurrences in each interval, and increment count_i whenever d ∈ s_i. For every image in the database there is then a normalized statistical distribution of Hamming Distance over the intervals. Considering that the smaller d is, the closer the image is to the query, we allocate gradually decreasing weights to count_i. An attenuated exponential function fits this well, so we modify the TF-IDF score. The formula is:

    score(q, d) ∝ Σ_{t∈q} tf(t, d) · idf(t) · Σ_{i=1}^{c} (count_i · e_i),   c ∈ {3, 4, ..., 7}    (1)

Here e_i is the weighting coefficient in exponential-decay form, normalized by:

    e_i = e^{-b·i} / Σ_{j=1}^{c} e^{-b·j},   b ∈ {0.1, 1, 10, 100, 1000}    (2)

From formulas (1) and (2), we obtain the best parameters b and c through simulation. Accuracy improves greatly with our optimized method, and we analyze the improvement in the next section.

3 EXPERIMENT

Experiments were carried out in a Java IDE with our own software, written in the Java programming language; we also used the OpenCV library. The simulation environment is fully customizable.

We tested our program with the highly diverse image dataset from the University of Kentucky. It consists of 10,200 images divided into 2,550 groups; in other words, each group contains 4 images that are extremely similar to each other. Meanwhile, the classes of images are quite distinctive, such as Shoes, Flowers, Toys, Books, Cellphones, Instruments, etc., 2,550 different categories in total. To create a semantically rich visual dictionary, all
images were used for training, and we randomly selected 100 groups to run the experiment and analyze efficiency and accuracy.

According to the proposed method and data, we set up the experimental steps as follows:

(1) Training. Extract keypoints of all images to create a dictionary. The system extracts the 200 most distinctive features of each image and then clusters these keypoints.

(2) Grouping. This stage pre-processes the simulation data for statistics. There are 100 groups, and the images within a group are similar, so we serialize the groups from 0 to 99 and the 4 images within each group from 0 to 3.

(3) Retrieving. Choose one image from every group (100 in total) as the first test set and store the remaining 3 images per group (300 in total) in the index. Query the test images in turn; the system returns a ranking by TF-IDF score. Then choose another remaining image from each group as the second test set and store the other images in the same way as the first time. Proceeding in this way in turn gives 400 queries in total (each selected image has a corresponding result).

For the performance evaluation we use two common measures: precision and recall. A visual representation of these measures is presented in Fig. 3:
AI — the group of appropriate images, which should be returned;
RI — the set of images returned by the system;
rai — the group of properly returned images (the intersection of AI and RI);
iri — improperly returned images;
anr — proper images not returned;
inr — improper images not returned.

Fig. 3: Retrieval System Framework

The presented measures allow precision and recall to be defined by the following formulas:

    precision = |rai| / (|rai| + |iri|)    (3)

    recall = |rai| / (|rai| + |anr|)    (4)

For each query, |AI| is 3, RI is computed by threshold segmentation based on the maximum-entropy theory, and we calculate precision and recall. We then define the average precision precision_avg and average recall recall_avg below as the main system performance measures:

    precision_avg = (Σ_{i=1}^{Nall} precision_i) / Nall    (5)

    recall_avg = (Σ_{i=1}^{Nall} recall_i) / Nall    (6)

Here Nall = 400 is the total number of queries. At the same time, for each result we define n1, n2 and n3, which also reflect system performance:

(1) n1 = N1 / Nall, where N1 is the number of returns containing more than one similar image; n1 reflects the efficiency of retrieving more than one image.
(2) n2 = N2 / Nall, where N2 is the number of returns containing more than two similar images; n2 reflects the efficiency of retrieving more than two images.
(3) n3 = N3 / Nall, where N3 is the number of returns containing three similar images; n3 reflects the efficiency of completely matching retrieval.

As a result of using the K-Means algorithm, the centroid parameter k has a great effect on retrieval accuracy. We therefore choose clustering rates ranging from 1.475 to 1.775 to analyze how the rate affects accuracy, both under normal retrieval and with our optimized method.

In Fig. 4, as b increases, the precision_avg, recall_avg, n1, n2 and n3 curves all generally increase and finally drop, tending toward smoothness. When b is near 10 the curves reach their maximum, especially the precision_avg and recall_avg curves. In Fig. 5, the n1, n2 and n3 curves rise to a stable level. Concentrating on the precision_avg and recall_avg curves, we find that the precision_avg curve drops after c = 4; because precision_avg is as important as recall_avg for the system, and in combination with Fig. 4, we choose the parameters b = 10 and c = 4 as the best setting for analyzing the relation between the rate and precision_avg, recall_avg.

Fig. 4: Relation of parameter b with precision_avg, recall_avg, n1, n2, n3
Fig. 5: Relation of parameter c with precision_avg, recall_avg, n1, n2, n3

Fig. 6 describes the tendency of the rate and precision_avg curves. As the rate rises (that is, as k goes down), both curves go up and then tend to descend, under both normal retrieval and our optimized method. It is obvious that our method greatly improves precision_avg (by nearly 25%). From Fig. 7 we can easily see the relation between the rate and recall_avg: the variation trends of the two curves are roughly similar, and compared with the normal method, recall_avg increases by nearly 6%. Together with Fig. 6, when the rate is 1.625, precision_avg and recall_avg almost reach their maximum values, which proves that our method is effective and improves performance considerably.

Fig. 6: Relation of rate and precision_avg
Fig. 7: Relation of rate and recall_avg

In Fig. 8 we present the n1, n2 and n3 curves, which are also affected by the rate. It can be seen clearly that all of them vary in a similar trend, though n1 and n2 are relatively higher; they all peak when the rate is again 1.625. We can conclude that when the clustering rate is approximately 1.625, system performance (especially precision_avg and recall_avg) is best.

In Fig. 9 we present some typical experimental results from single-image queries. As can be seen, image 3 in group 25 retrieved all correct images, which we call complete matching. Of five retrieved images, image 1 in group 29 retrieved two similar images. Image 1 in group 44 retrieved the correct images plus a similar image that is not in the same group. Image 1 in group 56 retrieved only one correct image, the others being wrong, perhaps because they contain similar backgrounds that make them difficult to distinguish. In the next two results, we find that precision and recall are relatively lower. For the 400 query images, the results are mostly like those listed in Fig. 9. All results are recorded, and we compute the accuracy measures precision_avg and recall_avg from them.

Fig. 8: Relation between rate and n1, n2, n3
Fig. 9: Query Results

4 CONCLUSION

The optimized method is a novel approach to visual-word-based similar image retrieval. The effectiveness of our method has been proved by the performed experiments. Our system framework is based on the common CBIR schema, and the experimental results proved that the use of Hamming Distance and embedded codes improves accuracy. Our system is designed to be applied in the field of big image-data security, such as detecting harmful images.

However, there is still further work for improvement. For example, we could choose better feature extractors or clustering algorithms, or replace the global dictionary with diverse dictionaries based on the grouped BoVW model. Besides, we could also extract two kinds of features (e.g. color features and SURF features) and use heuristic algorithms (e.g. Differential Evolution) to decide automatically which indexer suits better. All of this work is aimed at improving accuracy. In addition, considering that our program is written in Java and takes a relatively long time to run, one solution is to rewrite the entire system in C++ to reduce time consumption.

Acknowledgment

The authors would like to thank the reviewers for their detailed reviews and constructive comments, which have helped improve the quality of this paper. This work was supported by the National Natural Science Foundation of China (No. 61571059).

参考文献 (References)

[1] Michael J. Swain and Dana H. Ballard. Color Indexing [J]. International Journal of Computer Vision, Kluwer Academic Publishers, pp. 11-32, 1991.
[2] A. W. M. Smeulders, M. Worring, A. Gupta, R. Jain. Content-Based Image Retrieval at the End of the Early Years [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[3] X. S. Zhou and T. S. Huang. CBIR: From Low-Level Features to High-Level Semantics [C]. Proceedings of the SPIE Image and Video Communications and Processing, Vol. 3974, January 2000.
[4] Herbert Bay, Tinne Tuytelaars, Luc Van Gool. Speeded-Up Robust Features [J]. Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[5] E. Valle and M. Cord. Advanced Techniques in CBIR: Local Descriptors, Visual Dictionaries and Bags of Features [C]. Tutorials of the XXII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI-TUTORIALS), IEEE, pp. 72-78, 2009.
[6] David G. Lowe. Object Recognition from Local Scale-Invariant Features [C]. Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2, pp. 1150-1157, 1999.
[7] Maya Dawood, Cindy Cappelle, Maan E. El Najjar, Mohamad Khalil, Denis Pomorski. Harris, SIFT and SURF Features Comparison for Vehicle Localization Based on Virtual 3D Model and Camera [C]. 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 307-312, October 2012.
[8] Christopher Evans. Notes on the OpenSURF Library [C]. University of Bristol, 2009.
[9] Ke Gao, Shouxun Lin, Yongdong Zhang, Sheng Tang, Huamin Ren. Attention Model Based SIFT Keypoints Filtration for Image Retrieval [C]. Seventh IEEE/ACIS International Conference on Computer
and Information Science, May 2008.
[10] J. MacQueen. Some Methods for Classification and Analysis of Multivariate Observations [C]. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, No. 14, pp. 281-297, Oakland, CA, USA, 1967.
[11] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A K-Means Clustering Algorithm [J]. Applied Statistics, pp. 100-108, 1979.
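Appendix: the query-time optimization of Section 2.3 — the per-dimension embedded code, the Hamming distance between codes, and the normalized exponential-decay weights of formula (2) — can be illustrated with a minimal Python sketch. The function names, the 4-dimensional toy vectors and the sign convention (bit i set when the point exceeds the centroid in dimension i) are our own illustrative choices, not the authors' implementation:

```python
import math

def embedded_code(point, centroid):
    """Per-dimension sign code: bit i is 1 if the point lies above the
    centroid in dimension i (the 2-D 'four quadrants' idea, extended)."""
    return [1 if p > c else 0 for p, c in zip(point, centroid)]

def hamming(a, b):
    """Hamming distance between two equal-length bit codes."""
    return sum(x != y for x, y in zip(a, b))

def exp_weights(b, c):
    """Formula (2): e_i proportional to exp(-b*i), normalized over i = 1..c,
    so that lower distance intervals receive larger weights."""
    raw = [math.exp(-b * i) for i in range(1, c + 1)]
    total = sum(raw)
    return [r / total for r in raw]

# toy 4-dimensional example (illustrative values only)
centroid = [0.5, 0.5, 0.5, 0.5]
p = [0.7, 0.2, 0.6, 0.1]
q = [0.8, 0.1, 0.4, 0.3]
cp = embedded_code(p, centroid)   # [1, 0, 1, 0]
cq = embedded_code(q, centroid)   # [1, 0, 0, 0]
d = hamming(cp, cq)               # small d => similar position w.r.t. the centroid
w = exp_weights(b=10, c=4)        # decreasing weights that sum to 1
```

In the full system these codes would be 64 bits long (one per descriptor dimension), d would fall into one of the intervals s0..s6, and w would scale the interval counts inside the modified TF-IDF score of formula (1).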