International Journal of

ADVANCED AND APPLIED SCIENCES

EISSN: 2313-3724, Print ISSN: 2313-626X

Frequency: 12

 Volume 8, Issue 7 (July 2021), Pages: 97-105

----------------------------------------------

 Original Research Paper

 Title: Implementation of early and late fusion methods for content-based image retrieval

 Author(s): Ali Ahmed 1, *, Sara Mohamed 2

 Affiliation(s):

 1Faculty of Computer Science and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
 2Faculty of Computer Science and Information Technology, Sudan University of Science and Technology, Khartoum, Sudan


 * Corresponding Author. 

  Corresponding author's ORCID profile: https://orcid.org/0000-0002-8944-8922

 Digital Object Identifier: 

 https://doi.org/10.21833/ijaas.2021.07.012

 Abstract:

Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to the query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: the first is feature extraction and the second is similarity matching. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggested several methods for combining and fusing various image descriptors. In general, fusion strategies are divided into two groups, namely early and late fusion. Early fusion combines image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to the combination of outputs produced by various retrieval systems or to the combination of different similarity rankings. In this study, a group of color and texture features is proposed for use in both fusion strategies. First, eighteen color features and twelve texture features are combined into a single vector representation in the early fusion stage; second, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method achieves good retrieval performance compared to the traditional use of a single-feature descriptor, and acceptable retrieval performance compared to some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% for the Corel-1K and GHIM-10K datasets, respectively.
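The two fusion strategies described in the abstract can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration only: the placeholder color and texture extractors and the Borda-style rank aggregation are assumptions made for demonstration, and they do not reproduce the paper's actual eighteen color and twelve texture features or its exact fusion scheme. The sketch shows early fusion as the concatenation of descriptors into one vector before matching, and late fusion as the combination of the rankings produced by three common distance measures (Euclidean, Manhattan, and cosine).

```python
# Minimal sketch of early vs. late fusion for CBIR.
# The feature extractors below are hypothetical placeholders, not the paper's descriptors.

import numpy as np

def extract_color_features(image: np.ndarray) -> np.ndarray:
    """Placeholder color descriptor: per-channel mean, standard deviation, and skewness."""
    feats = []
    for c in range(image.shape[2]):
        channel = image[:, :, c].astype(np.float64)
        mean = channel.mean()
        std = channel.std()
        skew = np.cbrt(((channel - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

def extract_texture_features(image: np.ndarray) -> np.ndarray:
    """Placeholder texture descriptor: simple gradient statistics on the grayscale image."""
    gray = image.mean(axis=2)
    gx, gy = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std(), gx.std(), gy.std()])

# --- Early fusion: concatenate descriptors into a single vector before matching ---
def early_fusion_vector(image: np.ndarray) -> np.ndarray:
    return np.concatenate([extract_color_features(image),
                           extract_texture_features(image)])

# --- Three common distance measures used for similarity matching ---
def euclidean(a, b):
    return np.linalg.norm(a - b)

def manhattan(a, b):
    return np.abs(a - b).sum()

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# --- Late fusion: combine the rankings produced by the different distance measures ---
def late_fusion_ranking(query_vec: np.ndarray, db_vecs: list) -> np.ndarray:
    """Borda-style rank aggregation (an illustrative choice of rank combination)."""
    n = len(db_vecs)
    total_rank = np.zeros(n)
    for dist in (euclidean, manhattan, cosine_distance):
        scores = np.array([dist(query_vec, v) for v in db_vecs])
        # argsort of argsort yields each database image's rank under this measure
        total_rank += np.argsort(np.argsort(scores))
    return np.argsort(total_rank)  # database indices ordered best-first
```

In this sketch, `early_fusion_vector` would be applied to both the query and the database images so that a single distance computation operates on the combined vector, whereas `late_fusion_ranking` keeps the individual distance measures separate and merges only their resulting rankings.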

 © 2021 The Authors. Published by IASE.

 This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

 Keywords: Content-based image retrieval, Feature extraction, Fusion method

 Article History: Received 6 January 2021, Received in revised form 31 March 2021, Accepted 17 April 2021

 Acknowledgment 

This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia. The authors, therefore, gratefully acknowledge the DSR for its technical and financial support.

 Compliance with ethical standards

 Conflict of interest: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

 Citation:

  Ahmed A and Mohamed S (2021). Implementation of early and late fusion methods for content-based image retrieval. International Journal of Advanced and Applied Sciences, 8(7): 97-105


