Volume 16, Issue 2 (4-2024) | itrc 2024, 16(2): 34-44

Taheri F, Rahbar K, Beheshtifard Z. Image Retrieval with Missing Regions by Reconstructing and Concatenating Content and Semantic Features. itrc 2024; 16 (2) :34-44
URL: http://journal.itrc.ac.ir/article-1-573-en.html
1- Department of Computer Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
2- Department of Computer Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran; kambiz.rahbar@gmail.com
3- Department of Computer Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
Abstract:

In recent years, deep neural networks have brought remarkable improvements to the image retrieval process. When images contain missing regions, however, deep neural networks yield poor retrieval results. Operators that rely on the statistical relationships between image pixels extract incomplete information from such images, which reduces the number of extracted features or causes features to be identified inaccurately. To address the problem of missing image information through image inpainting, a content-based image retrieval method is proposed for images with missing regions. In this method, the crucial missing information is first reconstructed by image inpainting, and the image dataset is then queried to find similar samples. For this purpose, a two-stage encoder-decoder inpainting framework is used in the image retrieval system. The features of each image are obtained by integrating and concatenating content and semantic features: content information is extracted using handcrafted features such as color and texture, and semantic information is extracted with the ResNet-50 deep neural network. Finally, similar images are retrieved based on the minimum Euclidean distance. The performance of the retrieval model on images with missing regions is evaluated with the average precision criterion on the Paris 6K dataset. After reconstructing images with the largest missing regions (a destruction frequency of 6 Hz), the best retrieval results are 60.11%, 50.14%, and 42.43% for the top one, five, and ten retrieved samples, respectively.
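As a concrete illustration of the retrieval pipeline summarized above, the following Python sketch (not the authors' code) passes a query image through an assumed inpainting step, concatenates ResNet-50 semantic features with a handcrafted color descriptor, and ranks gallery images by minimum Euclidean distance. The inpaint placeholder, the per-channel histogram, and the L2 normalization of each feature block are illustrative assumptions; the paper's two-stage encoder-decoder network and its specific color/texture descriptors would replace them.

    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # ResNet-50 backbone with the classifier removed -> 2048-D semantic descriptor.
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    resnet.fc = torch.nn.Identity()
    resnet.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def inpaint(img: Image.Image) -> Image.Image:
        # Placeholder for the paper's two-stage encoder-decoder inpainting network;
        # a trained model would reconstruct the missing regions here.
        return img

    def semantic_features(img: Image.Image) -> np.ndarray:
        # Deep (semantic) features from the ResNet-50 backbone.
        with torch.no_grad():
            return resnet(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

    def color_features(img: Image.Image, bins: int = 16) -> np.ndarray:
        # Handcrafted content descriptor: per-channel RGB histogram (an assumed
        # stand-in for the paper's color/texture features).
        arr = np.asarray(img.convert("RGB"))
        hists = [np.histogram(arr[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        h = np.concatenate(hists).astype(np.float32)
        return h / (h.sum() + 1e-8)

    def describe(img: Image.Image) -> np.ndarray:
        # Reconstruct missing regions, then concatenate semantic (deep) and content
        # (handcrafted) features; each block is L2-normalized so neither dominates
        # the Euclidean distance.
        restored = inpaint(img)
        s = semantic_features(restored)
        c = color_features(restored)
        s /= np.linalg.norm(s) + 1e-8
        c /= np.linalg.norm(c) + 1e-8
        return np.concatenate([s, c])

    def retrieve(query_vec: np.ndarray, gallery: np.ndarray, k: int = 10) -> np.ndarray:
        # Rank gallery descriptors (one per row) by minimum Euclidean distance.
        dists = np.linalg.norm(gallery - query_vec, axis=1)
        return np.argsort(dists)[:k]

Applying describe() once to every gallery image and stacking the results into a matrix lets retrieve() score a query in a single vectorized pass.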

Full-Text [PDF 1167 kb]
Type of Study: Research | Subject: Information Technology




Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.