Enhancement of Illumination scheme for Adult Image Recognition

  • Sasan Karamizadeh Iran Telecommunication Research Center (ITRC) Information and Communications Technology Research Institute, Tehran, Iran
  • Abouzar Arabsorkhi Faculty member of Iran Telecommunication Research Center (ITRC) Information and Communications Technology Research Institute, Tehran, Iran
Keywords: adult image, illumination, fuzzy deep neural network segmentation, histogram truncation and stretching, DCT-II, AlexNet, convolutional neural network


Biometric-based techniques have emerged as the most promising option for individual recognition, yet the task remains a challenge for computer vision systems. Several approaches to adult image recognition have been proposed, including deep neural networks and traditional classifiers. Image-condition factors such as expression, occlusion, pose, and illumination affect facial recognition systems, so adult image recognition algorithms must tolerate a reasonable amount of illumination variation between gallery and probe images. In the context of adult image verification, illumination variation plays a vital role and is a likely cause of misclassification. Different architectures and parameters have been tested in order to improve classification accuracy. The proposed method consists of four steps. First, Fuzzy Deep Neural Network Segmentation segments the image according to illumination intensity. Second, Histogram Truncation and Stretching improves the histogram distribution of each segmented area. Third, Contrast Limited Adaptive Histogram Equalization (CLAHE) enhances the contrast of the segmented area. Finally, DCT-II is applied and low-frequency coefficients are selected in a zigzag pattern for illumination normalization. The proposed method uses the AlexNet architecture, which consists of 5 convolutional layers, max-pooling layers, and fully connected layers. After the fuzzy neural representation, the image is passed through a stack of convolutional layers with 8 × 8 filters and a convolutional stride fixed to 1 pixel. Every convolution is followed by a subsampling layer that performs max pooling with a 2 × 2 kernel, which helps to reduce the training time and computational complexity of the network.
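As an illustration only (not the authors' implementation), steps 2 and 4 of the pipeline above — histogram truncation and stretching, followed by a 2-D DCT-II with zigzag selection of low-frequency coefficients — might be sketched as follows. The percentile thresholds and the number of retained coefficients are hypothetical choices, and the sketch assumes a square grayscale input:

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT-II


def truncate_and_stretch(img, lo_pct=1.0, hi_pct=99.0):
    """Step 2 (sketch): clip extreme intensities at the given percentiles,
    then linearly stretch the remaining range to [0, 255]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img.astype(np.float64), lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8) * 255.0


def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order:
    diagonals of increasing row+col, alternating traversal direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))


def dct_low_freq(img, k):
    """Step 4 (sketch): 2-D DCT-II of a square image, keeping the first
    k coefficients in zigzag order (the low-frequency ones)."""
    coeffs = dctn(img.astype(np.float64), type=2, norm='ortho')
    order = zigzag_indices(img.shape[0])
    return np.array([coeffs[r, c] for r, c in order[:k]])
```

In this sketch the zigzag ordering visits coefficients by increasing spatial frequency, so truncating the sequence keeps the illumination-dominated low-frequency content while discarding high-frequency detail.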
The proposed scheme is analyzed and its accuracy and effectiveness are evaluated. In this research, we used 80,400 images drawn from two datasets, Compaq and Poesia, together with images collected from the Internet.
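To see how the convolutional front end described in the abstract behaves, the spatial-size bookkeeping for five stages of an 8 × 8 convolution at stride 1 followed by 2 × 2 max pooling can be traced with a few lines of arithmetic. The 256 × 256 input size is a hypothetical example, not a figure stated by the authors:

```python
def conv_out(size, kernel=8, stride=1, pad=0):
    """Output spatial size of a convolution (standard formula)."""
    return (size + 2 * pad - kernel) // stride + 1


def pool_out(size, kernel=2, stride=2):
    """Output spatial size of a max-pooling layer."""
    return (size - kernel) // stride + 1


def trace_shapes(size, n_layers=5):
    """Spatial size after each conv(8x8, stride 1) + maxpool(2x2) stage."""
    shapes = [size]
    for _ in range(n_layers):
        size = pool_out(conv_out(size))
        shapes.append(size)
    return shapes


print(trace_shapes(256))  # [256, 124, 58, 25, 9, 1]
```

The trace shows why the pooling layers matter: each stage roughly halves the spatial extent, shrinking the activation maps and, with them, the training time and computational cost of the network.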



Author Biographies

Sasan Karamizadeh, Iran Telecommunication Research Center (ITRC) Information and Communications Technology Research Institute, Tehran, Iran

He received his M.Sc. and Ph.D. degrees in Computer Science from the Universiti Teknologi Malaysia (UTM) in 2012 and 2017, respectively, and holds a post-doctoral certificate from the Iran Telecommunication Research Center. Image processing and face recognition are his primary fields of interest, and he has published several papers in international journals and at conferences.

Abouzar Arabsorkhi, Faculty member of Iran Telecommunication Research Center (ITRC) Information and Communications Technology Research Institute, Tehran, Iran

He received his Ph.D. degree from the University of Tehran in the field of Information Systems Management. He is a faculty member and the director of the Network and System Security Assessment Unit at the Information and Communications Technology Research Institute. Over the past few years, he has been involved in security management and planning, security architecture, risk management, security assessment and prototype certification, and the design and implementation of specialized security labs. Internet of Things security is one of his research interests. For the past 10 years, he has been teaching in the field of information systems and e-commerce security.


How to Cite
Karamizadeh, S., & Arabsorkhi, A. (2018, August 11). Enhancement of Illumination scheme for Adult Image Recognition. International Journal of Information & Communication Technology Research, 9(4), 50-56. Retrieved from http://journal.itrc.ac.ir/index.php/ijictr/article/view/279