Volume 15, Issue 1 (Special Issue on AI in ICT 2023), 2023, 15(1): 56-62

Najafi-Lapavandani F, Shirali-Shahreza M H. Humor Detection in Persian: A Transformers-Based Approach. International Journal of Information and Communication Technology Research 2023; 15 (1) : 6
URL: http://ijict.itrc.ac.ir/article-1-561-en.html
1- Faculty of Mathematics & Computer Science, Amirkabir University of Technology, Tehran, Iran
2- Faculty of Mathematics & Computer Science, Amirkabir University of Technology, Tehran, Iran, hshirali@aut.ac.ir
Abstract:
Humor is a linguistic device that can make people laugh and, when used to express an opinion, can flip a phrase's polarity. Humorous sentences presenting ideas and criticism, often in informal language, have made their way into almost every domain on social media platforms such as Twitter, and Persian speakers likewise express their opinions through humorous tweets. As one of the early efforts to detect humor in Persian, the current research proposes a model built by fine-tuning a transformer-based language model on a Persian humor detection dataset. The proposed model achieves an accuracy of 84.7% on the test set. Moreover, this research introduces a dataset of 14,946 automatically labeled tweets for humor detection in Persian.
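
To make the modeling step concrete, the sketch below shows one way the fine-tuning described in the abstract could be set up with the Hugging Face Transformers library. It is a minimal sketch, not the authors' released code: the ParsBERT checkpoint name, the toy two-tweet dataset, and the hyperparameters are assumptions for illustration; the paper's 14,946-tweet corpus and exact configuration may differ.

```python
# Minimal sketch (not the authors' released code) of fine-tuning a Persian
# transformer encoder for binary humor classification with Hugging Face
# Transformers. Checkpoint name, toy dataset, and hyperparameters are
# illustrative assumptions.
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "HooshvareLab/bert-fa-base-uncased"  # ParsBERT checkpoint (assumed)

# Stand-in for the labeled tweet corpus: raw text plus a 0/1 humor label.
train_ds = Dataset.from_dict({
    "text": ["این یک توییت طنزآمیز است ...", "این یک توییت جدی است ..."],
    "label": [1, 0],
})
test_ds = Dataset.from_dict({
    "text": ["توییت دیگری برای ارزیابی ..."],
    "label": [0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad each tweet to a fixed length before it enters the encoder.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)
test_ds = test_ds.map(tokenize, batched=True)

# Two output labels: humorous vs. non-humorous.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def compute_metrics(eval_pred):
    # Plain accuracy, the metric reported in the abstract.
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="persian-humor-detector",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports accuracy on the held-out split
```
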
Article number: 6
Full-Text [PDF 758 kb]
Type of Study: Research | Subject: Information Technology

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.