Supervisors: Nicolas Zampieri (PhD student), nicolas.zampieri@inria.fr
Irina Illina (MdC, HDR), illina@loria.fr
Dominique Fohr (CR CNRS), dominique.fohr@loria.fr
Team and lab: MULTISPEECH, LORIA
Duration: 5-6 months.
Motivation and context
The United Nations defines hate speech as “any type of communication through speech, writing or behavior, which denigrates a person or group based on who they are, i.e. their religion, ethnicity, nationality, or another identity factor.” We are interested in hate speech posted on social networks. With the expansion of social networks (Twitter, Facebook, etc.), the number of messages posted every day has increased dramatically. Processing the millions of messages posted each day in order to remove hateful content is difficult and expensive, so automatic methods are required to moderate the influx. Automatic hate speech detection is a difficult task in natural language processing (NLP) [ZIF21]. With the advent of transformer-based language models such as BERT [DCLT19], new state-of-the-art models for hate speech detection have emerged, such as HateBERT [CBMG21]. Current NLP models rely strongly on efficient learning algorithms, and we are particularly interested in one of them: contrastive learning. Contrastive learning is employed to learn an embedding space in which pairs of similar sentences have close representations. [RA21] provides an overview of contrastive learning models in language processing.
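To make this idea concrete, below is a minimal sketch of a contrastive objective of this kind (an InfoNCE/NT-Xent-style loss) in PyTorch. The function name, batch layout, and temperature value are illustrative assumptions, not part of any cited model.

```python
# Minimal sketch of a contrastive (InfoNCE / NT-Xent style) loss in PyTorch.
# Illustrative only: the function name and temperature are assumptions,
# not taken from any of the cited models.
import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb, positive_emb, temperature=0.05):
    """anchor_emb, positive_emb: (batch, dim) embeddings of paired
    sentences; row i of each tensor is one similar pair. All other
    rows in the batch act as in-batch negatives."""
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    # Cosine similarity between every anchor and every candidate positive.
    logits = anchor @ positive.t() / temperature  # (batch, batch)
    # The true pair for anchor i sits on the diagonal at column i.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)
```

Minimizing this loss pulls each pair of similar sentences together while pushing the other sentences in the batch apart, which is exactly the embedding-space geometry described above.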
Goals and Objectives
The goal of this internship is to study contrastive learning in the context of hate speech detection. We believe that this methodology will make detection models more effective. As a first model, the intern will train a system that estimates whether two sentences carry the same sentiment or not. Building on this first model, the intern will explore other contrastive learning approaches, such as the SimCSE [GYC21] or Dual Contrastive Learning [CZZM22] models. The studied methods will be validated on several datasets to assess their robustness. Our team has several labeled corpora collected from social networks for this purpose.
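As an illustration of one of these directions, the sketch below shows an unsupervised SimCSE-style training step [GYC21]: the same batch of sentences is encoded twice, and because dropout stays active, the two passes yield slightly different embeddings that serve as positive pairs. The model name and example sentences are placeholders, and the loss reuses the info_nce_loss sketch given earlier.

```python
# Hedged sketch of an unsupervised SimCSE-style step [GYC21], reusing
# info_nce_loss from the sketch above. Model name and sentences are
# placeholders, not a fixed project choice.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout active so the two passes differ

sentences = ["example message one", "example message two"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

# Two stochastic forward passes over the identical batch: dropout noise
# makes emb1 and emb2 different views of the same sentences.
emb1 = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
emb2 = encoder(**batch).last_hidden_state[:, 0]

loss = info_nce_loss(emb1, emb2)
loss.backward()
```

In the supervised setting of Dual Contrastive Learning [CZZM22], label information would instead determine which sentence pairs count as positives.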
The internship work plan is as follows: first, the student will survey the state of the art in hate speech detection and contrastive learning for NLP. The student will then implement the selected methods. Finally, the performance of the implemented methods will be evaluated on several hate speech corpora and compared to the state of the art.
The student intern will join the MULTISPEECH team, which provides access to the computational resources (GPUs, CPUs) and datasets needed to carry out this research.
Required Skills
The candidate should have experience with deep learning, including good practice in Python and an understanding of deep learning libraries such as Keras, PyTorch, or TensorFlow.
To apply, please send your CV and transcript of grades directly to the supervisors.
References
[CBMG21] Tommaso Caselli, Valerio Basile, Jelena Mitrović, and Michael Granitzer. HateBERT: Retraining BERT for abusive language detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17–25, Online, August 2021. Association for Computational Linguistics.
[CZZM22] Qianben Chen, Richong Zhang, Yaowei Zheng, and Yongyi Mao. Dual contrastive learning: Text classification via label-aware data augmentation. CoRR, abs/2201.08702, 2022.
[DCLT19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
[GYC21] Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2021.
[RA21] Nils Rethmeier and Isabelle Augenstein. A primer on contrastive pretraining in language processing: Methods, lessons learned and perspectives. CoRR, abs/2102.12982, 2021.
[ZIF21] Nicolas Zampieri, Irina Illina, and Dominique Fohr. Multiword expression features for automatic hate speech detection. In Elisabeth Métais, Farid Meziane, Helmut Horacek, and Epaminondas Kapetanios, editors, Natural Language Processing and Information Systems, pages 156–164, Cham, 2021. Springer International Publishing.