The task consists of:
- Recognition of the target, which can be either a specific user or a group of women (binary classification).
- Identification of the type of misogyny expressed against women (multi-class classification). Each tweet must be assigned to one of the following categories (a baseline sketch follows the list):
- Stereotype & Objectification: a widely held but fixed and oversimplified image or idea of a woman; description of women’s physical appeal and/or comparisons to narrow standards.
- Dominance: to assert the superiority of men over women to highlight gender inequality.
- Derailing: to justify the abuse of women while rejecting male responsibility; an attempt to disrupt the conversation and redirect it to something more comfortable for men.
- Sexual Harassment & Threats of Violence: to describe actions such as sexual advances, requests for sexual favours, or harassment of a sexual nature; intent to physically assert power over women through threats of violence.
- Discredit: slurring of women with no other larger intention.
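A common starting point for the categorization subtask is a shallow linear classifier over TF-IDF features. The sketch below illustrates this with scikit-learn; the tweets, labels, and label strings are invented placeholders, not the official AMI data or label set.

```python
# Hypothetical baseline for the misogyny categorization subtask:
# TF-IDF bag-of-words features fed to a multi-class logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training tweets and labels; in practice these come from the AMI training set.
train_texts = [
    "example tweet one",
    "example tweet two",
    "example tweet three",
]
train_labels = ["discredit", "dominance", "derailing"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word unigrams and bigrams
    LogisticRegression(max_iter=1000),              # multi-class logistic regression
)
model.fit(train_texts, train_labels)

# Predict a category for an unseen tweet.
print(model.predict(["another example tweet"]))
```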
Publication
Elisabetta Fersini, Paolo Rosso, Maria Anzovino (2018) Overview of the Task on Automatic Misogyny Identification at IberEval 2018. Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018).
Competition
IberEval 2018
Language
Spanish
English
Year
2018
Ranking metric
Macro F1
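Macro F1 averages the per-class F1 scores without weighting by class frequency, so rare categories count as much as frequent ones. A minimal illustration with scikit-learn (the label arrays below are invented):

```python
from sklearn.metrics import f1_score

# Invented gold and predicted labels, just to illustrate the macro averaging.
y_true = ["discredit", "dominance", "discredit", "derailing"]
y_pred = ["discredit", "discredit", "discredit", "derailing"]

# Macro F1: compute F1 for each class, then take the unweighted mean over classes.
print(f1_score(y_true, y_pred, average="macro"))
```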
Task results
System | Macro F1 |
---|---|
14-exlab.c.run2 | 0.4461 |
14-exlab.c.run3 | 0.4458 |
14-exlab.c.run4 | 0.4442 |
SB.c.run4 | 0.4410 |
14-exlab.c.run1 | 0.4405 |