Tasks
A task is an activity proposed to solve a specific NLP problem, generally within the framework of a competition (shared task). Below is information about NLP tasks in Spanish from 2013 to the present.
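To make the catalogue concrete: most entries below are supervised classification tasks over short texts (comments or tweets). The following is a minimal, illustrative baseline for a binary task such as offensive-comment detection, not the method of any listed competition; the texts, labels, and model choice are placeholder assumptions.

```python
# Minimal sketch: framing a catalogue task (e.g., offensive-comment
# detection) as supervised binary text classification.
# The texts and labels below are invented placeholders, NOT taken from
# MeOffendES or any other dataset listed on this page.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (comment, label) where 1 = offensive.
train_texts = [
    "que tengas un buen dia",      # harmless
    "eres un inutil total",        # insulting
    "gracias por tu ayuda",        # harmless
    "callate, nadie te soporta",   # insulting
]
train_labels = [0, 1, 0, 1]

# Character n-grams are a common, language-robust baseline for Spanish
# social-media text; word n-grams would work as well.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Predict on new, unseen comments (also placeholders).
print(model.predict(["eres lo peor", "muchas gracias"]))
```

Competitive systems for these tasks typically replace this linear baseline with pretrained Spanish language models, but the input/output framing is the same.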
Non-contextual classification of offensive comments IberLEF 2021
- NLP topic: hate detection
- Dataset: MeOffendES
- Forum: IberLEF
- Competition: MeOffendES: Detection of offensive language
- Domain:
- Language(s): Spanish
Detection of toxicity level IberLEF 2021
- NLP topic: hate detection
- Dataset: NewsCom-TOX
- Forum: IberLEF
- Competition: DETOXIS: Detection of toxicity
- Domain:
- Language(s): Spanish
Contextual classification of offensive comments IberLEF 2021
- NLP topic: hate detection
- Dataset: MeOffendES
- Forum: IberLEF
- Competition: MeOffendES: Detection of offensive language
- Domain:
- Language(s): Spanish
Humor logic mechanism classification IberLEF 2021
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: Detecting, Rating and Analyzing Humor in Spanish
- Domain:
- Language(s): Spanish
Sexism identification IberLEF 2021
- NLP topic: hate detection
- Dataset: EXIST-2021-ES
- Forum: IberLEF
- Competition: EXIST: Sexism detection in Twitter
- Domain:
- Language(s): Spanish, English
Aggressive language detection IberLEF 2020
- NLP topic: hate detection
- Dataset: Mexican Aggressiveness Corpus
- Forum: IberLEF
- Competition: MEX-A3T
- Domain:
- Language(s): Spanish (Mexico)
Irony detection IberLEF 2019
- NLP topic: processing humor
- Dataset: IDAT-SP-EU, IDAT-SP-MEX, IDAT-SP-CUBA
- Forum: IberLEF
- Competition: Irony Detection in Spanish Variants
- Domain:
- Language(s): Spanish (Cuba), Spanish (Mexico), Spanish (Spain)
Hate speech detection SemEval 2019
- NLP topic: hate detection
- Dataset: HateEval-ES
- Forum: SemEval
- Competition: SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter
- Domain:
- Language(s): Spanish, English
Aggressive behaviour and target classification SemEval 2019
- NLP topic: hate detection
- Dataset: HateEval-ES
- Forum: SemEval
- Competition: SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter
- Domain:
- Language(s): Spanish, English
Humor detection IberLEF 2019
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: HAHA 2019: Humor Analysis based on Human Annotation
- Domain:
- Language(s): Spanish
Humor rating IberLEF 2019
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: HAHA 2019: Humor Analysis based on Human Annotation
- Domain:
- Language(s): Spanish
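Unlike the detection entries, the humor rating tasks (this one and its IberEval 2018 counterpart below) ask for a real-valued funniness score rather than a class label, so they are naturally framed as regression; HAHA's scores were averages of 1-5 human ratings. A minimal sketch under that assumption, with invented placeholder texts and scores:

```python
# Minimal sketch: humor rating as regression (predicting a real-valued
# funniness score). Texts and scores below are invented placeholders,
# NOT taken from the HAHA corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_texts = [
    "mi perro hace mejores chistes que yo",
    "el informe trimestral esta adjunto",
    "se me cayo el wifi y hable con mi familia; parecen majos",
    "la reunion es a las tres",
]
# Placeholder funniness scores on a 1-5 scale (HAHA used averaged
# human annotations in this range).
train_scores = [3.5, 1.0, 4.2, 1.0]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    Ridge(alpha=1.0),
)
model.fit(train_texts, train_scores)
print(model.predict(["otro chiste de mi perro"]))
```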
Misogynistic behavior and target classification IberEval 2018
- NLP topic: hate detection
- Dataset: AMI-ES
- Forum: IberEval
- Competition: AMI: Automatic Misogyny Identification
- Domain:
- Language(s): Spanish, English
Humor detection IberEval 2018
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberEval
- Competition: Humor Analysis based on Human Annotation (HAHA)
- Domain:
- Language(s): Spanish
Aggressive language detection IberEval 2018
- NLP topic: hate detection
- Dataset: MEX-A3T-profiling
- Forum: IberEval
- Competition: MEX-A3T: Authorship and aggressiveness analysis in Mexican Spanish tweets
- Domain:
- Language(s): Spanish (Mexico)
Humor rating IberEval 2018
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberEval
- Competition: Humor Analysis based on Human Annotation (HAHA)
- Domain:
- Language(s): Spanish
Misogyny identification IberEval 2018
- NLP topic: hate detection
- Dataset: AMI-ES
- Forum: IberEval
- Competition: AMI: Automatic Misogyny Identification
- Domain:
- Language(s): Spanish, English
If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, along with a copy of the article if it is not openly available.