Tasks
A task is an activity proposed to solve a specific NLP problem, usually within the framework of a shared-task competition. Below is information about NLP tasks in Spanish from 2013 to the present.
HUHU - Hurtful humour detection IberLEF 2023
- NLP topic: processing humor
- Dataset: HUHU 2023
- Forum: IberLEF
- Competition: HUHU: HUrtful HUmour - Detection of humour spreading prejudice on Twitter
- Domain: Social
- Language(s): Spanish
HUHU - Prejudice Target Detection IberLEF 2023
- NLP topic: processing humor
- Dataset: HUHU 2023
- Forum: IberLEF
- Competition: HUHU: HUrtful HUmour - Detection of humour spreading prejudice on Twitter
- Domain: Social
- Language(s): Spanish
JOKER: Pun detection in Spanish CLEF 2023
- NLP topic: processing humor
- Dataset: JOKER 2023 ES
- Forum: CLEF
- Competition: JOKER: Automatic Wordplay Analysis
- Domain: Social
- Language(s): Spanish
HUHU - Degree of prejudice prediction IberLEF 2023
- NLP topic: processing humor
- Dataset: HUHU 2023
- Forum: IberLEF
- Competition: HUHU: HUrtful HUmour - Detection of humour spreading prejudice on Twitter
- Domain: Social
- Language(s): Spanish
JOKER: Pun location in Spanish CLEF 2023
- NLP topic: processing humor
- Dataset: JOKER 2023 ES
- Forum: CLEF
- Competition: JOKER: Automatic Wordplay Analysis
- Domain: Social
- Language(s): Spanish
Humor logic mechanism classification IberLEF 2021
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: Detecting, Rating and Analyzing Humor in Spanish
- Domain:
- Language(s): Spanish
Humor detection IberLEF 2021
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: Detecting, Rating and Analyzing Humor in Spanish
- Domain:
- Language(s): Spanish
Fake news detection IberLEF 2021
- NLP topic: fake news detection
- Dataset: FakeDeS
- Forum: IberLEF
- Competition: FakeDeS: Fake news detection
- Domain: COVID, others
- Language(s): Spanish
Humor rating IberLEF 2021
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: Detecting, Rating and Analyzing Humor in Spanish
- Domain:
- Language(s): Spanish
Check-worthiness estimation CLEF 2021
- NLP topic: fake news detection
- Dataset: CheckThat-ES
- Forum: CLEF
- Competition: CheckThat! Lab Task 1 on Check-Worthiness Estimation in Tweets and Political Debates
- Domain: Politics
- Language(s): Spanish, English
Humor detection IberLEF 2019
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: HAHA 2019: Humor Analysis based on Human Annotation
- Domain:
- Language(s): Spanish
Irony detection IberLEF 2019
- NLP topic: processing humor
- Dataset: IDAT-SP-EU, IDAT-SP-MEX, IDAT-SP-CUBA
- Forum: IberLEF
- Competition: Irony Detection in Spanish Variants
- Domain:
- Language(s): Spanish (Cuba), Spanish (Mexico), Spanish (Spain)
Humor rating IberLEF 2019
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberLEF
- Competition: HAHA 2019: Humor Analysis based on Human Annotation
- Domain:
- Language(s): Spanish
Negation cue identification IberLEF 2019
- NLP topic: processing negation
- Dataset: NEGES
- Forum: IberLEF
- Competition: NEGES 2019 Task: Negation in Spanish
- Domain:
- Language(s): Spanish
Humor rating IberEVAL 2018
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberEVAL
- Competition: Humor Analysis based on Human Annotation (HAHA)
- Domain:
- Language(s): Spanish
Humor detection IberEVAL 2018
- NLP topic: processing humor
- Dataset: HAHA
- Forum: IberEVAL
- Competition: Humor Analysis based on Human Annotation (HAHA)
- Domain:
- Language(s): Spanish
If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, attaching a copy of the article if it is not openly published.