Detecting semantically ambiguous and complex entities in short, low-context settings (such as media titles, products, and groups) across 11 languages (including Spanish), in both monolingual and multilingual scenarios.
Publication
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER). In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1412–1437, Seattle, United States. Association for Computational Linguistics.
Language
Spanish
English
NLP topic
Abstract task
Dataset
Year
2022
Publication link
Ranking metric
F1
Task results
System | F1
---|---
DAMO-NLP | 0.8994
USTC-NELSLIP | 0.8544
Infrrd.ai | 0.7526
MaChAmp | 0.7520
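A minimal sketch of how an entity-level F1 (the ranking metric above) can be computed by comparing gold and predicted entity spans. The span tuples and labels below are illustrative, not taken from MultiCoNER, and the exact averaging used for the official ranking is as specified in the task paper.

```python
# Entity-level precision, recall, and F1 from exact span+type matches.
# Entities are (start, end, type) tuples; a prediction counts as a true
# positive only if both the span boundaries and the type match the gold.

def entity_f1(gold, pred):
    """Return (precision, recall, f1) over two entity sets."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # exact matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative example: 2 of 3 predictions match the gold exactly
# (the (5, 6) span is found but mislabeled).
gold = [(0, 2, "CW"), (5, 6, "PER"), (9, 11, "GRP")]
pred = [(0, 2, "CW"), (5, 6, "LOC"), (9, 11, "GRP")]
p, r, f = entity_f1(gold, pred)
print(round(f, 4))  # → 0.6667
```

Because every mislabeled or misaligned span costs both a false positive and a false negative, this metric penalizes boundary and type errors symmetrically, which is why it is the standard ranking score for NER shared tasks.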