Area Keywords at ARR

At present, ARR offers 26 areas, including two added in April 2024 in response to many requests: Language Modeling and Human-Centered NLP.

Choosing the right area can be tricky for authors, so to help them, this page lists the keywords associated with each ARR area. The list is based on the ACL 2023 keyword list, and we expect it to change over time. It is not meant to be exhaustive, but it should still give some idea of the focus of each area. In the future, these keywords can also be used to analyze submission volume for different subtopics (so that we can tell which subtopics are growing and may need areas of their own) and for paper-reviewer matching.
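
As a rough illustration of the kind of subtopic-volume analysis mentioned above (a minimal sketch, not ARR's actual tooling; the abbreviated keyword lists, function names, and demo abstracts below are made up for the example), one could match each area's keywords against submission abstracts and count the hits:

```python
# Illustrative sketch only: count how many submission abstracts mention at
# least one keyword from each area. The two areas and the demo abstracts are
# abbreviated, made-up examples, not the full ARR keyword lists.
from collections import Counter

AREA_KEYWORDS = {
    "Machine Translation": ["domain adaptation", "multilingual MT", "speech translation"],
    "Summarization": ["abstractive summarization", "sentence compression", "factuality"],
}


def areas_mentioned(abstract: str) -> set:
    """Return the areas whose keywords appear (case-insensitively) in the abstract."""
    text = abstract.lower()
    return {area for area, keywords in AREA_KEYWORDS.items()
            if any(kw.lower() in text for kw in keywords)}


def submission_volume(abstracts: list) -> Counter:
    """Count, per area, how many abstracts mention at least one of its keywords."""
    counts = Counter()
    for abstract in abstracts:
        counts.update(areas_mentioned(abstract))
    return counts


if __name__ == "__main__":
    demo_abstracts = [
        "We study domain adaptation for multilingual MT.",
        "A factuality benchmark for abstractive summarization.",
    ]
    print(submission_volume(demo_abstracts))
    # Counter({'Machine Translation': 1, 'Summarization': 1})
```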

How does one choose the right area, given that there are so many overlaps? Here are a few examples:

  • Should a question answering resource paper go to the resource area or the question answering area? Generally, a task-specific area takes both modeling and resource papers, but if your paper is primarily about data collection methodology, you may still want to consider the resource area.
  • What about a paper on multilingual generation? It could go to the generation area if the focus is on generation strategies, or to the multilingualism area if the focus is on the languages themselves.
  • What about work in low-resource settings: is it efficiency or multilingualism? If the focus is on training methods, and the low-resource setting is simulated from a well-resourced language, we suggest the efficiency area. If you are contributing resources, analysis, or new solutions for languages that have not had them before, consider the multilingualism area.

As of April 2024, the areas are:

  • Computational Social Science and Cultural Analytics: human behavior analysis; stance detection; frame detection and analysis; hate-speech detection; misinformation detection and analysis; psycho-demographic trait prediction; emotion detection and analysis; emoji prediction and analysis; language/cultural bias analysis; human-computer interaction; sociolinguistics; NLP tools for social analysis; quantitative analyses of news and/or social media;
  • Dialogue and Interactive Systems: spoken dialogue systems; evaluation and metrics; task-oriented; human-in-the-loop; bias/toxicity; factuality; retrieval; knowledge augmented; commonsense reasoning; interactive storytelling; embodied agents; applications; multi-modal dialogue systems; grounded dialog; multilingual / low resource; dialogue state tracking; conversational modeling;
  • Discourse and Pragmatics: anaphora resolution; coreference resolution; bridging resolution; coherence; cohesion; discourse relations; discourse parsing; dialogue; conversation; discourse and multilinguality; argument mining; communication;
  • Efficient/Low-Resource Methods for NLP: quantization; pruning; distillation; parameter-efficient-training; data-efficient training; data augmentation; NLP in resource-constrained settings;
  • Ethics, Bias, and Fairness: data ethics; model bias/fairness evaluation; model bias/unfairness mitigation; ethical considerations in NLP applications; transparency; policy and governance; reflections and critiques;
  • Generation: human evaluation; automatic evaluation; multilingualism; efficient models; few-shot generation; analysis; domain adaptation; data-to-text generation; text-to-text generation; inference methods; model architectures; retrieval-augmented generation; interactive and collaborative generation;
  • Human-Centered NLP: human-in-the-loop; human-AI interaction; user-centered design; value-centered design; human factors in NLP; participatory/community-based NLP; values and culture; human-centered evaluation;
  • Information Extraction: named entity recognition and relation extraction; event extraction; open information extraction; knowledge base construction; entity linking/disambiguation; document-level extraction; multilingual extraction; zero/few-shot extraction;
  • Information Retrieval and Text Mining: passage retrieval; dense retrieval; document representation; hashing; re-ranking; pre-training; contrastive learning;
  • Interpretability and Analysis of Models for NLP: adversarial attacks/examples/training; calibration/uncertainty; counterfactual/contrastive explanations; data influence; data shortcuts/artifacts; explanation faithfulness; feature attribution; free-text/natural language explanations; hardness of samples; hierarchical & concept explanations; human-subject application-grounded evaluations; knowledge tracing/discovering/inducing; probing; robustness; topic modeling;
  • Language Modeling: pre-training; prompting; scaling; sparse models; retrieval-augmented models; continual learning; security and privacy; red teaming; applications; robustness; fine-tuning;
  • Linguistic Theories, Cognitive Modeling, and Psycholinguistics: linguistic theories; cognitive modeling; computational psycholinguistics;
  • Machine Learning for NLP: graph-based methods; knowledge-augmented methods; multi-task learning; self-supervised learning; contrastive learning; generative models; data augmentation; word embeddings; structured prediction; transfer learning / domain adaptation; representation learning; generalization; few-shot learning; reinforcement learning; optimization methods; continual learning; adversarial training; meta learning; causality; graphical models; human-in-the-loop / active learning;
  • Machine Translation: automatic evaluation; biases; domain adaptation; efficient inference for MT; efficient MT training; few-shot/zero-shot MT; human evaluation; interactive MT; MT deployment and maintenance; MT theory; modeling; multilingual MT; multimodality; online adaptation for MT; parallel decoding/non-autoregressive MT; pre-training for MT; scaling; speech translation; switch-code translation; vocabulary learning;
  • Multilingualism and Cross-Lingual NLP: code-switching; mixed language; multilingualism; language contact; language change; linguistic variation; cross-lingual transfer; multilingual representations; multilingual pre-training; multilingual benchmarks; multilingual evaluation; dialects and language varieties; less-resourced languages; endangered languages; indigenous languages; minoritized languages; language documentation; resources for less-resourced languages; software and tools;
  • Multimodality and Language Grounding to Vision, Robotics and Beyond: vision language navigation; cross-modal pretraining; image text matching; cross-modal content generation; vision question answering; cross-modal application; cross-modal information extraction; cross-modal machine translation; automatic speech recognition; spoken language understanding; spoken language translation; spoken language grounding; speech and vision; QA via spoken queries; spoken dialog; video processing; speech technologies; multimodality;
  • NLP Applications: educational applications, GEC, essay scoring; hate speech detection; multimodal applications; code generation and understanding; fact checking, rumor/misinformation detection; healthcare applications, clinical NLP; financial/business NLP; legal NLP; mathematical NLP; security/privacy; historical NLP; knowledge graphs;
  • Phonology, Morphology, and Word Segmentation: morphological inflection; paradigm induction; morphological segmentation; subword representations; Chinese segmentation; lemmatization; finite-state morphology; morphological analysis; phonology; grapheme-to-phoneme conversion; pronunciation modeling;
  • Question Answering: commonsense QA; reading comprehension; logical reasoning; multimodal QA; knowledge base QA; semantic parsing; multihop QA; biomedical QA; multilingual QA; interpretability; generalization; reasoning; conversational QA; few-shot QA; math QA; table QA; open-domain QA; question generation;
  • Resources and Evaluation: corpus creation; benchmarking; language resources; multilingual corpora; lexicon creation; automatic creation and evaluation of language resources; NLP datasets; automatic evaluation of datasets; evaluation methodologies; evaluation; datasets for low resource languages; metrics; reproducibility; statistical testing for evaluation;
  • Semantics: Lexical and Sentence-Level: polysemy; lexical relationships; textual entailment; compositionality; multi-word expressions; metaphor; lexical semantic change; word embeddings; lexical resources; paraphrase recognition; natural language inference; semantic textual similarity; phrase/sentence embedding; paraphrasing; text simplification; word/phrase alignment;
  • Sentiment Analysis, Stylistic Analysis, and Argument Mining: argument mining; stance detection; argument quality assessment; rhetoric and framing; argument schemes and reasoning; argument generation; style analysis; style generation; applications;
  • Speech Recognition, Text-to-Speech and Spoken Language Understanding: automatic speech recognition; speech technologies; spoken dialog; spoken language grounding; speech and vision; spoken language translation; spoken language understanding; QA via spoken queries;
  • Summarization: extractive summarization; abstractive summarization; multimodal summarization; multilingual summarization; conversational summarization; query-focused summarization; multi-document summarization; long-form summarization; sentence compression; few-shot summarization; architectures; evaluation; factuality;
  • Syntax: Tagging, Chunking and Parsing: chunking, shallow-parsing; part-of-speech tagging; dependency parsing; constituency parsing; deep syntax parsing; semantic parsing; syntax-to-semantic interface; optimized annotations or datasets for morpho-syntax related tasks; parsing algorithms (symbolic, theoretical results); grammar and knowledge-based approaches; multi-task approaches (large definition); massively multilingual oriented approaches; low-resource languages POS tagging, parsing and related tasks; morphologically-rich languages POS tagging, parsing and related tasks;
  • Special Theme Track: this area is conference-specific and is usually described in the CFP for each conference.