Publications
Work in progress

  1. Caio Corro and Emile Chapuis. Weakly supervised Fenchel-Young losses: a framework for constructing loss functions for learning from partial labels.

Preprint

  1. Emile Chapuis and over 30 co-authors. NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation. [pdf] [code]

Conferences

  1. Emile Chapuis*, Pierre Colombo*, Matthieu Labeau, and Chloé Clavel. Code-switched inspired losses for spoken dialog representations. EMNLP 2021. [pdf]
  2. Pierre Colombo, Emile Chapuis, Matthieu Labeau, and Chloé Clavel. Improving Multimodal fusion via Mutual Dependency Maximisation. EMNLP 2021. [pdf]
  3. Emile Chapuis*, Pierre Colombo*, Matteo Manica, Matthieu Labeau, and Chloé Clavel. Hierarchical pre-training for sequence labelling in spoken dialog. Findings of EMNLP 2020. [pdf]
  4. Pierre Colombo*, Emile Chapuis*, Matteo Manica, Emmanuel Vignon, Giovanna Varni, and Chloé Clavel. Guiding attention in sequence-to-sequence models for dialogue act prediction. AAAI 2020. [pdf]

* denotes equal contribution; author order was usually decided by rolling a die.