Open access
Author
Date
2023
Type
Doctoral Thesis
ETH Bibliography
yes
Abstract
Deep neural network architectures have led to remarkable achievements in natural language processing (NLP) in recent years. By scaling up model size and pre-training in a self-supervised fashion on the vast amounts of textual data available on the internet, these models have unlocked generalization and complex reasoning capabilities, even when provided with only a small number of task-specific examples. However, most progress in NLP has been made under a static learning paradigm in which models are trained once on a fixed dataset to learn a specific skill and remain fixed thereafter. In this thesis, we turn our attention to interactive agents for NLP, i.e., language-based models that engage with a dynamic environment or user. Across three application areas, (i) text-based games, (ii) query reformulation, and (iii) conversation, we investigate and develop agents that interact with different forms of adaptive environments.
The thesis is structured into three parts, reflecting the three application areas. In the first part, we develop a deep reinforcement learning (RL) agent for text-based games that generalizes across families of games that share a common structure but introduce new objects and instructions.
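As a loose illustration of the text-based game setting, the sketch below (assuming PyTorch) shows a policy-gradient agent that scores admissible text commands against the current observation and is updated with REINFORCE. The bag-of-words scorer and all names are hypothetical stand-ins for illustration, not the agent developed in the thesis.

```python
# Minimal sketch: a text-game agent that scores candidate commands against
# the observation and is trained with REINFORCE. Hypothetical design.
import torch
import torch.nn as nn

class TextScorer(nn.Module):
    """Embeds observation and candidate commands with a shared mean-pooled
    bag-of-words encoder and scores each command by dot product."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # default mode: mean

    def forward(self, obs_ids, cmd_ids_list):
        obs_vec = self.embed(obs_ids.unsqueeze(0)).squeeze(0)     # (dim,)
        cmd_vecs = torch.stack([self.embed(c.unsqueeze(0)).squeeze(0)
                                for c in cmd_ids_list])           # (n_cmds, dim)
        return cmd_vecs @ obs_vec                                 # (n_cmds,)

def reinforce_step(model, optimizer, episode, gamma: float = 0.99):
    """One REINFORCE update from a finished episode of
    (obs_ids, cmd_ids_list, action_index, reward) tuples."""
    returns, g = [], 0.0
    for *_, r in reversed(episode):     # discounted returns, back to front
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    loss = torch.tensor(0.0)
    for (obs, cmds, a, _), g in zip(episode, returns):
        logp = torch.log_softmax(model(obs, cmds), dim=0)[a]
        loss = loss - g * logp          # maximize expected discounted return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Scoring a closed set of admissible commands (rather than generating free-form text) is one common simplification in text-game RL; generalization across game families then hinges on the text encoder rather than a fixed action set.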
The second part focuses on query reformulation, which we approach from two angles. First, we consider the learning-to-search problem, where an agent is trained to interact with an information retrieval (IR) system using natural language. Observing the IR component's results, it adapts the initial user query and collects an improved set of evidence documents. Within this setting, we develop two agents that learn successful interactive search strategies: one trained by pure reinforcement learning and the other through (self-)supervised learning. In the subsequent chapter, we turn our attention to neural retrieval models and develop agents for interactive query suggestions. To this end, we train a query decoder model that, given a point in the shared paragraph-query embedding space, generates the corresponding query in textual form. We employ this decoder to generate a synthetic dataset of directional query refinements, which we use to train a powerful reformulation model.
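To make the embedding-space view concrete, here is a hedged sketch of directional query refinement: a query embedding is moved toward a target paragraph in the shared space and the result is decoded back to text. The `encode` and `decode_query` functions are placeholder stubs for the retriever's encoder and the trained query decoder, and the linear interpolation scheme is an assumption for illustration, not the thesis's exact procedure.

```python
# Hedged sketch of directional query refinement in a shared
# paragraph-query embedding space. `encode` and `decode_query` are
# hypothetical stand-ins for a trained dual encoder and query decoder.
import numpy as np

def encode(text: str) -> np.ndarray:
    """Placeholder: a real system would use the retriever's text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def decode_query(embedding: np.ndarray) -> str:
    """Placeholder: the thesis trains a decoder that maps a point in the
    shared space back to query text; here we only stub the interface."""
    return f"<query decoded near embedding, norm {np.linalg.norm(embedding):.2f}>"

def directional_refinement(query: str, target_paragraph: str,
                           alpha: float = 0.5) -> str:
    """Move the query embedding a fraction `alpha` toward a target
    paragraph and decode the result as a suggested reformulation."""
    q, p = encode(query), encode(target_paragraph)
    refined = (1 - alpha) * q + alpha * p
    refined /= np.linalg.norm(refined)      # stay on the unit sphere
    return decode_query(refined)

print(directional_refinement(
    "neural retrieval models",
    "Dense passage retrieval encodes queries and passages jointly..."))
```

Decoding interpolated points like this is one way such a decoder could be used to synthesize directional refinement pairs for training a reformulation model.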
In the last part of the thesis, we propose different approaches to developing conversational agents. We suggest modularizing the architecture of dialogue models to output intermediate text sequences on which subsequent modules are conditioned. First, we show that generating the knowledge output as an intermediate step before the dialogue response can increase knowledge utilization and factual correctness in open-domain dialogue. Next, we develop a single model that sequentially generates (i) a search engine query, (ii) a knowledge output, and (iii) a final response. We show that it outperforms previous state-of-the-art dialogue models on knowledge-grounded conversation and, applied to topical prompt completions, improves upon models with a vastly larger number of parameters. Finally, we explore improving dialogue models after deployment and propose an objective that allows iteratively training a language model on binary-labeled examples of its own generations.
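The abstract leaves the final objective unspecified. As a loose illustration only, the sketch below (assuming PyTorch) shows one plausible way to learn from binary feedback on a model's own generations: a standard likelihood term for positively labeled sequences and an unlikelihood-style penalty for negatively labeled ones. This is a hypothetical instantiation, not necessarily the objective proposed in the thesis.

```python
# Hedged sketch: training on binary-labeled examples of a model's own
# generations. Positives get standard negative log-likelihood; negatives
# get an unlikelihood-style penalty, -log(1 - p(token)).
import torch
import torch.nn.functional as F

def binary_feedback_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         label: int) -> torch.Tensor:
    """logits: (seq_len, vocab) from the LM; targets: (seq_len,) token ids
    of the model's generation; label: 1 if rated good, 0 if rated bad."""
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(1, targets.unsqueeze(1)).squeeze(1)  # (seq_len,)
    if label == 1:
        return -tok_logp.mean()                 # maximize p(good generation)
    # push probability mass away from tokens of a bad generation
    return -torch.log1p(-tok_logp.exp() + 1e-6).mean()
```

In an iterative deployment loop, the model would generate responses, collect binary ratings on them, apply this loss, and repeat, so the training distribution tracks the model's current behavior.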
Persistent Link
https://doi.org/10.3929/ethz-b-000613568
Publication Status
published
External Links
Search for a print copy via ETH Library
Contributors
Examiner: Hofmann, Thomas
Examiner: Sachan, Mrinmaya
Examiner: Ciaramita, Massimiliano
Examiner: Weston, Jason
Publisher
ETH Zurich
Subject
Natural Language Processing
Organisational Unit
09462 - Hofmann, Thomas / Hofmann, Thomas