Show simple item record

dc.contributor.author: Amrhein, Chantal
dc.contributor.author: Schottmann, Florian
dc.contributor.author: Sennrich, Rico
dc.contributor.author: Läubli, Samuel
dc.contributor.editor: Rogers, Anna
dc.contributor.editor: Boyd-Graber, Jordan
dc.contributor.editor: Okazaki, Naoaki
dc.date.accessioned: 2024-04-30T15:17:40Z
dc.date.available: 2024-04-22T06:30:53Z
dc.date.available: 2024-04-30T15:17:40Z
dc.date.issued: 2023-07
dc.identifier.isbn: 978-1-959429-72-2
dc.identifier.other: 10.18653/v1/2023.acl-long.246
dc.identifier.uri: http://hdl.handle.net/20.500.11850/669691
dc.description.abstract: Natural language generation models reproduce and often amplify the biases present in their training data. Previous research explored using sequence-to-sequence rewriting models to transform biased model outputs (or original texts) into more gender-fair language by creating pseudo training data through linguistic rules. However, this approach is not practical for languages with more complex morphology than English. We hypothesise that creating training data in the reverse direction, i.e. starting from gender-fair text, is easier for morphologically complex languages and show that it matches the performance of state-of-the-art rewriting models for English. To eliminate the rule-based nature of data creation, we instead propose using machine translation models to create gender-biased text from real gender-fair text via round-trip translation. Our approach allows us to train a rewriting model for German without the need for elaborate handcrafted rules. The outputs of this model increased gender-fairness as shown in a human evaluation study.
dc.language.iso: en
dc.publisher: Association for Computational Linguistics
dc.title: Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model
dc.type: Conference Paper
ethz.book.title: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers
ethz.pages.start: 4486
ethz.pages.end: 4506
ethz.event: 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)
ethz.event.location: Toronto, Canada
ethz.event.date: July 9-14, 2023
ethz.identifier.wos
ethz.publication.place: Stroudsburg, PA
ethz.publication.status: published
ethz.date.deposited: 2024-04-22T06:30:57Z
ethz.source: WOS
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2024-04-30T15:17:41Z
ethz.rosetta.lastUpdated: 2024-04-30T15:17:41Z
ethz.rosetta.versionExported: true
ethz.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Exploiting%20Biased%20Models%20to%20De-bias%20Text:%20A%20Gender-Fair%20Rewriting%20Model&rft.date=2023-07&rft.spage=4486&rft.epage=4506&rft.au=Amrhein,%20Chantal&Schottmann,%20Florian&Sennrich,%20Rico&L%C3%A4ubli,%20Samuel&rft.isbn=978-1-959429-72-2&rft.genre=proceeding&rft_id=info:doi/10.18653/v1/2023.acl-long.246&rft.btitle=Proceedings%20of%20the%2061st%20Annual%20Meeting%20of%20the%20Association%20for%20Computational%20Linguistics,%20Volume%201:%20Long%20Papers

Files in this item

There are no files associated with this item.