

Reducing Gender Bias in Language Models

Introduction

This is a project for DS-GA 1012 Natural Language Understanding and Computational Semantics at New York University. Our goal is to reduce gender bias in word-level language models using a gender-equalizing loss function. The paper has been accepted to appear at the ACL 2019 Student Research Workshop, and we have also been invited to present at the 1st Workshop on Gender Bias in NLP at ACL 2019.

Paper link: https://arxiv.org/abs/1905.12801
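
The method in the paper adds a bias-penalty term to the standard language-modeling loss so that, at every prediction step, the model is pushed to assign similar probabilities to the two words of each gendered pair (e.g. he/she, man/woman). Below is a minimal PyTorch sketch of that idea; the function name, the `pair_ids` layout, the `lambda_bias` weight, and the squared log-ratio form of the penalty are our own illustrative choices rather than the repository's actual API, so see the paper for the exact formulation.

```python
import torch
import torch.nn.functional as F

def gender_equalizing_loss(logits, targets, pair_ids, lambda_bias=0.5):
    """Language-modeling cross-entropy plus a penalty on the gap between
    the probabilities the model assigns to gendered word pairs.

    logits:   (batch * seq_len, vocab_size) raw model outputs
    targets:  (batch * seq_len,) gold next-token ids
    pair_ids: (num_pairs, 2) LongTensor of (female, male) vocabulary ids
    """
    # Standard next-word prediction objective.
    lm_loss = F.cross_entropy(logits, targets)

    # Log-probabilities over the vocabulary at every prediction step.
    log_probs = F.log_softmax(logits, dim=-1)

    # log P(f) - log P(m) for each gendered pair at each step;
    # shape (batch * seq_len, num_pairs).
    diff = log_probs[:, pair_ids[:, 0]] - log_probs[:, pair_ids[:, 1]]

    # Squared log-ratio, averaged over pairs and prediction steps.
    bias_loss = diff.pow(2).mean()

    return lm_loss + lambda_bias * bias_loss
```

The weight on the bias term trades off perplexity against bias reduction, so in practice it would be tuned as a hyperparameter.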

Dataset

The dataset we use is the Daily Mail portion of the DMQA corpus: https://cs.nyu.edu/~kcho/DMQA/. We use a 5% subsample of the full dataset; the subsample can be found in Data/Sample Stories.
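
For reference, a 5% subsample of the stories can be drawn with a few lines of Python. The paths below are hypothetical and should be adjusted to wherever the DMQA archive is extracted.

```python
import random
import shutil
from pathlib import Path

# Hypothetical locations; adjust to your local copy of the DMQA stories.
SRC = Path("dailymail/stories")
DST = Path("Data/Sample Stories")
DST.mkdir(parents=True, exist_ok=True)

random.seed(0)  # fixed seed so the subsample is reproducible
stories = sorted(SRC.glob("*.story"))

# Copy a random 5% of the story files into the sample directory.
for story in random.sample(stories, k=len(stories) // 20):
    shutil.copy(story, DST / story.name)
```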

Authors

  • Yusu Qian *

  • Urwa Muaz *

  • Ben Zhang

  • Jae Won Hyun

  • * denotes equal contribution

Acknowledgments

  • Hat tip to anyone whose code was used
  • Thanks to Professor Sam Bowman and Shikha Bordia, who gave us a lot of advice on the project and the paper.
  • Thanks to Tian Liu and Qianyi Fan for proofreading our paper.
