
Artificial Intelligence for Social Good: When Machines Learn Human-like Biases from Data

Developing machine learning methods theoretically grounded in implicit social cognition reveals that unsupervised machine learning captures associations, including human-like biases, objective facts, and historical information, from the hidden patterns in datasets. Machines that learn representations of language from corpora embed the biases reflected in the statistical regularities of language. Similarly, image representations in computer vision contain biases that stem from stereotypical portrayals in vision datasets. On the one hand, principled methodologies for measuring associations in artificial intelligence provide a systematic approach to studying society, language, vision, and learning. On the other hand, these methods reveal potentially harmful biases in artificial intelligence applications built on general-purpose representations. As algorithms accelerate consequential decision-making processes ranging from employment and university admissions to law enforcement and content moderation, open problems remain regarding the propagation and mitigation of biases in the expanding machine learning pipeline.
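One widely cited methodology of this kind is the Word Embedding Association Test (WEAT), which quantifies the differential association between two sets of target words and two sets of attribute words using cosine similarity. The following is a minimal sketch of that idea; the word sets and randomly initialized vectors below are placeholders for illustration only, and an actual measurement would load pre-trained embeddings such as word2vec or GloVe.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): how much closer word w is to attribute set A than to B
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Cohen's-d-style effect size of the differential association
    # between target sets X, Y and attribute sets A, B
    s_x = [association(x, A, B, emb) for x in X]
    s_y = [association(y, A, B, emb) for y in Y]
    pooled = np.array(s_x + s_y)
    return (np.mean(s_x) - np.mean(s_y)) / np.std(pooled, ddof=1)

# Toy random "embeddings" purely for illustration; a real test would
# use vectors learned from a corpus, where stereotypical associations
# show up as systematic geometric regularities.
rng = np.random.default_rng(0)
X = ["flower", "rose", "daisy"]          # target set 1 (placeholder)
Y = ["insect", "ant", "spider"]          # target set 2 (placeholder)
A = ["pleasant", "love", "peace"]        # attribute set 1 (placeholder)
B = ["unpleasant", "hatred", "agony"]    # attribute set 2 (placeholder)
emb = {w: rng.normal(size=50) for w in X + Y + A + B}

print(f"effect size: {weat_effect_size(X, Y, A, B, emb):.3f}")
```

A positive effect size indicates that the first target set is more strongly associated with the first attribute set; with real embeddings, statistical significance is typically assessed with a permutation test over partitions of the target words.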


Room 409