
A rogue computer with evil intent is a fictional trope as old as computers. But today, as people around the globe turn to artificial intelligence (AI) for information and guidance, the question of whether a machine can learn morality is very real.
Researchers at the University of Washington are exploring this question through a collaboration between the Institute for Learning & Brain Sciences (I-LABS) in the College of Arts & Sciences and the Paul G. Allen School of Computer Science & Engineering in the College of Engineering.
“A large part of human life is making decisions that involve morals and values,” says Andrew Meltzoff, I-LABS co-director and professor of psychology. “As humans work more closely with machines and computer agents, our lives become more intertwined with AI making suggestions for us on decision-making. A key question becomes whether AI can learn to make value-laden decisions, considering values and norms related to the user’s culture.”
Meltzoff, who specializes in child development, explains that humans begin to develop their moral code from early childhood, through interactions with their family and community. He wondered if AI could similarly become culturally attuned and learn values and norms by observing and interacting with live human beings, as a child does.
With this in mind, he collaborated with Rajesh Rao and Katharina Reinecke in the Allen School to attempt to train an AI system to value altruism. Rao is the CJ and Elizabeth Hwang Professor in the Allen School and Department of Electrical and Computer Engineering. Reinecke is a professor and the associate director of research and communication in the Allen School.
Altruism as a Dataset
The altruism project builds on previous research Meltzoff conducted with former postdoctoral researcher Rodolfo Cortes Barragan (now an assistant professor of psychology at San Diego State University) concerning young children’s development of altruistic behavior. Meltzoff and Barragan found that children as young as 19 months old show signs of altruism, with parenting beliefs and cultural background being key factors in developing altruistic behaviors.
The researchers also found that children raised in Latino American culture demonstrated extraordinary generosity, sharing, and caring compared with other children in the study. Past research has shown that Latino Americans* tend to prioritize group harmony and the welfare of others over personal gain, whereas white, non-Latino Americans tend to prioritize autonomy and personal achievement.

Those cultural differences played a key role in Meltzoff, Rao, and Reinecke’s project to train AI to value altruism. Three hundred participants were recruited for the project; 110 were Latino American and the remaining 190 were white, non-Latino American. All were asked to play an online game that, unbeknownst to them, was designed with opportunities to make altruistic decisions.
In the game, two chefs compete for points by preparing as many meals as possible within a limited time. The human participants were unaware that their competing chef was a bot, controlled by a computer program. The bot chef was stationed farther away from the ingredients and would occasionally ask the human chef to help them by handing over ingredients. Many participants did help despite the time it took from their own tasks, thus preventing them from racking up points — a “cost” of helping. As anticipated from past research, the altruistic responses were far more pronounced among the Latino American participants.
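For readers curious how gameplay like this becomes training data, here is a minimal, purely hypothetical sketch of how a single in-game decision might be logged. The field names and values are assumptions made for exposition, not the study’s actual data format.

```python
# A purely hypothetical record of a single in-game decision; field names and
# values are illustrative assumptions, not the study's actual data format.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    bot_requested_help: bool       # did the bot chef ask for an ingredient?
    helped: bool                   # did the participant hand it over?
    seconds_spent_helping: float   # time diverted from the participant's own meals
    points_forgone: int            # the "cost" of helping, in lost points

# One altruistic choice: the participant pauses their own cooking to help.
example = DecisionRecord(
    bot_requested_help=True,
    helped=True,
    seconds_spent_helping=6.5,
    points_forgone=2,
)
print(example)
```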
Using data from the game, the researchers then created four training datasets for the AI model: an altruistic dataset, a non-altruistic dataset, a Latino American dataset, and a white, non-Latino American dataset. They trained the AI using Inverse Reinforcement Learning (IRL), a technique that enables AI to learn a new skill from human demonstrations, with positive or negative values assigned to particular features of the demonstrations. By using data from individuals making explicit value judgments, they sought to emulate the way a child learns through observation and interactions with people in their culture.
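As a rough illustration of the technique, the sketch below fits a simple linear reward from demonstration data using feature-expectation matching, a common simplification of IRL. Everything in it, from the two hypothetical features (points earned, help given) to the numbers, is an assumption made for exposition; it is not the researchers’ model or data.

```python
# A minimal sketch of learning a reward from demonstrations, in the general
# spirit of Inverse Reinforcement Learning. Features, numbers, and the
# two-action game below are illustrative assumptions, not the study's model.
import numpy as np

# Each demonstrated choice is summarized by two hypothetical features:
#   [points earned for oneself, help given to the other chef]
altruistic_demos = np.array([[0.4, 0.9], [0.5, 0.8], [0.3, 1.0]])
selfish_demos    = np.array([[1.0, 0.1], [0.9, 0.0], [0.8, 0.2]])

# Two candidate actions at a single decision point:
# keep cooking (high points, no help) vs. hand over an ingredient.
actions = np.array([[1.0, 0.0],   # keep cooking
                    [0.4, 0.9]])  # stop and help

def fit_reward_weights(demos, actions, lr=0.1, steps=500):
    """Fit linear reward weights w so that a softmax policy over the actions
    reproduces the demonstrators' average feature use."""
    w = np.zeros(actions.shape[1])
    demo_mean = demos.mean(axis=0)
    for _ in range(steps):
        logits = actions @ w                  # reward of each action
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                  # softmax policy
        policy_mean = probs @ actions         # expected features under policy
        w += lr * (demo_mean - policy_mean)   # match feature expectations
    return w

for label, demos in [("altruistic", altruistic_demos), ("selfish", selfish_demos)]:
    w = fit_reward_weights(demos, actions)
    logits = actions @ w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print(f"{label}: reward weights {np.round(w, 2)}, "
          f"probability of helping {probs[1]:.2f}")
```

In this toy setup, the weights fit to the altruistic demonstrations favor the help feature, so the resulting policy chooses to help far more often, mirroring the qualitative pattern the researchers describe.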
“After we trained the AI, we had the AI play the same game the participants played,” says Meltzoff. “It turned out that the AI that was trained on the Latino and altruistic datasets was more generous — which suggests that AI can learn altruistic-like values and behaviors if given data from altruistic humans to begin with. Our group is thinking about the idea that in the future the ‘values’ expressed by an AI will depend on the culture in which it is reared, who ‘raises’ it.”
In an upcoming paper, Meltzoff, Rao, Reinecke, and co-authors also note that AI trained on a particular group’s dataset not only acquired that group’s altruistic behaviors but also could generalize to novel scenarios requiring new altruistic judgments, going beyond the game on which it had been trained.
An Interdisciplinary Center for Globally Beneficial AI
Meltzoff is quick to point out that this research is intended only as an initial “proof of concept,” to explore whether AI can implicitly learn cultural values. Translating that to a broader scale will be challenging, given the diverse values of different communities around the world. But he believes sensitivity to those culturally typical values will make AI a more effective tool for users.
“Most AI is trained on data from North America and Europe,” says Meltzoff. “People around the world want to use AI to help them, but the approaches and values in their local culture may not be the same as those embodied in the AI’s training set. It can be disconcerting, even anxiety producing, when there’s a significant discrepancy between your values and the answers and recommendations you get from AI.”
An example might be teachers in Seattle and East Asia asking AI for suggestions to improve their interactions with students. The AI responses are likely to be similar, but perhaps they should not be, since student expectations and the role of the mentor in the two cultures are quite different.
The researchers hope that companies recognize the benefits of developing culturally attuned AI, both for users and for the tech industry. Work in this area is expected to ramp up at the UW thanks to the newly launched Center for Globally Beneficial AI, with Reinecke as director. Meltzoff and Kurtis Heimerl, associate professor in the Allen School, are co-associate directors. The Center aims to design equitable, responsive AI technologies for cultures and communities around the world.
“I think the University of Washington is very much on the cutting edge of this work,” Meltzoff says. “With our expertise in Arts & Sciences, the Allen School, Foster School of Business, Jackson School of International Studies, iSchool, and other units across the UW campuses, we have the people and ideas to make meaningful advances in globally beneficial AI. The UW values interdisciplinary work, and this is a perfect instance of how such collaborations can accelerate progress in newly emerging fields.”
*The study uses the term “Latino” to denote anyone born in or with ancestors from Latin America, regardless of gender.