Campus News

Harvard professor discusses ethical challenges facing AI

Believe it or not, one of the “most intelligent” machines out there is the one that cleans your house while you’re doing something else, said Barbara Grosz, Higgins Professor of Natural Sciences at Harvard University.

Many people may not realize the Roomba is an example of artificial intelligence, said Grosz, who gave the Phi Beta Kappa Visiting Scholar Lecture on March 27. But the field is actually much broader than most people know, encompassing everything from the voice-activated technology in a smartphone to trading algorithms guiding decisions on Wall Street to the recommendations you see when you open Netflix or Facebook.

“There are now a range of kinds of applications, which you see in daily life,” said Grosz, an AI expert whose contributions include establishing the research field of computational modeling of discourse and developing collaborative multi-agent systems for human-computer communication. “Interpreting languages, learning, drawing inferences, making decisions and acting on those decisions—these are all things that systems that have some AI component in them are able to do.”

Grosz joked that she never expected AI to become so intriguing that people outside the sciences would want to learn more about it, but shows like HBO’s Westworld and the rise of intelligent personal assistants like Amazon’s Alexa and Apple’s Siri have helped thrust the field into the mainstream.

But with the proliferation of smart devices and gadgets come serious ethical concerns, Grosz said.

For instance, Mattel’s Hello Barbie, a smart doll designed for young children, has a button children press to have “conversations” with Barbie. The doll is programmed with basic response templates and fills in their blanks based on what the child tells it, eventually tailoring its responses as it learns the child’s likes and dislikes. But the child’s sentences are recorded and analyzed in the cloud.
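As a rough sketch of the template-filling approach Grosz described, a system of this kind might pick a scripted line and fill in its blank with a detail remembered from earlier chats. The Python below is purely illustrative, not Mattel’s actual Hello Barbie software; every name, template and detail is invented:

```python
# Illustrative sketch only: a toy template-based "conversation" in the
# spirit of what Grosz described, NOT Mattel's actual implementation.
import random

# Scripted response templates, each with a blank to fill in.
TEMPLATES = [
    "I remember you love {favorite_thing}! Tell me more about it.",
    "What did you do today? I hope it involved {favorite_thing}!",
]

def respond(child_utterance: str, profile: dict) -> str:
    """Pick a scripted template and fill its blank from the child's profile."""
    # Naive "learning": remember anything the child says they love.
    if "i love" in child_utterance.lower():
        profile["favorite_thing"] = (
            child_utterance.lower().split("i love", 1)[1].strip(" .!?")
        )
    # In the real product, the child's utterance is also recorded and
    # analyzed in the cloud -- which is the ethical concern Grosz raises.
    return random.choice(TEMPLATES).format(**profile)

profile = {"favorite_thing": "dinosaurs"}   # details "learned" earlier
print(respond("I love puppies!", profile))  # fills the blank with "puppies"
```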

“Is it ethical to encourage a child to confide in a computer system? To trust the advice it will give a child?” Grosz asked.

Such toys aren’t meant to completely replace human interaction, but in the case of Westworld and even the smart Barbie, they give people the opportunity to mistreat systems that are meant to represent people, she said.

AI also is used in some cases to predict criminal behavior, with technology that uses statistics to estimate who is most likely to commit a crime, reoffend or become the victim of a crime. But that technology should be used as a source of supplemental information to inform decision-making, not to actually make decisions itself, Grosz said.
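To make that distinction concrete, a risk tool built along these lines might surface a statistical score for a human reviewer rather than issuing a decision. The sketch below is hypothetical; its features, weights and output wording are invented for illustration:

```python
# Hypothetical sketch: a statistical risk score offered as supplemental
# information for a human decision-maker, never as an automated decision.
import math

# Invented weights; a real system would estimate these from data.
WEIGHTS = {"prior_arrests": 0.8, "age_under_25": 0.5}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Logistic model: squash a weighted feature sum into a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def advise(features: dict) -> str:
    """Return a score for human review; the system itself decides nothing."""
    return (f"Estimated risk: {risk_score(features):.0%}. "
            "Supplemental information for human review only.")

print(advise({"prior_arrests": 2, "age_under_25": 1}))  # ~52%
```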

“We should be complementing people, not replacing them, because all systems are likely to encounter conditions or questions their creators didn’t anticipate,” Grosz said. “To be truly smart, they need to work well (and) they need to complement people.”