Researchers created a machine-learning algorithm that allowed robots to “intuit” how to behave like humans in certain situations, like being quiet in a library.
Humans know to silence a ringing phone when they’re in a quiet library, and to say “thank you” after someone lends a helping hand. Now, robots will learn this etiquette as well, thanks to a research project that aims to teach robots manners.
By teaching robots such social norms, researchers think the machines could interact with humans more seamlessly. The initial stages of the project were recently completed by a team of researchers funded by the Defense Advanced Research Projects Agency (DARPA), a branch of the U.S. Department of Defense dedicated to developing new military technologies.
The researchers studied how humans recognize and react to social norms, and developed a machine-learning algorithm that allows a robot to learn these “manners” by drawing on human data.
According to the researchers, an artificially intelligent (AI) system could eventually “intuit” how to behave in certain situations, just as people do.
“If we’re going to get along as closely with future robots, driverless cars, and virtual digital assistants in our phones and homes as we envision doing so today, then those assistants are going to have to obey the same norms we do,” Reza Ghanadan, a DARPA program manager, said in a statement.
Using the library example, the researchers said a current AI phone-answering system would not automatically respond by silencing the phone’s ring or speaking quietly.
Silencing a phone in a library is just one of a long list of social and ethical norms that machines need to be taught but that humans learn from childhood. People can also learn new norms more easily because they already have a complex network of norms to draw on, according to the researchers.
“That’s something we are all familiar with, since ‘normal’ people detect norm violations very quickly,” Ghanadan said. “The uncertainty inherent in these kinds of human data inputs makes machine learning of human norms extremely difficult.”
Thus far, the DARPA project has provided a framework for such machine learning, but more work remains, the researchers said. Ghanadan added that a robot would need the capacity to learn, represent, activate and apply the large number of norms that people expect others to obey.
The researchers said that by creating a framework for developing and testing these complex “manners” algorithms, the project could help accelerate machines’ ability to learn and mimic humans on their best behavior.
Original article on Live Science.