How humans are teaching AI to become better at second-guessing

  Last updated March 26, 2020 at 8:24 am

Researchers are teaching AI systems the subtleties of human behaviour so they can better predict what we want.



While AI systems are being taught to act like humans, working independently is still a long way away. Credit: sompong_tom




Why This Matters: Properly designed, artificial intelligence could assist humans not as tools, but as partners.




One of the holy grails in the development of artificial intelligence (AI) is giving machines the ability to predict intent when interacting with humans.


We humans do it all the time, without even being aware of it: we observe, we listen, and we draw on past experience to reason about what someone is doing and why they are doing it, so as to predict what they will do next.


At the moment, AI may do a plausible job of detecting the intent of another person after the fact. Or it may even have a list of predefined, possible responses with which a human might respond in a given situation. But when an AI system or machine has only a few clues or partial observations to go on, its responses can sometimes be a little… robotic.


Humans and AI systems


Lina Yao from UNSW Sydney is leading a project to get AI systems and human-machine interfaces up to speed with the finer nuances of human behaviour.


She says the ultimate goal is for her research to be used in autonomous AI systems, robots and even cyborgs, but the first step is focused on the interface between humans and intelligent machines.


“What we’re doing in these early phases is to help machines learn to act like humans based on our daily interactions and the actions that are influenced by our own judgment and expectations – so that they can be better placed to predict our intentions,” she says.




Deeper: In a World of Algorithms, What Is The Importance of Being Human?




“In turn, this may even lead to new actions and decisions of our own, so that we establish a cooperative relationship.”


Yao would like to see awareness of less obvious examples of human behaviour integrated into AI systems to improve intent prediction.


Things like gestures, eye movement, posture, facial expression and even micro-expressions – the tell-tale physical signs when someone reacts emotionally to a stimulus but tries to keep it hidden.


This is a tall order, as humans themselves are not infallible when trying to predict the intention of another person.


“Sometimes people may take some actions that deviate from their own regular habits, which may have been triggered by the external environment or the influence of another person’s actions,” she says.


All the right moves


Nevertheless, making AI systems and machines more finely tuned to the ways that humans initiate an action is a good start. To that end, Yao and her team are developing a prototype human-machine interface system designed to capture the intent behind human movement.




Also: The artificial intelligence future is on the edge




“We can learn and predict what a human would like to do when they’re wearing an EEG [electroencephalogram] device,” Yao says.


“While wearing one of these devices, whenever the person makes a movement, their brainwaves are collected which we can then analyse.


“Later we can ask people to think about moving with a particular action – such as raising their right arm. So not actually raising the arm, but thinking about it, and we can then collect the associated brain waves.”
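The pipeline Yao describes, recording brainwaves while a person imagines an action and then learning the pattern associated with each action, can be sketched as a toy classifier. This is a minimal illustration, not her team's method: the data is synthetic noise, the "band power" feature is a crude proxy, and a real system would use a dedicated EEG toolbox such as MNE-Python.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power(epoch):
    """Crude per-channel power estimate: mean squared amplitude."""
    return (epoch ** 2).mean(axis=1)

def make_epochs(n, n_channels=8, n_samples=250, gain=1.0):
    # Synthetic "EEG": noise whose per-channel power profile differs by action.
    scale = np.linspace(1.0, gain, n_channels)
    return rng.normal(0.0, scale[:, None], size=(n, n_channels, n_samples))

# Hypothetical training data: imagined "raise right arm" vs "raise left arm".
right = make_epochs(40, gain=2.0)   # stronger power on later channels
left = make_epochs(40, gain=0.5)    # weaker power on later channels

# One average power profile (centroid) per imagined action.
centroids = {
    "right_arm": np.mean([band_power(e) for e in right], axis=0),
    "left_arm": np.mean([band_power(e) for e in left], axis=0),
}

def predict_intent(epoch):
    """Return the action whose learned power profile is closest to this epoch's."""
    feats = band_power(epoch)
    return min(centroids, key=lambda a: np.linalg.norm(feats - centroids[a]))

test_epoch = make_epochs(1, gain=2.0)[0]
print(predict_intent(test_epoch))  # prints right_arm
```

The point of the sketch is the workflow, not the model: collect labelled brainwave epochs per imagined action, summarise each epoch as features, and map a new epoch to the nearest learned pattern.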



Yao says recording this data has the potential to help people unable to move or communicate freely due to disability or illness. Brain waves recorded with an EEG device could be analysed and used to move machinery such as a wheelchair, or even to communicate a request for assistance.


“Someone in an intensive care unit may not have the ability to communicate, but if they were wearing an EEG device, the pattern in their brainwaves could be interpreted to say they were in pain or wanted to sit up, for example,” Yao says.


“So an intent to move or act that was not physically possible, or not able to be expressed, could be understood by an observer thanks to this human-machine interaction. The technology is already there to achieve this, it’s more a matter of putting all the working parts together.”


AI systems and humans could be partners for life


Yao says the ultimate goal in developing AI systems and machines that assist humans is for them to be seen not merely as tools, but as partners.


“What we are doing is trying to develop some good algorithms that can be deployed in situations that require decision making,” she says.


“For example, in a rescue situation, an AI system can be used to help rescuers take the optimal strategy to locate a person or people more precisely.”




Also: New drone tech can tell the living from the dead in disaster zones




“Such a system can use localisation algorithms that use GPS locations and other data to pinpoint people, as well as assessing the window of time needed to get to someone, and making recommendations on the best course of action.”
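The decision support described here, ranking detected people by travel time and checking each against their window, can be illustrated with a small sketch. All coordinates, speeds and time windows below are invented for illustration; a real system would fuse richer sensor data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def recommend(rescuer, people, speed_kmh):
    """Rank detected people by travel time; flag those reachable in their window."""
    plan = []
    for p in people:
        dist = haversine_km(*rescuer, p["lat"], p["lon"])
        eta_min = dist / speed_kmh * 60
        plan.append({**p, "eta_min": round(eta_min, 1),
                     "reachable": eta_min <= p["window_min"]})
    return sorted(plan, key=lambda p: p["eta_min"])

# Hypothetical scenario: one rescuer on foot, two detected people.
rescuer = (-33.8688, 151.2093)
people = [
    {"id": "A", "lat": -33.8700, "lon": 151.2200, "window_min": 30},
    {"id": "B", "lat": -33.9500, "lon": 151.1000, "window_min": 20},
]
for p in recommend(rescuer, people, speed_kmh=5.0):
    print(p["id"], p["eta_min"], p["reachable"])
```

The human rescuer still makes the final call, as Yao notes; the algorithm only surfaces a ranked plan with reachability flags.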


“Ultimately a human would make the final call, but the important thing is that AI is a valuable collaborator in such a dynamic environment. This sort of technology is already being used today.”


But while working with humans in partnership is one thing, working completely independently of them is a long way down the track. Yao says autonomous AI systems and machines may one day place us, after observing our behaviour, into one of three categories: peer, bystander or competitor. While this may seem cold and aloof, Yao says these categories may dynamically shift from one to another as contexts evolve. And at any rate, she says, this sort of cognitive categorisation is actually very human.
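As a purely illustrative sketch, that three-way categorisation could be modelled as a function of observed behaviour that is re-evaluated whenever the context changes. The two signals and the thresholds below are invented, not from Yao's work.

```python
def categorise(shared_goal: float, interference: float) -> str:
    """Toy rule: map two observed behaviour signals, each in [0, 1],
    to one of three categories. Re-running this as observations update
    lets the label shift dynamically with context."""
    if shared_goal > 0.6 and interference < 0.4:
        return "peer"        # acting towards the same goal, not obstructing
    if interference > 0.6:
        return "competitor"  # actively obstructing the system's goal
    return "bystander"       # neither helping nor hindering

print(categorise(0.9, 0.1))  # prints peer
print(categorise(0.2, 0.8))  # prints competitor
print(categorise(0.3, 0.3))  # prints bystander
```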


“When you think about it, we are constantly making these same judgments about the people around us every day,” she says.




Teach This: Education Resource – The Future of Artificial Intelligence





About the Author

UNSW Newsroom
The latest and best news from the University of New South Wales.
