Last updated October 31, 2018 at 12:04 pm
Artificial intelligence is going to be one of the biggest changes to the world as we know it, and it has only just begun.
From work, to home life, to how we get around, artificial intelligence, or AI, is expected to revolutionise all our lives. Its impact on human history will be as profound as the arrival of the internet or the Industrial Revolution.
As with all landmark technologies, however, there are significant issues around how we introduce AI into our lives and how far we develop it, along with inevitable questions about how our data is used.
According to experts, the answers to those questions, and our future, are not a matter of technology but of humanity itself.
The future of AI
One of the key fields of artificial intelligence is machine learning. While it is a simple concept – giving machines a library of data, and then letting them learn by themselves – it is an incredibly complex thing to pull off, even if the payoffs are massive.
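That simple concept can be sketched in a few lines of code. The example below is a toy one-nearest-neighbour classifier in plain Python, with entirely invented data: the program is given labelled examples and infers how to classify a new case, rather than being programmed with an explicit rule. It illustrates the idea only, and is not a representation of any of the medical systems described below.

```python
# Toy machine learning: given a "library" of labelled examples, the
# program classifies new cases by similarity, with no hand-written rule.
# All data here is invented purely for illustration.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbour(training_data, query):
    """Label `query` with the label of its closest training example."""
    _, best_label = min(
        training_data, key=lambda item: distance(item[0], query)
    )
    return best_label

# The labelled "library of data": (features, label) pairs.
training_data = [
    ((1.0, 1.2), "benign"),
    ((0.9, 1.0), "benign"),
    ((3.0, 3.1), "malignant"),
    ((3.2, 2.9), "malignant"),
]

print(nearest_neighbour(training_data, (1.1, 1.1)))  # prints "benign"
```

Real systems replace the two-number feature vectors with millions of image pixels and the nearest-neighbour rule with deep neural networks, but the principle is the same: the behaviour is learned from data, not written by hand.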
In medical applications, machine learning has been used to teach computers to diagnose skin cancer. AI algorithms’ success rates have been on par with trained dermatologists when diagnosing carcinomas from photographs. Similarly, AI tools have been developed which allow imaging and analysis of the retina using only a phone and a special lens to peer into a patient’s eye.
In a recent study led by Google, images of blood vessels in retinas were analysed by an algorithm using machine learning to teach itself how to diagnose cardiovascular disease risk. It was the first time anyone had been able to predict cardiovascular risk factors from retinal images.
The chance to increase the speed, ease and, potentially, the accuracy of medical diagnoses will hopefully lead to less time in hospitals, better medical decisions and better access for patients, AI experts say.
In particular, it could revolutionise health care in developing countries, and areas where traditional approaches to medicine may be limited, such as rural and remote locations, disaster relief, and even battlefields.
Beyond medicine, AI is expected to become a vital component of autonomous cars, security, scientific research, and home life. However, that has some worried about the impact that will have on our lives.
The Terminator can’t stack your plates
Of course there are concerns often raised about the march of AI. Privacy is one of the main ones, along with the misuse of AI by people with nefarious intentions.
“Everybody’s imagining that the Terminator is around the corner, and I’m still emptying my own dishwasher,” says Anton van den Hengel, who heads a world-leading group of artificial intelligence researchers at the University of Adelaide.
Fresh from defeating teams from academia and industry, including Google, at artificial intelligence competitions, van den Hengel still sees AI as incredibly primitive – adept at one task but completely unable to do anything apart from the one it is trained for.
“Actually what I want isn’t a robot to empty my dishwasher. I want a robot that will empty the dishwasher and hang the washing on the line, and tell me if I’ve left the lights on, remind me where I put the keys, and plant this plant in the garden next to the others.
“We are almost as far from that dream as we ever were.”
He says that despite all the progress we’ve made in robotics and AI, introducing that flexibility and multi-skilling remains incredibly difficult.
That specificity is ultimately artificial intelligence’s failing at the moment.
“Turns out it’s easy to make a program that can beat a human at chess,” says van den Hengel. “But no amount of shouting at that program will make it play checkers.”
“There’s a bunch of things you can ask a five-year-old to do that we’re still 50 years away from having a robot do,” he says.
That is not to say it is outside the realms of possibility.
A recent report published on the pre-print site arXiv by a group of concerned experts highlighted some of the malicious uses of AI. In The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, they envision a possible assassination by an automated cleaning bot infiltrating a government building. Blending in with other bots of the same make, it goes about its day-to-day cleaning tasks until a face-recognition algorithm spots its target. Moving in, it then triggers a hidden bomb on board.
While it sounds like science fiction, the authors hope that highlighting the possibilities will lead to new approaches to prevent them. For example, they suggest that policymakers and companies work to make robot-operating software unhackable, impose security restrictions on some research, and expand the laws and regulations governing AI development.
These calls have also been made by a group of roboticists, tech entrepreneurs and scientists including the late Stephen Hawking, who implored the UN to ban autonomous killer robots.
The contrast between perception and reality
There is, however, a gap between the perceived ease of AI and the reality, say researchers.
While some suggest that AI-created fakes will make us question what is real and what is not in the future, others are not so sure.
Buzzfeed recently teamed up with an AI team to create a convincing video of Barack Obama speaking someone else’s words.
This sparked panic that the technology could be used to create videos of anyone saying anything, falsely implicating them in scandal.
However, van den Hengel says what wasn’t seen was the enormous amount of work required to produce the one-minute video.
Ellen Broad from CSIRO’s Data61 agrees.
AI demonstrations may look effortless, she says, but what goes unrecognised is the difficulty and time required to produce them. While the final product seemed convincing, it took a massive undertaking to bring to fruition.
“But the public perception is that it is effortless and easy. And there is an enormous gulf of understanding between those two.”
For van den Hengel, the Obama deep fake video was nothing new. With enough time and money, existing non-AI technology could have produced the same result.
“The truth is that AI doesn’t really do much that humans can’t. It might do it faster, and it might do it cheaper,” he says. In the case of the Obama video, though, conventional methods would probably have produced the same result more easily and cheaply.
For Broad, the fake video raises an entirely different question for the future.
“The worry is not that we’re going to see lots and lots of deep fake videos emerging – it is very complicated to produce one – but that it has become a convenient way for politicians, for certain elements of the media, to just deny any content they disagree with or that they think presents them in an unflattering light.”
AI reveals more about humans than expected
Even when put to everyday use, AI sometimes surfaces answers to questions you might not want answered.
Broad says human bias can creep into AI programming. In one project that used machine learning to select ideal job candidates, the system developed an unintended bias towards men.
“You have to make certain assumptions and decisions as the designer about what good looks like. And you can make that explicitly… or it might just be inadvertent, in that you have fed your system with information that exhibits a certain world view without thinking about it,” she says.
In creating the algorithm, the designers trained it on previous job applicants’ CVs, most of which came from men applying for technical roles.
“We know from sociology and studies of gender behaviours that men and women write CVs very differently.
“So a system trained to identify good and bad candidature from a collection of CVs will naturally move towards whatever the dominant trends are, and learn to preference what may be male attributes,” says Broad.
“The designers may never have intended that to happen.”
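Broad’s point can be sketched in a toy model. In the Python below, the data, feature names and hire rates are entirely invented: a naive scorer is trained on historical hiring outcomes, and because past hires happen to correlate with one CV-writing style, the model learns to prefer that style even though it says nothing about merit.

```python
# Toy illustration of training-data bias (all data invented).
# "style_a" and "style_b" stand in for CV-writing styles that happen
# to correlate with gender; neither measures a candidate's ability.

from collections import defaultdict

def train(history):
    """Estimate a hire rate for each feature from past (features, hired) records."""
    seen = defaultdict(int)
    hired = defaultdict(int)
    for features, was_hired in history:
        for f in features:
            seen[f] += 1
            hired[f] += was_hired
    return {f: hired[f] / seen[f] for f in seen}

def score(model, features):
    """Score a new CV by averaging the learned per-feature hire rates."""
    return sum(model.get(f, 0.5) for f in features) / len(features)

# Historical outcomes: most past hires wrote in style_a, simply because
# most past applicants for technical roles were men who wrote that way.
history = [
    ({"python", "style_a"}, 1),
    ({"java", "style_a"}, 1),
    ({"python", "style_a"}, 1),
    ({"python", "style_b"}, 0),
    ({"java", "style_b"}, 0),
]

model = train(history)

# Two equally qualified candidates, differing only in writing style:
print(score(model, {"python", "style_a"}))  # higher score
print(score(model, {"python", "style_b"}))  # lower score
```

No line of this code mentions gender, and its designer never wrote a rule against women; the preference emerges entirely from the skewed history the model was fed, which is exactly the inadvertent world view Broad describes.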
Van den Hengel agrees, and points to the United States, where an AI-based program, COMPAS, is used to predict re-offending in sentencing decisions.
“It turns out if you pretty much just change someone’s ethnicity from white American to African-American, you wind up with a higher predicted probability of re-offending,” says van den Hengel.
“As a result, African-Americans got longer sentences out of this process.”
Many people blamed the AI for this, ignoring what it was telling them about their society and the history of racism embedded in the US court system.
“What the AI did was look at the data, analyse the data and make a prediction on the basis of that data. The horror in that story is that that’s the truth that comes out of that data, and they need to stand up and have good hard look at themselves and what’s happening in their society.
“If you embody racism in AI or in a human being, the problem is the racism, not the mechanism.”
Regulation, but not too much
Van den Hengel accepts that regulations surrounding AI will be required to encompass the new things that the technology can do.
“But we need to regulate the behaviour we do or don’t want to happen, rather than the technology by which it’s achieved,” he says.
But the question of regulation is a thorny one. Through over-regulation, or careless regulation, we risk putting Australia at a disadvantage.
“All we have to gain by careless legislation is define ourselves out of this economy,” says Van den Hengel.
Unless we participate in developing the technology, and in its implementation, we won’t have a say in how it’s used, he says.
“If we don’t participate in this process, what we want to happen with this will be completely irrelevant. We’ll just wind up importing the technology, or exporting our data to the people who hold the technology. We need to actively participate.”
“We’re right on the cusp. We’re doing very well, we’re competing with the best in the world. The question is whether we participate in this coming economy, or regulate and watch it all go past.”
However, it all comes back to human responsibilities.
“As these systems become part of our lives, we have to think about our responsibilities,” says Broad.
To her it comes back to a question of humans. The decisions that allow AI to become integrated with, and alter, our lives are ultimately ours, as a society, to make.
“How far will we go allowing ourselves to let AI shape our lives?” she asks.
Whatever impact AI ends up having on our lives and society, it will inevitably stem from decisions we made to allow it.
“Who are you going to pick to put in command of those controls?” asks van den Hengel.