Opinion: The consequences of our blind faith in Artificial Intelligence are catching up to us

Slowly but surely, machine learning has been creeping into and helping to shape public policy – in healthcare, policing, probation services and other areas. But are we ignoring crucial questions about this technology?
There is growing enthusiasm for Artificial Intelligence (AI) and its capacity to drastically transform business performance and streamline outcomes in public services.
As great as that hunger for innovation sounds, however, in reality, pivots towards AI are typically coupled with a serious lack of understanding of the dangers and limitations of the new technology.
Authorities, especially, are beginning to get carried away with the potential of AI. But are they considering and introducing sufficient measures to avoid harm and injustice?
Organisations across the globe have been falling over themselves to introduce AI to projects and products. From facial and object recognition in China to machines that can diagnose diseases more accurately than doctors in America, AI has reached the UK’s shores and grown exponentially in the past few years.
In pictures: Artificial intelligence through history
1/7 Boston Dynamics
Boston Dynamics describes itself as 'building dynamic robots and software for human simulation'. It has created robots for DARPA, the US military's research agency
2/7 Google's self-driving cars
Google has been using similar technology to build self-driving cars, and has been pushing for legislation to allow them on the roads
3/7 DARPA Urban Challenge
The DARPA Urban Challenge, set up by the US Department of Defense, challenges driverless cars to navigate a 60-mile course in an urban environment that simulates guerrilla warfare
4/7 Deep Blue beats Kasparov
Deep Blue, a computer created by IBM, won a match against world champion Garry Kasparov in 1997. The computer could evaluate 200 million positions per second, and Kasparov accused it of cheating after the match was finished
5/7 Watson wins Jeopardy
Another computer created by IBM, Watson, beat two champions of US TV series Jeopardy at their own game in 2011
6/7 Apple's Siri
Apple's virtual assistant for iPhone, Siri, uses artificial intelligence technology to anticipate users' needs and give cheeky reactions
7/7 Kinect
Xbox's Kinect uses artificial intelligence to predict where players are likely to go, and track their movement more accurately
Predictably, this new era of technological innovation, exciting as it is, also raises serious ethical questions, especially when applied to the most vulnerable in society.
My own PhD research project involves developing a system of early detection of depressive disorders in prisoners, as well as analysing the ethical implications of using algorithms to diagnose something as sensitive as mental health issues in a vulnerable group. Essentially, I am asking two questions: “can I do it?” and “should I do it?”
Most engineers and data scientists have been working with a powerful tool called machine learning, which offers more sophisticated and more accurate predictions than simple statistical projections. Machine learning algorithms are already commonplace – they are what Netflix uses to recommend shows to its users, and what makes you see “relevant” ads wherever you go online. More sophisticated systems, such as computer vision, used in facial recognition, and natural language processing, used in virtual assistants like Alexa and Siri, are also being developed and tested at a fast pace.
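For readers who want a concrete picture, the sketch below is purely illustrative – synthetic data and the open-source scikit-learn library, not any system mentioned in this article – of what such a prediction model involves: a classifier learns patterns from labelled examples and is compared against a naive statistical baseline that simply predicts the most common outcome.

```python
# Illustrative only: synthetic data and scikit-learn, no real-world system implied.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in data: 1,000 "cases", 10 numeric features, one binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple statistical projection: always predict the most frequent outcome.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Machine-learning model: fits weights to the features instead of using a flat rate.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("learned model accuracy:", accuracy_score(y_test, model.predict(X_test)))
```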
Slowly but surely, machine learning has also been creeping into and helping to shape public policy – in healthcare, policing, probation services and other areas. But are crucial questions being asked about the ethics of using this technology on the general population?
Imagine the potential cost of being a “false positive” in a machine’s prediction about a key aspect of life. Imagine being wrongly earmarked by a police force as someone likely to commit a crime based on an algorithm’s learned outlook of a reality it doesn’t really “understand”. Those are risks we might all be exposed to sooner than we think.
For instance, West Midlands Police recently announced the development of a system called NAS (National Analytics Solution): a predictive model to “guess” the likelihood of someone committing a crime.
This initiative fits into the National Police Chiefs’ Council’s push to introduce data-driven policing, as set out in their plan for the next 10 years, Policing Vision 2025. Despite concerns expressed by an ethics panel from the Alan Turing Institute in a recent report, which include warnings about “surveillance and autonomy and the potential reversal of the presumption of innocence,” West Midlands Police are pressing on with the system.
Similarly, the National Offender Management Service’s (NOMS) OASys tool, used to assess the risk of recidivism in offenders, has been relying increasingly on automation for its assessments, although human judgement still takes precedence in final decisions.
The trend, however, as seen in the American justice system, is to move away from requiring human insight and towards allowing machines to make decisions unaided. But can data – raw, dry, technical information about a human being’s behaviour – be the sole indicator used to predict future behaviour?
A number of machine learning academics and practitioners have recently raised the issue of bias in algorithms’ “decisions”, and rightly so. If the only data available to “teach” machines about reoffending consistently shows offenders from particular ethnic groups, for instance, being more likely to enter the criminal justice system, and to stay in it, a machine may learn that pattern as a universal truth to be applied to any individual who fits the demographic, regardless of context and circumstances.
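A small, entirely hypothetical sketch makes the point concrete. It does not represent the West Midlands or NOMS systems; it simply fabricates records in which one demographic group was historically logged as reoffending more often, then shows that a standard model scores every member of that group as higher risk.

```python
# Hypothetical illustration of learned bias - fabricated data, no real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)  # 0 or 1: a demographic attribute

# Biased historical labels: group 1 was recorded as reoffending three times as
# often - a product of how the data was collected, not of individual conduct.
reoffended = rng.random(n) < np.where(group == 1, 0.6, 0.2)

model = LogisticRegression().fit(group.reshape(-1, 1), reoffended)

# The model now scores "risk" almost entirely from group membership,
# regardless of context or circumstances: roughly 0.2 vs 0.6.
print(model.predict_proba([[0], [1]])[:, 1])
```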
The lack of accountability is another conundrum afflicting the industry: there is no known way for humans to analyse the logic behind an algorithm’s decision – the so-called “black box” problem – so tracing a possible mistake in a machine’s prediction, and correcting it, is difficult.
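Again purely as an illustration (not drawn from any system named here), the sketch below contrasts a small decision tree, whose rules can be printed and read line by line, with an ensemble of hundreds of trees whose individual predictions have no comparably simple explanation.

```python
# Hypothetical "black box" contrast - synthetic data, scikit-learn only.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=1)

# Interpretable: the whole decision logic of a shallow tree fits on a screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
print(export_text(tree))

# Opaque in practice: 300 deep trees vote, and no single human-readable rule
# explains why a particular case was scored the way it was.
forest = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
print(forest.predict(X[:1]))
```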
It is clear that algorithms cannot as yet act as a reliable substitute for human insight, and are also subject to human bias at the data collection and processing stages. Even though machine learning has been used successfully in healthcare, for example, where algorithms are capable of quickly analysing heaps of data, spotting hidden patterns and diagnosing diseases more accurately than humans, machines lack the insight and contextual knowledge to predict human behaviour.
It is key that the ethical implications of using AI are not overlooked by industry and government alike. As they rush off to enter the global AI race as serious players, they must not ignore the potential human cost of bad science.
Thais Portilho is a postgraduate researcher in criminology and computer science at the University of Leicester