
Why you should worry about the ethics of artificial intelligence

Discriminatory algorithmic bias, the invasion of privacy, the risks of facial recognition and the regulation of human-machine relations are challenges that AI must face. However, the interests of governments and large companies often prevail over good practices.

Artificial intelligence (AI) is no longer the stuff of science fiction; it is everywhere. Your bank uses it to decide whether or not to grant you credit, and the ads you see on your social networks come out of a classification carried out by an algorithm, which has microsegmented you and ‘decided’ whether to show you offers for wrinkle creams or high-end cars. Facial recognition systems, used by airports and security forces, are also based on this technology.

“Machines do not have general intelligence, nor have we managed to give them common sense, but they do have specific intelligences for very concrete tasks, in which they exceed the efficiency of human intelligence,” Carles Sierra, director of the Artificial Intelligence Research Institute (IIIA) of the CSIC, tells SINC.

Therefore, he adds, “AI has enormous potential for improving industrial processes, designing new medicines or achieving greater precision in medical diagnosis, to name a few examples.”

Data is the new oil

Beyond a scientific breakthrough, AI is now a huge business, estimated at some 190 billion dollars (about 170 billion euros) by 2025, including the hardware, software and services built around the technology. Data is now considered the new oil.

This very appetizing business is contested, among others, by technology giants such as Amazon, Google, Facebook, Microsoft and IBM, “whose commercial interests often prevail over ethical considerations,” says Sierra.

Many of these firms, he points out, “are now creating ethics committees in the field of AI, but they have done so more reactively than proactively,” following criticism of inappropriate uses of AI in areas related to user privacy, or of the deployment of some applications without proper supervision.

Carme Artigas, big data expert and ambassador in Spain of Stanford University’s Women in Data Science program, tells SINC that one example of these controversial uses came from Microsoft when it decided to launch its Tay bot. This AI-based chatbot “was let loose on Twitter and, after a few hours, began posting racist and misogynist tweets, because it had absorbed the worst of what it found on that social network.” Sixteen hours after launch, the firm had to deactivate it.

“What happens,” says Artigas, “is that when an artificial intelligence system is not supervised, there is a risk that there will be no filter, and that is what happened with this bot.”

AI ethics is an issue that is still at an early stage of development and will have to face significant challenges. One of them, in the opinion of this expert, is what she calls the “dictatorship of algorithms.”

Classification algorithms, for example, she points out, “microsegment people, that is, classify them by their behavior, which, if it is not regulated or the process is not transparent, can end up limiting people’s freedom to choose.”

“Imagine,” adds Artigas, “that an algorithm microsegments someone as a person of low average income and deduces that they will never be able to buy a Ferrari or a Porsche; therefore, among the ads, they will never be shown a high-end car, because it knows they cannot afford it. This is an example that may seem unimportant, but we should ask ourselves whether it is ethical not to show people something even to dream about, because they have already been preclassified.”
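
To make the mechanism concrete, here is a minimal sketch in Python of how a segmentation rule can silently remove options before a user ever sees them. The segment names, income threshold and ad catalogue are invented for illustration; real ad platforms use far more complex models.

```python
def segment_user(estimated_income: float) -> str:
    """Classify a user into a coarse income segment (hypothetical rule)."""
    return "high_income" if estimated_income > 80_000 else "low_income"

# Hypothetical catalogue: each ad declares which segments may see it.
AD_CATALOGUE = {
    "luxury_car": {"segments": {"high_income"}},
    "wrinkle_cream": {"segments": {"high_income", "low_income"}},
}

def eligible_ads(estimated_income: float) -> list[str]:
    """Return only the ads whose targeting rules match the user's segment."""
    segment = segment_user(estimated_income)
    return [name for name, ad in AD_CATALOGUE.items() if segment in ad["segments"]]

print(eligible_ads(30_000))   # ['wrinkle_cream']
print(eligible_ads(120_000))  # ['luxury_car', 'wrinkle_cream']
```

The point is not the code but the asymmetry: the user classified as low income never learns that the luxury-car ad existed, so there is nothing to contest.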

Another relevant issue, one that causes serious bias problems, “is that, as machine learning algorithms are fed with historical data, we run the risk of perpetuating the prejudices of the past into the future.” To illustrate this, Artigas cites “the typical crime studies in the United States, which suggest that African-American people are more likely to commit crimes.”

The algorithm, she continues, “has been trained with millions of data points from 30 years back which showed that if you were African American, you were more likely to go to jail. The same happens with gender biases. If we start from historical data, the algorithm will keep reproducing the classic discrimination problems,” she emphasizes.
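
A deliberately simplified sketch, using fabricated counts, of how “learning” from historical labels reproduces past discrimination:

```python
from collections import Counter

# Fabricated records: (group, outcome recorded decades ago). The labels
# encode past discrimination, not any real propensity.
history = [("A", "jail")] * 70 + [("A", "free")] * 30 \
        + [("B", "jail")] * 30 + [("B", "free")] * 70

def fit_majority_by_group(data):
    """'Learn' the most frequent historical outcome for each group."""
    return {
        group: Counter(o for g, o in data if g == group).most_common(1)[0][0]
        for group in {g for g, _ in data}
    }

model = fit_majority_by_group(history)
print(model)  # e.g. {'A': 'jail', 'B': 'free'}: the past, repackaged as prediction
```

A model trained this way scores well against the historical record precisely because it has memorized its prejudices.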

Along the same lines, Isabel Fernández, managing director of Applied Intelligence at Accenture, spoke in an interview with SINC about the need for a protocol that regulates bias in AI. “I have no doubt that this will have to be regulated. It is no longer just a matter of good practices. Just as an operating room follows a protocol to ensure that it is sterile, I think there has to be a protocol or accreditation to avoid data bias,” she stressed.
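
One building block such a protocol might include is an automated fairness check. Below is a sketch of one common metric, the demographic parity gap; the audit data and the accreditation threshold are invented for illustration.

```python
def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs. Returns the largest
    difference in approval rate between any two groups."""
    groups = {g for g, _ in decisions}
    rates = {
        g: sum(ok for grp, ok in decisions if grp == g)
           / sum(1 for grp, _ in decisions if grp == g)
        for g in groups
    }
    return max(rates.values()) - min(rates.values())

# Fabricated audit sample: group B is approved far more often than group A.
audit_sample = [("A", True)] * 40 + [("A", False)] * 60 \
             + [("B", True)] * 70 + [("B", False)] * 30

THRESHOLD = 0.2  # hypothetical accreditation limit
gap = demographic_parity_gap(audit_sample)
print(f"approval-rate gap: {gap:.2f}")          # approval-rate gap: 0.30
print("pass" if gap <= THRESHOLD else "fail")   # fail
```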

According to Carme Artigas, there is another major ethical requirement that should be demanded of any company or organization working with AI, and it revolves around transparency and what is called explainability. This means, she explains, that “if the bank denies you credit because, according to the algorithm, you are not eligible, you have the right to have the entity explain why, and what the criteria for rejection were.”

The problem? In the processes that algorithms follow, especially deep learning ones, it is not well understood what happens between the inputs and the outputs, warns Artigas.
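
By contrast, the right to an explanation that Artigas describes is straightforward when the model itself is transparent. A hypothetical sketch, with invented features and weights, of a linear credit score that can justify its own rejections:

```python
# Invented feature weights for a transparent, linear credit score.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
APPROVAL_CUTOFF = 1.0

def score_with_explanation(applicant: dict) -> tuple[bool, dict]:
    """Return the decision plus each feature's contribution to it."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    approved = sum(contributions.values()) >= APPROVAL_CUTOFF
    return approved, contributions

approved, why = score_with_explanation(
    {"income": 2.0, "debt": 2.0, "years_employed": 1.0}
)
print(approved)  # False
print(why)       # {'income': 1.0, 'debt': -1.6, 'years_employed': 0.3}
# The bank can now answer 'why was I rejected?': the debt term dominated.
```

A deep network offers no such clean decomposition of its decision, which is exactly the gap Artigas is pointing at.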

A program based on this type of deep learning algorithm is Google’s AlphaGo Zero, which has not only learned to play Go, an ancient East Asian board game considered a great challenge for AI, but has discovered new abstract strategies by itself. Yet even the experts do not know how these algorithms work.

The black boxes of the algorithms

“This opacity is what is known as the black boxes of the algorithms,” Aurélie Pols, data protection officer (DPO) at the firm mParticle and privacy consultant, tells SINC.

“In these black boxes, the inputs and the processing are not always clear or explainable. These opaque results can have consequences for people’s lives and may not be aligned with their values or their choices,” says Pols.

Patrick Riley, a computer scientist at Google, expanded on this same idea in an article in the journal Nature last July. “Many of these machine learning algorithms are so complicated that it is impossible to inspect all the parameters or to reason about how the inputs have been manipulated. As these algorithms begin to be applied ever more widely, the risks of misinterpretations, mistaken conclusions and wasted scientific effort will multiply,” Riley warned.

To all these considerations, problems related to the protection of personal data must be added. In AI “it is important that the data models used to power these systems, and the way they are processed, respect the privacy of users,” says Carme Artigas.

In Europe, she says, “we have the General Data Protection Regulation, but there are countries like China, which is currently leading this business, where society is not as sensitive to privacy as European society is. There, for example, in the areas of surveillance and image recognition, there are no restrictions. This can set different speeds of technological development, but personal data is something that, from the social point of view, must be protected,” she emphasizes.

Artigas also refers to another of the ethical challenges linked to AI: how to regulate the new relationships between humans and machines. “If, as the EU has done, you draw a parallel with Asimov’s laws of robotics and translate them into regulations, they tell you, for example, that you should not establish emotional relationships with a robot. And this contradicts some applications of social robots, which are used precisely to elicit emotions in people with autism or neurodegenerative diseases, since this bond has proven beneficial and positive in therapy.”

To summarize, this expert points out that “much remains to be done in terms of legislation and in the analysis of the ethical repercussions of artificial intelligence.” What we must achieve, she adds, “is transparency: companies and governments must inform us about what they do with our data, and what for.”

For his part, Ramón López de Mántaras, a CSIC research professor at the IIIA, spoke at a recent conference about the importance of applying the principle of prudence in the development of artificial intelligence. “We should not rush blithely into deploying applications that have not previously been properly verified, evaluated and certified,” he stressed.

This principle is one of the highlights of the Barcelona Declaration, a manifesto promoted by López de Mántaras and other experts that aims to serve as a basis for the development and proper use of AI in Europe.

An example of the application of this principle, he pointed out, “is the city of San Francisco, whose authorities have decided to ban facial recognition systems. Something I applaud, because it is a technology with many failures, which can end up having tremendous repercussions on people’s lives when used by governments or security forces.” A recent and widely criticized example of such use is the police’s deployment of facial recognition against protesters during the Hong Kong revolts.

Microsoft has also rethought its use of this technology. Tim O’Brien, responsible for AI ethics at the firm, tells SINC: “A year ago we raised the need for government regulation and responsible measures by industry to address the problems and risks associated with facial recognition systems.”

O’Brien believes that “there are beneficial uses, but there are also substantial risks in the applications of these systems, and we need to address them to ensure that people are treated fairly, and that organizations are transparent in how they use them and accountable for their results. It is also necessary to ensure that all use scenarios are legal and do not prevent the exercise of basic human rights,” he notes.

Another problematic ethical aspect highlighted by López de Mántaras in his talk relates to the use of autonomous weapons based on AI. “There are basic principles, such as discernment and proportionality in the use of force, that are already difficult for human beings to evaluate. I consider it impossible for an autonomous system to take these principles into account,” he said.

This National Research Prize winner wondered how an autonomous system “will be able to distinguish, for example, between a soldier attacking, one surrendering and one who is wounded. It seems to me absolutely unworthy to delegate to a machine the capacity and the decision to kill.”

The scientist is a strong supporter of embedding ethical principles in the design of the technology itself: “AI engineers should sign a kind of Hippocratic oath of good practice.” Like other experts, he also favors promoting the certification of algorithms to avoid bias. But, in his opinion, “this validation should be done by independent bodies or institutions. It is no good for Google to certify its own algorithms; it should be something external.”

According to López de Mántaras, “fortunately there is a growing awareness of the ethical aspects of AI, not only at the state or EU level, but also among companies. Let us hope it is not all window dressing,” he concluded.
