
[Special Feature: AI Society and Public Space] The Impact of Human-Evaluating AI on Interpersonal Relationships and Its Ethical Implications

Participant Profile

  • Minao Kukita

    Associate Professor, Graduate School of Informatics, Nagoya University. Specialization: Philosophy of Language, Philosophy of Technology

2019/02/05

1. Introduction

In recent years, as artificial intelligence (AI) has been put into practical use in various situations, concerns about its social impact have been rising. Along with this, discussions regarding the ethics of artificial intelligence have become active both domestically and internationally, involving various stakeholders (1). Issues raised include the safety and controllability of AI, transparency, accountability, its impact on inequality and fairness, human rights, and human dignity. In addition to these issues, this article focuses on the potential impact that the use of AI may have on interpersonal relationships and its ethical implications.

2. What is Artificial Intelligence?

To consider the ethical issues of artificial intelligence i), it is necessary to characterize AI, even if only roughly. However, this is a rather difficult task. John McCarthy and others, who founded the research field of "artificial intelligence," characterized the challenge of AI as "making machines behave in a way that would be considered intelligent if a human showed the same behavior." However, what we consider "intelligent" is not always clear. Furthermore, there are behaviors that are not considered "intelligent" for humans but are regarded as significant achievements in the field of AI. Examples include recognizing human faces or grasping and picking up objects.

Characterizing artificial intelligence based on the concept of "intelligence" runs into the extremely difficult question of "what is intelligence?" or "what does it mean to be intelligent?" Following Jerry Kaplan (2), we will avoid such questions here and consider artificial intelligence simply as "the continuous progress of automation." When viewed this way, the ethical problems of AI can be understood as "ethical problems newly arising from the automation of things that were not previously automated."

However, if we characterize AI in this way, the examples are too diverse to discuss collectively. Therefore, this article focuses specifically on "systems that automatically evaluate, judge, and classify humans."

3. AI that Evaluates People

One of the technologies driving the current third AI boom is machine learning, exemplified by deep learning. Machine learning typically finds subtle patterns in large amounts of data that humans cannot detect, and uses them to identify, classify, and categorize its subjects. This technique has made it possible to automate identification tasks such as distinguishing images that contain cats from those that do not. Humans can of course tell cat images apart too, but in some identification tasks AI now surpasses us: in games such as Shogi and Go, for example, AI has become able to evaluate a given board position more accurately than professional human players.
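The basic pattern can be illustrated schematically. The following sketch is my own illustration, not anything from the article: synthetic feature vectors stand in for real image data, and the "cat" labels are placeholders. It shows the supervised learning loop the paragraph describes, fitting a model to labeled examples and then classifying inputs it has never seen.

```python
# A schematic sketch of supervised classification. Synthetic features
# stand in for image data; the "cat vs. not-cat" framing is illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each row is a feature vector extracted from an image, and the
# label records whether the image contains a cat (1) or not (0).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on labeled examples, then score on images the model has never seen.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen 'images':", round(model.score(X_test, y_test), 3))
```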

However, the area where current machine learning exerts the greatest power (and brings the greatest benefit to those who deploy it) is the categorization and behavioral prediction of humans. Knowing with high accuracy which kinds of people have which needs, preferences, and behavioral tendencies is extremely important in business, and big data and AI have proven extremely useful for this purpose. With the spread of the internet and of mobile technologies such as smartphones, vast amounts of machine-readable data about virtually all of people's online activities, and increasingly their offline activities, are being acquired, recorded, and stored. With the development of machine learning techniques, it has become possible to extract people's needs, preferences, and behavioral patterns from that data. As a result, the giant IT companies that possess massive amounts of data can take the right action toward the right target at the right time, and this has brought them enormous profits. For the first time in its half-century history, AI has become a technology that underpins major success in business.

4. Problems with AI that Evaluates People

However, the evaluation and categorization of humans by AI is not applied only in marketing. AI is also used in corporate recruitment and personnel evaluation, and in the justice system (police and courts). Those providing such services usually advertise that, unlike humans, AI is "unbiased," "not swayed by preferences," and "accurate." In reality, however, it has been pointed out that such systems reflect the preferences and biases of the humans who created the algorithms, as well as the biases present in the data used for training (3).

The "Remote Risk Assessment (RRA)" system developed by AC Global Risk highlights various problems with human evaluation by AI (4). This system reportedly determines whether a person is dangerous based solely on the tone of their voice during a ten-minute conversation over the phone (answering set questions in their native language), rather than the content. As an answer to President Trump's demand to "vett immigrants thoroughly," AC Global Risk advertises RRA as the "monumental refugee crisis solution that America and other countries are currently facing." While AC Global Risk has refused to answer questions from The Intercept regarding the software's details, experts who reviewed public materials have called it "bullshit" and "bogus." Björn Schuller, an authority on speech emotion recognition, told The Intercept, "Giving the impression that you can detect lies from voice alone with any degree of accuracy is ethically problematic. If anyone advertises that they can do that, they themselves should be considered a risk." In US immigration inspections, speech patterns and appearance are used as pretexts for investigating or denying entry to people. Experts fear that RRA may "spread such biases as a routine and make them appear 'objective' at first glance."

Yet RRA is not unusual: similar human evaluation algorithms are already in use throughout society, and in many cases they are neither as accurate as advertised nor free of bias. COMPAS, a system used in the US to estimate the likelihood of recidivism and referenced in sentencing, was found to be no more accurate than the guesses of untrained laypeople and to exhibit biases similar to theirs (5). A recruitment evaluation system developed in secret by Amazon turned out to rate women lower, and was scrapped because the development team could not eliminate the bias (6). Even as these problems come to light one after another, companies and governments remain enthusiastic about evaluating people with algorithms, because doing so offers a simple and efficient "solution" to complex and difficult problems.

5. The Other as Risk

Human evaluation algorithms classify and cluster people on the basis of data, and then attach particular evaluations and labels to the individuals belonging to particular groups. On the basis of this inference, individuals judged high-risk are placed at a disadvantage: passed over in hiring, given longer prison sentences, or denied entry. For example, members of a group estimated to contain a high proportion of violent individuals are themselves deemed likely to be violent and are excluded as a risk (Figure 1). The most extreme example is the "signature strike" used in the US "War on Terror." In countries such as Afghanistan and Pakistan, the US uses data on people's age, behavioral patterns, location, and social networks to estimate whether a person is a terrorist, and conducts drone strikes accordingly. Regardless of whether they actually intend or plan to attack the US, people who exhibit enough of the characteristics common to terrorists are targeted as terrorists (7).
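The pattern of inference in Figure 1 can be made concrete with a small sketch. It is entirely hypothetical, not the actual logic of COMPAS, RRA, or any deployed system: people are clustered by their attributes, and each cluster's historical incident rate is then attached to every member as "their" individual risk score.

```python
# Hypothetical sketch of group-based risk scoring: a group's past
# incident rate is imputed to every individual in the group, whatever
# that individual has or has not done.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
attributes = rng.normal(size=(200, 5))    # per-person attribute vectors
incidents = rng.integers(0, 2, size=200)  # past incidents (0 or 1) per person

# Cluster people by their attributes alone.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(attributes)

# The group-level rate becomes each member's "individual" risk score.
for g in range(4):
    members = groups == g
    rate = incidents[members].mean()
    print(f"group {g}: {members.sum():3d} people, every one scored {rate:.0%} risky")
```

The score says nothing about any particular individual; it is a property of the cluster, applied indiscriminately, which is precisely the unfairness the figure depicts.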

At the root of this methodology (and spreading through society as the methodology is practiced) is a view of others as bundles of data that can be processed by machines such as computers and smartphones, and as nothing more than potential losses or gains for oneself. Machine learning systems trawl the vast amounts of data overflowing on the web to detect the possibility that a person with some complex combination of attributes is somehow "risky." People judged "risky" are then often discarded wholesale, for the sake of efficiency and in the name of a spurious "objectivity." Whether each individual is truly dangerous is never scrutinized, because scrutiny is inefficient; it is more efficient, and therefore "rational," to discard everyone labeled a risk. Others are treated here not as flesh-and-blood individuals but merely as data points. When the attributes referenced are ethnicity, gender, or religion, this is criticized as discriminatory and unfair; but human evaluation systems based on big data are now generating new seeds of discrimination at a staggering pace. Moreover, as the Amazon example in the previous section shows, automated human evaluation systems often smuggle old-fashioned discrimination into their judgments unnoticed.

Figure 1: Unfair Inference

6. Technology as Media

There is a view of technology as a "medium" or "interface" between humans and the world: humans perceive, recognize, and interpret the world through technology, and act upon the world through technology. In this sense, technology can be said to be part of our cognitive and behavioral capacities. If so, a change in technology means a change in how we perceive the world and in how we act upon it. In general, technology enables us to know the world better and to make use of it more efficiently.

Now that ICT is advancing and technology is acquiring a high degree of autonomy, our relationships with the environment and with others are about to change significantly. Previously, to act better in the world, we needed to know more about the world and about others. With the development of ICT, and especially of AI and robotics, we will be able to deal with the environment and with others efficiently without knowing them in any detail. ICT then no longer brings us information; rather, it functions like a screen that keeps outside information from reaching us. In the future, our perception of the world and our actions upon it will depend ever more on technology, and as they do, the physical and psychological distance between us and the world, and between us and others, may expand without limit.

The ethical implications of this are significant, because psychological research has shown that psychological distance affects our moral judgments and actions. For example, as psychological distance increases, we become less tolerant of others and more prone to thinking based on self-interest.

As noted in the previous section, as AI-based human evaluation systems come to be used in more and more social situations, we will increasingly view others as bundles of machine-readable data and think of them as sources of potential loss or gain for ourselves. But this severely limits the human relationships that might be built with others. As the "Prisoner's Dilemma" game shows, it is difficult for a relationship that starts from self-interest and suspicion of the other party to develop into one of mutual trust and cooperation.
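This dynamic can be seen in a toy simulation, a minimal sketch of the standard iterated Prisoner's Dilemma rather than a model from the article: two identical reciprocators who both open with trust sustain cooperation indefinitely, while the same two players who both open with suspicion stay locked in mutual defection.

```python
# Iterated Prisoner's Dilemma with the standard payoffs. Both players use
# tit-for-tat (copy the opponent's previous move); only the opening move
# differs. C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}  # (my move, their move) -> my payoff

def play(first_a, first_b, rounds=10):
    a_move, b_move = first_a, first_b
    score_a = score_b = 0
    for _ in range(rounds):
        score_a += PAYOFF[(a_move, b_move)]
        score_b += PAYOFF[(b_move, a_move)]
        a_move, b_move = b_move, a_move  # each copies the other's last move
    return score_a, score_b

print(play("C", "C"))  # trusting start:   (30, 30) -- stable cooperation
print(play("D", "D"))  # suspicious start: (10, 10) -- locked-in defection
```

The strategies are identical; only the initial stance differs, yet the pair that begins in mutual suspicion never escapes mutual defection.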

Interpersonal relationships, however, are open to richer possibilities. Human character is not a fixed thing that a machine can measure objectively; it changes dynamically within human interactions. A person who is trusted makes an effort to live up to that trust; in other words, the act of trusting someone can itself be what makes them truly trustworthy. Likewise, simple, frequent face-to-face contact can give rise to affection. Cooperative relationships grounded in the trust and affection born in this way bring mutual benefits, which in turn further promote trust and affection. But such a cycle of improving human relationships is unlikely to be set in motion by an automated evaluation system fed on data available on the web.

7. Conclusion

The thoughtless use of big data and AI to evaluate humans may promote treating others as bundles of data for computers to process and viewing them chiefly in terms of efficiency. Moreover, AI can reflect the biases of the humans and the society that developed it, thereby fixing and reinforcing those biases. Big data and AI are also creating new seeds of discrimination by negatively labeling particular groups.

Currently, AI is often used in ways that generate large profits by unfairly disadvantaging or exploiting the socially vulnerable. On the other hand, AI can also be used as a tool to help make the suffering of socially vulnerable people visible and to come to their aid. When considering an application of AI, it is vital to ask for what purpose it was created, what side effects it brings, whom it benefits, and whom it tramples upon.

i) The term "artificial intelligence" is used both to refer to technical products and to the field of research and development of such products. This article also uses the term in such an ambiguous sense.

(1) Yuko Murakami, "The Present State of Ethics in Artificial Intelligence: The Significance of Philosophy of Technology and Ethics in R&D," IEICE Fundamentals Review, 11(3), pp. 155-163, 2018.

(2) J. Kaplan, Artificial Intelligence: What Everyone Needs to Know, Oxford University Press, 2015.

(3) Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, translated by Naoko Kubo, Intershift, 2018.

(4) A. Kofman, "The Dangerous Junk Science of Vocal Risk Assessment," The Intercept, November 25, 2018.

(5) Hirokazu Anase, "'AI Judges' Were Not Fair at All! The Poor Reality of Artificial Intelligence Trials," Rui Net, August 12, 2018.

http://www.rui.jp/ruinet.html?i=200&c=400&t=6&k=2&m=338047

(6) Jonggi Ha, "Thinking about the Reason Why Amazon's Recruitment AI 'Discriminated against Women'," Forbes Japan, October 16, 2018.

(7) Minao Kukita, "The Logic and Ethics of Remote Warfare," α-Synodos, Vol. 257+258, 2018.

*Affiliations and titles are as of the time of publication.