Keio University

[Special Feature: AI Society and Public Space] Roundtable: How to Create a Free and Fair Society Amidst AI Networking

Participant Profile

  • Mitsuo Wakameda

    Senior Principal Specialist, Digital Trust Business Strategy Division, NEC Corporation; Director, Data Trading Alliance.

    After graduating from the Faculty of Letters at Sophia University, he joined NEC. He launched the company-wide big data business in 2013 and the Digital Trust Business Strategy Division in 2018. He is a joint researcher at the Keio University Global Research Institute (KGRI).

  • Fumiaki Kobayashi

    Member of the House of Representatives; Deputy Director of the Youth Division of the Liberal Democratic Party; Secretary-General of the Administrative Reform Promotion Headquarters.

    In the third and fourth reshuffled Abe Cabinets, he focused on radio waves, communications, information reform, and My Number policy as Parliamentary Vice-Minister for Internal Affairs and Communications and Parliamentary Vice-Minister of the Cabinet Office. He was first elected to the House of Representatives in 2012 after working for NTT DOCOMO.

  • Hiromi Arai

    Researcher, RIKEN Center for Advanced Intelligence Project

    Withdrew from the doctoral program at the Graduate School of Science and Engineering, Tokyo Institute of Technology, after completing the required credits. Ph.D. in Science. Assumed current position after serving as an Assistant Professor at the Information Technology Center, The University of Tokyo, among other roles. Also serves as a JST PRESTO Researcher. Specializes in privacy protection technology, data mining, etc.

  • Kenji Yasuoka

Professor, Department of Mechanical Engineering, Faculty of Science and Technology

Specially selected Keio University alumnus. Completed the doctoral program in Physics at the Graduate School of Engineering, Nagoya University, in 1997. Doctor of Engineering. Assumed current position in 2010. Deputy Director of the Keio University Global Research Institute (KGRI). Specializes in molecular dynamics and chemical physics.

  • Tatsuhiko Yamamoto (Moderator)

Professor, Graduate School of Law

Keio University alumnus (Faculty of Law, 1999; Ph.D. in Law, 2005). Assumed current position after serving as an Associate Professor at the Faculty of Law, Toin University of Yokohama. Deputy Director of the Keio University Global Research Institute (KGRI). Specializes in constitutional law. Author of "Osoroshii Big Data" (Scary Big Data) and editor of "AI to Kenpo" (AI and the Constitution), among other works.

2019/02/05

AI Networking and the Current Situation in Japan

Yamamoto

Today, we will be discussing the relationship between a networked society incorporating Artificial Intelligence (AI) and public space with experts from various fields.

Defining "public space" is a difficult task in itself, but here, I am imagining a non-exclusive, inclusive space open for free communication. In that sense, the main points of this roundtable discussion are whether the progress of AI networking will bring about social "exclusion" or "inclusion," and in what direction Japan is attempting to steer in this regard.

For example, China is currently deploying "Skynet," a surveillance camera network that uses AI facial recognition technology and can immediately identify, say, a pedestrian who has ignored a traffic signal. While some say this has improved public safety, negative aspects have also been pointed out: it casts a chilling effect on political criticism and further erodes free and open communication.

Furthermore, Sesame Credit, a credit-scoring service under China's Alibaba Group, uses AI to "rate" an individual's social creditworthiness on a scale of up to 950 points, based on electronic payment records, asset status, and social media friendships. This credit score is widely shared and used by both the public and private sectors. For those with high scores it is very beneficial: they can get mortgages at low interest rates, rent housing without deposits, or fare better in matchmaking.

However, what cannot be ignored are the lives of those with low scores. Not only do they find it harder to get loans and face handicaps in job hunting, but their freedom of movement is effectively restricted: they may be unable to buy plane tickets or have difficulty obtaining foreign visas. With a low score, you may face discriminatory treatment and risk losing opportunities for social participation. Moreover, once a low score is assigned, one falls into a negative spiral. This suggests that the rating of humans by AI could trigger an unprecedented class society and create an exclusionary space that is the exact opposite of a "public space."

In this roundtable, I would first like to discuss the current situation in Japan. The government is promoting AI networking through initiatives like "Society 5.0," but I get the impression that the negative impacts on public space are not being discussed that much. Of course, the Ministry of Internal Affairs and Communications' "Draft AI Utilization Principles" and government discussions advocate for "human-centric" and "inclusive and diverse societies," which in themselves are highly commendable. However, I feel that specific discussions have not yet been fully fleshed out.

How does this situation compare with, for example, the EU or the United States? Dr. Arai, you often attend international conferences; what is your impression?

Arai

In conferences in technical fields such as machine learning, I perceive that interest in issues surrounding these AI applications is very high. Frequently, panel discussions are held by inviting people from various fields, such as sociology and industry, in a cross-disciplinary manner.

By comparison, I feel there are not as many actions taken in Japan.

Yamamoto

In Japan, too, there are cross-disciplinary meetings, though some exist in name only. Are you saying the situation abroad is different?

Arai

For example, at international conferences the level of activity around these issues as research subjects is noticeably higher: in addition to symposiums, there is a substantial number of papers on related topics. In the United States in particular, interest in "anti-discrimination" is high across the board.

Yamamoto

I suppose that is because of the issue of racial discrimination.

Arai

That's right. Interest on the corporate side is also high. At last year's FAT* (ACM Conference on Fairness, Accountability, and Transparency)—a conference that deals with fairness across various fields such as machine learning and law—there was a report that the identification accuracy of facial recognition apps for Black women was low. In response, companies reported that they had improved the accuracy. This is an example where companies responded to the actions of researchers.

Facial Matching and Its Risks

Yamamoto

I see. Mr. Wakameda, what is your perspective from a corporate standpoint?

Wakameda

It is true that in Japan, sensitivity toward human rights, such as racial discrimination, is not as high as in the West.

Microsoft recently published a proposal stating that "governments should tighten regulations" on facial recognition technology due to fears of promoting racial discrimination and violating privacy. Subsequently, Google announced that it would stop providing general-purpose APIs (Application Programming Interfaces) for facial recognition until challenges are resolved to avoid misuse. I feel there is a tendency for facial recognition technology to be highlighted a bit too much, though.

Yamamoto

What is the background behind the focus on facial matching?

Wakameda

In the United States, the reason it is highlighted is likely the high sensitivity toward racial discrimination against people of color, immigrants, and religious minorities. There is a great deal of concern about facial recognition technology—which allows for the mechanical identification of specific individuals in public spaces—especially regarding its use by law enforcement agencies.

In Europe, at a match of the UEFA Champions League, the world-famous soccer tournament, facial recognition technology was used to search for specific wanted individuals in the crowd, and it actually produced results. This is a sophisticated use case in which remote cameras photograph and match tens of thousands of passers-by, with the system raising alerts based on the probability that someone is the person in question.

Yamamoto

So it is strictly a probability.

Wakameda

I am not familiar with the details, but it seems the operation is not based on automatic identification, but rather on actions predicated on human eyes and human judgment. However, a human rights organization expressed human rights concerns regarding this case.

Yamamoto

Because it is judged by probability, there is always a risk of misidentification.

Wakameda

No matter how accurate a product is, 100% accuracy cannot be guaranteed under all environmental conditions; that is a fact. What is required is to understand this correctly as a characteristic of facial recognition technology and to give due consideration to human rights, for example by designing operations that mitigate the risk of misidentification.

Camera-based facial recognition also has an inherent technical constraint: the system temporarily acquires face data (identifying codes called face feature values) not only for the persons being searched for but for everyone who enters the camera frame. Since matching against the database is performed on these identification codes, it is important to address human rights and privacy with this characteristic of the technology in mind, for example by building in a function that promptly deletes the data of people who are not matching targets.

Even if the data is deleted promptly, the fact remains that face information is acquired from everyone who enters the frame. We must not neglect to explain these technical constraints and risks properly in advance and to build consensus on how they balance against the resulting benefits (for example, citizen safety).
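
The match-then-delete flow Mr. Wakameda describes can be sketched in a few lines. This is a toy illustration, not NEC's actual system: the 128-dimensional feature vectors, the cosine-similarity matcher, and the 0.9 alert threshold are all assumptions made for the example.

```python
import math
import random

def cosine_similarity(a, b):
    # Similarity between two face feature vectors (identification codes).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def process_frame(frame_features, watchlist, threshold=0.9):
    """Match every face in the frame against the watchlist.

    Returns alerts as (index, score) pairs. Feature vectors of
    non-matching persons simply fall out of scope here and are not
    retained, mirroring the prompt-deletion requirement above.
    """
    alerts = []
    for i, feat in enumerate(frame_features):
        best = max(cosine_similarity(feat, w) for w in watchlist)
        if best >= threshold:
            # The alert is probabilistic; a human operator must confirm.
            alerts.append((i, best))
    return alerts

random.seed(0)
target = [random.random() for _ in range(128)]
# The frame contains the target plus two uninvolved bystanders.
bystanders = [[random.random() for _ in range(128)] for _ in range(2)]
frame = [target] + bystanders
alerts = process_frame(frame, watchlist=[target])
print(alerts)
```

Note that the system still had to compute feature vectors for the bystanders before discarding them, which is exactly the point raised in the discussion.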

The Need for Accountability

Wakameda

More than 20 years have passed since Dr. Ann Cavoukian proposed "Privacy by Design," and now that the value of trust is again being emphasized, it remains a guideline for action worth returning to. Furthermore, to conduct economic activities globally, it is necessary to consider the impact not only on privacy but on human rights in the broad sense.

Yamamoto

Recently, in the AI field, the term "Ethics by Design" (incorporating a certain sense of ethics into the design or design process of algorithms, etc.), which has a slightly broader scope than Privacy by Design, is also being frequently used internationally.

Wakameda

It is true that in Japan there are few direct "anti-discrimination" actions concerning human rights. There is, however, a risk of "flaming" (online backlash) when media reports suggest that people's behavior is being traced by AI or cameras, even when the technology is not actually used that way. Vague anxiety leads to flaming, and in response companies sometimes seem to hesitate to an excessive degree.

On the other hand, there are also scattered examples of near-miss incidents caused by a lack of understanding or awareness of points that should be considered for privacy.

Incidentally, Keidanren (the Japan Business Federation) is also drawing up guidelines for becoming an "AI-ready" company as part of its AI utilization strategy. On human resource development, for example, it points out the need for personnel with knowledge of ethics and human rights, rather than simply more data scientists.

Yamamoto

Avoiding the risk of flaming sounds somewhat passive. Is the fact that companies are starting to consider implementing AI with regard for "human rights" and the "public interest" a matter of keeping pace with international trends, or is there a more proactive reason?

Wakameda

Certainly, wanting to avoid flaming risk is a defensive corporate stance. More positively, NEC has positioned "NEC Safer Cities"—the use of ICT in public spaces and smart cities, the very theme of this discussion—as one of its growth strategies.

Naturally, visualizing various kinds of information in public spaces is an important element here, and expectations for sensor data, typified by cameras, are high. A characteristic of cameras, however, is that cases where explicit consent from data subjects is difficult to obtain will increase. Without a process for considering, case by case, the most appropriate form of notification or disclosure, the business itself will simply cease to be viable.

Furthermore, companies and services that excel in accountability and transparency ought to be the ones chosen, and I hope a mechanism emerges in which such steady efforts are properly evaluated. At NEC we treat this as a high priority because it links directly to our business.

Between Technology and Utilization

Yasuoka

I am a technical person, but no matter how good something is technically, what matters in the end is how humans use it—perhaps that is what ethics means—and I believe that balance is important.

From our perspective, we tend to start by piling up what technology can do. In the case of AI especially, technology has led the way: it has advanced dramatically thanks to the evolution of GPUs (graphics processing units, now widely used for general-purpose computation), and breakthroughs such as massively parallel computers arrived all at once. How do we balance technology and humanity, and to what ends do we ultimately put it? As a technical person, I feel we must discuss this properly.

Yamamoto

Are you saying that public interest and ethics must be discussed even at the development stage of technology? On the other hand, there is an argument that technology and utilization are separate. That technology is neutral, and the problem lies in how it is used. Previously, I think such a "technology/utilization" separation theory was strong.

Yasuoka

Certainly, until now, technology was technology, and we have worked on it with the idea of just creating something good. Of course, we consider costs and such, but I feel that ethical matters have tended to be put on the back burner.

Kobayashi

Basically, it has been that way for a long time. Traffic rules are created after the automobile is invented. As civilization progresses, the necessary sense of ethics emerges, and the mechanism of law to practice that ethics is built. I think this is the order.

Internet civilization has blossomed over roughly the last 30 years. As technology advances, a sense of ethics about information and privacy quite different from before is emerging in each of us. Hence the discussions now arising that it is about time to create standardized rules internationally. In a sense, I think this is the orthodox order.

Yamamoto

On the other hand, there is an argument that it would be too late. For example, nuclear power can be energy or a bomb. It is dual-use. To say something a bit idealistic, the problem of nuclear weapons can be considered a consequence of not seriously considering this duality at the technology and development stage. One could argue that it is important for technology and ethics to be nurtured simultaneously, rather than "technology first, then ethics." Of course, "legal" regulations, which are different from ethics, should come later.

On the other hand, if you demand too much ethical content at the technical stage, it may hinder innovation, so there is naturally an argument that research should proceed "ethics-free." Indeed, at the research stage, autonomy is constitutionally protected by Article 23 of the Constitution, which guarantees academic freedom.

However, I wonder. Yuval Noah Harari, author of the bestseller "Homo Deus," points out that AI and genetic engineering will cause an unprecedented transformation of society in human history, creating a super-stratified society divided into elites and a useless class. If AI has such an impact, is there not at least some need to discuss the duality of AI even at the technical stage?

Arai

I think there will be various discussions at the stage where it becomes a product for actual use. Regarding research, since engineers are working toward some goal they want to optimize, it is conceivable to incorporate a sense of ethics into that goal. I believe that coordination with society is indispensable in determining what kind of ethical sense to incorporate.

For example, in classification such as passing or failing an entrance exam, if a rule is established to keep the influence of the applicant's gender within a certain range, it is possible to create a passing standard that selects the most desirable candidates for the company under that constraint. Additionally, there is research on making prediction models described by complex rules as explainable as possible. I don't think what to set as a goal is something to be decided by engineers alone.
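
Dr. Arai's example—selecting the most desirable candidates while keeping the influence of gender within a set range—can be made concrete with a small sketch. The constraint used here (a bound on the gap in acceptance rates between two groups, a form of demographic parity), the scores, and the 0.1 bound are all illustrative assumptions, not a description of any real selection system.

```python
def select_with_parity(candidates, k, max_rate_gap=0.1):
    """Pick k candidates by score while keeping the acceptance-rate
    gap between groups "A" and "B" within max_rate_gap.

    candidates: list of (score, group) pairs; both groups assumed non-empty.
    Returns ((picked_from_A, picked_from_B), total_score_of_selection).
    """
    a = sorted((s for s, g in candidates if g == "A"), reverse=True)
    b = sorted((s for s, g in candidates if g == "B"), reverse=True)
    best_total, best_pick = None, None
    # Enumerate how many to take from each group; keep the split that
    # maximizes total score subject to the fairness constraint.
    for ka in range(0, min(k, len(a)) + 1):
        kb = k - ka
        if kb > len(b):
            continue
        gap = abs(ka / len(a) - kb / len(b))
        if gap > max_rate_gap:
            continue
        total = sum(a[:ka]) + sum(b[:kb])
        if best_total is None or total > best_total:
            best_total, best_pick = total, (ka, kb)
    return best_pick, best_total

cands = [(90, "A"), (85, "A"), (80, "A"), (60, "A"), (50, "A"),
         (88, "B"), (70, "B"), (65, "B"), (55, "B"), (40, "B")]
pick, total = select_with_parity(cands, k=4, max_rate_gap=0.1)
print(pick, total)  # (2, 2) 333: the best selection satisfying the constraint
```

The point of the sketch is Dr. Arai's: the goal to optimize (here, total score under a parity constraint) is a design decision, and what constraint to impose is not something engineers should decide alone.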

Yamamoto

So policy judgments are inevitable even in the design of AI. If so, it means that some kind of channel for dialogue with society is necessary.

Arai

That's right. As I mentioned earlier, I think there are various activities in academic societies and also in companies.

Yasuoka

I was originally in science and gradually moved toward engineering. Just now, an AI utilization project has started at the Keio University Global Research Institute (KGRI), a cross-disciplinary venue for the humanities and sciences within the university.

How can researchers who, like me, are not AI researchers use AI? We are starting to provide a place for students to study such questions so that they can put the answers to active use once they go out into society. By having everyone discuss the space between AI researchers and society, I hope it will become a channel for dialogue.

Yamamoto

AI is not neutral either, so "dialogue" is very important.

What is a Japanese-style Data Economic Zone?

Kobayashi

When various things are born in a free world and reach a certain level of popularity, I think there is a timing when it is better to standardize them. In the last few years, things utilizing AI and data have rapidly emerged worldwide, and I think we have entered such a timing.

Domestic discussion is important, but we must also discuss this globally. Regarding data, there is the GAFA (Google, Apple, Facebook, Amazon) economic zone, the economic zone in which the Chinese state has become the platform, and the EU economic zone. When Japan is asked what it will do, if we can properly propose to the world a Japanese-style, inclusive, highly reliable data economic zone that starts from the individual, together with the utilization of AI, and win empathy for it, I think it will be a chance to break through the current cold-war situation, not only for Japan but for the world.

Precisely because it hasn't been decided yet, I think it is very important for us to take it positively and go lead the discussion.

Arai

I would like to hear more about your idea of a Japanese-style data economic zone.

Kobayashi

First, in the world of GAFA, everything is left to the private sector and is company-led. The feeling is that individuals go along because it is convenient, accepting that handing over personal information in exchange cannot be helped. In the case of China, a certain kind of state coercion is at work.

In the case of Japan, the axis of judgment is placed on the individual; at the same time, everything is connected to government and companies via APIs, and the aim is for people to interact smartly and freely under their own judgment. I think this economic zone will be built on a sense of trust among the three parties, something different from the previous two.

Yamamoto

Institutionally, is it close to an information bank (information trust function) where you leave the operation and management of your information to a trusted third party?

Kobayashi

There is the information bank model, but what the government is currently discussing is a society in which everything is exchanged smartly among the private sector, the administration, and individuals. For example, when we move, we currently have to go to Municipality A to withdraw our resident record and then to Municipality B to register it. Instead, going to Municipality B would automatically withdraw the record from A; moreover, the power company would be properly notified and, with the person's consent, the bank account used for transfers would be updated as well.
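
The moving scenario Mr. Kobayashi describes is essentially an event-driven, consent-gated notification flow. The sketch below is a toy model of that idea; the class, the hub design, and the service names are all hypothetical, not a description of any actual government system.

```python
class ResidentRecordHub:
    """Sketch of "report once" moving: registering at the destination
    municipality triggers withdrawal at the origin and, only for
    services the resident has consented to, address notifications."""

    def __init__(self):
        self.records = {}       # resident -> current municipality
        self.subscribers = {}   # resident -> services with consent
        self.log = []

    def consent(self, resident, service):
        # The resident explicitly opts each service in.
        self.subscribers.setdefault(resident, []).append(service)

    def register_move(self, resident, new_city):
        old_city = self.records.get(resident)
        self.records[resident] = new_city
        if old_city:
            # Withdrawal at the origin happens automatically.
            self.log.append(f"{old_city}: record withdrawn for {resident}")
        for service in self.subscribers.get(resident, []):
            self.log.append(f"{service}: address updated to {new_city}")

hub = ResidentRecordHub()
hub.records["Sato"] = "Municipality A"
hub.consent("Sato", "power company")
hub.consent("Sato", "bank")
hub.register_move("Sato", "Municipality B")
print(hub.log)
```

The design choice worth noting is that the consent list, not the hub, determines who gets notified—keeping the individual as the axis of judgment, as discussed above.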

Wakameda

Whether it is facial recognition or AI scoring, use cases in which one is identified or scored without one's knowledge will not be accepted. Demand is high for services in which the individual, not someone else, is the starting point—"I want to prove my skills and experience," "I want to receive services by face pass"—and I believe personal data will naturally come to be entrusted to such services.

Even with the same technology, the difference between my being the starting point and someone else doing it to me is crucial. As the polar opposite of the Chinese data utilization model Mr. Yamamoto cited at the beginning, "person-centric" will likely be accepted as our country's model of data utilization.

The key to this business model is not long risk-hedging boilerplate but a UI (user interface) that clearly conveys the purpose and risks of data utilization; it is human-centered design itself.

Kobayashi

I agree, but when proposing this to the world, I think it is better not to deliberately call it Japanese-style. "Person-centric" is the direction Japan should take, while China is "state-centric" and GAFA is "company-centric," right? So I think we can frame it by saying the starting points differ.

What is "Person-Centric"?

Yamamoto

I am also involved in various government meetings, and in them "human-centric" AI utilization is emphasized in various ways. But what this human-centric, "person-centric" approach specifically means has not yet been sufficiently worked out.

It sounds right as a slogan, but what is it concretely? For example, take the credit scoring mentioned at the beginning: is it "human-centric" to evaluate a person "accurately" and properly credit that person's efforts? I ask because evaluating "accurately" requires seamlessly collecting that person's behavioral records—which is, in the end, a surveillance society, is it not?

Then, is protecting that person's privacy "human-centric"? Or, by being "person-centric," does it mean that information the person does not want to release does not have to be given to the AI? In this way, if we emphasize the person's privacy and autonomy, holes will appear in the data, and the prediction accuracy of the AI will drop. Then, those who are good at "presentation" or "appearance" on the data might benefit, and people who have worked hard might lose out. In short, is it "human-centric" to sacrifice privacy and autonomy to give information seamlessly to AI and grasp that person accurately, or is it "human-centric" to sacrifice accuracy and emphasize privacy and autonomy?

At a recent OECD meeting, it was pointed out that privacy and fairness—that is, privacy and the accuracy of AI prediction—are actually in a trade-off relationship, and how to reconcile the two was discussed. Relatedly, whether AI should be allowed to read genetic information—which the person cannot correct or change—in order to increase prediction accuracy also becomes an issue.
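
The privacy–accuracy trade-off has a textbook illustration in the Laplace mechanism of differential privacy: a smaller privacy budget ε means stronger protection but noisier answers. The sketch below is a toy demonstration of that relationship; the count of 1,000 and the ε values are arbitrary.

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon):
    # Laplace mechanism: a count query has sensitivity 1, so noise
    # with scale 1/epsilon gives epsilon-differential privacy.
    return true_count + laplace_noise(1.0 / epsilon)

true_count = 1000
errors = {}
for eps in (0.1, 1.0, 10.0):
    trials = 2000
    errors[eps] = sum(abs(dp_count(true_count, eps) - true_count)
                      for _ in range(trials)) / trials
    print(f"epsilon={eps}: mean absolute error ~ {errors[eps]:.2f}")
```

Running this shows the error shrinking by roughly a factor of ten each time ε grows tenfold: stronger privacy is paid for directly in accuracy, which is the trade-off the OECD discussion was about.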

Also, regarding the slogan "inclusive," looking at the situation in China, I am not without doubts as to whether it will really turn out that way. I wonder if it might instead become an exclusive society. If Japan sets "person-centric" as the goal, I think it will be important to prepare safeguards to prevent that from happening.

Kobayashi

The idea that "holes will appear" is itself a view from a company-centric or state-centric perspective. If we go person-centric, then—as Mr. Yamamoto has said in various places—personal information is about building relationships of trust with counterparties while the person freely puts information in and takes it out. Even now, in fact, we socialize with people while saying things like "I graduated from such-and-such university" or "Actually, I once failed at this."

In the end, I think it's a matter of who has the authority over that input and output. If this starting point is the "person," I don't think it will be viewed from the perspective of whether something is missing or not. From the person's perspective, it's just that what they have provided is registered.

Yamamoto

I see. It's the idea of PDS (Personal Data Store: a mechanism where one manages one's own information and decides how it is utilized). Exactly, you provide what you want to be evaluated on and keep other things closed.
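
The PDS idea—disclose only what you choose, to whom you choose, and close off the rest—can be sketched minimally. The class design, attribute names, and recipients below are hypothetical, made up purely to illustrate per-recipient consent.

```python
class PersonalDataStore:
    """Minimal PDS sketch: the individual holds the data and grants
    per-recipient, per-attribute consent for disclosure."""

    def __init__(self):
        self._data = {}
        self._consent = {}  # recipient -> set of permitted attribute names

    def put(self, key, value):
        self._data[key] = value

    def grant(self, recipient, keys):
        self._consent.setdefault(recipient, set()).update(keys)

    def revoke(self, recipient, keys):
        self._consent.get(recipient, set()).difference_update(keys)

    def disclose(self, recipient):
        # Only attributes the person has consented to are released;
        # everything else stays closed.
        allowed = self._consent.get(recipient, set())
        return {k: v for k, v in self._data.items() if k in allowed}

pds = PersonalDataStore()
pds.put("university", "XYZ University")
pds.put("criminal_record", "none disclosed")
pds.grant("employer", {"university"})
print(pds.disclose("employer"))  # only the granted attribute is released
```

Note that the store itself holds everything; what varies per counterparty is the consent set—"you provide what you want to be evaluated on and keep other things closed," as above.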

Wakameda

Does "person-centric" not only refer to the starting point of data exchange, but also to mechanisms from the consumer's perspective, such as a process where a company evaluates you, a confirmation step is inserted, and if you are satisfied with it, you receive the service?

Yamamoto

Can that also work for recruitment and credit (the granting of credit by financial institutions and the like)? In recruitment or credit, if the system is one where you provide only what you want seen, the AI's prediction accuracy will surely drop—for example, if someone does not want to disclose a criminal record.

Kobayashi

I believe credit and recruitment should be considered separately. First, with credit, the individual wants to receive a service, so in exchange, they are asked to provide credit information to the company. Therefore, if you want to gain a certain level of credit, even now, you actually provide information such as how much debt you have or your family structure.

Also, regarding recruitment, there seems to be a misunderstanding that things become special the moment AI gets involved. Humans, too, inevitably carry biases from the environments they grew up in; that is why recruitment interviews are conducted by multiple people. If you evaluate with just one AI, bias will certainly emerge. But if you line up various AIs with different biases, I think it actually becomes much the same as what humans do.
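
Mr. Kobayashi's "line up AIs with different biases" resembles ensemble methods in machine learning, where averaging differently biased models cancels individual bias. The sketch below is a toy illustration with made-up biases, noise, and thresholds—not a claim about any real hiring system.

```python
import random

random.seed(7)

def biased_reviewer(bias):
    # Each "AI reviewer" systematically over- or under-rates
    # the same underlying quality, plus some noise.
    def review(true_quality):
        return true_quality + bias + random.gauss(0, 0.5)
    return review

# A panel of five reviewers with different fixed biases.
panel = [biased_reviewer(b) for b in (-1.0, -0.3, 0.2, 0.4, 0.7)]

def panel_decision(true_quality, threshold=5.0):
    # Majority vote across differently biased reviewers,
    # analogous to interviews conducted by multiple people.
    votes = sum(r(true_quality) >= threshold for r in panel)
    return votes > len(panel) / 2

hire_strong = panel_decision(6.5)  # clearly above the bar
hire_weak = panel_decision(3.0)    # clearly below the bar
print(hire_strong, hire_weak)
```

No single reviewer is trustworthy on its own (the first underrates everyone by a full point), but the majority vote of the panel tracks the underlying quality—the same logic as multi-interviewer hiring.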

Yamamoto

So the concrete form of being "human-centric" changes depending on the field. The last point is a frequently pointed out issue: what is the difference between traditional recruitment and recruitment using AI?

Arai

I think there are parts where the level of acceptance differs slightly between humans and AI, and there are also cases where humans have been doing the same thing all along and it has merely been replaced by AI. Perhaps misunderstanding arises from the preconception that "AI is perfect."

Kobayashi

Expectations are too high, aren't they?

Arai

In the information processing systems referred to as AI, we can incorporate rules to be followed and evaluate prediction accuracy, but the fear of or backlash against AI may stem from its being misunderstood as something autonomous or superhuman. The immaturity of testing methodologies for AI as a product—and the difficulty the general public has in understanding them—is also a challenge for the development side.

If the human side is clear about wanting to use "these kinds of judgment criteria," I believe we can design AI to match that. So, I think it would be good if people from as many different fields as possible work on this with a common understanding.

The Use of Scoring and Accountability

Wakameda

There is technology that captures what a person looks at through eye movement—for example, identifying which books on a bookshelf they showed interest in—to infer their preferences. Knowing which books someone lingered over at a bookstore might, in some cases, reveal their thoughts or beliefs. A quick glance might reflect an inner self that even the person is unaware of, making the data quite sensitive depending on how it is used. What if that were linked to your ID?

However, if it's technology that captures the eye movements of a train driver to see where they are focusing while driving, it leads to solving social issues, such as visualizing skills that should be passed on or preventing accidents caused by inattention. In other words, doesn't it depend on the definition of requirements based on purpose and ethics?

Scoring is the same. For example, if we score driving ability and can see how it differs from when someone was younger, or how it differs between yesterday and today, we can provide driving assistance accordingly.

It would be wonderful if, instead of a simple judgment like "We'll take away your license if you get dementia," the system accurately captured a person's driving history over a long period and judged, "You seem tired today, so let me compensate for this part."
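
The driving-assistance idea Mr. Wakameda sketches—judging against a person's own long-term record rather than a categorical rule—can be illustrated with a small example. The score history, thresholds, and assistance labels below are invented for illustration.

```python
from statistics import mean, stdev

def assistance_level(history, today):
    """Compare today's driving score with the driver's own baseline,
    not with an age- or diagnosis-based category.

    history: the person's long-term record of scores (>= 2 values).
    """
    baseline, spread = mean(history), stdev(history)
    if today < baseline - 2 * spread:
        # Far below this person's normal range: step in strongly.
        return "compensate strongly"
    if today < baseline - spread:
        return "compensate lightly"
    return "no assistance needed"

history = [82, 85, 80, 84, 83, 81, 86, 84]  # long-term personal record
print(assistance_level(history, today=75))  # well below personal baseline
```

The same score of 75 might be perfectly normal for another driver with a different history—which is the point: the judgment is personalized, "you seem tired today," rather than "you are over 75, so your license is revoked."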

Yamamoto

I see. That may be exactly what "human-centric" use looks like: instead of excluding people categorically and uniformly by age or diagnosis, it predicts the specific characteristics and tendencies of the individual and responds in a personalized way. This seems like an implementation of AI that contributes to the "respect for the individual" mentioned in the Constitution. Of course, it requires keeping that individual's data over a long period, which may involve a trade-off with privacy. But if the scope of collection is explained, and measures are taken to prevent use for other purposes, that concern can be mitigated to some extent.

In corporate recruitment as well, people who were previously excluded categorically due to human bias might actually be included by using AI to diversify inputs. Take people with disabilities: in traditional human-led recruitment, stereotyped images can push the disability to the fore and make hiring difficult, but with AI, the element of disability can be objectified and put in perspective.

The challenge is accountability, I suppose. There will still be people who are not hired, and how can we explain it to them? Even in the era when humans did the hiring, the reasons for rejection were basically not explained, but because the input information used for evaluation was limited, it was a world where it was "unspoken but understood even without explanation."

However, as we move to AI and input information becomes diversified, it will become unclear which of one's actions led to the rejection. In one company's recruitment app, they even collect the finger movements of applicants when they answer questions. When that happens, the meaning of "not explaining" might change significantly from before. In the case of recruitment using AI, input information and algorithms become a black box, so those who are rejected are left at a loss, not knowing the reason. It is even possible for them to be fixed in the lower strata of society without the opportunity to climb back up. This is the so-called problem of "virtual slums."

When we talk about realizing an inclusive society through AI, I think a certain level of accountability is necessary. How is that looking from a technical standpoint?

Arai

For example, there is research on how to explain deep learning or complex models, but one of the challenges there is that what humans can understand when explaining is limited, so information has to be dropped.

Because of that, a gap emerges between the explanation and the model that is actually running, or conversely, the model's accuracy ends up being lowered. When that happens, can a for-profit company, for example, tolerate the lower accuracy? And is an explanation derived from a less accurate model even a correct explanation?

Also, I think explanations can be made from various perspectives, but it's a matter of whether humans will accept them. Even if explained, it's possible that it contradicts the recipient's knowledge or intentions. In that case, can they accept or utilize the result? I think that is quite a difficult point.
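Arai's point about explanations losing information can be made concrete with a toy example: a complex model is "explained" by a simpler surrogate, and the share of decisions on which the two disagree is exactly the gap he describes. Everything below is a hypothetical sketch; neither model stands for any real system.

```python
# Toy illustration of the explanation/fidelity trade-off: a simple
# "surrogate" model approximates a complex one, and the fraction of
# decisions on which the two disagree is the information lost by
# explaining with the simpler model.

def complex_model(x):
    """Stand-in for an opaque model: a nonlinear pass/fail decision
    with an interaction term between the two inputs."""
    score = 0.6 * x[0] + 0.4 * x[1] + (0.5 if x[0] * x[1] > 0.3 else 0.0)
    return 1 if score > 0.7 else 0

def surrogate_model(x):
    """'Explainable' approximation: same linear weights, but the
    interaction term is dropped so the weights can be shown to a human."""
    return 1 if 0.6 * x[0] + 0.4 * x[1] > 0.7 else 0

def fidelity(points):
    """Share of inputs on which the surrogate matches the real model."""
    agree = sum(complex_model(x) == surrogate_model(x) for x in points)
    return agree / len(points)

grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
print(f"surrogate fidelity: {fidelity(grid):.1%}")
# Below 100%: the linear "explanation" silently misses every decision
# driven by the interaction term it omitted.
```

The explanation shown to the rejected applicant is the surrogate's weights, yet some rejections were actually driven by the term the surrogate cannot express, which is the gap between "the explanation" and "what is actually running."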

Kobayashi

The world of politics also operates while balancing emotion and logic—precisely sentiment and reason. Whether you can get people to say, "Since that person says so, let's give it a try," is where a politician's ability is tested. When conflict occurs, can you get them to trust you and reach a compromise? I think the sense of conviction in finally persuading or encouraging someone is something that is difficult for anyone but a human to provide.

So, returning to the question of whether humans can accept AI's judgments: AI can present highly accurate analysis results based on big data such as images, but that alone can be hard to be convinced by or to accept. Having a specialist explain it (for example, a doctor in a medical setting, or an HR representative for interview results) is precisely how to manage sentiment and reason, and how technology and humans should interact.

Yamamoto

I see. The EU's GDPR (General Data Protection Regulation) also states that when hiring or loan decisions are made solely by AI judgments, the right to obtain human intervention and the right to receive an explanation of the significant parts of the judgment must be guaranteed.

To prevent an exclusionary society caused by AI, how Japan incorporates this EU "right to an explanation" seems like it will be an important point.

Is "AI Better"!?

Arai

Since the amount of information that needs to be handled is increasing, I believe there are scenes where using information technology is inevitable. In areas like medical diagnostic imaging, data is increasing too much, and the need for support on the ground through automatic data processing has been pointed out.

My view is that it's a good thing to incorporate AI that mimics professionals as diagnostic support. However, I believe it is the doctor's job to properly bring that to the patient.

Wakameda

I found it insightful that in the world of politics, the sense of trust and conviction—the idea that "if that person says so, it can't be helped"—is a measure of one's ability.

Similarly, as AI permeates society, I want to aim for a situation where the sense of trust and conviction in a digital society—such as "it's an AI service provided by NEC, so it should be fine"—becomes a differentiating factor as a result.

Yamamoto

So fostering that kind of trust will become a form of corporate value.

Wakameda

Just the other day, we held a symposium featuring the manga artist Mayumi Kurata, and she said she was an "AI underdog." When asked to speak about utopia and dystopia regarding AI without any preconditions, she commented, "There are all kinds of doctors, and I'm not happy that diagnosis results differ depending on whether they are a great doctor or not. I have expectations for AI that won't miss even the smallest lesion and will give the same appropriate diagnosis to everyone."

Also, regarding the use of AI in corporate recruitment and such, she said, "If it unearths possibilities that the person themselves hadn't even thought of, wouldn't that be a very good thing?"

Yamamoto

I also asked my seminar students recently whether they would want to be hired by AI or by a human when they go job hunting, and it was split exactly half and half.

A worldview is possible in the future where being judged by AI is actually more trustworthy, where AI sees you more accurately than a human would.

Wakameda

But if it's the same algorithm, it's scary because it seems like every company will be full of the same stereotypical people.

Kobayashi

I think that's the kind of thing companies will end up considering. When I was in charge of HR recruitment during my time at Docomo, we would shift our goals—like wanting this kind of talent last year and that kind of talent this year—and change the interviewers. If we didn't, bias would creep in.

Arai

That's right. If we were to do that with AI, I think it could be done by designing it with instructions like, "set diversity to this level."
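As a purely hypothetical sketch of the "set diversity to this level" design Arai mentions, the greedy selector below trades candidate score against similarity to those already chosen, with `diversity` as the tunable dial. The candidate data, attributes, and scoring are all invented for illustration.

```python
# Sketch of a tunable "diversity dial" for AI-assisted selection:
# candidates are picked greedily by score, minus a penalty for
# resembling those already chosen. diversity=0 means pure score
# ranking; higher values trade raw score for a more varied cohort.

def similarity(a, b):
    """Fraction of shared non-score attributes between two profiles."""
    shared = sum(1 for key in a if key != "score" and a[key] == b[key])
    return shared / (len(a) - 1)

def select(candidates, k, diversity=0.0):
    chosen = []
    pool = list(candidates)
    while pool and len(chosen) < k:
        def adjusted(c):
            if not chosen:
                return c["score"]
            # Penalize resemblance to the most similar pick so far.
            penalty = max(similarity(c, s) for s in chosen)
            return c["score"] - diversity * penalty
        best = max(pool, key=adjusted)
        chosen.append(best)
        pool.remove(best)
    return chosen

candidates = [
    {"score": 0.90, "background": "engineering", "region": "tokyo"},
    {"score": 0.88, "background": "engineering", "region": "tokyo"},
    {"score": 0.80, "background": "arts", "region": "osaka"},
]
# diversity=0.0 picks the two top scorers (both engineering);
# diversity=0.5 swaps the second engineer for the arts candidate.
print([c["background"] for c in select(candidates, 2, diversity=0.0)])
print([c["background"] for c in select(candidates, 2, diversity=0.5)])
```

The same mechanism also makes Yamamoto's next question concrete: the cohort's composition now depends on a number someone chose, so the dial itself becomes the thing that needs justification.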

Yamamoto

If AI judgments work reasonably well by setting diversity parameters, wouldn't that lead to arguments that decision-making in politics or trials should also be replaced by AI? In that case, what is the meaning of "human-centric"?

Arai

Regarding trusting AI, humans are relatively lazy, and I think there's a stance where if things are handled well for them, they'll make decisions without checking the details of the explanation.

When obtaining consent, even if you show them a privacy policy or provide information about data use, they might just say they don't understand difficult topics. People listen readily to things that suit them, but inconvenient things don't easily sink in. Human decision-making is quite haphazard and ambiguous. How should we deal with this fuzzy side of human decision-making?

Yamamoto

One could say it's also human to think, "AI is more correct, and leaving it to AI eliminates troublesome things, allowing me to live more comfortably." That's not the Arendtian "human" with a public spirit, but it's still a "human." Respecting others, debating, thinking, going to vote, and maintaining democracy is troublesome.

In that sense, an automated AI society like China's might also be called "human-centric."

Kobayashi

I have described the last 30 years as the age of Internet civilization, and the basic principle of that civilization is, after all, freedom. We want to be free.

At the same time, I think it's human nature to want to avoid decisions one doesn't have to make. People still want to make the decisions that truly matter to them for themselves, though. As long as that remains, I think we'll be fine.

Yamamoto

Will that really remain? It's something we "should" do, but will we really "do" it?

Arai

In the early days of the Internet, there was certainly an atmosphere that things had become convenient with the creation of message boards and social networks, but after that, divisions on the web emerged and negative aspects began to show.

So, since many problems have become visible, I hope that to enjoy freedom, we can make things work by skillfully incorporating mechanisms for problem-solving that are different from the original design philosophy.

How to Design a Free Public Space

Yamamoto

Mr. Arai just mentioned "design." To maintain freedom, won't some kind of design become necessary?

Until now, freedom has probably centered on a laissez-faire, or passive, concept of freedom that excludes interference from others, whether through government regulation or technical regulation. But to maintain a democratic, open society and active freedom, elements of design-oriented intervention might become important.

If the path Japan should aim for is not the Chinese one, what kind of "design" or "mechanisms" will be necessary?

Wakameda

Before we get to the technical discussion: while technologies like the Internet and AI bring convenience, the new challenges they cause are, from the perspective of sustainability, a new social responsibility for companies. Looking at the past, the invention of the automobile brought about social issues like pollution and traffic accidents, but companies that took early action against expected risks, such as developing hybrid cars, were instead recognized for their social contribution and could even launch new businesses from it.

Translated to the digital age, when companies invest in technologies that touch on privacy, such as AI inference of a person's inner thoughts or identification of individuals, they can gain consumer trust and new business opportunities by also investing in technologies that protect privacy. In other words, it means designing services where individuals, companies, and society are all better off.

To give a simple example, suppose there is technology that can find and distinguish people with high precision, used to sense and analyze the state of products on supermarket shelves. It could be designed to delete the data the moment a person is identified, making it, so to speak, a sensor that doesn't recognize people and only analyzes the state of the shelves.
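A minimal sketch of how such a sensor could be structured, assuming a hypothetical detector and a toy frame format: person data is discarded inside the analysis function, and only aggregate shelf state is ever returned.

```python
# Sketch of a "sensor that doesn't recognize people": person regions are
# detected only so they can be discarded on the spot, and the only data
# leaving the function is the shelf state. The detector and the frame
# format (a grid of labelled cells) are hypothetical placeholders.

def detect_person_regions(frame):
    """Placeholder person detector: flags cells labelled 'person'."""
    return {pos for pos, label in frame.items() if label == "person"}

def analyze_shelf(frame):
    """Return shelf stock counts; person data never leaves this function."""
    person_regions = detect_person_regions(frame)
    # Privacy by design: blank out people before any further analysis,
    # and keep no record of where (or whether) anyone was detected.
    redacted = {pos: label for pos, label in frame.items()
                if pos not in person_regions}
    stock = {}
    for label in redacted.values():
        if label != "empty":
            stock[label] = stock.get(label, 0) + 1
    return stock  # aggregate only: no identity, no location of people

frame = {(0, 0): "cereal", (0, 1): "person", (1, 0): "cereal", (1, 1): "empty"}
print(analyze_shelf(frame))  # {'cereal': 2}
```

The design choice is that redaction happens before any downstream processing, so even the rest of the same program never sees a person, which is the "delete on identification" property described above.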

That is exactly the practice of the Privacy by Design concept. Furthermore, an approach called "Human Rights by Design," which looks at human rights as a whole beyond just privacy, has been proposed. I would especially like management to understand and practice this.

Ideally, companies that make such forward-looking investments in technology and service development, beyond immediate economic value, will be evaluated internationally.

Yamamoto

I completely agree. When Japan claims to be "human-centric" in a way that is different from China, the US, or Europe, I think it's important to actively promote the idea that "this technology will realize this kind of inclusive society," rather than just how to avoid the risk of public backlash.

Is there technology to realize a society that is more inclusive and ensures diversity than ever before?

Arai

Technically, the feasibility is sufficient, but whether society will accept it is also important.

Yamamoto

Does that mean companies won't use it after all?

Arai

I think that's a possibility. After all, companies are in trouble if things don't sell, so they have to listen to user demands.

AI can also adjust decision-making, but how to live with that is a problem on the human side. I think it would be good if discussions regarding the design of AI judgments could trigger further development of society's response.

Yamamoto

So in the end, it depends on the maturity of society. Changing the criteria for social evaluation of companies will also become important in the future.

However, if society hasn't caught up to that point at this stage, the nature of legal control also becomes an issue. Mr. Kobayashi, what are your thoughts on that?

Kobayashi

Before deciding what to do with AI, we need to decide what to do with Japan's future.

Japan is currently facing three major changes. First is population decline. Second is the arrival of the 100-year life era. Third is the overwhelming progress of technology. Since these are unavoidable changes, I think it's very important how we accept them and turn them into opportunities.

Based on this premise, regarding the use of AI, which is a symbol of technological progress, there is actually an opportunity specifically for Japan.

There is often talk about "jobs being taken by AI." Countries like China, India, and the US, with their large young populations, face the risk that AI-driven unemployment becomes a source of social unrest, but Japan, with its declining population, doesn't have that worry; rather, promoting AI adoption is a social necessity.

However, as the word "okami" (the authorities) still exists in Japan, there is a psychological difficulty in moving unless the administration sets the rules.

That's why, regarding the nature of domestic regulation, it is desirable for the political and administrative sides to quickly establish rules with an eye toward the utilization of technology. We should take action as soon as possible using soft methods like guidelines and directives without waiting for legislation.

At the same time, we need to make "human-centric" data utilization rules the global standard so that the world is not dominated by a data economy where only China and GAFA have the advantage. To do that, we need to involve global multi-stakeholders, which is very difficult, but it's an opportunity precisely because there are no international standards now.

A Forum for Discussion on "Human-Centricity"

Yamamoto

I see. This university has organizations like KGRI that are responsible for promoting the integration of humanities and sciences and globalization. Using such organizations as a starting point, the university might be able to help with global index-making.

Yasuoka

Right now, AI is probably developing faster than everyone expects. Even at KGRI, which is trying to tackle this, it is important to create a place within the university where people working on various things can gather, discuss from multiple angles how to serve the next wave of research and the next society, and present their ideas or hold dialogues. The university is a place where free and active discussion is allowed.

Kobayashi

As a platform, a provider of a forum, a university is a place where things can be said frankly and freely, and where academic backing can be provided. So I think it's very important to have working people come back to the university more and have various discussions.

However, just discussing won't change the world. We need to review the problems at our feet and produce concrete results.

For example, Japan has 1,718 municipalities (cities, towns, and villages) that hold a lot of important data, but 1,718 different information systems are running, and the formats for administrative paperwork also come in 1,718 varieties. Even if we want to utilize the data, it's a very difficult situation. Furthermore, in addition to the national Act on the Protection of Personal Information, there are over 1,718 municipal personal information protection ordinances governing the handling of that data.

I believe that solving this situation quickly and changing the scenery in front of us so that everyone's awareness changes and they start to act—thinking, "Something about our world has changed; what should we do to make more use of this?"—will truly lead to Japan moving forward.

Yamamoto

The "Next-Generation Medical Infrastructure Act" was enacted with the aim of turning medical information into big data, collecting and linking it, and using it for research in medical sciences. However, if the collection and storage systems and file formats differ at each medical institution, advanced information linkage becomes difficult. This is similar to the problem Mr. Kobayashi pointed out; standardization of systems is urgently needed. However, this part clashes with competition between vendors. In Japan, the relationship between the layer that standardizes and the layer that competes has not yet been sufficiently organized.

I also completely agree with the "human-centric" idea, but I think more attention needs to be paid to the fact that several trade-off relationships will emerge. The current relationship between standardization and competition is one example. Under China's "Digital Leninism," standardization is powerfully promoted by the government, so data gathers at a tremendous pace. That won't happen in Japan.

Besides that, as I have mentioned, privacy, AI prediction accuracy (correctness), transparency, and efficiency should all stand in trade-off relationships. I think it's necessary to bring "human-centricity" down to concrete discussions and meticulously debate these trade-off relationships.

For example, to increase AI prediction accuracy, one must be prepared to discard privacy to some extent. I feel that this kind of realistic discussion of value balancing does not yet exist in Japan.

Arai

There are stories of users providing personal data in exchange for free services or coupons. While individuals are free to disclose information about themselves, they may not fully understand how that data will be used or what the trade-offs are regarding potential privacy violations.

Wakameda

However, I don't think it's necessarily all bad, and it might not all be a trade-off. Regarding scoring, there could be services aiming for a "plus-sum" outcome—where people who couldn't raise funds based solely on traditional financial information can gain credit and new opportunities based on non-financial information like life logs, or discover potential they hadn't even noticed themselves.

And as our country experiences a super-aging society ahead of the rest of the world—symbolized by the phrase "the 100-year life era"—there is no doubt that "human-centric" utilization of personal data will become extremely important.

As stated in Society 5.0, if we are to be "human-centered," then in addition to deliberations by the government and corporations, more active participation from citizens is highly desirable.

To Realize a Fair Society

Yamamoto

Creating something like a basic law to a certain extent would be the best way to stimulate national debate. Currently, we are at the stage of publishing principles for AI utilization, and there is no movement toward legislation yet, is there?

Kobayashi

It is a matter of sequence; first, utilization progresses, and once an image of the new society becomes visible to everyone, the necessity of establishing rules will be discussed. I feel that we haven't shared that vision of society yet. In that regard, Yukichi Fukuzawa is a wonderful example, as he used illustrations in "Things Western (Seiyō Jijō)" to represent the cutting-edge technology of the time—such as "steam," "medicine," "electricity," and "telegraphy"—to share a vision of the future society with the Japanese people.

I believe in technology and consider it the best tool for realizing a fair society. Thanks to technology, track records are "visualized," effort is evaluated, and people can participate fairly in society regardless of where they live or whether they face disabilities or hardships.

For example, there are 1.2 million Japanese nationals living abroad. The voter turnout for these people in elections is a mere 2%. This is because it is very difficult to travel to the Japanese embassies or consulates in each country. We are currently working on legal frameworks to utilize technology so that online voting can be possible as early as the House of Councillors election four years from now.

Yasuoka

Most people probably don't really understand how they can use technology.

Kobayashi

That may be true. What past political administrations must reflect on is that they were not "human-centric."

We thought we were communicating by saying, "Citizens, we have made rules. Here you go," but originally, if we could have conveyed the background and the vision of the society we are aiming for—explaining that "this rule was actually made so that your life becomes like this"—it would be easier for people to have an image of their own way of life in the AI era.

Yamamoto

I believe AI is a technology with the potential to truly enable a fair and inclusive society depending on how it is used. The challenge is how to specifically demonstrate and bring out that potential without turning a blind eye to the risks.

While traditional Japanese society has championed "respect for the individual" and "equality" in the Constitution, in reality, it has not been able to fully realize these ideals. With the use of AI under the new era name, these ideals might finally be realized. We should acknowledge this honestly. However, unless we broadly discuss the specific and realistic direction in a way that integrates the humanities and sciences, Japanese society could turn into the exact opposite: an exclusionary, pre-established harmony-style surveillance society.

I hope this roundtable discussion serves as an opportunity for readers to realize that we are currently at that turning point.

Thank you all for the lively discussion today.

(Recorded on December 17, 2018)

*Affiliations and titles are as of the time this magazine was published.