Writer Profile

Fumio Shimpo
Faculty of Policy Management Professor
Fumio Shimpo
Faculty of Policy Management Professor
2023/06/05
1. Generative AI and Human Intelligence
John Stuart Mill argues that "the highest intellect which man can possibly acquire, is not to know one thing only, but to combine a minute knowledge of one or a few things with a general knowledge of many things" (Inaugural Address Delivered to the University of St. Andrews, J.S. Mill, translated by Issei Takeuchi, Iwanami Shoten, 2011, p. 28).
Many people who have used ChatGPT, a text-generating AI, are likely amazed by the naturalness of the responses, the smooth interaction, and the erudite content. However, while generative AI may appear to have acquired the highest intelligence, it is actually merely outputting answers by combining information related to the question from a large language model constructed by learning vast amounts of text data. In other words, generative AI is different from what Mill calls the highest intelligence.
Can human intelligence be enhanced by delegating the task of "combining general knowledge of many things" to generative AI, or will human intelligence decline as it is surpassed by AI's knowledge? The current situation, where people are becoming suspicious of generative AI due to its many uncertain and undetermined elements, has resulted in the emergence of sudden arguments for AI regulation.
2. From AI Boom to Practical Application
AI research and development has reached the present day through three booms: the 1950s, the 1980s, and since 2011. Until now, typical examples of AI utilization were those where AI was "embedded" in products or services, such as AI-equipped home appliances like vacuum cleaners, smart speakers, and self-driving cars, and opportunities to use "specialized AI" targeting specific fields or domains were common.
Generative AI can learn from large amounts of data to generate new data, and it demonstrates its power in generating diverse content such as images/videos, audio/music, text generation, and translation. Although it is "highly versatile AI," it is not what is called "Artificial General Intelligence (AGI)," but it gives a premonition of the dawn of AGI capable of demonstrating abilities closer to humans, and it cannot be denied that we have opened a Pandora's box toward its realization.
In previous AI booms, AI that recognized, identified, and inferred input information, such as voice input and image recognition, was mainstream. For example, the expected role of AI was to output accurate text matching the content of input voice or to identify and extract specific individuals' faces from a vast amount of images. On the other hand, generative AI also outputs text, but it generates diverse content as if a human were thinking or creating, depending on the input information or instructions. Instead of searching for a specific image, it creates a new one; instead of simply transcribing, it writes an essay.
Even when the practicality of AI was limited, fictional or virtual threats were often emphasized, such as AI developing in the future to become a threat to humanity like the Terminator in movies. The versatility of generative AI, which is moving beyond such a stage, will undoubtedly be a turning point for recognizing specific dangers and threats, along with the realization of AI's staggering usefulness.
3. Versatility of Generative AI and the Abstract Nature of Risk
Risks surrounding AI research, development, and utilization have been discussed meticulously both domestically and internationally, including by the Ministry of Internal Affairs and Communications' Conference on AI Network Society. The "OECD Council Recommendation on Artificial Intelligence," adopted by the Ministerial Council of the Organisation for Economic Co-operation and Development (OECD) in May 2019, aims to promote AI innovation and trust by promoting responsible management of trustworthy AI while respecting human rights and democratic values.
Due to the impact of generative AI such as ChatGPT, there is a growing illusion that the accumulation of previous discussions is useless because the specific risks associated with its versatility cannot be foreseen. However, it should be noted that because of the abstract nature of those risks, we simply have not yet been able to evaluate how useful the discussions to date actually are.
Despite the fact that necessary principles for AI research, development, and utilization have been considered and already proposed, resistance to legal regulation and arguments that regulation is unnecessary are about to be repeated again with the emergence of new technology and the promotion of innovation. We should not stop at simplistic criticisms that "regulation = evil" and that it hinders the promotion of innovation, but rather consider the "regulation, correction, and discipline" that is inherently and essentially necessary for the use of generative AI, which has been avoided until now.
4. Generative AI and Legal Issues
Regarding legal issues surrounding generative AI, it is true that there are aspects that cannot be evaluated at this point, such as the extent to which unexpected problems that cannot be handled by previous discussions may occur, as it depends on the future expansion of generative AI functions and the invention of new usage methods. However, it is necessary to understand to some extent the legal issues expected when considering future specific regulations and discipline. Although this is merely an illustrative and hypothetical list, we will likely have to consider the following problems.
Since the spread of the Internet, every time a new technology or communication method has appeared in the field of information law, issues surrounding intellectual property rights, including the use of copyrighted works, and the rights to personal information and privacy have been discussed first. Even when a new field called robot law emerged from information law, discussions on intellectual property and personal information were again the first to be brought to the table. Since the discussion on legal issues surrounding generative AI is showing signs of repeating the same process as previous discussions, I am reaffirming that issues of intellectual property and personal information are unavoidable when emerging technologies appear.
As a starting point for categorical trial and error regarding the use of generative AI and legal issues, I would like to list the following points.
(1) Impact on Democracy
(a) Impact on decision-making and judgment in the structures of governance (legislative, judicial, administrative), (b) Impact on elections and candidates for public office, and problems associated with political use.
In making decisions and judgments, it is necessary to research precedents and vast amounts of necessary information, so the use of AI is clearly useful in the legislative, judicial, and administrative fields. However, if AI becomes involved even in final judgments, some might think it is acceptable to position it similarly to a human advisory body. However, because humans cannot judge whether an AI's judgment is correct, there is a risk that we will not even be able to judge whether a judgment on a matter exceeding human wisdom is correct.
(2) Impact on Expressive Activities
(a) Changes in communication and expressive activities, (b) Bias, discrimination, and ensuring fairness and justice in expressive activities, (c) Impact on the right to know, (d) Impact on intellectual activity itself (need to distinguish between intelligence, knowledge, and insight), (e) Cessation of expressive activities and thinking due to dependence on generative AI.
By simply querying generative AI, one can not only pick up necessary information from vast amounts of data but also output information that complements human intellectual activity. Therefore, we will come to rely on AI not only for information searches but also for analysis, organizing points of contention, and various creative activities. As a result, with the future spread of generative AI, will our intelligence be enhanced by its use, or will we fall into a state of suspended thinking and lose our intelligence as a result of excessive dependence? It is unlikely to head in one direction uniformly; rather, it is thought that the direction will change depending on the literacy, usage methods, and usage awareness of those using generative AI.
The use of generative AI will improve accuracy as the precision of output results increases, and whether one can obtain the expected answer will also depend on the ability to ask questions (prompting) in a way that makes it easy for the AI to derive that answer. In other words, in addition to existing information literacy, communication skills with AI will be required.
(3) Protection of Intellectual Property
(a) How to protect creations (outputs) and products (information) by generative AI, (b) Issues related to intellectual property rights, including copyrights, trademark rights, and design rights, in the use of generative AI.
The book by J.S. Mill introduced at the beginning questions "university education," and the use of generative AI will bring about major changes in the way education is conducted at universities. For example, considering the problem of plagiarism in report assignments, how to judge cases such as whether writing using generative AI constitutes plagiarism (the issue of using generative AI itself), when a student accused of plagiarism claims they used generative AI even though they did not (shifting responsibility to generative AI), or when a student is pointed out for plagiarism after using text from reference materials obtained from a friend without knowing it was written by generative AI (illegal/unjust acts by a third party in good faith) will be issues for future consideration.
(4) Protection of Personal Moral Interests (Personal Information, Privacy, Portrait Rights, etc.)
(a) Changes in the environment for handling personal information (difficulty of data protection), (b) How to protect personal information handled without the individual's knowledge or recognition, (c) The increased possibility of inferring personal information requiring special care after the fact even if sensitive information was not acquired. Discussions requiring consideration for the protection of personal moral interests, including personal information protection and the guarantee of privacy rights, are wide-ranging.
As a noteworthy judgment, the Italian data protection authority announced a ban on the use of ChatGPT due to violations of the EU's GDPR (General Data Protection Regulation). After OpenAI, the developer, responded with measures to ensure transparency and protect rights, the decision was lifted on April 28, 2023. The measures announced by OpenAI regarding opt-outs (stopping the use of personal data) include confirming that users have the right to perform that procedure, describing the necessary explanations in the privacy policy, and introducing a form to request opt-outs to allow exclusion from training data and chat history. On the other hand, regarding ensuring accuracy, it simply states that it is technically impossible to correct inaccurate information and explains that users should understand and use ChatGPT knowing that the accuracy of personal information in its responses cannot be guaranteed.
Furthermore, regarding personal information entered by users, it states that handling will be based on legitimate interests along with opt-outs. It can be said that part of the matters to be considered as data protection issues related to generative AI has become clear.
(5) Identifying and Dealing with Illegal and Unjust Use
Setting boundaries for the appropriate use of generative AI will become difficult. We must carefully consider how to prevent the use of generative AI as a tool for promoting or aiding not only crimes and other illegal acts but also unjust acts.
Dealing with "generative AI-utilizing crimes/unjust acts," where the act of using generative AI itself is illegal, and "generative AI-related crimes/unjust acts," where generative AI is used as a support tool to execute existing illegal acts, will be a challenge. Examples of the former include attack methods like "prompt injection," where malicious prompts (instructional text) are entered into generative AI for unauthorized use. The latter includes acts such as receiving guidance on creating computer viruses or manufacturing explosives, or using generative AI to execute existing crimes.
5. Points to Note in Efforts Toward AI Regulation
In considering new regulations, it is naturally expected that in the future, the nature of those regulations will be considered by referring to answers obtained by entering questions into generative AI. There is no room for doubt that it is useful to seek necessary knowledge for considering reliable and effective regulations by having AI comprehensively and exhaustively learn past regulatory cases and their effects. However, when an era arrives where regulations are considered centered on "regulation of AI, by AI, for AI," and it is no longer permitted to present counter-proposals to the optimal solutions derived by AI, a difficult future awaits humanity.
Perhaps anticipating such a sense of crisis, the G7 Digital and Tech Ministers' Meeting "Ministerial Declaration" (April 30, 2023) presented items for consideration to discuss the direction of AI regulation, such as promoting global interoperability of AI governance and establishing a forum to consider generative AI as part of "promoting responsible AI and AI governance." However, what became clear here is that the same discussions as previous approaches to regulation are being repeated. For example, Japan has been consistent in its direction that response through soft law, such as guidelines and self-regulation, is desirable rather than responding through strict legal regulations. At the opposite pole is the EU, which shows no sign of yielding its policy that response should be through strict regulation.
6. EU AI Regulation and the Brussels Effect
In a situation where countries are hesitant about AI regulation, only the EU has proactively specified its regulatory method. The "Proposal for a Regulation on Artificial Intelligence (Artificial Intelligence Act)", an EU bill to regulate the use of AI, was published by the European Commission on April 21, 2021. It sets usage regulations, including bans, according to the risk of the AI system. It aims to establish new legislation by expanding product safety regulatory obligations—currently imposed on manufacturers and importers of products sold (placed on the market) in the EU market—to AI systems classified as high-risk, making them subject to the "CE marking" (a mark indicating that the product meets EU standards), and building a conformity assessment and third-party certification system for that purpose.
There is a theory called the "Brussels Effect," which refers to the phenomenon where regulations proposed by the EU substantially influence global rule-making (The Brussels Effect: How the European Union Rules the World, Anu Bradford, supervised translation by Katsuhiro Shoji, Hakusuisha, 2022). It refers to a mechanism that can exert regulatory power in the market when five conditions are met: (1) market size, (2) regulatory capacity, (3) stringent rules, (4) inelastic targets, and (5) indivisibility. The EU's new AI regulation is expected to be a field that exerts a literal Brussels effect toward future AI research, development, and social implementation.
The objectives of the AI regulation shown in the AI Act proposal are: (a) harmonized rules for placing on the market, putting into service, and using AI systems in the EU; (b) prohibition of specific AI practices; (c) requirements and obligations for high-risk AI systems; (d) ensuring the consistency of transparency rules for AI systems intended to interact with natural persons, emotion recognition systems, biometric categorization systems, and AI systems used to generate or manipulate image, audio, or video content; and (e) market monitoring and surveillance. Regarding generative AI, (d) stipulates ensuring the transparency of AI systems. However, for text-generating AI, consideration is being given to including it in (c) high-risk AI systems, as well as adding disclosure obligations stipulated for ensuring transparency and "labeling" for that purpose.
7. Where AI Regulation is Headed
When new technology appears, discussions to regulate that technology are often held, but what should be regulated is not the technology itself, but the discipline of the humans who use it. Furthermore, the background to the sudden emergence of regulatory arguments along with the attention on generative AI is largely due to fear of the unknown and opaque elements.
AI regulation is not a problem simply associated with the advancement of information processing; the essence of the issues to be considered is the problems associated with autonomous judgment by AI, and those discussions have already been held since the beginning of the third AI boom. What is being tested now is the awareness on the human side regarding the autonomy of AI.
The situation where we laugh at AI that tells blatant lies—such as returning a profile of a different person when you enter your own name and ask a question—will not last very long, and eventually, we will not even be able to verify the credibility (fact-check) of information even if it is incorrect. When that happens, we will have to develop AI to request fact-checks, but then we will fall into an infinite loop where we must develop AI to confirm that those fact-checks are correct.
In the future, AI will exponentially improve the precision of its output results, and its autonomy will improve dramatically beyond what we imagine. AI is a technology developed by humanity. However, ironically, the discussions surrounding AI regulation accompanying the evolution of generative AI vividly represent the situation where human wisdom cannot keep up with that technology.
I want to continue my studies as a legal scholar so that an era does not arrive where only AI can derive the answers for the nature of AI regulation required for Trustworthy AI. While consulting with generative AI.
*This research was supported by the JST Moonshot Research and Development Program, JPMJMS2215.
*Affiliations and titles are as of the time this magazine was published.