Keio University

[Special Feature: Science, Technology, and Social Issues] Kyoko Yoshinaga: Perspectives on the EU AI Act and Emerging Technology Regulation

Writer Profile

  • Kyoko Yoshinaga

    Project Associate Professor, Graduate School of Media and Governance; Non-resident Fellow, Georgetown University Institute for Technology Law & Policy


2024/08/05

The World's First Comprehensive AI Regulation Law Enacted

In the EU, the "AI Act" (officially, the AI Regulation), the world's first comprehensive and direct regulation of AI, has been enacted: it was published in the Official Journal on July 12, 2024, entered into force on August 1, 20 days later, and its provisions will apply in stages. Beyond addressing AI risks, the law is also intended to facilitate the circulation of AI within the market by unifying the markets of the 27 member states. Furthermore, according to the EU, regulating AI in a unified and comprehensive manner will provide legal certainty.

The law regulates AI according to risk. Specifically, it covers (1) prohibited AI practices, (2) high-risk AI systems, (3) specific AI systems subject to transparency obligations, and (4) general-purpose AI models (which were not envisioned at the initial drafting stage). Prohibited AI practices include, for example:

  • the use of subliminal techniques (techniques that covertly act on the subconscious) or manipulative techniques;
  • exploiting vulnerabilities such as age or disability to adversely affect a person's behavior or decision-making;
  • so-called "social scoring," which evaluates and classifies individuals or groups based on social behavior or personal characteristics, causing harm or unfair treatment;
  • crime prediction based solely on profiling;
  • creating facial recognition databases by scraping images from the internet;
  • emotion recognition in workplaces or educational institutions, except for medical or safety reasons;
  • biometric categorization systems that collect sensitive personal information, except where legally permitted; and
  • the use of real-time remote biometric identification systems in public spaces for law enforcement purposes, with some exceptions.

Most of the provisions concern high-risk AI that could pose significant risks to human health, safety, or fundamental rights, including requirements such as establishing risk management systems and disclosing information to stakeholders to ensure transparency. Additionally, providers of high-risk AI systems must undergo a conformity assessment before placing them on the market or putting them into service. For specific AI systems, providers and deployers (businesses that use AI) are subject to lighter transparency requirements: for example, they must inform end users that they are interacting with AI (such as chatbots) or that content is AI-generated (such as deepfakes) (general-purpose AI is discussed later). Currently, most AI in the EU single market is minimal-risk AI (such as AI-powered video games or spam filters), which carries no specific legal obligations. Note that the law does not apply to AI used for military or research purposes.

The law also applies to companies without an establishment in the EU if they offer services in the EU or if the output of their high-risk AI systems is used in the EU, so Japanese companies in these categories will be affected as well. In this respect, an effect similar to the "Brussels Effect," through which the GDPR (General Data Protection Regulation) influenced the world, is anticipated. Businesses that violate the law face heavy fines.

The Beginning of the AI Regulation Debate

The global debate on AI regulation became active around 2016. In March of that year came the shocking news that a Go program using "deep learning" (DeepMind's AlphaGo) had defeated a human champion. While deep learning opens up various possibilities, it also presents the so-called "black box problem": it is beyond human understanding why a model produces a particular result. It began to be pointed out that, depending on how AI models are built and used, biases could be amplified into discriminatory results that favor specific groups, or humans could be manipulated without their awareness, ultimately harming society as a whole. Thus, discussions on AI regulation began.

Japan has been a world leader in proposing principles for AI research and development, contributing to the debate on AI regulation. In 2016, Japan held the G7 presidency, and at the "G7 ICT Ministers' Meeting" in April of that year, Japan proposed eight principles for AI research and development. This triggered international discussions on AI principles, leading to agreement on the OECD AI Principles and the G20 AI Principles in 2019. The following year, the "Global Partnership on Artificial Intelligence" (GPAI), an international organization for discussing the implementation of the OECD AI Principles, was established. As an expert member of GPAI, I am regularly involved in research and studies on practices that serve as references for governments, companies, and organizations.

When Japan again held the G7 presidency in 2023, building on the results of the Hiroshima summit, a new framework called the "Hiroshima AI Process" was created, in which relevant G7 ministers lead discussions on creating international rules for the development and use of AI. When OpenAI released ChatGPT at the end of November 2022, the risks posed by generative AI became apparent, so countermeasures were added to the agenda. Through a "multi-stakeholder process" emphasized by Japan, which broadly sought opinions from various stakeholders (non-G7 countries, the public and private sectors, academia, and civil society), the Hiroshima AI Process produced the G7 Leaders' Statement, released on October 30, 2023, along with International Guiding Principles and a Code of Conduct for AI developers.

AI Regulation in the United States

In the United States, in October 2022, the White House Office of Science and Technology Policy (OSTP) announced five principles regarding AI under the title "Blueprint for an AI Bill of Rights." Since "Bill of Rights" refers to the human rights protection provisions in the U.S. Constitution, I think it was a very clever naming choice that evokes that concept. Subsequently, in July 2023, the Biden administration gathered seven leading AI development companies, including Google, OpenAI, and Anthropic, to have them commit to safe, secure, and trustworthy AI development as a voluntary initiative (Voluntary AI Commitments). Two months later, eight more companies, including Adobe, IBM, and NVIDIA, also joined. The administration is actively working on policies for AI ethics and responsible AI.

Furthermore, as guidance for a risk management framework that companies can refer to, the National Institute of Standards and Technology (NIST) released the "AI Risk Management Framework 1.0" in January 2023. Additionally, on October 30, 2023, immediately after the G7 Code of Conduct was released, President Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It is often misunderstood that this Executive Order signaled a shift in the U.S. from a soft law approach (non-legally binding guidelines, etc.) to a hard law approach (legally binding laws), but that is not the case (I confirmed this point by visiting Washington, D.C. at the end of June to speak with government officials and think tank researchers). This Executive Order is a command from the President, as the head of the executive branch, to federal government officials and administrative agencies; it does not require specific actions from companies. The specific content of the guidance is left to each government agency.

At the local government level, there are examples of laws enacted to regulate AI. New York City passed a law requiring employers to notify candidates when AI is used in hiring, but it is said not to be functioning very well.

At the federal level, the United States still lacks a comprehensive personal information protection law. Bills have been introduced repeatedly, only to fade away. In fact, it is said that the absence of such a law allowed AI development companies to press ahead with research and development. On AI as well, members of Congress are actively introducing bills, particularly concerning accountability, but there is no prospect of their enactment.

AI Regulation in Japan

In Japan, as mentioned earlier, the Ministry of Internal Affairs and Communications (MIC) released the "AI R&D Guidelines for International Discussions" in July 2017, consisting of nine principles (adding the "Principle of Collaboration" to the eight principles contributed to the OECD). From the perspective of utilization, the "AI Utilization Guidelines" were published in August 2019. Around the same time, in March 2019, the Cabinet Office issued the "Social Principles of Human-Centric AI" (decided by the Integrated Innovation Strategy Promotion Council). Furthermore, the Ministry of Economy, Trade and Industry (METI) published the "Governance Guidelines for the Implementation of AI Principles Ver. 1.1" in January 2022, which organized action goals that AI businesses should practice when respecting the "Social Principles of Human-Centric AI" and presented hypothetical practical examples.

Furthermore, based on the "Social Principles of Human-Centric AI," the "AI Guidelines for Business" were published on April 19, 2024, integrating the MIC's AI R&D and Utilization Guidelines and METI's Governance Guidelines while considering the emergence of new technologies. I was involved in the formulation of these guidelines as a member of METI's "Conference on AI Guidelines for Business," and the fact that MIC and METI joined forces to consolidate them into a single set of guidelines is commendable. I have worked for government agencies for many years as a think tank researcher, and it is very rare for multiple ministries to collaborate on issuing guidelines; I hope such inter-ministerial cooperation continues in the future.

Thus, while Japan currently adopts a non-legally binding soft law approach as a framework for comprehensive AI regulation, it is addressing individual fields by amending existing laws to keep pace with AI progress (for example, the amended Financial Instruments and Exchange Act and the Act on Improving Transparency and Fairness of Specified Digital Platforms). (For details, see Naohiro Furukawa and Kyoko Yoshinaga, "Responsible AI and Rules" (Kin'yu Zaisei Jijo Kenkyukai, May 2024)).

Looking abroad, the UK, like Japan, is taking a sector-by-sector approach centered on self-regulation. Israel, which has many AI startups, also maintains that many issues can be handled with existing laws; its stance is to respond sector-by-sector if intervention by the relevant authorities is necessary, balancing the need to address AI-specific risks and the speed of change through soft law approaches and modular experiments. Singapore also places soft law at the center of its comprehensive regulation, providing the "AI Verify toolkit" for governance and technical evaluation.

On the other hand, the EU is currently the only one adopting hard law as an approach to comprehensive AI regulation, but bills to comprehensively regulate AI have been introduced in Canada, South Korea, and Brazil. China has introduced voluntary principles and guidelines to integrate ethics into the entire life cycle of AI for general scientific and technological research, while applying hard law regulations to specific types of AI (recommender systems, deep synthesis technology, and generative AI). (For details on the above countries, refer to CEIMIA's A Comparative Framework for AI Regulatory Policy [PDF] report. I am also serving as an advisor for the second report.)

Methods of Regulation

Whether to use hard law or soft law depends on each country, and one is not necessarily better than the other. Furthermore, there is actually not much difference (the reason will be explained later). Since the circumstances each country faces are diverse—including economic conditions, cultural backgrounds, legal cultures, the existence of existing laws (e.g., provisions in personal information protection laws, civil law, criminal law), and corporate cultures—it is best to take measures suited to that country.

In Japan's case, as is already happening, it seems best to start with non-legally binding soft law (guidelines) as the means of comprehensive AI regulation and then regulate with hard law (legislation) in individual fields as necessary. Even under a soft law approach, unlike the U.S., which lacks a comprehensive personal information protection law at the federal level, Japan has a solid Personal Information Protection Act (Japan used to sit somewhere between the U.S., which emphasizes the economy, and the EU, which emphasizes human rights, but since the GDPR, Japan's legal amendments have also aligned significantly with it). Furthermore, Japan inherently has strong social sanctions and high corporate awareness of compliance. Japanese committees on IT-related policies and legal reforms often include businesses as members, which creates a certain incentive to comply; even without legal force, if the government issues guidelines, most companies will try to address them seriously. I often hear from overseas counterparts, "Japan is lucky; in my country, if it's not legally binding, no one follows it."

Looking at personal information protection, in Japan, if a company is reported to have leaked personal information, its reputation drops immediately. Japanese companies are therefore careful to comply with the Personal Information Protection Act and take countermeasures. In Japan, especially among listed companies, there is a tendency to avoid even slight risks and shy away from new challenges. If Japan were therefore to suddenly regulate AI comprehensively by law, no one would want to develop it; this would hinder innovation, reduce Japan's international competitiveness, and hurt the economy. Since Japan faces a serious decline in its labor force due to a low birth rate and an aging population, it must use AI effectively. Moreover, many AI risks can be addressed under Japan's existing laws.

However, some fields must be strictly regulated in step with technological progress. Regulation will be necessary for military applications, for government use of AI, and for any move to the next stage, such as Artificial General Intelligence (AGI). Furthermore, if companies ignore the AI Guidelines for Business and develop or use AI as they please, causing actual harm to people and society, regulation will be unavoidable.

How to Ensure Interoperability

How to ensure interoperability if each country's AI regulations fragment is a key point of discussion. In the international community, the circumstances each country faces and its degree of technological progress differ too much to reach consensus on legally binding hard law. In that case, consensus can only be reached in the form of broad "principles," that is, through soft law. However, in discussions at international conferences, I feel that whether an approach is hard law or soft law matters less and less, as the two are converging: every country advocates principles of human-centric AI, safety, fairness, transparency, and accountability, and technical standardization through ISO is also progressing.

At GPAI, practical matters are discussed, and various projects produce recommendations, materials, and best practices that countries and companies can readily refer to. Japanese government agencies closely monitor and support GPAI's activities and share and discuss them at G7 and OECD meetings, so the bodies can be said to influence one another. Experts from the Global South also take part in GPAI's day-to-day discussions. Furthermore, a framework called the Hiroshima AI Process Friends Group has been created, with 53 countries and regions, including the EU, participating (as of June 2024).

In this way, international organizations influence one another as they strive to form a consensus. Countries of the Global South are particularly concerned about jobs being taken by AI and about access to AI (whether they can actually use it). Technologically advanced nations need to create rules that also consider the perspectives of developing countries. In the end, however, real power over AI rule-making is held by the countries leading in AI technology; a country that cannot gain a competitive advantage in the technology will end up having other countries' rules imposed upon it.

Regulation of General-Purpose AI

Until now, AI has been called "narrow AI" or "weak AI": it performs specific tasks, is trained on labeled datasets, and operates within predefined environments, making it somewhat predictable. However, the emergence of generative AI built on Large Language Models (LLMs), such as ChatGPT, represents significant progress toward "strong AI," or Artificial General Intelligence (AGI), capable of a wide range of intellectual tasks, and has made AI familiar to the public. Alongside issues such as hallucinations (plausible-sounding but false outputs), privacy, intellectual property rights, bias, and deepfakes, concerns are spreading that advanced general-purpose AI could be used for cybercrime or, ultimately, lead to the extinction of humanity.

In the EU, scientists therefore warned that an approach classifying AI systems as high-risk based on their intended purpose would create loopholes for general-purpose AI systems (foundation models). Organizations such as the Future of Life Institute also argued that such systems should be covered by the AI Act, leading to a separate chapter on "General Purpose AI" (GPAI; confusingly, the same abbreviation as the organization mentioned earlier). Specifically, GPAI model providers must prepare technical documentation, comply with the EU Copyright Directive, and publish summaries of the content used for training. Providers of GPAI models with systemic risks bear additional obligations such as model evaluation, adversarial testing, tracking and reporting serious incidents, and ensuring cybersecurity protection (models used for R&D purposes before being placed on the market are excluded). This serves as a reference for Japan as well. (Note: care is needed, as current generative AI/GPAI is not AGI but merely a step toward it.)

At the federal level, the U.S. does not yet have regulations for general-purpose AI models. As guidance on managing generative AI risks, NIST issued a draft "AI RMF Generative AI Profile" on April 29, 2024.

Meanwhile, AI Safety Institutes are being established one after another for the development and utilization of safe and secure AI, including responses to risks from general-purpose AI. Following the UK and the US, Japan established one in February 2024, and discussions are reportedly underway in Canada and India as well (as of June 2024). These institutes study and promote evaluation methods and standards for AI safety, and by collaborating with each other, they are expected to contribute to resolving the aforementioned interoperability issues. (In France, Inria (National Institute for Research in Digital Science and Technology), which is also a support center for GPAI (the organization), collaborated with the UK AI Safety Institute in February 2024.)

What is Needed for the Regulation of Emerging Technologies—Flexibility, Speed, Multi-stakeholder, and Interdisciplinary Perspectives

The emergence of generative AI is said to have come much faster than predicted. Going forward, autonomous agents such as AutoGPT, which learn and act on their own without humans entering prompts (instructions) at each step, will likely become mainstream. AI's "black box" problem will grow more serious, and things no human can predict will occur. Besides civilian use, there is also the so-called dual-use problem of diversion to military use. The faster technology progresses, the more flexible and rapid the response required. AI regulation must involve various stakeholders, including the public and private sectors, academia, and civil society organizations, in discussions.

In this regard, Japan's AI Guidelines for Business were made soft law to allow for flexible and rapid (agile) responses and to avoid hindering innovation. They also adopt a multi-stakeholder approach.

Furthermore, initiatives with interdisciplinary perspectives are necessary at the sites of corporate AI development. Unlike previous IT, AI risk issues affect humanity and society as a whole, so the perspectives of experts in law, economics, sociology, philosophy (ethics), psychology, and cultural anthropology, in addition to engineers, are useful. Since risks also vary depending on the context in which AI is used, it is necessary to include experts from the relevant fields (e.g., healthcare, finance) in discussions.

To Survive the AI Era—What is the Role of Universities?

AI developers are sometimes unaware of the constraints of the Personal Information Protection Act or the Copyright Act, or of trends in international discussions on AI ethics. Legal scholars, for their part, tend to focus only on regulation without understanding the technology well; the two sides need to learn from each other. For example, faculties and graduate schools researching AI should teach basic knowledge of law and ethics alongside programming, and legal training should include basic technological literacy.

In the future, as a role for universities to cultivate interdisciplinary perspectives for surviving the AI era, programs where students can earn degrees in multiple fields could be considered. In the United States, law schools that train lawyers implement Joint & Dual Degree Programs (programs where one can simultaneously earn a Master's degree from another graduate school in addition to a J.D. (Juris Doctor)). Furthermore, at Georgetown University Law Center, where I belong, one can earn degrees such as a Master of Laws in Technology Law & Policy (LL.M.) for those with a law degree, or a Master of Law and Technology (M.L.T.) for those without a law degree, signifying mastery of both law and technology. Additionally, discussions on AI are held by inviting experts in philosophy and cultural anthropology.

In this way, to solve the complex problems brought by AI, a combination of knowledge from a wide range of academic disciplines, application skills based on basic academic ability, and flexibility are required. It is hoped that we will face AI skillfully while engaging in discussions with people from various academic fields with a global perspective.

*This research was supported by the JST Moonshot Research and Development Program, JPMJMS2215. *Affiliations and titles are as of the time this magazine was published.