In its current effort to become “the world’s most AI-friendly country,” Japan is using a combination of “hard law” instruments and non-legally binding “soft law” guidelines. Its AI Promotion Act, passed in May 2025 and in effect since September, is designed to facilitate AI development and encourage widespread social adoption of AI technologies.
The AI Promotion Act creates a new AI Strategy Headquarters, chaired by the Prime Minister and tasked with coordinating policy across all government ministries. That this new body is led by Prime Minister Sanae Takaichi herself signals how seriously Japan is taking AI right now. The Act also calls for an AI Basic Plan, which is expected in 2026.
In his recent report for the Center for Strategic and International Studies (CSIS), Kyoto University researcher Hiroki Habuka writes, “The newly enacted AI Promotion Act can be understood as a mechanism to ensure the entire government can respond swiftly to changes in AI technology and risks, enabling the Cabinet Office to serve as a ‘control tower’ for risk assessment and the consideration of countermeasures.” The law is, by design, subject to change.
Although the law technically requires businesses to adhere to the measures the government implements, it carries no penalties for noncompliance at this time, making it effectively non-binding. Later, we’ll discuss why non-binding laws work uniquely well in Japan, and not so well in the U.S.
Japan’s extreme sense of urgency regarding AI development
Sensing that it is behind in the AI arms race, Japan wants to create policy that encourages innovation and public usage of AI. According to a December 6 Jiji Press article, Japan believes that “the United States and China are taking the lead in the development of AI, with the proportion of people who have used generative AI standing at 68.8% and 81.2% respectively, far higher than 26.7% in Japan.”
The Japanese government hopes to change these numbers; in fact, per the draft version of the forthcoming AI Basic Plan, Japan intends to increase the AI utilization rate among the public to 50 percent, and eventually to 80 percent.
According to Japan’s AI Guidelines for Business, a soft-law instrument that was first published in April 2024 and then updated in December 2024 and March 2025, AI is viewed as a solution to social and economic problems unique to the island nation, such as labor shortages arising from Japan’s aging population and low birthrate.
Japan’s pillars for AI governance and “Society 5.0”
Although operating with a new sense of urgency in 2025, Japan has not changed its AI governance policies dramatically. As Habuka explained during a March 2025 CSIS event,
“The Japanese AI governance policy has not dramatically changed since around 2016. So, we have mainly three pillars for AI governance. The first one is promoting the development and use of AI across society. The second pillar is taking a sector-specific regulatory approach rather than [a] holistic approach. The third pillar is pursuing an agile and multi-stakeholder governance model, rather than the top-down or command-and-control type of governance.”
Since 2016, Japan has had a vision for its future society. Called “Society 5.0,” it is a future where cyberspace and the physical world are fully integrated with AI and robotics. As Habuka explains, “The Society 5.0 vision is like a human-centered society where a high-degree of integration between cyberspace and physical space can promote economic development and also solve societal problems.”
Given this vision, it is safe to say that Japan is ready to weave AI into the fabric of its society. And to be sure, this vision for the future explains some of the country’s unique AI policy decisions.
Japan amended its Copyright Act to accommodate AI training
As I’ve previously addressed on ManageEngine Insights, the Copyright Act of Japan (1970) was amended back in 2018 to create provisions that accommodate AI training.
As it currently stands, Japanese entities that are building AI models can legally process any data “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise.” Policymaker Nagaoka Keiko has gone on the record, saying, “It is possible to use the work for information analysis—regardless of the method, regardless of the content.”
As long as copyrighted training data is being used for information analysis (as opposed to human enjoyment), that use likely does not infringe Japanese copyright law.
However, there are exceptions to this “information analysis” loophole: if the copyright holder is “unreasonably harmed” (e.g., their earnings are jeopardized), the LLM service provider could be liable.
Additionally, although these companies can legally pull copyrighted material into their AI models’ training data, regulators don’t want to see AI output that looks overtly reliant upon that copyrighted work. As another caveat, users who input copyrighted works aren’t covered by the information analysis provision; so, hypothetically, those users could get into trouble.
Japan may amend its national data privacy legislation (APPI) to bolster AI development
Japan’s data privacy regulator, the Personal Information Protection Commission (PPC), is currently considering amending the Act on the Protection of Personal Information (APPI) in order to make it easier for AI companies to use people’s personal data to train their models.
The proposed amendment would allow third parties to use data subjects’ personal information in AI model development efforts—without the data subjects’ prior consent—as long as there is “objectively no risk” to the data subjects.
As it stands today, the APPI mandates that AI companies gain prior consent before processing Japanese citizens’ sensitive information, such as race, medical records, and criminal backgrounds; however, in an effort to bolster AI model training, the PPC is rethinking its policy.
As long as the sensitive data is used solely to help train the models, and doing so doesn’t violate the data subjects’ personal rights or interests, the PPC doesn’t seem to think there’s an issue.
Defending the rollback of this prior consent mandate, PPC Chair Satoru Tezuka has warned that, without such deregulatory interventions, “Japan would find itself unable to compete on the same footing as the rest of the world.”
While the PPC does plan on rolling back the prior consent mandate, it also intends to pair this deregulation with harsher fines on businesses that repeatedly violate existing privacy laws. It’s a bit of a balancing act.
The similarities between the AI governance policies of the U.S. and Japan
From a regulatory standpoint, the U.S. and Japan each believe that there shouldn’t be a one-size-fits-all AI law. Given that AI tools tend to amplify existing risks, those tools should be regulated through the existing legal frameworks of the industries in which they’re used. Or at least, that’s the train of thought.
Essentially, both nations believe that AI regulation should be sector-specific and tailored to the particular use case and level of risk. Both countries rely on their existing laws and regulatory frameworks to address AI risks in specific fields.
And where the nations’ AI governance policies diverge
Under the current administration, the U.S. is taking a market-led approach (and of course, the European Union takes more of a rule-making/regulatory approach).
Conversely, Japan takes a primarily “soft-law” approach, whereby businesses voluntarily comply with suggested regulations, and the Japanese government offers non-binding, subject-to-change guidance.
Soft law instruments work particularly well in Japan because of the country’s unique culture. At CSIS’s March 2025 event, “Unpacking Japan’s AI Policy,” Gregory Allen, the director of the Wadhwani AI Center, says,
“It […] speaks to why Japan is so comfortable with a soft law approach because historically, Japanese companies really do get on board with soft law approaches to an extent that is greater, perhaps, than other countries or regions.”
Habuka agrees with this sentiment, then laments that Japanese companies can sometimes be too compliant. In fact, that proclivity for hyper-compliance was cited as a concern in a recent government report. Habuka says,
“There’s a difference – cultural difference. Interestingly, in their report there is a sentence that says, given that Japanese companies are too compliant, maybe, new regulation would [have] too strong [of a] chilling effect on companies, and that is why we shouldn’t directly jump into a new regulation on AI.”
In addition to this penchant for rule-following and a tendency to put the group (and society) ahead of the individual, Japanese society is not litigious the way American society is. Habuka elaborates,
“Our society–Japanese society—is not like the U.S. society, where you can just go to court if you have–if you caused any problems. Like, Japanese businesses are really reluctant to go to the court. We prefer ex ante agreement or maybe settlement. And for that kind of culture, it’s important to have, to some extent, an agreed [upon] framework or guidance to be responsible for innovation, rather than just make things happen and go to court afterwards.”
It might go without saying that the soft-law approach would not work well in the United States.
Japan’s unique cultural context makes the soft-law approach viable
Given Japan’s strong culture of social accountability, most businesses are wary of ignoring AI ethics and governance guidelines; the last thing Japanese companies want is to dilute public trust. In a November 2025 brief written for the National Bureau of Asian Research, Kyoko Yoshinaga explains,
“Several factors explain why this approach works effectively in Japan. The long-standing relationship between Japanese government and industry means that when the government issues a guideline, companies tend to comply even without legal obligation. They know that noncompliance could result in reputational harm, loss of government support, or stricter regulation later.”
Again, quite different from the U.S.
Other differences between American and Japanese AI governance policies
In his October report, “Japan’s agile AI governance in action,” Habuka points out that the differences between the two nations’ AI governance policies reflect their broader national strategies. He writes,
“The AI Action Plan for the United States explicitly sets competition among nations and the establishment of technological dominance as its clear goals. This is based on a competitive worldview that positions AI as a strategic asset capable of altering the geopolitical power balance. In contrast, Japan’s policy prioritizes fostering public trust in AI systems.”
What’s more, Japan doesn’t intervene in algorithmic output to impose its preferred views on the wider world. Emphasizing the two countries’ ideological differences, Habuka continues,
“While Japanese policy documents speak of fundamental values such as the rule of law, human rights, democracy, diversity, and fairness, they make no mention of content intervention to exclude specific political or social ideologies from AI. This approach seeks to form a cooperative international order where different values can coexist and interlink, rather than attempt to propagate a specific set of values globally. In short, whereas the United States aims for the export of its own values, Japan aims for interoperability based on the premise of diverse values.”
Ultimately, for better or worse, vast cultural differences make it unlikely that the U.S. could, or would, successfully adopt many of Japan’s AI governance policies.


