#21 Asia AI Policy Monitor
Japan AI Bill, Autonomous Vehicles & Privacy Rules, HK AI Copyright Opt-out, N. Korea AI Hacking, ASEAN Military AI, China NPC Work Report AI, and more!
Thank you for reading this month’s newsletter, along with over 1,700 other AI policy professionals across multiple platforms, to keep up with the latest regulations affecting the AI industry in the Asia-Pacific region.
Do not hesitate to contact our editors if we missed any news on Asia’s AI policy at seth@apacgates.com!

Governance
Japan’s Cabinet announced it will submit the Promotion of Research and Development and Application of Artificial Intelligence-Related Technologies Bill to the Diet. The Bill continues the government’s light-touch approach to AI, supporting research and talent development while addressing a limited set of governance issues related to privacy and copyright. These details will be fleshed out in an AI Basic Plan drawn up pursuant to the Bill and through the formation of an AI Strategy Headquarters.
The purpose of the Act is to promote the comprehensive and systematic advancement of AI research, development, and utilization in Japan, boosting national welfare and economic progress, and to establish a “basic plan” and an AI strategy headquarters.
Japan’s Cabinet is also backing a bill addressing transparency for firms involved in AI-related human rights harms.
The government at a Cabinet meeting Friday adopted a bill allowing it to investigate businesses, give them guidance and disclose, as needed, their names in cases of human rights abuses and other malicious activities related to the use of artificial intelligence (AI).
South Korea’s Communications Commission issued guidance for GenAI user protections, including transparency and data protection.
6 Key Action Points
1. Protect Users’ Personality Rights
• Put mechanisms in place to prevent AI outputs from infringing personal dignity, privacy, or reputation. This may include filtering hateful or defamatory content and offering user-reporting channels.
2. Clarify AI Decision-Making Processes
• Clearly indicate that content was generated by AI (“AI-generated”) and, where feasible, provide users with basic insight into how the AI’s outputs are produced (e.g., references or metadata).
3. Respect Diversity and Mitigate Bias
• Address potential biases in training data and model design. Implement policies or technical measures to reduce discriminatory outputs and enable users to report instances of bias.
4. Manage Data Collection and Usage
• Clearly inform users if their input or generated data may be reused for AI training. Provide meaningful opt-out or consent mechanisms, and maintain oversight of how user data is collected and processed.
5. Establish Responsibility for Generated Content
• Outline which responsibilities lie with the service provider and which with the user. Inform users about potential inaccuracies or risks, and maintain risk management systems (e.g., monitoring for harmful content).
6. Ensure Healthy Distribution of Generated Content
• Prevent or deter users from intentionally creating illegal, unethical, or harmful content. Employ filtering, user guidance, and, where relevant, measures to protect minors or other vulnerable groups.
China’s Premier Li Qiang delivered the government work report - emphasizing AI initiatives.
China to support extensive application of large-scale AI models
Under the AI Plus initiative, China will work to effectively combine digital technologies with the country's manufacturing and market strengths. The country will support the extensive application of large-scale AI models and vigorously develop new-generation intelligent terminals and smart manufacturing equipment, including intelligent connected new-energy vehicles, AI-enabled phones and computers, and intelligent robots.
Kazakhstan legislators recently introduced an AI bill.
The legislation proposes a complete ban on digital systems that make decisions without human intervention.
The bill was presented by one of its key developers, Mazhilis deputy Ekaterina Smyshlyayeva, who emphasized the need for a transparent and effective legal framework for integrating AI into Kazakhstan’s economy.
“President Kassym-Jomart Tokayev has repeatedly highlighted the importance of AI development and issued relevant directives. During a recent visit to the Artificial Intelligence Development Center, he stressed the need for a balanced approach to AI regulation. On one hand, it is a matter of security; on the other, it is essential for development. Striking this balance is crucial,” Smyshlyayeva stated.
India’s MeitY recently concluded public comments on AI Governance Guidelines.
The Subcommittee’s report highlights the importance of a coordinated, whole-of-government approach to enforce compliance and ensure effective governance as India's AI ecosystem evolves. Its recommendations, based on extensive deliberations and a review of the current legal and regulatory context, aim to foster AI-driven innovation while safeguarding public interests.
Intellectual Property
China’s Supreme Court reported to the NPC on the rising number of AI and IP related cases and the need to address these complex issues.
Zhang said the top court supported the lawful application of artificial intelligence and “protects innovation strictly in accordance with the law”.
“Measures were taken to punish infringements using AI technology, promoting orderly and regulated development,” he told the National People’s Congress (NPC).
Japan’s Anime Association issued comments on Japan’s Intellectual Property Strategy 2025, including on AI.
The ideals aimed for in Article 30-4 of the Copyright Act ultimately point in the opposite direction from the country's ideal intellectual property strategy.
Please take note of the following six points.
(1) There are many cases where the voices of voice actors and others are treated as raw material and used or sold as voice changers that evoke specific characters or voice actors. This is not only a violation of copyright and publicity rights, but also a serious situation that could amount to defamation.
(2) With the advent of generative AI, it is now possible to instantly generate a large number of imitations that closely resemble the appearance and voice of the original character, and it is also easy to make them do or say anything.
Although the Supreme Court ruling denied the "publicity right of objects," in today's world where generative AI is on the rise, isn't it necessary to reconsider the publicity right of "characters"?
(3) Although traditional copyright law does not protect art styles or artistic styles, we believe that it is necessary to start a discussion on a new "generative AI law" that goes beyond the scope of traditional copyright law against acts such as reproducing a specific art style or artistic style through intensive additional learning and generating a large number of imitations in a short period of time…
Hong Kong’s proposed text and data mining (TDM) exception to copyright infringement, aimed at keeping the city competitive with jurisdictions that have adopted similar AI-friendly copyright rules, will also include opt-outs to balance copyright owners’ interests.
A bill to amend the Copyright Ordinance to support artificial intelligence development will be submitted to the Legislative Council in the first half of the year. The Commerce and Economic Development Bureau is proposing the exception for both commercial and non-commercial uses, allowing reasonable use of copyrighted works in computational data analysis and processing.
Privacy
Japan considers amending privacy rules to ease the use of personal data for AI training.
Japan's Personal Information Protection Commission is considering nixing a prior consent requirement when obtaining sensitive personal information for the development of artificial intelligence.
The move by the government agency is intended to make it easier for AI-related businesses to utilize personal information. The personal information protection law is reviewed every three years.
"In light of the creation and development of new industries, a study is being made while balancing the protection of personal rights and interests and the utilization of personal information," Chief Cabinet Secretary Yoshimasa Hayashi said at a news conference on Friday.
Cybersecurity, Trust & Safety
Australia’s eSafety Commissioner announced that Google reported receiving over 250 complaints about AI-generated terrorism content, among other harmful material.
The Australian eSafety Commission has brought to light a disturbing trend in the misuse of AI technology. According to their report, Google, a major player in the tech industry, disclosed receiving more than 250 complaints globally over a period of nearly a year. These complaints specifically pertained to the use of Google’s AI software in creating deepfake terrorism material.
This revelation marks a significant milestone in our understanding of the potential misuse of AI technologies. It’s the first time we’ve gained insight into the scale of this problem on a global level, underscoring the critical need for enhanced safeguards and regulatory measures.
North Korean hackers are increasingly using the US-based ChatGPT service to enhance their productivity.
We banned a number of accounts that were potentially used to facilitate a deceptive employment scheme. The activity we observed is consistent with the tactics, techniques and procedures (TTPs) Microsoft and Google attributed to an IT worker scheme connected to North Korea. While we cannot determine the locations or nationalities of the actors, the activity we disrupted shared characteristics publicly reported in relation to North Korean state efforts to funnel income through deceptive hiring schemes, where individuals fraudulently obtain positions at Western companies to support the regime’s financial network.
Japan’s METI released a report on support for autonomous delivery robots.
In order to implement new mobility in society, it is important to first increase social acceptance, such as by making many people aware of its existence.
METI will continue to work with the Robot Delivery Association and other industry players, as well as relevant government ministries and agencies, to promote the implementation of automated delivery robots in society.
Multilateral
ASEAN issued a joint statement on cooperation in the use of AI in the military domain.
PROMOTE the accountable and responsible use of AI in the defence sector and ensuring that accountability and responsibility can never be transferred to machines, consistent with international law, including international humanitarian law, ASEAN relevant instruments, ethical guidelines, governance approaches, and frameworks related to the application of AI in the defence sector;
APEC’s Subcommittee on Standards focused on AI technology in recent meetings in South Korea.
As AI technologies continue to transform industries and societies, discussions at the APEC Sub-Committee on Standards and Conformance meeting in Gyeongju last week focused on promoting recognition of AI-related standards to facilitate trade and ensure transparency in the digital economy.
Dr Byung Goo Kang, Chair of the APEC Sub-Committee on Standards and Conformance, emphasized the importance of international collaboration in AI standardization, noting that technical alignment can enhance trust in AI systems while reducing regulatory complexity for businesses.
Japan convened a meeting of the Hiroshima AI Process Friends Group of 40 countries.
On Thursday, February 27, 2025, Minister of State for Science and Technology Policy, Ihara Iwauchi, participated in the reception on the first day of the Hiroshima AI Process Friends Group Meeting.
The EU and India issued a statement on the 2nd meeting of the Trade and Technology Council, including AI cooperation.
The two sides reiterated their commitment to safe, secure, trustworthy, human-centric, sustainable and responsible Artificial Intelligence (AI) and to promote this vision on the international level. In addition, with a view to ensuring continued and impactful cooperation on AI, the European AI Office and India AI Mission agreed to deepen cooperation, encouraging an ecosystem of innovation and fostering information exchange on common open research questions for developing trustworthy AI. They also agreed to enhance cooperation on large language models, and to harness the potential of AI for human development and common good, including through joint projects such as developing tools and frameworks for ethical and responsible AI. These will build on the progress made under R&D collaboration on high-performance computing applications in the areas of natural hazards, climate change, and bioinformatics.
South Korea hosts the 2025 APEC meetings, with a focus on AI.
Notable sessions for the meetings in Gyeongju include exhibitions on customs technologies and green customs initiatives; policy dialogues on AI governance, digital privacy, and cross-border data flows; workshops on carbon-free energy, hydrogen and fuel cell standardization, and clean energy transitions; as well as discussions on financial inclusion, structural reform, and the future of work.
The US issued an executive order bolstering restrictions on foreign investment, particularly from China, in AI projects in the US.
It will also seek, including in consultation with the Congress, to strengthen CFIUS authority over “greenfield” investments, to restrict foreign adversary access to United States talent and operations in sensitive technologies (especially artificial intelligence), and to expand the remit of “emerging and foundational” technologies addressable by CFIUS.
In the News and Analysis
Asia AI Policy Monitor editor Seth Hays shares views on how democratic countries in East Asia, including Japan, South Korea and Taiwan, view the release of China’s DeepSeek R1 model.
The release of China's DeepSeek R1 generative AI model has prompted Japan, South Korea and Taiwan to reassess their AI governance in terms of privacy protection, national security and industrial policies. The countries have responded with varying measures, such as suspending the use of DeepSeek over data privacy and national security concerns, promoting laws centred on AI governance and leading in global AI governance discussions, demonstrating their capability to boost their domestic AI industries amid regional volatility and geopolitical tension.
China and Africa work together through the Digital Silk Road initiative on AI.
According to a recent assessment by Global System for Mobile Communications, Africa represents only 2.5 per cent of the worldwide AI market, yet estimates suggest that the technology could increase the continent’s economy by US$2.9 trillion by 2030 – the equivalent of increasing annual gross domestic product growth by 3 per cent.
Analysis from CSIS on Japan’s AI policy shows the country favors a light-touch approach to regulation.
Two key publications from the first half of 2024 strongly signaled that Japan was heading toward new legislation aimed to more comprehensively regulate AI technology: in February, a concept paper from the ruling Liberal Democratic Party, and in May, a white paper by the Japanese Cabinet Office’s AI Strategy team. Both documents recommended the introduction of new regulations for large-scale foundational models. All signs suggested that Japan was moving in parallel with its allies toward establishing a strengthened regulatory structure for AI.
Hong Kong plans to use AI in an effort to cut costs and headcount in government.
Hong Kong’s finance chief has unveiled a belt-tightening budget seeking to tap new sources of revenue and ease a HK$87.2 billion (US$11.2 billion) deficit, starting with a pay freeze for all public servants, a downsizing of the civil service by 10,000 positions and a cut in education spending.
Citing the benefits of the “one country, two systems” governing principle amid China’s rising prowess in technological innovation, Financial Secretary Paul Chan Mo-po on Wednesday also pledged to transform the city into an international exchange and cooperation hub for artificial intelligence (AI), making it a new key economic driver.
South Korea’s LG Electronics plans to build the world’s largest AI data center.
Once complete, it would be the world’s largest AI data center that can scale up to 3 gigawatts of capacity. Featuring advanced cooling infrastructure, as well as regional and international fiber bandwidth, the data center would be able to handle significant and sudden variations in energy load, according to a press release.
Advocacy
Pakistan has an ongoing open consultation on its draft National AI Policy.
China’s TC260 issued a public consultation on draft AI Safety Standards, open until 26 February.
New Zealand’s Privacy Commissioner issued a public consultation on its draft Biometric Processing Privacy Code of Practice. The Code includes 12 prospective rules; comments are open until 14 March.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.