#20 Asia AI Policy Monitor
Asia @ AI Action Summit, New Ultraman Copyright Case in China, 'Dark Pattern' regs in Korea, Indonesia Deepfakes, Japan AI Contracts Checklist, and more...
Thanks for reading this month’s newsletter along with over 1,700 other AI policy professionals across multiple platforms to understand the latest regulations affecting the AI industry in the Asia-Pacific region.
Do not hesitate to contact our editors if we missed any news on Asia’s AI policy at seth@apacgates.com!
Paris AI Action Summit
Our editor, Seth Hays, published a round-up and analysis of Asian countries’ activity at the Paris AI Action Summit at Asia Times.
As the AI industry develops in Asia, countries will want to see the overall positive economic and social value of AI on balance with any negative outcomes, otherwise tightening of the industry may come in the form of greater regulation in the years to come.
Signatories from Asia join over 100 countries to sign the Inclusive and Sustainable AI for People & the Planet including Australia, Cambodia, China, India, Indonesia, Japan, Kazakstan, New Zealand, Singapore, South Korea, and Thailand. The agreement supports the following:
Promoting AI accessibility to reduce digital divides; Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all; Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration driving industrial recovery and development; Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth;Making AI sustainable for people and the planet; Reinforcing international cooperation to promote coordination in international governance.
The Paris Charter on AI in the Public Interest includes 10 signatories, with Action Summit co-chair India the only one from Asia. The principles include:
Openness drives progress in science, catalyzes innovation and enables competition. Today, openness in AI is largely driven by a few actors’ decision to partly open their foundation models.
Accountability across every step of AI design, development, and deployment is a cornerstone in achieving AI for the public interest.
Military
No signatories from Asia joined the Paris Declaration on Maintaining Human Control in AI enabled Weapon Systems. The Declaration includes the following:
Consistent with our commitment to ensure responsible application of AI in the military domain, we will not authorise the decision of life and death to be made by an autonomous weapon system operating completely outside human control and a responsible chain of command.
This report highlights how US and China competition on AI and nuclear weapons plays out.
In recent years, the previous bipolar nuclear order led by the United States and Russia has given way to a more volatile tripolar one, as China has quantitatively and qualitatively built up its nuclear arsenal. At the same time, there have been significant breakthroughs in the field of artificial intelligence (AI) technologies, including for military applications. As a result of these two trends, understanding the AI-nuclear nexus in the context of U.S.-China-Russia geopolitical competition is increasingly urgent
Privacy
South Korea’s Privacy regulator seeks a suspension on domestic use of DeepSeek due to privacy concerns, following an inquiry earlier this month.
As a result of our own analysis, we have identified some shortcomings in communication functions and personal information processing policies with third-party businesses that have been pointed out in domestic and international media.
Deepseek announced last week (2.10.) that it had appointed a domestic agent, and that it had neglected to consider domestic protection laws during the process of launching its global service, and that it would actively cooperate with the Personal Information Protection Commission in the future.
On the sidelines of the Paris AI Action Summit, privacy regulators from Australia, South Korea, UK, Ireland and France agreed to a joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protective AI.
We commit to the following:
To foster our shared understanding of lawful grounds for processing data in the context of AI training in our respective jurisdictions.
To exchange information and establish a shared understanding of proportionate safety measures based on rigorous scientific and evidence-based assessments and tailored to diversity of use cases.
To continuously monitor both the technical and societal implications of AI and to leverage the expertise and experience of data protection authorities and other relevant entities, including NGOs, public authorities, academia and businesses, in AI-related policy matters when possible.
To reduce legal uncertainties and secure space for innovation where data processing is essential for the development and deployment of AI.
To strengthen our interactions with relevant authorities, including those in charge of competition, consumer protection and intellectual property, to facilitate consistency and foster synergies between different applicable regulatory frameworks to AI systems, tools and applications.
Trust, Safety & AI Governance
OpenAI discovered the use of open source LLM-powered AI surveillance tools used by China to monitor anti-CCP speech on global social media.
There have been growing concerns that A.I. can be used for surveillance, computer hacking, disinformation campaigns and other malicious purposes. Though researchers like Mr. Nimmo say the technology can certainly enable these kinds of activities, they add that A.I. can also help identify and stop such behavior.
Japan’s METI published a checklist for contracts in AI use and development.
The checklist was formulated with the aim of ensuring an appropriate allocation of benefits and risks between the parties involved and thereby promoting the utilization of AI, and is based, inter alia, on the following principles:
To provide users of services using AI technology with the basic knowledge they need to fully consider the scope of use of data provided to service providers and the contractual benefits (level of service, terms of use of AI products, etc.)
In order to prevent inappropriate use of the data provided, specific points that users should check when entering into a contract (checkpoints) must be described.
The Australian eSafety Commissioner published guidance on the growing use of AI chatbots by children.
Companies that are creating, using and distributing rapidly evolving tools and technologies should adopt Safety by Design principles to ensure robust protections for all users, especially children and young people. This means embedding safety into the design of AI companions at every stage, not adding it as an afterthought.
A consortium of government and academic organizations from China announced the formation of the China AI Safety Network, which will act as the peer to other AI Safety Institutes globally - unveiled at the AI Action Summit.
The China AI Safety and Development Association (中国人工智能发展与安全研究网络, CNAISDA), which is online at cnaisi.cn, describes itself as “representing China in dialogue and collaboration with AI security research institutions around the world.”1 On an official list of Paris summit side events and an event registration page, it is labeled as “the Chinese equivalent of the AI Safety Institute.”
Singapore unveiled at the Paris AI Action Summit the following initiatives: (i) Global AI Assurance Pilot for best practices around technical testing of GenAI applications; (ii) Joint Testing Report with Japan; and (iii) Publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report.
The launch of the Global AI Assurance Pilot by AI Verify Foundation and the Infocomm Media Development Authority (IMDA), which is a testbed to establish global best practices around technical testing of GenAI applications.
The release of a Joint Testing Report in collaboration with Japan under the AI Safety Institute (AISI) Network, which aims to make Large Language Models (LLMs) safer in different linguistic environments through assessing if guardrails hold up in non-English settings.
The publication of the Singapore AI Safety Red Teaming Challenge Evaluation Report 2025 (1.41MB), so that we understand how LLMs perform with regard to different languages and cultures in the Asia Pacific region, and if the safeguards hold up in these contexts.
The Safer Internet Lab published a report on Indonesia’s experience of disinformation, including deepfakes, during the 2024 presidential elections.
The use of artificial intelligence (AI) technology is predicted to be increasingly used. While the Constitutional Court (MK) has prohibited the use of AI in campaigns, the rapid technological development makes the regulation difficult to enforce. The public still struggles to distinguish between information conveyed directly and information generated using artificial intelligence. A concerning aspect is the use of deepfake videos in campaigns, which can mimic voices and resemble individuals in images/photos/videos.
Enforcement
Public Interest Litigation considered in India by Delhi High Court on DeepSeek
The Delhi High Court on Wednesday sought the Centre’s response to a public interest litigation (PIL) challenging the operations of Chinese artificial intelligence (AI) company DeepSeek in India over alleged data privacy violations.
South Korea’s FTC amended enforcement rules against dark patterns.
The Fair Trade Commission ( Chairman Han Ki-jung , hereinafter referred to as the " FTC " ) announced that the Enforcement Decree and Enforcement Rules of the Electronic Commerce Act, which contain matters delegated by law such as increases in regular payment amounts and the period of consumer consent before conversion to paid services, and matters necessary for their enforcement, will be revised and implemented from February 14, 2025 in accordance with the revision ('24.2.13.) of the Act on Consumer Protection in Electronic Commerce , etc. ( hereinafter referred to as the " E - Commerce Act " ) to regulate six types of online deceptive marketing practices (aka " dark patterns " ) .
Intellectual Property
China’s Hangzhou Internet Court rules in favor of Ultraman IP owners against online genAI platforms that recreate images based on the several criteria, such as the use of copyrighted works in training, the commercial nature of the end product, the notoriety of the infringed works, and the reasonableness of actions to prevent such infringement.
In summary, the defendant should have known that network users used his services to infringe upon the right of information network dissemination but did not take necessary measures. He failed to fulfill his duty of reasonable care and was subjectively at fault, constituting aiding and abetting infringement…
The judgment in this case adheres to the principles of giving equal importance to development and security, promoting innovation and combining it with governance according to law, taking into account the protection of rights and serving industrial development, and striving to safeguard and support the construction of an artificial intelligence governance system at the judicial level.
India’s Bollywood music industry seeks to join news publishers in a copyright infringement action against OpenAI.
In India, the music labels are “concerned OpenAI and other AI systems can extract lyrics, music compositions and sound recordings from the internet,” said an industry source who spoke on condition of anonymity as the matter is in court.
The Indian companies’ latest action comes after Germany’s GEMA, which represents composers, lyricists and publishers, said in November it had sued OpenAI for ChatGPT’s alleged unlicensed reproduction of song lyrics with which “the system has obviously been trained”.
Human Rights
Japan and Canada joined 39 signatories to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law at the sidelines of the Paris AI Action Summit. The Framework includes the following provisions:
Japan has contributed to the drafting of the Convention as the only observer state of the Council of Europe from Asia. Japan will continue to lead international efforts to achieve safe, secure, and trustworthy AI including through the Hiroshima AI Process, and will continue to encourage innovation creation. Japan's signing of the Convention has great significance from the perspective of demonstrating both domestically and internationally Japan's positive attitude towards participating in and contributing to discussions on international frameworks for AI.
Multilateral
At the AI Action Summit, the OECD (which includes Japan, South Korea, New Zealand, and Australia) published a report on IP issues in AI training.
Recent technological advances in artificial intelligence (AI), especially the rise of generative AI, have raised questions regarding the intellectual property (IP) landscape. As the demand for AI training data surges, certain data collection methods give rise to concerns about the protection of IP and other rights. This report provides an overview of key issues at the intersection of AI and some IP rights. It aims to facilitate a greater understanding of data scraping — a primary method for obtaining AI training data needed to develop many large language models. It analyses data scraping techniques, identifies key stakeholders, and worldwide legal and regulatory responses. Finally, it offers preliminary considerations and potential policy approaches to help guide policymakers in navigating these issues, ensuring that AI’s innovative potential is unleashed while protecting IP and other rights.
India and France hold second AI Policy Roundtable at AI Action Summit.
The Office of the Principal Scientific Adviser (PSA) to the Government of India, in collaboration with the Indian Institute of Science (IISc), Bengaluru, the IndiaAI Mission, and Sciences Po Paris, successfully hosted the ‘2nd India-France AI Policy Roundtable’ as an official side event to the AI Action Summit 2025. Held at the Sciences Po Paris University campus, this high-level discussion brought together policymakers, researchers, and industry leaders to explore collaborative AI governance and innovation between India and France.
India PM calls for global AI governance, job re-skilling, and regulations for deepfakes as co-chair at Paris AI Action Summit. He said:
"We must develop open-source systems that enhance trust and transparency. We must build quality data centers free from biases, democratize technology, and create people-centered applications," he said. "Concerns related to cybersecurity, disinformation, and deepfakes must also be addressed."
In the News and Analysis
South Korea announced a plan to acquire 10,000 GPUs to power a national AI datacenter.
The government plans to obtain GPUs, including Nvidia’s H100 and H200 models, to bolster its large-scale AI infrastructure.
India and China top trust in AI according to Edelman Trust survey, followed by South Korea and Japan in 5th and 6th place.
Advocacy
Pakistan has an open consultation on its draft National AI Policy ongoing.
China’s TC260 issued public consultation on draft AI Safety Standards until 26 February.
New Zealand’s Privacy Commissioner issued public consultation on its draft Biometric Processing Privacy Code of Practice. The Code includes 12 prospective rules until 14 March.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.