#42 Asia AI Policy Monitor
🇨🇳 SAMR AI cases | 🇺🇸 AI trade secrets conviction | 🇨🇳 DeepSeek IP allegations | 🇹🇭 AI risk consultation | 🇮🇩 Grok reinstated | 🇦🇺 AI broadcast disclosure rules | 🇨🇳 CAC labeling
Thanks for reading this month’s newsletter along with over 2,200 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Thanks for reading our first edition of the Year of the Horse!
As a special promotion, please see the following offer from our supporters at the Aus Gov Data Summit, with a special discount for our readers.
India AI Impact Summit
Happening this week, India is hosting the AI Impact Summit. Stay tuned for the final declaration and further analysis. The Asia AI Policy Monitor was cited in the Straits Times on possible outcomes for the summit. For our full analysis, read here.
Some announcements on increased state investment in AI are likely, but these may not significantly shift India’s position without stronger international partnerships. U.S.–India cooperation appears uncertain following recent tensions between Modi and Trump. India–China alignment is similarly unlikely, beyond expected photo opportunities. Europe and other powers, however, offer opportunities.
Intellectual Property
China’s SAMR released five typical cases of unfair competition involving AI.
SAMR provides an administrative route for the enforcement of intellectual property in China in addition to civil and criminal enforcement mechanisms. SAMR released these cases to “effectively guide business entities to operate legally and compliantly and maintain the healthy development of the artificial intelligence industry.”
A US court convicted a Chinese national of trade secret theft and economic espionage in a case involving AI technology.
Yesterday, a federal jury in San Francisco convicted former Google software engineer Linwei Ding, also known as Leon Ding, 38, on seven counts of economic espionage and seven counts of theft of trade secrets for stealing thousands of pages of confidential information containing Google’s trade secrets related to artificial intelligence technology for the benefit of the People’s Republic of China (PRC). The jury’s verdict follows an 11-day trial before U.S. District Judge Vince Chhabria for the Northern District of California.
“This conviction exposes a calculated breach of trust involving some of the most advanced AI technology in the world at a critical moment in AI development,” said Assistant Attorney General for National Security John A. Eisenberg. “Ding abused his privileged access to steal AI trade secrets while pursuing PRC government-aligned ventures. His duplicity put U.S. technological leadership and competitiveness at risk. I commend the trial team and investigators whose exceptional work resulted in this conviction.”
OpenAI accused Chinese AI provider DeepSeek of IP infringement (possibly actionable as unfair competition) through distillation of its models; similarly, the Chinese model Kimi was caught referring to itself as “Claude.”
In the memo sent to the U.S. House Select Committee on Strategic Competition between the U.S. and the Chinese Communist Party on Thursday, OpenAI said: “We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI’s access restrictions and access models through obfuscated third-party routers and other ways that mask their source.”
“We also know that DeepSeek employees developed code to access U.S. AI models and obtain outputs for distillation in programmatic ways,” the memo added.
Cybersecurity
Thailand Cybersecurity Agency opens consultation on AI guidelines.
The guide is explicit about four core technical and social risks:
Hallucinations/fabrication: AI may generate false but plausible information.
Prompt injection and manipulation: AI can be tricked into unsafe or misleading behavior.
Data poisoning: biased or malicious training data can distort outputs.
Over-reliance: users may trust AI outputs without verification.
Governance
Indonesia allowed X’s Grok to resume operating following the company’s response to concerns over deepfake imagery.
The Ministry of Communication and Digital (Kemkomdigi) is conditionally restoring access to the Grok service under strict supervision, after X Corp submitted a written commitment on service improvement measures and compliance with applicable Indonesian law.
Australia posted rules on AI disclosure in broadcast radio.
The Australian Communications and Media Authority (ACMA) has registered updated rules for commercial radio broadcasters that include new requirements for content broadcast around school drop-off and pick-up times, and also for disclosing artificial intelligence use.
Under the Commercial Radio Code of Practice 2026, radio stations will be required to let their audience know when a synthetic voice is being used to host a regularly scheduled program or news broadcast. This is the first time AI has been addressed in a broadcasting code of practice.
China’s CAC published typical cases of failure to label AI-generated content.
Recently, some online accounts have published AI-generated and synthesized information without adding AI identifiers, misleading the public with false content, damaging the online ecosystem and causing a negative impact. The Cyberspace Administration of China has urged websites and platforms to conduct thorough investigations and rectifications, handling 13,421 accounts in accordance with laws and contracts and removing over 543,000 pieces of illegal and irregular information. Some typical cases are reported below:
Vietnam opened consultations on high risk AI systems.
Artificial intelligence systems are included in the High-Risk Artificial Intelligence Systems category after consideration of the following principles:
a) The results of the system are used as a basis or factor that significantly influences decision-making affecting the legitimate rights, obligations, or interests of organizations or individuals;
c) Determined according to the specific purpose and context of use, not solely by technology, algorithms, or models;
d) System malfunctions or failures could have serious, irreparable consequences for individuals, society, or the public interest;
đ) It has a wide scope of implementation or scale of impact, and is not of an isolated or individual nature;
e) Risks cannot be fully controlled by other relevant legal regulations alone;
g) It is possible to define and apply consistent management obligations to that group of systems.
India’s MeitY adopted rules on synthetically generated materials.
The amendments as outlined in the draft notification introduce:
A clear definition of “synthetically generated information”;
Labelling and metadata embedding requirements for such information to ensure users can distinguish synthetic from authentic content;
Visibility and audibility standards requiring that synthetic content be prominently marked, including a minimum 10% visual or initial audio duration coverage; and
Enhanced verification and declaration obligations for SSMIs, mandating reasonable technical measures to confirm whether uploaded content is synthetically generated and to label it accordingly. These amendments are intended to promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies.
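To illustrate the draft’s minimum-coverage thresholds in concrete terms, here is a hypothetical sketch of how a platform might check them. The function names and logic are our assumptions for illustration, not part of the notification itself; only the 10% figures come from the draft summary above.

```python
# Hypothetical checks for the draft's labelling thresholds: a visual label
# covering at least 10% of the display area, or an audio notice covering
# at least the initial 10% of the clip's duration. Illustrative only.

def visual_label_ok(label_w: int, label_h: int, frame_w: int, frame_h: int) -> bool:
    """True if the label covers at least 10% of the frame area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_notice_ok(notice_seconds: float, total_seconds: float) -> bool:
    """True if the spoken notice spans at least the initial 10% of the audio."""
    return notice_seconds >= 0.10 * total_seconds

# Example: a 200x120 label on a 1920x1080 frame covers only ~1.2% of the
# frame, so it would fall short; a 640x360 label (~11.1%) would comply.
print(visual_label_ok(200, 120, 1920, 1080))   # False
print(visual_label_ok(640, 360, 1920, 1080))   # True
```

The same area-versus-duration distinction carries over to mixed media: a video would need both the visual overlay and, if it opens with synthetic audio, the initial audio notice.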
The UN published a report on AI and child safety, citing Australia as leading the way.
At the end of 2025, Australia became the first nation in the world to ban social-media accounts for children under 16, on the basis that the risks from the content they share far outweigh the potential benefits.
The Government there cited a report it had commissioned, which showed that almost two-thirds of children aged between 10 and 15 had viewed hateful, violent or distressing content and more than half had been cyberbullied. Most of this content was seen on social media platforms.
Several other countries, including Malaysia, the UK, France and Canada, look set to follow Australia’s lead, preparing regulations and laws for similar bans or restrictions.
A Chinese think tank published a report on AI governance issues in China.
Currently, global artificial intelligence (AI) technology is iterating and breaking through at an unprecedented pace, generating new productivity and empowering industrial transformation, while also bringing complex and far-reaching challenges to social governance. In 2025, China took a crucial step in the field of AI governance. Looking ahead to 2026, with the popularization and deep integration of large-scale modeling technology, AI governance will enter a new stage of systematic and refined development. How to strike a balance between incentivizing innovation and preventing risks, and how to address emerging challenges such as technology misuse, data security, and the impact on employment structures, will become important issues for China in promoting the healthy and orderly development of AI.
Singapore published a review of its economic strategy emphasizing AI.
Singapore is well-positioned to capture new trade and investment flows, as firms gravitate towards trusted locations to manage risks in a more volatile and uncertain world. With the global environment in flux and emerging technologies introducing new risks, Singapore can capitalise on our trusted reputation – a key enabler we have built up over decades – to offer new trust technologies and services (e.g. cybersecurity, AI assurance, and Testing, Inspection and Certification) that will extend our lead in modern services.
Korea published AI transparency obligations.
Purpose: Ensure users clearly know when they are using AI-based products/services and when outputs are AI-generated, to prevent confusion, deception, and erosion of trust.
Legal basis: Implements Article 31 of Korea’s AI Basic Act, establishing mandatory transparency obligations for AI providers.
Three core obligations:
Advance notice: Users must be informed in advance if a product or service is powered by high-impact AI or generative AI.
Output labeling: AI-generated results must be clearly indicated as such.
Deepfake disclosure: Audio, images, or video that are difficult to distinguish from reality must be explicitly disclosed or labeled.
Who is responsible:
The obligation falls on the AI business that directly provides the product or service to end users, not upstream model developers (unless they serve users directly).
Applies to foreign companies if Korean users are affected.
Scope clarifications:
Merely using AI internally to create content (e.g., a filmmaker using AI tools) does not trigger obligations.
Text outputs are generally excluded from the “deepfake” category.
How disclosure is done:
Via terms of service, contracts, UI notices, on-screen labels, audio notices, watermarks, metadata, or physical labeling—depending on context.
Machine-readable methods (e.g., metadata, watermarking) are allowed but require at least one user-facing notice.
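The pairing rule above can be sketched in code: a provider emits both a machine-readable record and a human-visible notice, never the metadata alone. This is a minimal illustration assuming hypothetical field names (`ai_generated`, `generator`, `content_id`); the Korean guidance does not prescribe a specific schema.

```python
import json

def build_disclosure(generator: str, content_id: str) -> tuple[str, str]:
    """Return (machine_readable_metadata_json, user_facing_notice).

    Illustrative sketch: machine-readable labels (metadata/watermarks)
    must be accompanied by at least one user-facing notice, so both
    artifacts are produced together. Field names are assumptions.
    """
    metadata = {
        "ai_generated": True,      # machine-readable flag for downstream tools
        "generator": generator,    # which model or service produced the content
        "content_id": content_id,  # identifier for traceability
    }
    notice = f"This content was generated by AI ({generator})."
    return json.dumps(metadata), notice

meta, notice = build_disclosure("example-model", "clip-001")
print(notice)  # This content was generated by AI (example-model).
```

In practice the JSON record would be embedded in file metadata or a watermark, while the notice string would surface in the UI, terms of service, or an on-screen label, depending on context.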
Korea also published AI impact assessment guidelines per the AI Basic Act.
Multilateral
APEC 2026 will convene Digital Ministers for discussions on AI.
The Digital Economy Steering Group will advance work on data flows, digital tools, online safety and emerging technologies, especially artificial intelligence, reinforcing APEC’s efforts to promote an enabling environment for the digital economy.
APEC’s most recent regional trends report finds increasing AI-related investment in the region.
The February 2026 APEC Regional Trends Analysis (ARTA) finds that growth momentum in the region has strengthened, supported by resilient consumption, robust trade, and surging AI-related investment. Yet beneath the improved near-term outlook, structural vulnerabilities, rising trade restrictions, and increasingly concentrated technology investment are deepening medium-term risks. Sustaining growth will require credible economic management, inclusive productivity-enhancing reforms, and stronger regional cooperation to navigate a more fragmented global environment.
ASEAN countries joined up to implement Singapore’s Agentic AI Guidelines.
An initiative spanning Singapore, Thailand, Malaysia, Indonesia, and the Philippines aims to help enterprises operationalise the new requirements under Singapore’s new Model AI Governance Framework for Agentic AI.
Announced by Armor, a Microsoft Solutions Partner for Security, the five-country initiative addresses emerging compliance challenges in the framework amid tightening AI oversight in the region.
Recent UNICEF policy guidance on Children and AI was based on research conducted by INTERPOL in 11 countries, including six in Southeast Asia (Cambodia, Indonesia, Malaysia, the Philippines, Thailand and Vietnam). The guidance recommends that:
All governments expand definitions of child sexual abuse material (CSAM) to include AI-generated content, and criminalise its creation, procurement, possession and distribution.
AI developers implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
Digital companies prevent the circulation of AI-generated child sexual abuse material, not merely remove it after the abuse has occurred, and strengthen content moderation with investment in detection technologies so such material can be removed immediately, not days after a report by a victim or their representative.
In the News and Analysis
Think tank launches AI Governance in South Asia Report.
South Asia is a critical site for shaping inclusive AI governance. Five nations in the region (India, Sri Lanka, Bangladesh, Nepal and Bhutan) collectively represent 1.75 billion people. Despite differences in digital infrastructure capacity, institutional maturity and technology capabilities, they are navigating the AI transformation through similar policy approaches and governance frameworks. Between 2018 and 2025, all five nations published national AI strategies, either in draft form or formally adopted. These strategies focus on using AI for inclusive development through balanced regulatory approaches. This paper finds that this apparent convergence provides the foundation for coordinated engagement in areas such as data governance, reskilling and standard setting.
Advocacy
Thailand Cybersecurity Agency opens consultation on AI guidelines.
Vietnam is seeking consultations on high risk AI systems.
The Ministry of Science and Technology is drafting a Decision of the Prime Minister promulgating the List of High-Risk Artificial Intelligence Systems. The Government Electronic Information Portal would like to present the full text and request that agencies, organizations, and individuals both domestically and internationally study it and contribute their opinions.
UN IGF 2026 Call for Thematic Inputs.
Stakeholders from all regions and stakeholder groups are invited to share perspectives on the most pressing emerging issues, priorities, and challenges in the governance of digital technologies.
Your contributions will help inform the overall theme, subthemes, intersessional work, and programme development of IGF 2026—supporting a transparent, bottom-up process.
🗓️ Deadline: 28 February 2026, 23:59 UTC
Uzbekistan issued a public consultation on AI ethics guidelines.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The OECD is opening consultations on Global AI Governance.
Your contributions will inform global AI strategies and policies for AI in government through OECD reports and knowledge products, with top submission featured in the OECD.AI Policy Observatory through a new repository on AI in Government.
📥 Submit here by 27 February: https://oecd.ai/wonk/call-ai-in-gov
China’s TC260 opened a consultation on four draft cybersecurity practice guides for AI, including standards for cleansing AI training data.
To effectively address the new risks and challenges brought about by the rapid development and application of artificial intelligence technology, comprehensively improve the security level of artificial intelligence applications in various industries, and ensure the high-quality development of artificial intelligence, the Secretariat has organized the compilation of four draft guidelines for cybersecurity standards and practices: the “General Principles of Artificial Intelligence Application Security Guidelines,” the “Artificial Intelligence Application Security Guidelines for Broadcasting, Television and Online Audiovisual,” the “User Security Guidelines for Using Artificial Intelligence,” and the “Security Guidelines for Cleaning Artificial Intelligence Training Data.”
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.