#36 Asia AI Policy Monitor
🌏 Singapore Cyber Consultation on AI · 🇻🇳 VN Cybersecurity & Deepfakes · 🇰🇷 Korea Public-Sector AI Rules · 🇨🇳 China Global AI Forum · 🇦🇺 Australia AI Companions & Safety · ✨ & More!

Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Intellectual Property
Australia's Labor Party rules out AI copyright changes, while launching consultations on alternative ways to reduce barriers and infringement claims in AI training.
Labor has ruled out changing copyright laws to give tech giants free rein to train artificial intelligence models on creative works, after the proposition was met with widespread backlash from artists.
The government’s copyright and AI reference group will meet early this week to examine whether the laws need to be refreshed, but Attorney-General Michelle Rowland stressed that any changes would not include a carve-out for developers to train their systems on Australian works.
Such an exemption has been called for by parts of the tech sector and floated by the Productivity Commission in its interim report on harnessing data and digital technology, which estimated that AI could deliver a $116 billion boost to the economy over a decade.
Cybersecurity & Trust/Safety
Vietnam’s National Assembly proposed amending the Cybersecurity Law to include prohibitions on deepfake content.
Stating the need to develop the Law on Cyber Security, delegates at Group 10 (including the National Assembly delegations of Ninh Binh and Quang Tri provinces) said that, amid the rapid development of the Fourth Industrial Revolution, ensuring cyber security has become an important task tied to protecting national sovereignty and socio-economic development.
Resolutions and conclusions of the Party, especially Resolution No. 30-NQ/TW of 2018 on the National Cyber Security Strategy, Politburo Resolutions No. 57, 59, 66 and 68, and the XIII Congress Documents, all state clearly that it is necessary to perfect the legal system on cyber security, ensure digital sovereignty and data security, and create a legal corridor for digital transformation, innovation and international integration. The amendment of the Law on Cyber Security aims to fully institutionalize the Party's new policies, in line with international commitments and profound changes in the non-traditional security environment.
India's MeitY's amended IT Rules 2025 will mandate labels and 36-hour takedowns for deepfakes.
India is set to roll out the world’s toughest deepfake regulations under the amended Information Technology (IT) Rules, 2025, effective November 2025. The Ministry of Electronics and IT (MeitY) has finalized sweeping changes targeting synthetic AI-generated content, from celebrity deepfake videos to election-sabotaging misinformation.
An Indian court ordered the removal of deepfake videos of the Punjab Chief Minister, and the arrest of suspected creators for defamation.
A court in Mohali has directed YouTube, Telegram and Instagram to immediately remove within 24 hours all objectionable AI-generated deepfake videos aimed at defaming Punjab Chief Minister Bhagwant Mann.
Australia’s eSafety Commissioner published a note to AI Companion providers regarding obligations.
Australia’s eSafety Commissioner has issued legal notices to four popular AI companion providers requiring them to explain how they are protecting children from exposure to a range of harms, including sexually explicit conversations and images and suicidal ideation and self-harm.
Notices were given to Character Technologies, Inc. (character.ai), Glimpse.AI (Nomi), Chai Research Corp (Chai), and Chub AI Inc. (Chub.ai) under Australia’s Online Safety Act.
The notices require the four companies to answer a series of questions about how they are complying with the Government's Basic Online Safety Expectations Determination, and to report on the steps they are taking to keep Australians safe online, especially children.
Singapore is taking public comments on further guidelines for securing AI systems.
Agentic AI possesses sophisticated abilities to understand context, formulate plans and take independent actions to achieve specified objectives. These capabilities introduce new risks, with greater potential for impact given agentic AI's access to tools and data.
In view of these risks and the increased interest in Agentic AI usage, CSA has developed the Addendum with industry, government and international partners to support system owners in securing their agentic AI systems. The Addendum is designed to be read alongside the Guidelines and Companion Guide. The Addendum:
a. Outlines how risks can be identified and assessed based on the capabilities of Agentic AI systems (e.g., by mapping out agentic workflows to identify where threat actors could potentially exploit vulnerabilities) and
b. Provides practical controls to mitigate relevant risks across the development lifecycle. Practical examples will also be provided to illustrate how the Addendum can be applied across different scenarios and levels of system autonomy. These include use cases such as app development and coding assistants, automated client onboarding systems, and automated fraud detection systems.
Governance
South Korea's Ministry of the Interior and Safety issued rules on public-sector AI.
The Ministry of the Interior and Safety (Minister Yoon Ho-joong) announced the establishment of “Public Sector AI Ethics Principles” to promote administrative innovation through artificial intelligence (AI) technology and secure public trust in the use of AI.
Hong Kong’s Privacy Commissioner lays out her vision for AI governance in the city.
Privacy Commissioner Ada Chung shares that data protection authorities must pivot from being policing bodies to innovation enablers, guiding technological change by cultivating a lawful, secure, and supportive environment.
Japan’s AI Safety Institute released an open source AI safety evaluation tool.
To conduct AI safety assessments based on the guide, evaluators need to set specific evaluation criteria and assess the AI safety level of the system under evaluation. This assessment tool provides an AI safety evaluation environment using highly versatile evaluation criteria, reducing the work AI businesses must do to set criteria and build an evaluation environment, and making safety assessments easier to conduct. It also includes an automated AI safety assessment (automated red teaming) function that analyzes how attackers might attack AI systems.
The tool is open source under the Apache 2.0 license and supports general-purpose AI safety evaluation. Businesses that require more specialized evaluation, such as assessments using organization-specific evaluation items, can use the tool as a reference and develop customized evaluation tools by modifying and reusing it within the scope of the license.
Multilateral
ASEAN formed an AI Safety Network.
NOTE the recommendations from the Feasibility Study on the ASEAN AI Safety Network (ASEAN AI Safe) highlight a need to strengthen institutional capacity to coordinate, support, and elevate AI safety efforts across ASEAN, including the proposal to establish the ASEAN AI Safety Network (ASEAN AI Safe) as a regional mechanism to support the ASEAN Digital Senior Officials’ Meeting (ADGSOM) and the ASEAN Working Group on AI Governance (WG-AI) on AI safety, enhance regional capacity, foster collaboration, promote AI safety research and innovation efforts, and advance AI safety adoption;
FINALISE the establishment of the ASEAN AI Safety Network (ASEAN AI Safe) and anchor it within the ASEAN structure, to further advance ASEAN’s capacity in AI safety through exchanging best practices, promoting interdisciplinary collaboration, developing multi-stakeholder partnerships, enhancing capacity building and research, and fostering collaboration with external partners;
ESTABLISH the ASEAN AI Safety Network (ASEAN AI Safe) in full alignment with ASEAN centrality, consensus-based decision-making, and the ASEAN Charter, and will be guided by transparency, accountability, and multistakeholder participation;
The US and Japan signed an MOU on Tech Prosperity, including on AI.
Advancing pro-innovation AI policy frameworks and initiatives to support the adoption of a U.S. and Japan-led AI technology ecosystem;
Promoting exports across the full stack of U.S. and Japanese AI infrastructure, hardware, models, software, applications, and related standards;
Partnering to ensure the rigorous enforcement of existing protection measures, strengthen protection measures related to critical and sensitive technologies, and enhance supply chain resilience for the AI tech stack;
Promoting mutual understanding of guidelines and frameworks for AI development and adoption from the respective Participants, with the goal of harmonizing practices as applicable to encourage interoperability;
Advancing and refocusing the partnership between the U.S. Center for AI Standards and Innovation and the Japan AI Safety Institute towards a shared mission to promote AI innovation by fostering a secure and trustworthy AI ecosystem, including through working towards best practices in metrology for AI and industry standards development, and improving understanding of both advanced AI models and sector-specific applications to drive continued AI adoption.
APEC released its Artificial Intelligence (AI) Initiative (2026-2030).
Our strategic direction for realizing this vision is articulated through three overarching objectives, as follows:
a. Foster resilient economic growth across and within APEC economies by advancing AI innovation and promoting secure, accessible and reliable AI ecosystems for all.
b. Increase member economies’ meaningful participation in AI transformation through cooperation and capacity-building initiatives for the benefit of all.
c. Encourage AI development and adoption by leveraging energy-efficient technologies and fostering resilient AI infrastructure investment.
China, at the recent APEC meeting in South Korea, announced proposals to establish a global AI governance body.
Chinese President Xi Jinping took centre stage at a meeting of APEC leaders on Saturday to push a proposal for a global body to govern artificial intelligence and to position China as an alternative to the United States on trade cooperation.
The comments were the first by the Chinese leader on an initiative Beijing unveiled this year, while the United States has rejected efforts to regulate AI in international bodies.
In the News & Analysis
Tech for Good published a report on fighting scams in Southeast Asia, many of which are AI enabled.
Southeast Asia’s rapid digital transformation has unlocked economic opportunity, but it has also created new vulnerabilities in the form of increasingly sophisticated scams and fraud. As millions of people and businesses come online, many are exposed to evolving digital risk without the necessary safeguards to protect themselves. Scams today exploit not only technical loopholes but also human trust, behavioural habits and systemic gaps, leading to erosion of trust, mounting financial losses and growing social harm across the region.
Advocacy
Singapore's Cyber Security Agency (CSA) is taking comments on further guidelines for securing AI systems until 31 December.
Australia is conducting consultations on its copyright regime following its rejection of text-and-data-mining (TDM) exceptions for AI training.
The Government is convening its Copyright and AI Reference Group (CAIRG) over the next two days to discuss three priority areas:
a. Encourage fair, legal avenues for using copyright material in AI: examining whether a new paid collective licensing framework under the Copyright Act should be established for AI, or whether to maintain the status quo through a voluntary licensing framework.
b. Improve certainty: exploring opportunities to clarify or update how copyright law applies to material generated through the use of AI.
c. Avenues for less costly enforcement: making it easier to enforce existing rights through a potential new small claims forum to efficiently address lower-value copyright infringement matters.
Indonesia conducted consultations on its AI Road Map and Ethics Guide. Views can be sent to kerjal.aikita@mail.komdigi.go.id.
The public consultation on the White Paper on the National Artificial Intelligence Roadmap and the Draft Guidelines for Artificial Intelligence Ethics seeks responses and input from relevant stakeholders to enrich both documents, so that a comprehensive and accurate study is produced to support artificial intelligence in Indonesia.
India’s MeitY is taking comments on rules for synthetic AI content.
Feedback and comments on the draft rules may be submitted on a rule-by-rule basis, in MS Word or PDF format, by email to itrules.consultation@meity.gov.in by 6 November 2025.
India is calling for proposals for the next AI Impact Summit in 2026.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.


