#37 Asia AI Policy Monitor
🇰🇷 AI Act Decree, 🇯🇵 GenAI Copyright Test Case, 🇨🇳 Agentic Cyber Ops, 🇦🇺 AI Safety Institute, 🇸🇬 MAS AI Risk Rules, 🇮🇳 Copyright & Synthetic Media Consultations
Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Legislation
South Korea published its draft enforcement regulation for the AI Act for public consultation.
According to the Ministry of Science and ICT, the AI Framework Act Enforcement Decree stipulates that “high-impact” AI, which significantly affects human rights, must provide advance notice to users and mark its outputs as AI-generated using a watermark or similar marking.
In addition to advance notice through terms and conditions and the user interface (UI), watermarks that are invisible to the human eye but automatically machine-detectable as AI-generated are also recognized as valid markings.
Deepfake outputs must specifically be labeled as deepfakes, in a manner that accounts for users’ physical conditions, such as age or disability.
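The idea behind a machine-detectable but invisible marking can be illustrated with a minimal sketch. The decree does not prescribe any particular mechanism; the marker string, bit layout, and least-significant-bit technique below are illustrative assumptions only.

```python
# Illustrative sketch (not the decree's specified mechanism): embed a
# machine-readable "AI-generated" flag in the least significant bits of
# image pixel bytes, leaving the visible image essentially unchanged.

MARKER = b"AI-GENERATED"  # hypothetical flag, not an official label

def embed_marker(pixels: bytearray, marker: bytes = MARKER) -> bytearray:
    """Write each bit of `marker` into the LSB of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold marker")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def detect_marker(pixels: bytearray, marker: bytes = MARKER) -> bool:
    """Re-read the LSBs and check whether they spell out `marker`."""
    n_bits = len(marker) * 8
    if n_bits > len(pixels):
        return False
    recovered = bytearray()
    for i in range(0, n_bits, 8):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i + j] & 1)
        recovered.append(byte)
    return bytes(recovered) == marker

# Example with fake 8-bit grayscale "image" data
image = bytearray(range(256))
marked = embed_marker(image)
assert detect_marker(marked)
assert not detect_marker(image)  # unmarked data does not carry the flag
```

Real-world schemes (e.g. provenance metadata or robust watermarks) are far more tamper-resistant than this toy LSB approach, but the detection principle is the same: the marking survives for machines while staying imperceptible to viewers.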
Intellectual Property
Japanese authorities bring the first copyright infringement action over a GenAI-produced work.
Police in Japan have accused a man of unauthorized reproduction of an AI-generated image. This is believed to be the first ever legal case in Japan where an AI-generated image has been treated as a copyrighted work under the country’s Copyright Act.
According to the Yomiuri Shimbun and spotted by Dexerto, the case relates to an AI-generated image created using Stable Diffusion back in 2024 by a man in his 20s from Japan’s Chiba prefecture. This image was then allegedly reused without permission by a 27-year-old man (also from Chiba) for the cover of his commercially-available book.
India’s Ministry of Information and Broadcasting is requesting input on rules to address copyright infringement and strengthen anti-piracy enforcement, largely in light of recent developments in AI.
The notice is titled “Inviting inputs in addressing copyright infringement and strengthening anti-piracy strategies.”
Japanese publishers call for solutions to AI copyright infringement.
Major publishing houses and creators’ associations in Japan have issued a joint statement calling on US tech company OpenAI to address copyright infringement, citing pictures and videos thought to be created by its Sora 2 generative artificial intelligence model.
Sora 2 has raised concerns after many videos featuring Japanese anime and game characters were created through the model and posted online.
In the statement, 19 groups in total, including publishers and the Japan Cartoonists Association, affirmed the principle that copyright infringement will not be tolerated.
Cybersecurity
Anthropic reports that its AI agents were used by Chinese state hackers in a large-scale operation.
In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.
The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.
Governance
Australia launched its AI Safety Institute.
The AISI will be an important capability in government, working directly with regulators to make sure we’re ready to safely capture the benefits of AI with confidence.
Its work will include:
helping government keep pace with rapid developments in AI technologies, dynamically addressing emerging risks and harms
enhancing our understanding of technical developments in advanced AI and potential impacts
serving as a central hub to share insights and support coordinated government action
giving guidance on AI opportunity, risk and safety for businesses, government and the public through established channels including the National AI Centre (NAIC)
supporting Australia’s commitments under international AI safety agreements.
Singapore’s Monetary Authority opened a consultation on AI and Risk Management.
Key AI Risk Management Systems, Policies and Procedures

AI Identification

4.2. Identifying where AI is used within FIs is a critical prerequisite for applying the appropriate governance, risk management standards, and controls effectively to such usage of AI. Hence MAS expects FIs to establish clear definitions, criteria and processes, supported by robust systems, to facilitate this identification process and ensure the consistent identification of AI usage across all relevant business and functional areas.

AI Inventory

4.3. Unapproved usage of AI, particularly in higher-risk use cases, can lead to unintended consequences and an FI being exposed to AI risks beyond its risk appetite. To mitigate this risk, MAS proposes that FIs establish and maintain an accurate and up-to-date inventory of AI use cases, systems or models to support governance and oversight, as well as risk management throughout the AI lifecycle. Such an inventory can be established specifically for AI or by enhancing existing inventories. In either case, there should be clear linkages between the AI inventory and other relevant inventories in the FI.

Risk Materiality Assessment

4.4. MAS recognises that AI is used across a wide range of business and functional areas with varying levels of risks. MAS expects FIs to implement an appropriate assessment methodology to evaluate the risk materiality of AI use cases, systems, or models. MAS proposes that the risk materiality assessment minimally cover the key dimensions of impact, complexity and reliance.
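The inventory and risk-materiality requirements in paragraphs 4.2–4.4 can be sketched as a simple data structure. MAS does not prescribe any schema or scoring method; the field names, 1–3 scale, and max-based aggregation below are illustrative assumptions only.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: MAS does not prescribe this schema or scoring.
# Each inventory entry records an AI use case plus the three risk-materiality
# dimensions the consultation names: impact, complexity, and reliance.

@dataclass
class AIUseCase:
    name: str
    business_area: str
    impact: int      # 1 (low) to 3 (high): consequence of errors
    complexity: int  # 1 to 3: model/system complexity, explainability
    reliance: int    # 1 to 3: degree of reliance absent human oversight

    def materiality(self) -> str:
        """Toy aggregation: the highest dimension drives the overall rating."""
        score = max(self.impact, self.complexity, self.reliance)
        return {1: "low", 2: "medium", 3: "high"}[score]

@dataclass
class AIInventory:
    entries: list = field(default_factory=list)

    def register(self, use_case: AIUseCase) -> None:
        self.entries.append(use_case)

    def high_materiality(self) -> list:
        """Entries that would need the fullest governance and controls."""
        return [e for e in self.entries if e.materiality() == "high"]

inventory = AIInventory()
inventory.register(AIUseCase("credit scoring", "retail lending", 3, 2, 3))
inventory.register(AIUseCase("email triage", "operations", 1, 1, 2))
print([e.name for e in inventory.high_materiality()])  # ['credit scoring']
```

In practice an FI's inventory would also link each entry to related model, data, and vendor inventories, as paragraph 4.3 suggests; the point here is only that a single registry keyed by risk dimensions makes "unapproved usage" detectable by comparison against what is recorded.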
China’s CAC published the latest deep synthesis algorithm list.
Article 19 of the “Regulations on the Administration of Deep Synthesis in Internet Information Services” clearly stipulates that deep synthesis service providers with public opinion attributes or social mobilization capabilities shall complete the filing, modification, and cancellation procedures in accordance with the “Regulations on the Administration of Algorithm Recommendation for Internet Information Services.” Deep synthesis service technology supporters shall also follow these procedures. Deep synthesis service providers and technology supporters who have not yet completed the filing procedures are urged to apply for filing as soon as possible.
China’s CAC published the latest filing information for genAI services.
To promote the innovative development and standardized application of generative artificial intelligence services, the Cyberspace Administration of China, in conjunction with relevant departments, has been continuously carrying out the registration of generative artificial intelligence services in accordance with the requirements of the “Interim Measures for the Administration of Generative Artificial Intelligence Services.” As of November 1, 2025, 73 new generative artificial intelligence services have completed registration with the Cyberspace Administration of China. For generative artificial intelligence applications or functions that directly call the capabilities of registered models through API interfaces or other means, local cyberspace administrations are responsible for registration, with 35 new applications completing registration in this phase. As of November 1, a total of 611 generative artificial intelligence services have completed registration, and 306 generative artificial intelligence applications or functions have completed registration. The relevant information is hereby announced.
Advocacy
India Copyright infringement consultation.
In this regard, inputs and experiences are invited from the concerned stakeholders (notice dated 07.11.2025) in respect of:
current challenges being faced in identifying and removing pirated content;
technological or procedural gaps in enforcement and coordination, and measures that can strengthen proactive monitoring and takedown mechanisms;
best practices adopted internationally that may be relevant to the Indian ecosystem; and
suggestions for improving coordination between platforms, Government agencies and rights holders.
Inputs/suggestions may be sent by email to digital-mediamib@gov.in within 20 days of issuance of this communication.
South Korea is receiving comments on its AI enforcement decree until Dec 22.
The Framework Act on the Development of Artificial Intelligence and the Creation of a Trust Foundation was enacted (Act No. 20676, promulgated on January 21, 2025, and effective January 22, 2026) to protect the rights and interests of the people, improve the quality of life of the people, and strengthen national competitiveness by supporting the sound development of artificial intelligence and stipulating the basic matters necessary for the creation of a trust foundation for an artificial intelligence society. Accordingly, the purpose is to establish matters delegated by law, such as the procedures for establishing and amending the basic plan for artificial intelligence, the scope of projects eligible for support for artificial intelligence research and development, and matters necessary for its implementation…
Where to send comments:
Email: zsshim@korea.kr
Singapore’s Monetary Authority opened a consultation on AI and Risk Management until Jan 31.
The Monetary Authority of Singapore (MAS) is proposing to introduce Guidelines on Artificial Intelligence (AI) Risk Management (the “Guidelines”) 1 to enhance management of AI risks in financial institutions (FIs), and set out MAS’ supervisory expectations relating to AI risk management in the financial sector. The Guidelines focus on oversight of AI risk management in FIs, key AI risk management systems, policies and procedures, key AI life cycle controls, as well as capabilities and capacity needed for the use of AI.
India’s MeitY is taking comments on rules for synthetic AI content.
Feedback/comments on the draft rules may be submitted rule-by-rule by email to itrules.consultation@meity.gov.in in MS Word or PDF format by 6 November 2025.
India is calling for proposals for the next AI Impact Summit in 2026.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.