#38 Asia AI Policy Monitor
🇺🇳 Court AI Guidelines, 🇮🇳 Fake Case Law Warnings, 🇦🇺 National AI Plan & Deepfake Bill, 🇨🇳 Streaming Deepfakes Crackdown, 🇳🇿🇰🇷🇦🇺 Privacy Moves, 🇲🇾 AI–IP Gaps, 🇨🇳🤝🇦🇸 ASEAN AI...
Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Governance
UNESCO published the Guidelines for use of AI in Courts and Tribunals. We are proud to note that our supporters at Digital Governance Asia and APAC GATES made contributions recognized in the report.
Built around fifteen universal principles—from transparency, accountability, and human oversight to human rights protection and multistakeholder governance—the Guidelines provide practical orientation for judges, court administrators, and policymakers exploring AI adoption. They advocate for AI as an assistive, not substitutive, tool—used responsibly and always under meaningful human supervision.
Judges in India issued a ruling on reliance on AI-supported filings in legal practice.
The ruling contains a severe warning against citing AI-generated or unverified case law.
This judgment devotes considerable attention to the GST Department citing case law that—upon verification—did not exist.
The Court stated:
“There are discrepancies in the judgments cited… Government Departments must exercise utmost caution while citing judicial precedents, especially if generated by using Artificial Intelligence software.” (¶70–73)
The Bench referenced prior warnings including:
KMG Wires Pvt. Ltd. v. National Faceless Assessment Centre, Bombay HC (2025),
Christian Louboutin SAS v. The Shoe Boutique, Delhi HC (2023),
both cautioning against AI hallucinations and fake case law.
Australia publishes its AI National Plan.
Australia is an active and influential player in the global AI ecosystem, consistently punching above our weight in research and innovation:
• Australia ranks highly in AI use by consumers. After adjusting for population size, Australia ranks third globally in the use of Claude, a popular AI tool developed by leading technology company Anthropic (Appel et al 2025).
• Australia attracted $10 billion in data centre investment during 2024, making it the second-largest destination globally that year for this asset class after the United States (Knight Frank 2025).
• Our AI industry is thriving, with more than 1,500 companies driving growth and innovation nationwide (Bratanova et al. 2025).
• Australia produces 1.9% of the world’s AI research publications, far exceeding our share of global population and GDP. Our research extends beyond core computer science and into practical, discipline-specific applications including in medicine, environmental science, agriculture and the social sciences (Bratanova et al. 2025).
• In 2024, Australia attracted $700 million in private investment in AI firms, reflecting increasing momentum in developing and deploying Australian AI solutions (Bratanova et al. 2025).
• Demand for AI-skilled workers has tripled since 2015, underscoring Australia’s position as a hub for cutting-edge technology and talent (Bratanova et al. 2025).
Our goals
The National AI Plan is anchored in 3 goals:
• Capturing the opportunity: We are fostering investment in world-class digital and physical infrastructure, supporting local capability and attracting global partnerships. By expanding high-speed connectivity, attracting investment in advanced data centres, and backing our researchers and businesses, we aim to lead in AI innovations and applications.
• Spreading the benefits: Our goal is to ensure that all Australians, regardless of background or location, share the advantages of AI. We are supporting small and medium enterprises, regional communities and groups at risk of digital exclusion. Australian workers must share fairly in the potential productivity benefits of AI. Building digital and AI skills, growing and protecting jobs, supporting workforce transitions, and improving public services are central to this effort.
• Keeping Australians safe: We are committed to robust legal, regulatory, and ethical frameworks that protect rights and build trust. This includes ongoing review and adaptation of laws and establishing an AI Safety Institute. We are engaging internationally to manage risks such as bias, privacy breaches, and emerging threats, while promoting responsible innovation.
China’s CAC investigated the use of deepfakes for impersonation on streaming platforms.
Recently, some online accounts have been using AI technology to impersonate public figures and publish marketing information in live streams and short videos, misleading netizens and engaging in false advertising and online infringement, seriously damaging the online ecosystem and causing adverse effects.
Australia’s eSafety Commissioner investigated deepfake imagery abuse involving children.
Australia’s eSafety Commissioner has launched enforcement action against a technology company responsible for ‘nudify’ services used to create AI-generated sexual exploitation material of Australian school children.
Australia’s eSafety Commissioner also published guidance on the government’s approach to AI and safety.
The National AI Centre is supporting Australian industry with guidance and tools to adopt AI safely and secure productivity benefits. The Guidance for AI Adoption, released in October 2025, sets out six essential practices for responsible AI governance and adoption. It offers practical and accessible steps to help organisations develop and deploy AI. There are two versions of the guidance targeting different AI maturity levels:
• Foundations: for organisations getting started in adopting AI
• Implementation practices: for governance professionals and technical experts.
Privacy
New Zealand’s privacy regulator calls for reform due to new technology challenges including AI and automated decision making.
“We also need stronger protections for the significant privacy risks that arise from automated decision-making, which can cause problems such as inaccurate predictions, discrimination, unexplainable decisions, and a lack of accountability.
“Automated decision making is increasingly used to make decisions about people’s finances and allowances, which can really impact lives, and I think people should know why an automated decision is taken against them”, Mr Webster says.
South Korea’s privacy regulator, which earlier investigated China’s DeepSeek over privacy violations, has now commended the company for its remediation, citing it as a best practice.
The Personal Information Protection Commission announced that DeepSeek, the Chinese generative artificial intelligence service, won the grand prize in the preliminary evaluation of the “2025 Proactive Administration Best Practices Competition”, held at the Government Complex Sejong on Thursday, November 27, for promptly correcting and improving its personal information processing practices.
Australia’s privacy regulator published a guide on genAI in the workplace.
In practice, steps businesses can take include:
• conducting a Privacy Impact Assessment to understand the impact of using publicly available GenAI tools, so that risks can be managed, minimised or eliminated;
• where organisational or privacy risks are too high, prohibiting the upload of personal information to, or the use of, publicly available GenAI products;
• developing policies and procedures that govern the business’ use of GenAI tools, including ensuring staff are equipped to check and account for inaccuracies in the tools’ outputs;
• ensuring privacy policies and collection notices reflect the organisation’s use of GenAI, where relevant;
• when using organisational or enterprise licences for publicly available tools, actively engaging with and managing privacy settings, including restricting the tool provider’s access to user data for the purposes of training AI models, and restricting retention of user data where possible; and
• communicating policies and educating staff on how to responsibly use publicly available GenAI products.
Intellectual Property
Australian news outlets amend copyright licences in the age of AI.
“In an era of misinformation and disinformation, obtaining a licence gives confidence that the information being copied and shared comes from a professional news media organisation like News Corp Australia. This really makes it invaluable.”
Mr Gray said the corporate community should set the standard for legal copyright compliance.
“A retailer or manufacturer wouldn’t allow other businesses to steal their output, so why should copyright holders?
“But there is no doubt in my mind that Australian business wants to do the right thing and it’s up to us to help them do so.”
A recent academic article discusses Malaysia’s lack of rules around AI and IP.
Existing research has focused on global AI regulations, particularly the European Union AI Act, but there has been little examination of how these frameworks compare to Malaysia’s National Guidelines on AI Governance and Ethics. This study hence explores the legal and governance challenges related to AI’s use of copyrighted works and patented inventions in Malaysia, where current laws struggle to address issues of authorship, inventorship, and fair remuneration.
A recent recap of Chinese AI patent examination rules.
Director Jiang just mentioned that the recent revisions to the Patent Examination Guidelines further improve the examination standards for patent applications in the field of artificial intelligence. We have noticed that several recent revisions of the Guidelines have addressed related content. What is the relationship between this revision and the previous ones, and what is the specific significance of these adjustments and changes?
Legislation
Australia’s Parliament is considering a bill regarding deepfake nonconsensual imagery.
The My Face, My Rights framework seeks to:
● Recognise personal autonomy and consent in the digital environment.
● Strengthen the eSafety Commissioner’s powers to respond to AI-generated harm.
● Provide clear civil redress through the courts for individuals wrongfully depicted or exploited via deepfake material.
● Align domestic law with Australia’s international human rights obligations, including the International Covenant on Civil and Political Rights and the Convention on the Rights of the Child.
Multilateral
China and ASEAN cooperate on AI governance.
Cooperation on AI Governance
2.1 China will implement the Global AI Governance Initiative, and actively carry out communication, exchanges and practical cooperation with ASEAN countries in potential risk response, standardization of security governance, and the formulation of policies, laws and regulations related to security governance in the area of AI, with a view to jointly preventing the misuse of AI technologies.
2.2 China stands ready to support and encourage Chinese AI companies, universities, research institutions and industrial associations to strengthen exchanges and cooperation with their counterparts in ASEAN countries, so as to promote the interoperability and compatibility of relevant sector-specific standards and to share with each other the latest knowledge, best practices and experience.
2.3 China stands ready to enhance position coordination with ASEAN countries under multilateral platforms like the United Nations and jointly participate in the rule-making process concerning global AI governance, with the aim of achieving broad consensus in the field of international AI governance while fully respecting differences in policies and practices among countries.
The G20 released a statement on AI.
We further welcome the launch of the AI for Africa Initiative which was developed as a voluntary platform for multilateral and multi-stakeholder cooperation between the G20 and the African Union. We encourage the promotion of access to computing power in African countries, as well as AI talent and training, high quality and representative datasets, and infrastructure, as key building blocks for AI development and adoption in Africa. We encourage the development of the African AI ecosystem through voluntary contributions of technical and financial resources, and the development of Africa-centric sovereign AI capabilities, based on long-term partnerships with a focus on investment models that generate sustainable value on the continent.
The G7 published a statement on online child sexual exploitation and abuse, including AI-generated material.
We remain particularly concerned about the misuse of new technologies by perpetrators and evolving forms of offending. This includes the ongoing recirculation of known child sexual abuse material; the production and dissemination of child sexual abuse material, including that which is AI generated; financial sexual extortion; livestreamed abuse; and emerging trends such as sadistic online exploitation and youth-on-youth offences.
Advocacy
India opened a consultation on copyright infringement and online piracy.
In this regard, inputs and experiences are invited from the concerned stakeholders (communication dated 07.11.2025) in respect of:
• Current challenges being faced in identifying and removing pirated content;
• Technological or procedural gaps in enforcement and coordination, and measures that can strengthen proactive monitoring and takedown mechanisms;
• Best practices adopted internationally that may be relevant to the Indian ecosystem; and
• Suggestions for improving coordination between platforms, Government agencies and rights holders.
Inputs/suggestions may be sent by email to digital-mediamib@gov.in within 20 days of issuance of this communication.
South Korea is receiving comments on its AI enforcement decree until Dec 22.
The Framework Act on the Development of Artificial Intelligence and the Creation of a Trust Foundation was enacted (Act No. 20676, promulgated on January 21, 2025, and effective January 22, 2026) to protect the rights and interests of the people, improve the quality of life of the people, and strengthen national competitiveness by supporting the sound development of artificial intelligence and stipulating the basic matters necessary for the creation of a trust foundation for an artificial intelligence society. Accordingly, the purpose is to establish matters delegated by law, such as the procedures for establishing and amending the basic plan for artificial intelligence, the scope of projects eligible for support for artificial intelligence research and development, and matters necessary for its implementation…
Where to send comments: Email: zsshim@korea.kr
Singapore’s Monetary Authority opened a consultation on AI and Risk Management until Jan 31.
The Monetary Authority of Singapore (MAS) is proposing to introduce Guidelines on Artificial Intelligence (AI) Risk Management (the “Guidelines”) to enhance management of AI risks in financial institutions (FIs), and set out MAS’ supervisory expectations relating to AI risk management in the financial sector. The Guidelines focus on oversight of AI risk management in FIs, key AI risk management systems, policies and procedures, key AI life cycle controls, as well as capabilities and capacity needed for the use of AI.
India’s MeitY is taking comments on rules for synthetic AI content.
The feedback/comments on the draft rules, organised rule-wise, may be submitted by email to itrules.consultation@meity.gov.in in MS Word or PDF format by 6th November 2025.
India is calling for proposals for the next AI Impact Summit in 2026.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.