#40 Asia AI Policy Monitor
🔮 2026 AI Predictions · 🇹🇼 AI Basic Act Passed · 🇮🇳 AI Ethics Bill Proposed · 🇰🇷 AI Act Implementation Confirmed · 🇰🇷 AI Data Center Push · 🇲🇾🇮🇳 Grok Investigations · 🇨🇳 Anthropomorphic Rule
Thanks for reading this month’s newsletter along with over 2,200 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
2026 Predictions
Expect additional ASEAN national AI legislation to follow Vietnam’s AI Act. These laws are likely to emphasize industrial policy and domestic AI industry development rather than adopting the strong ex ante guardrails of the EU AI Act. Instead, they will frame AI governance around balanced ethics, job creation, economic growth, and a clear push toward AI sovereignty.
In 2026, more child-specific AI protections are likely to emerge. These will focus on non-consensual deepfake imagery, alongside guardrails such as age-verification for AI use. While these measures are unlikely to replicate China’s strict usage limits, policymakers will draw lessons from China’s regulatory experience, including recent approaches to anthropomorphic AI.
Copyright rules across Asia will become clearer through litigation rather than legislation. Courts in India and Japan are already shaping how rights apply to AI training and outputs, and similar cases are likely to follow. Local AI developers are expected to strike market-specific licensing deals with content owners, while the current push for expanded copyright rules, such as those found in Singapore and Japan, appears to have peaked (see Australia’s recent debates over copyright changes) and is likely to recede.
Legislation
Taiwan passed its AI Basic Act.
The Legislative Yuan yesterday passed a new law that lays out principles on how artificial intelligence (AI) is to be governed in Taiwan and designated the National Science and Technology Council (NSTC) as the governing authority for AI.
Under the Artificial Intelligence Basic Act (人工智慧基本法), the government is required to promote AI research and applications, while also prioritizing social welfare, digital equity, innovation and national competitiveness.
The act stipulates that AI development should adhere to seven core principles: sustainability and well-being, human autonomy, privacy and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability.
India’s Parliament proposed an AI Ethics Bill.
The Bill proposes to constitute an Ethics Committee for Artificial Intelligence to:
- Develop and recommend ethical guidelines for AI technologies;
- Monitor compliance with ethical standards in AI systems;
- Review cases of misuse, bias or violations;
- Promote awareness and capacity-building among stakeholders.
The Ethics Committee for Artificial Intelligence will consist of:
- A Chairperson with expertise in ethics and technology;
- Experts in law, data science and human rights, appointed by the Central Government;
- Representatives from academia, industry, civil society and government…
The Australian Capital Territory (ACT) will not pass deepfake legislation ahead of elections.
The ACT has a “watching brief” on how AI deepfakes could impact the lead-up to elections, but doesn’t plan to jump the gun on legislative change just yet.
An inquiry into the operation of the 2024 ACT Election and Electoral Act 1992 heard from several parties about how the acceleration of AI technologies needed to be taken into account before Canberrans next cast their vote in 2028.
South Korea confirms it will move forward with January implementation of its AI Act.
▶️ Confirms it will not suspend the regulatory provisions of the AI law but will operate a grace period on fines, dismissing calls for a moratorium
▶️ Ministry emphasizes the act was enacted after about four years of discussion, through agreement between parties, and by unifying 19 proposals
▶️ Ministry makes no direct reference to demands for a moratorium on the regulatory provisions
▶️ Demands for a moratorium have also been associated with speculation that imposing such obligations could be seen as targeting US businesses
South Korea’s parliament also passed amendments to the AI Basic Act.
- Establishing a legal basis for the National Artificial Intelligence Strategy Committee and strengthening its functions;
- Establishing a basis for founding and operating an artificial intelligence research institute;
- Establishing a new system to promote the adoption of AI in the public sector;
- Ensuring accessibility for AI-vulnerable groups and establishing a basis for cost support for low-income families.
Additionally, South Korea is considering an AI Data Center Promotion Act.
Artificial intelligence has emerged as a key driving force, transcending mere technology and determining a nation’s industrial competitiveness and security. The AI data centers that support it are a key strategic asset and essential infrastructure in the AI era.
Currently, countries around the world and global big tech companies are racing to build gigawatt-class hyperscale data centers to secure AI leadership, offering unprecedented support in areas such as electricity, land, and tax benefits. The South Korean government also aims to become one of the “G3 AI powerhouses.” However, outdated regulations and inadequate infrastructure, which fail to reflect the massive power demands of AI data centers, are hindering competitiveness.
Governance
Malaysia’s Communication and Multimedia Commission opened an investigation into X’s Grok AI over image generation abuse.
The Malaysian Communications and Multimedia Commission (MCMC) has taken note with serious concern of public complaints about the misuse of artificial intelligence (AI) tools on the X platform, specifically the digital manipulation of images of women and minors to produce indecent, grossly offensive, or otherwise harmful content. MCMC stresses that creating or transmitting such harmful content constitutes an offence under Section 233 of the Communications and Multimedia Act 1998 (CMA), which, among other things, prohibits misuse of network or application services to transmit grossly offensive, obscene or indecent content. MCMC will initiate investigations into X users alleged to have violated the CMA.
India’s MeitY opened an investigation into X’s Grok AI over image generation abuse.
Ministry of Electronics and Information Technology today issued a notice to social media platform X, asking it to remove obscene content. Sources said the government has raised concerns over the misuse of Grok AI to create and share obscene content.
In its letter to the Chief Compliance Officer of X, India Operation, the Ministry said the service of Grok AI is being misused by users to create fake accounts to host, generate, publish or share obscene images or videos of women in a derogatory or vulgar manner. It said the regulatory provisions under the Information Technology Act, 2000 and IT Rules, 2021, are not being adhered to by the platform.
Indonesia (along with Malaysia) additionally blocked Grok over deepfake images.
The moves reflect growing scrutiny of generative AI tools that can produce realistic images, sound and text and concern that existing safeguards are failing to prevent their abuse. The Grok chatbot, which is accessed through Musk's social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.
China’s CAC has a consultation on Anthropomorphic AI.
[Sample Articles]
Article 10 When providers conduct data processing activities such as pre-training and optimization training, they shall strengthen the management of training data and comply with the following provisions:
(i) Use datasets that conform to the core socialist values and embody the excellent traditional Chinese culture;
(ii) Clean and label the training data to enhance its transparency and reliability, and prevent data poisoning, data tampering and other behaviors;
(iii) Improve the diversity of training data and enhance the security of model-generated content through negative sampling, adversarial training, and other means;
(iv) When using synthetic data for model training and key capability optimization, the security of the synthetic data should be assessed;
(v) Strengthen daily inspection of training data, regularly iterate and upgrade the data, and continuously optimize the performance of products and services;
(vi) Ensure that the training data is legal and traceable, take necessary measures to ensure data security, and prevent the risk of data leakage.
Article 11 Providers shall have the ability to identify user status, and under the premise of protecting users’ personal privacy, assess users’ emotions and their dependence on products and services. If they find that users have extreme emotions or are addicted, they shall take necessary measures to intervene.
China’s Ministry of Radio and TV will conduct a crackdown on deepfakes.
With the rapid development of generative artificial intelligence technology, some online accounts are abusing AI tools to subvert, deconstruct, and vulgarize classic films, animations, and other content. This content seriously deviates from the core spirit of classic works, disrupts the order of online communication, encourages infringement, harms industry development, and interferes with minors’ formation of correct cultural cognition and perception of reality.
Energy
APEC released a report on electricity consumption, including from AI data centers for 21 jurisdictions.
Data Centre and AI Projections for APEC: Electricity demand from data centres in the APEC region is projected to increase by approximately 140% between 2025 and 2035. Throughout this period, China and the United States are expected to remain the largest sources of growth. Most other APEC economies are also projected to experience significant growth in electricity use from data centres and AI workloads. Given the emerging nature of this topic, projections for electricity demand from data centres and AI workloads beyond 2035 are subject to greater uncertainty. In the Outlook, several factors are assumed to slow the pace of electricity demand growth after the mid-2030s. These include a shift from energy-intensive AI training toward less demanding inference workloads, as well as continued improvements in energy efficiency. Nevertheless, overall demand is expected to continue rising, based on the assumption that new applications for AI will continue to emerge.
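As a rough back-of-the-envelope check (our illustration, not a figure from the APEC report), a 140% increase over the decade 2025–2035 implies an annual growth rate of roughly 9%:

```python
# Illustrative calculation: the compound annual growth rate (CAGR)
# implied by a 140% rise in electricity demand over 10 years.
growth_multiple = 2.4   # a 140% increase means 2035 demand is 2.4x the 2025 level
years = 10              # 2025 -> 2035

cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 9% per year
```

In other words, the projection is consistent with data-centre electricity demand compounding at close to double-digit rates every year through the mid-2030s.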
Advocacy
China’s consultation on Anthropomorphic AI is open until Jan 25.
India’s Subcommittee on AI and Copyright published a preliminary report for feedback.
The comments/feedback, if any, may be provided to this Department on email id “ipr7-dipp@gov.in” within 30 days of the publication of this letter.
Uzbekistan issued a public consultation on AI ethics guidelines.
Rights and obligations of developers and implementers of artificial intelligence systems
Developers and implementers of AI systems have the following rights in accordance with current legislative acts:
- to protect their intellectual property in accordance with the procedure established by law;
- to patent innovative technologies and algorithms;
- to work under fair wages and decent working conditions;
India opened a copyright infringement consultation.
In this regard, inputs and experiences are invited from the concerned stakeholders (communication dated 07.11.2025) in respect of:
- Current challenges being faced in identifying and removing pirated content;
- Technological or procedural gaps in enforcement and coordination, and measures that can strengthen proactive monitoring and takedown mechanisms;
- Best practices adopted internationally that may be relevant to the Indian ecosystem; and
- Suggestions for improving coordination between platforms, Government agencies and rights holders.
Inputs/suggestions may be sent through email at digital-mediamib@gov.in within 20 days of issuance of this communication.
South Korea is receiving comments on its AI enforcement decree until Dec 22.
The Framework Act on the Development of Artificial Intelligence and the Creation of a Trust Foundation was enacted (Act No. 20676, promulgated on January 21, 2025, and effective January 22, 2026) to protect the rights and interests of the people, improve the quality of life of the people, and strengthen national competitiveness by supporting the sound development of artificial intelligence and stipulating the basic matters necessary for the creation of a trust foundation for an artificial intelligence society. Accordingly, the purpose is to establish matters delegated by law, such as the procedures for establishing and amending the basic plan for artificial intelligence, the scope of projects eligible for support for artificial intelligence research and development, and matters necessary for its implementation…
Where to send comments: Email: zsshim@korea.kr
Singapore’s Monetary Authority opened a consultation on AI and Risk Management until Jan 31.
The Monetary Authority of Singapore (MAS) is proposing to introduce Guidelines on Artificial Intelligence (AI) Risk Management (the “Guidelines”) to enhance management of AI risks in financial institutions (FIs), and set out MAS’ supervisory expectations relating to AI risk management in the financial sector. The Guidelines focus on oversight of AI risk management in FIs, key AI risk management systems, policies and procedures, key AI life cycle controls, as well as capabilities and capacity needed for the use of AI.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.




