#16 Asia AI Policy Monitor
India's Elections & AI, Thailand to host UN AI Forum 2025, Japan AI candidate, Singapore-Australia MOU on AI, Copyright & AI in Singapore, New Zealand Biometric rules & privacy, & more!
Thanks for reading this month’s newsletter, along with over 1,700 other AI policy professionals across multiple platforms, to keep up with the latest regulations affecting the AI industry in the Asia-Pacific region.
Do not hesitate to contact our editors if we missed any news on Asia’s AI policy at seth@apacgates.com!
Intellectual Property
Singapore’s IP Office issued policy guidance on exceptions to technological protection measures for text and data mining, an activity essential for AI training.
The comments we received are important for our review to ensure that we maintain a balanced regime which ensures robust copyright protection for rights owners while allowing the public to make reasonable use of copyright works and protected performances in legitimate, non-infringing ways that benefit society.
Copyright industry associations filed complaints about Singapore’s proposed changes, cited above, to its policy on technological protection measures.
To adopt such a provision would undermine the fundamental abilities and rights of creators and copyright owners to license and protect their works, regardless of their type or size—individual creators, small businesses, medium sized enterprises, and large enterprises. Adopting this proposal would also frustrate the policy goals behind technological protection measures, which are designed to protect copyright and guard against unlawful and unauthorized access, piracy, and illicit activities.
China’s National IP Administration published draft guidelines on AI-related patent applications for public comment until December 13.
The Guidelines cover the types of Artificial Intelligence (AI)-related patent applications; identification of inventors; subject matter eligibility; disclosure requirements; inventiveness examination; and ethical issues.
Multilateral
Australia and Singapore signed an MOU on AI Adoption.
Key objectives include:
i. Encouraging the sharing of best practices between both AI ecosystems across the governmental, industry and research domains;
ii. Facilitating increased access to AI technologies, markets, and talent;
iii. Building linkages between research and industry to support the commercialisation of AI applications; and
iv. Promoting Responsible AI including support for the development and adoption of ethical governance frameworks for the trusted, secure, safe, and responsible development and use of AI technologies; and where appropriate, the alignment of governance and regulatory frameworks and tools.
China and the United States renewed and amended their Cooperation Agreement on Science and Technology, in force since 1979; the updated agreement specifically excludes sensitive areas of cooperation, such as AI.
Unlike earlier versions, the new agreement is more focused and limited in scope. It emphasizes basic research while excluding sensitive areas such as artificial intelligence and semiconductor technologies. This narrower focus is a pragmatic response to the current geopolitical realities, where concerns over national security and technological competition dominate Washington's agenda.
Thailand will host UNESCO's Third Global Forum on the Ethics of AI from 25-27 June 2025, marking the forum's debut in Asia.
In parallel, Thailand is rolling out the UNESCO Readiness Assessment Methodology (RAM), a comprehensive diagnostic tool that indicates where countries stand regarding AI infrastructure, governance, investments, social policy, and public institutions. This aids countries in the ethical development, governance, and use of AI, ensuring alignment with human rights and the Sustainable Development Goals.
South Korea will host APEC 2025 and will focus on digital economy issues, including AI.
The second priority, Innovate, reflects Korea’s commitment to harnessing technology for sustainable and inclusive growth. Yoon highlighted that with digitalization transforming economies, APEC 2025 aims to bridge the digital divide and ensure equitable access to emerging technologies.
“These days, one cannot talk about digitalization and new technology without mentioning Artificial Intelligence (AI),” she said. “AI is having a fundamental impact on our lives and economies, changing the way we do business, the way we work and the way we connect.”
Digital Governance Asia presented a talk at the 2024 Internet Governance Forum in Riyadh, covering privacy regulations in Australia, Hong Kong, Singapore, South Korea, and New Zealand.
To identify best practices in AI policy, an understanding of the entire AI supply chain is needed, from infrastructure to commercialization. That’s why Digital Governance Asia is based in Taiwan - where most AI chips are made - and incorporated outside of Seattle, where companies are bringing AI to commercial success.
Giving voice to smaller states and the Global Majority in the AI governance formation process is vitally important; the EU, US, and China are the big three jurisdictions taking all the oxygen out of the debate.
Our call to action is to identify best practices in policy from around the Asia-Pacific, such as the 5 case studies we presented on privacy regulators from Australia, Hong Kong, Singapore, South Korea and New Zealand. These case studies present opportunities to identify best practices in AI policy and help countries with less of a tradition of privacy protection to leapfrog on AI policy and mitigate the AI Governance Digital Divide.
Keeping up to date on policy means staying informed, and tools such as Digital Governance Asia’s Asia AI Policy Monitor newsletter are a good example. Joining a network such as the Asia-Pacific AI Harm Remedy Network and monitoring actual AI harms will pre-position countries to address issues before they happen, and to shape policy that is effective and tailored to promote human rights, democratic resilience, and the rule of law.
Education
China is urging primary and secondary schools to promote the use of AI tools.
The Ministry of Education has asked the schools to improve AI education to “meet China’s future demand for innovative talent” and improve students’ digital skills and problem-solving abilities, according to a ministry circular released last week.
Governance
The Science and Technology Committee of South Korea’s National Assembly passed a bill on AI and Trust-Based Foundations, moving it forward in the legislative process.
The Bill would guide the ethical and responsible development and use of artificial intelligence (AI) technologies and incorporates 19 bills establishing rules for AI. Its purpose is to protect the rights, interests, and dignity of the people. The Bill defines "high-impact artificial intelligence" as an AI system that may affect or endanger human life, physical safety, and fundamental rights, and that is used in energy supply, the production of food, the development of medical devices, the management of nuclear materials, the analysis of biometric information, and more.
China’s Ministry of Industry and Information Technology (MIIT) established a committee for standard setting in AI.
The committee will be responsible for “making and revising” standards for different AI vertical markets, including assessment and testing, data sets, large language models (LLMs), and application development management, according to a statement dated November 22 and published on Friday….
The 41-member committee includes Baidu AI technology ecosystem general manager Ma Yanjun, Alibaba’s Judy Zhu Hongru, vice-president of the cloud unit’s standardisation operations, Tencent vice-president Jiang Jie who oversees its AI Lab, and Huawei’s director of the standardisation department You Fang.
India’s Ministry of Electronics and Information Technology (MeitY) initiated 8 projects to develop tools and technologies addressing privacy and Artificial Intelligence (AI) governance.
The 8 projects on AI and privacy are:
1. Machine Unlearning
2. Synthetic Data Generation
3. AI Bias Mitigation
4. Explainable AI Framework
5. Privacy Enhancing Strategy
6. AI Governance Testing Framework
7. AI Ethical Certification
8. AI Algorithm Auditing Tool
Australia’s Department of Industry, Science and Resources announced that it is developing a National AI Capability Plan.
The plan has four objectives:
Grow investment:
Review how existing state and federal government support mechanisms work together to hinder or enable Australia’s AI ecosystem.
Look for ways to boost private sector innovation and investment in AI capability.
Strengthen AI capabilities:
Identify strengths and emerging areas of opportunity for Australian businesses.
Explore new areas of comparative advantage.
Boost AI skills:
Work to accelerate AI literacy, identifying new skills, training and re-training.
Ensure workers can reskill throughout their career to take advantage of new employment opportunities.
Secure economic resilience:
Identify areas where we need sovereign capability or infrastructure to get the most out of AI technologies.
Learn from the experiences and rights of communities and workers – making AI work for us and not the other way around.
In the News & Analysis
Global Coalition for Tech Justice members analyzed the recent impact of AI and tech on India’s elections in 2024.
It should be noted that, despite widespread fears that AI-generated content would inundate social media with falsehoods, in India deepfakes were predominantly used to troll rather than to launch information warfare.
This Tech and Social Cohesion article details the use of AI avatars by a candidate for public office in Japan.
Enter ‘AI Takahiro’, an avatar created by 33-year-old candidate Anno Takahiro. The avatar’s livestream on YouTube was just one part of this former software engineer-turned-science-fiction writer’s ground-breaking campaign, born out of frustration with the one-sided nature of political communication…
Takahiro was inspired by Plurality, a book exploring collaborative governance and AI’s potential, written by Audrey Tang, former Taiwan Minister of Digital Affairs, and Glen Weyl, co-founder of the Plurality Institute and research lead of Microsoft Research’s Plural Technology Collaboratory, together with a network of civic technologists.
Advocacy
New Zealand’s Privacy Commissioner opened a public consultation, running until 14 March, on its draft Biometric Processing Privacy Code of Practice. The Code includes 12 prospective rules:
Rule 1 – Purpose of collection
Rule 2 – Source of biometric information
Rule 3 – Collection of information from individual
Rule 4 – Manner of collection of biometric information
Rule 5 – Storage and security of biometric information
Rule 6 – Access to biometric information
Rule 7 – Correction of biometric information
Rule 8 – Accuracy of biometric information
Rule 9 – Retention of biometric information
Rule 10 – Limits on use of information
Rule 11 – Disclosure of biometric information
Rule 12 – Disclosure of biometric information outside New Zealand
China’s National IP Administration’s draft guidelines on AI-related patent applications, noted above, are open for public comment; the full text is available in Chinese only, and comments are due before December 13, 2024.
The AI Asia Pacific Institute and New Zealand’s Netsafe opened public comment on a discussion paper on AI and online safety.
…this paper examines the impact of advancements in AI on various categories of online harms, including child sexual exploitation and abuse (CSEA), violent and graphic content, extremism, harm to health and well-being, indecent and obscene content, hate and discrimination, cyberbullying and harassment, misinformation and disinformation, and scams.
China’s TC260 opened a public comment period, until 31 December, on standards for GenAI system emergency response.
Australia’s Treasury opened a public comment period until February 15 on digital competition.
This proposal paper seeks information and views to inform policy development on a proposed new digital competition regime with upfront rules to promote effective competition in digital platform markets by addressing anti-competitive conduct and conduct that creates barriers to entry or exploits the market power of certain digital platforms.
Sri Lanka’s National AI Strategy is open for consultation until 6 Jan 2025.