#15 Asia AI Policy Monitor
December policy round-up: Australia Senate AI report, Korea biometric guidance, Japan AI & copyright infringement, Philippines AI & jobs, China regulates genAI video, Singapore MAS guidance, and more!
Thanks for reading this month’s newsletter, along with over 1,600 other AI policy professionals across multiple platforms, to stay on top of the latest regulations affecting the AI industry in the Asia-Pacific region.
Be sure to reach out if you are going to the Internet Governance Forum. Digital Governance Asia will host a Lightning Talk entitled “Privacy, Policy and Power in Asia's AI Regulations” on Tuesday, December 17, at 9:30am Saudi Arabia time.
Do not hesitate to contact our editors if we missed any news on Asia’s AI policy at seth@apacgates.com!
Intellectual Property
Japan’s Agency for Cultural Affairs announced an effort to combat manga and anime piracy using AI tools.
The Agency for Cultural Affairs says the use of AI will help to track down digital pirates more efficiently. Agency officials have come up with the plan as damage from piracy websites, including foreign-language versions, is becoming increasingly serious. They estimate the cost of the damage at 2 trillion yen, or 13.3 billion dollars, a year.
China’s National Radio and Television Administration announced guidance to online video platforms to remove certain genAI videos of popular classic dramas and literary works that go “against their original spirit.”
In a notice issued on the weekend, the National Radio and Television Administration (NRTA) said that some AI videos adapted from ancient works, such as showing the Monkey King – a literary and religious figure from the 16th-century novel Journey to the West – riding a motorcycle, have “gone against the original spirit of the classics and allegedly infringed their rights”.
In a submission to the US Trade Representative, the US Motion Picture Association asked Singapore to clarify its position on AI training on copyrighted materials, including opt-outs.
The organization’s submission urges the US government to press countries that have, or are considering, copyright exceptions for AI companies to allow rights owners to opt out.
For more on copyright policies across Asia (including Singapore, Hong Kong, India, Australia, Japan and South Korea), check out the article Digital Governance Asia published earlier this year in Tech Policy Press, which includes policy recommendations:
Copyright agencies around Asia should be proactive and provide specific guidance on copyright infringement and AI training to creators and the AI industry, as Japan and South Korea already have, in order to prepare both industries to interact in an ethical and mutually beneficial way under local IP rules and case law.
In the meantime, policymakers should look to peers, foster interoperability of rules and guidelines, and ensure that respect for copyright is emphasized in government guidance.
Governance
Malaysia launched its National AI Office.
The office is expected to serve as a centralised agency for AI, providing strategic planning, research and development, and regulatory oversight, among other functions, according to details published on its website.
It will pursue seven deliverables in its first year, including developing a code of ethics, an AI regulatory framework and a five-year AI technology action plan until 2030.
South Korea’s AI Framework Act is stalled in the National Assembly due to the recent political turmoil arising from the ill-conceived declaration of martial law by the country’s president.
The delay has heightened uncertainty for the country’s AI sector, which is already grappling with regulatory ambiguities and infrastructure challenges.
The bill, designed to foster AI development and establish trust-based governance, was set for review by the National Assembly’s Legislation and Judiciary Committee but was sidelined as lawmakers prioritized a motion for a special investigation into allegations against President Yoon.
Australia’s Parliamentary Select Committee on Adopting AI issued its final report. Its 13 recommendations call on the Australian Government to:
introduce whole-of-economy, dedicated legislation to regulate high-risk uses of AI;
adopt a principles-based approach to defining high-risk AI uses, supplemented by a non-exhaustive list of explicitly defined high-risk AI uses;
ensure the non-exhaustive list of high-risk AI uses explicitly includes general-purpose AI models, such as large language models (LLMs);
increase the financial and non-financial support it provides for sovereign AI capability in Australia, focusing on Australia’s existing areas of comparative advantage and unique First Nations perspectives;
ensure the final definition of high-risk AI clearly includes uses of AI that impact the rights of people at work, regardless of whether a principles-based or list-based approach to the definition is adopted;
extend and apply the existing work health and safety legislative framework to the workplace risks posed by the adoption of AI;
ensure workers, worker organisations, employers and employer organisations are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces;
consult with creative workers, rightsholders and their representative organisations through the CAIRG on appropriate solutions to the unprecedented theft of their work by multinational tech companies operating within Australia;
require the developers of AI products to be transparent about the use of copyrighted works in their training datasets, and ensure that the use of such works is appropriately licensed and paid for;
undertake further consultation with the creative industry to consider an appropriate mechanism to ensure fair remuneration is paid to creators for commercial AI-generated outputs based on copyrighted material used to train AI systems;
implement the recommendations pertaining to automated decision-making (ADM) in the review of the Privacy Act;
implement recommendations 17.1 and 17.2 of the Robodebt Royal Commission pertaining to the establishment of a consistent legal framework covering ADM in government services and a body to monitor such decisions;
take a coordinated, holistic approach to managing the growth of AI infrastructure in Australia to ensure that growth is sustainable, delivers value for Australians and is in the national interest.
A recent report from TechCrunch on Australia’s data centers focuses on the development of sovereign AI:
Australia will also have compelling reasons to develop sovereign AI models locally:
Culture: Reliance on offshore AI models could dilute Australian values, delivering decisions influenced by foreign norms rather than reflecting local societal and cultural principles. A sovereign AI approach will ensure technologies align with Australian community values and support a strong national identity.
AI ecosystem: Sovereign AI could drive productivity, foster local AI development, and position Australia to reap the benefits of innovation in AI, Dudley said. By cultivating in-country AI expertise and infrastructure, Australia could stimulate growth and maintain its competitiveness in the global economy.
Security and governance: Locally hosted AI ensures Australian laws govern sensitive data such as medical records and personal information. This protects intellectual property and establishes clear legal accountability for AI-driven decisions — an essential safeguard for ethical and legal integrity.
Human Rights
Australia’s former Human Rights Commissioner was interviewed by ABC on various AI and HR issues.
As Australia’s Human Rights Commissioner, Edward Santow led Australia’s first major inquiry into the human rights implications of new and emerging technologies. Now he explores the rise of AI and its promise to advance human well-being, while also considering the risks and dangers it presents and how to address them. His new book, co-authored with Daniel Nellor, is called Machines in Our Image: The Need for Human Rights in the Age of AI.
Finance
Singapore’s Monetary Authority (MAS) published model risk guidance for using generative AI tools in the financial industry.
Good practices include:
establishing cross-functional oversight forums to avoid gaps in AI risk management;
updating control standards, policies and procedures, and clearly setting out roles and responsibilities to address AI risks;
developing clear statements and guidelines to govern areas such as fair, ethical, accountable and transparent use of AI across the bank; and
building capabilities in AI across the bank to support both innovation and risk management.
Military and Cybersecurity
Members of the Global Consortium on Responsible Use of AI in the Military participated in an event on arms control in an era of disruptive technologies hosted by Singapore’s S. Rajaratnam School of International Studies (RSIS).
This year, the conference sought to address the complexities posed by disruptive technologies in the evolving arms control landscape, focusing on four key themes: the applicability of existing arms control and governance mechanisms to disruptive technologies, the future of governance for military AI, challenges in cyberspace, and the future of outer space security. These themes represent critical pillars of global strategic stability in today's security environment.
This analysis looks at the geopolitics of military AI and China.
I propose eight principles that link geopolitics with the development of AI for military use.
1. AI is now and forever a fundamental technology for defence and security
2. Harnessing the power of AI is an absolute necessity for all militaries if they want to be able to retain the edge on the battlefield
3. Military access to AI is now reliant on access to wider, civilian AI development
4. Countries with advanced civilian AI development will give their militaries an advantage over countries that don’t
5. Countries without domestic civilian AI development need to obtain their military AI from countries that have developed it
6. This gives AI developer countries structural influence over the use of military AI in the buyer country, reducing the buyer country’s military (and therefore political) autonomy
7. As all AI has inherent values reflecting the national regulatory system that governed its development, so the use of AI – including AI on the battlefield – will reflect the ethics and regulations of the AI developer country. A buyer country will be forced to incorporate these AI ethics into its military use
8. So a country that uses military AI developed by another country will be forced into political, military, and regulatory strategic alignment, thus reinforcing the membership of the bloc of which the developer country is a member
Trade
The United States added over 100 Chinese companies to export control lists for advanced AI chips.
PRC leadership at the highest levels has stressed the importance of building an indigenous and self-sufficient semiconductor ecosystem, referring to ICs in particular as critical to PRC national security strategy. Reporting from PRC state-owned media outlets has even referred to integrated circuits (ICs) as the “main battlefield” of the PRC's Military-Civil Fusion (MCF) National Strategy to eliminate barriers between the PRC's civilian research and commercial sectors and its military and defense industrial sectors to ensure that innovations in the civilian sector simultaneously advance the PRC's military capabilities.
Additionally, BIS is imposing new controls on certain high-bandwidth memory (HBM) commodities that provide necessary memory capacity and bandwidth needed for advanced artificial intelligence (AI) models and supercomputing applications.
In response to the above restrictions, China has banned exports to the US of certain critical minerals used in high-end IC and AI technology.
These restrictions double down on previously announced controls on these metals, going so far as to ban shipments of antimony, gallium, and germanium to the United States.
Education
A student in India sued his university after it failed him for allegedly using genAI to write exam essays.
According to the Unfair Means Committee, he was accused of submitting 88 per cent of AI-generated answers and failing the exam on June 25. The Controller of Examinations also upheld this decision.
Thailand’s Minister of Education, Science, Research and Innovation announced plans to make the country an AI learning hub.
The Ministry also wants more AI-skilled workers, planning to add 30,000 at the engineering level within three years, generate 100 AI innovations worth 40 billion baht, and promote AI adoption via 600 agencies nationwide.
The Education Ministry and Unesco on Wednesday announced their partnership in hosting the 3rd Unesco Global Forum on the Ethics of AI 2025, dubbed "Ethical Governance of AI in Motion".
The forum is scheduled for June 24-27, 2025 in Bangkok, marking the first international conference on the subject in Asia-Pacific.
Privacy
South Korea’s Personal Information Protection Commission issued the Artificial Intelligence (AI) Privacy Risk Assessment and Management Model (Draft) as well as guidance for improved management of biometric data.
Biometric technologies such as facial, voice, and fingerprint recognition (for access control, financial payments, AI voice assistants, etc.) are now used in many fields. Because biometric information can identify an individual on its own and cannot be changed, the risks of misuse, abuse, and leakage are greater than for other data. Processing requirements for biometric information are therefore strictly regulated, which has constrained its use relative to the pace of related technological development.
Labor
A recent report highlights the impact of AI on Business Process Outsourcing (BPO) operations in the Philippines.
The Philippines, the second-largest BPO market in the world after India, has 1.84 million BPO workers. Although there is no official data for job losses due to AI, the Philippines’ labor secretary, Bienvenido Laguesma, told local media in June that some workers are already losing their jobs to AI. Industry estimates suggest that while 300,000 Filipinos could be out of work due to AI in the next five years, 100,000 new jobs could be created in roles like data curation.
In the News & Analysis
Vietnam signed an agreement with Nvidia to open an AI research center.
The agreement, which was signed yesterday in Hanoi in the presence of Prime Minister Pham Minh Chinh and visiting Nvidia CEO Jensen Huang, will involve the expansion of an AI data center owned by the Vietnamese military-owned Viettel Group, which already uses Nvidia technology. Nvidia also said it has acquired healthcare startup VinBrain, a unit of the prominent Vietnamese conglomerate Vingroup.
Nvidia continued its push into the Southeast Asia market with pledges to help support Thailand’s “sovereign” AI compute infrastructure.
These events capped a year of global investments in sovereign AI, the ability for countries to develop and harness AI using domestic computing infrastructure, data and workforces. AI will contribute nearly $20 trillion to the global economy through the end of the decade, according to IDC.
Digital Governance Asia and Asia AI Policy Monitor Editor-in-Chief Seth Hays asked what the political economy of AI in Asia – in particular in Asia’s Global Majority countries – should be, in response to a recent article published in Nature entitled “Why ‘open’ AI systems are actually closed, and why this matters.”
Recently Digital Governance Asia submitted comments to Sri Lanka's Ministry of Digital Economy on the country's National AI Strategy. In our submission, we emphasized the need to question resource investments in the AI industry for Sri Lanka's specific context, namely whether investments in the industry crowd out more productive areas of investment and development.
This academic paper reviews China’s array of AI rules.
Case studies on autonomous driving and financial AI demonstrate how adaptive regulatory models balance innovation with risk management via pilot projects, stringent data protection, and iterative policy evolution. These models transition from localized experiments to national standards, managing risks through data governance and public safety measures. Analyzing legislative proposals like the Model Artificial Intelligence Law (MAIL) and the Artificial Intelligence Law of P.R. China (Scholarly Draft Proposal), this paper contrasts MAIL's centralized, precautionary framework with the Scholarly Draft's flexible, tiered system that promotes innovation through differentiated risk management. This reflects the tension between central regulatory control and sector-specific governance in aligning rapid technological advancement with coherent legislative oversight.
Various LLMs sponsored by the Indian government have been trained on 22 Indian languages.
Designed to represent India's vast linguistic, cultural, and demographic diversity, IndicVoices marks a significant milestone in advancing speech recognition and artificial intelligence (AI) in multilingual contexts. With 12,000 hours of speech data from 16,237 speakers across 208 Indian districts and 22 languages, the initiative is a testament to India's technological and collaborative prowess.
The AI Asia Pacific Institute published a report on ASEAN’s use of Generative AI.
This discussion paper highlights five key lessons learned in the ASEAN AI experience to date:
The potential and urgent need to bridge the digital and cultural divides
The depth of the coordination challenges in ASEAN
Weak awareness and understanding of AI/Gen AI
The challenges for ASEAN of fitting into the global frameworks of AI digital powers
Considerable opportunities along the AI life cycle
Deloitte published a report on AI in the Asia-Pacific region highlighting the trends and challenges firms have in the region to harness the technology.
The report outlines four high-impact actions that organisations can take to improve AI governance:
1. Prioritise AI governance to realise returns: Continuously evaluate AI governance across policies, principles, procedures, and controls, including monitoring changing regulations.
2. Understand and leverage the broader AI supply chain: Understand and interact with the broader AI supply chain, including developers, deployers, regulators, and customers, with regular audits throughout the AI lifecycle.
3. Build risk managers not risk avoiders: Develop employees' skills to identify, assess, and manage risks, focusing on the "people and skills" pillar of the AI Governance Maturity Index.
4. Communicate and ensure AI readiness: Be transparent about the AI strategy, benefits, and risks, provide training, and reskill teams. Practical steps include scenario planning, narrative development, and crisis exercises.
Advocacy
Australia’s Treasury opened a public comment period until February 15 on digital competition.
This proposal paper seeks information and views to inform policy development on a proposed new digital competition regime, with upfront rules to promote effective competition in digital platform markets by addressing anti-competitive conduct and conduct that creates barriers to entry or exploits the market power of certain digital platforms.
Sri Lanka’s National AI Strategy is open for consultation until 6 Jan 2025.