#25 Asia AI Policy Monitor
Malaysia's Approach to AI, South Korea's Courts on AI, China Enforcement against AI Crime, India Copyright and AI review, Hong Kong Privacy and AI survey, and more!
Thanks for reading this month’s newsletter along with over 1,800 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.

Intellectual Property
In short: IP rules around AI are drawing both upstream and downstream policy scrutiny. While India is reviewing the broader picture of its copyright act, as several other jurisdictions in the region have (e.g. Hong Kong), we see granular policymaking in South Korea on sui generis rules around “style-of” prompting, and in China at the provincial level across IP, including promoting AI patents and scrutinizing liability.
India established a panel to examine revisions of the copyright act in light of AI advances.
The memo, which is not public, said the commerce ministry set up a panel of eight experts last month to examine issues related to AI and their implications for India's copyright law.
The experts have been tasked to "identify and analyze the legal and policy issues arising from the use of artificial intelligence in the context of copyright," the memo added.
China’s Guangdong Province Supreme People’s Court issued guidance on promoting AI through IP trials, including the following 24 points.
Align AI IP trials with national innovation strategy and ensure fair competition and governance.
Balance innovation with regulation by tailoring IP protections to diverse AI use cases.
Support AI+ integration in the Greater Bay Area through judicial reform and industrial innovation.
Strengthen IP trial mechanisms to protect AI core technologies like chips and large models.
Clarify ownership of AI-generated innovations to promote fair benefit-sharing and collaboration.
Adapt contract enforcement to AI R&D cycles to boost innovation-to-commercialization pipelines.
Protect open-source contributors and govern open AI ecosystems through nuanced IP adjudication.
Define and protect data rights in AI training to unleash the multiplier effect of lawful data use.
Determine ownership of AI-generated content based on human input and contractual agreements.
Clarify liability for AI-generated content infringement by assessing roles of developers, platforms, and users.
Regulate deepfakes and synthetic content using a “notice + action” regime to ensure trusted AI use.
Enhance patent trials for AI to support high-value, strategic tech development.
Guard AI trade secrets by balancing confidentiality protection with labor mobility.
Prevent unfair competition by aligning AI business practices with legal and ethical norms.
Combat AI-enabled monopolies to ensure SME participation and innovation diversity.
Crack down on AI-related IP crimes while avoiding over-criminalization that hinders tech growth.
Support cross-border AI IP governance to protect China’s global innovation interests.
Refine evidence rules for AI cases to handle complex, technical proof challenges.
Streamline complex AI case handling by setting up expert panels and harmonizing trial standards.
Deepen judicial research to proactively guide AI-IP rulemaking.
Expand public outreach on AI-IP rights using model cases and multimedia tools.
Coordinate with government to shape a joint AI-IP governance system.
Modernize judicial tech using AI responsibly to improve trial efficiency.
Build interdisciplinary judicial teams with legal and technical AI expertise.
South Korea’s Legislative Research Center notes that “style of” image creation, such as Ghibli or Disney, in genAI tools may violate Korean copyright law.
Yet the Korean parliamentary research service has taken a cautious approach regarding full disclosure [of copyright-protected training data], saying that full disclosure should be undertaken on the basis of social consensus and the development of the AI industry. It also noted that copyright law should help “balance the interests” of original artists and AI service operators.
Privacy
In short: Hong Kong’s privacy regulator plays a primary role in regulating and formulating policy on AI, even outside the privacy sector. Its recent survey of firms in the city, with its expansive subject-matter inquiry, is a prime example of this broader AI policing role.
Hong Kong’s privacy regulator examined AI use by firms in the city.
1️⃣ 80% of organizations use AI daily, a 5% rise from 2024, with nearly 88% having used AI for over a year.
2️⃣ Around 54% of these organizations use three or more AI systems, mainly for customer service, marketing, administration, compliance, and R&D.
3️⃣ Half collect personal data via AI systems, providing clear Personal Information Collection Statements on data use and transfer.
4️⃣ 79% retain personal data collected via AI systems with defined retention periods, while 21% do not retain such data.
5️⃣ All have implemented data security measures like encryption and anonymization, with 29% conducting specialized AI-security exercises.
6️⃣ 96% test AI systems for reliability and fairness before deployment, and 83% conduct pre-implementation privacy impact assessments.
7️⃣ 92% have formulated data breach response plans, with 32% specifically covering AI-related incidents.
8️⃣ 63% reference PCPD’s AI guidelines for data handling, with an additional 29% planning to do so.
9️⃣ 79% have established formal AI governance structures or designated oversight personnel.
Governance
In short: AI governance plays an important role in political messaging around Asia for both governments and the private sector, primarily focused on business opportunities but hedged with promises of responsibility and of avoiding “over-regulation”. Politicians running for the South Korean presidency are promising dedicated ministries, while Malaysian officials try to balance a regulatory push against AI investment. In ASEAN, tech governance of AI is all about growth and innovation, and private firms like South Korea’s Lotte Group try to position themselves as responsible AI promoters.
The Tech for Good Institute published a report on Tech Governance in the ASEAN-6, including Singapore, Thailand, Vietnam, Philippines, Malaysia and Indonesia.
As Southeast Asia's digital economy continues to drive economic progress, policy and governance trends are shifting from rapid expansion to responsible development. In 2024, governments in the region focused on sustainable growth and trust in the digital ecosystem amid the rapid innovation and adoption of artificial intelligence (AI) models, algorithms and products.
South Korea’s presidential hopeful pledges to create AI Ministry.
Earlier in the day, Mr Han vowed to push forward the nation’s artificial intelligence expertise as he sought to consolidate conservative support ahead of the meeting with the ruling party’s nominee.
In outlining his first campaign pledge since he launched his presidential bid on May 2, Mr Han vowed to launch a new ministry to oversee the country’s artificial intelligence (AI) strategies and science innovation. Mr Han’s campaign said he aims to boost cutting-edge AI semiconductor production and create a 1 trillion won (S$926 million) fund to nurture local talent and court overseas scientists as Korea strives to catch up with global peers.
Malaysia’s Science Minister clarifies the country’s approach to AI.
Minister of Science, Technology and Innovation Chang Lih Kang clarified the government’s position during his address at the Perak Ignite Entrepreneur Summit 2025, held at Sekolah Menengah Kebangsaan Yuk Choy. He stated that while a specific AI law is not currently on the legislative agenda, the ministry’s long-term goal is to eventually codify AIGE into enforceable law.
“There is no clear time frame yet [for an AI law], but that is our eventual goal… so that this guideline [AIGE] can be enforced as law,” he told reporters.
South Korea’s Supreme Court organized a committee on AI to advise how best to utilize AI innovation in the judiciary.
Recently, judicial branches around the world have entered a competition to introduce AI-based trial systems, and it is necessary to actively respond so that our judicial branch does not fall behind in its leading position in the field of judicial informatization, judicial efficiency, transparency, and accessibility.
Hong Kong’s Commissioner for Digital Policy calls for responsible AI.
Commissioner for Digital Policy Tony Wong Chi-kwong has called for a ban on generative AI systems that may pose such threats, and for extensive supervision of AI software used in critical infrastructure.
Wong made his call last month at the World Internet Conference Asia-Pacific Summit. Hosting 1,000 local and overseas participants for the first such event held outside the mainland, the city showed how it can serve as a bridge and two-way platform linking China with the rest of the world in AI development.
South Korea’s Lotte Group establishes new AI ethics standards.
Lotte Group announced Wednesday its new code of ethics for employees regarding the use of artificial intelligence (AI), as the technology becomes increasingly integrated across the conglomerate’s various divisions and expanding business operations.
The key conglomerate in the country had a pronouncement ceremony for the new future-oriented business principles at the group’s office in Lotte World Tower in southeastern Seoul. Eighty employees of the group’s holding company, Lotte Corp., and subsidiaries who deal with AI technologies joined the event. Lotte Corp. Co-President and Head of Business Innovation Rho Jun-hyung was among the participants.
The code, consisting of 10 clauses under six key values, is a set of guidance to keep the workers from causing any social disputes throughout all stages of business tasks, from development to application in practice. The key values are human dignity, security, transparency, fairness, responsibility and solidarity.
Multilateral
In short: Recent multilateral AI activity cannot be seen outside the context of US-China technology competition. Countries want to leverage their respective strengths in the AI ecosystem and move towards a safer, more predictable space. For example, Taiwan touts its leading role in chip manufacturing, while India boosts its downstream software talent. Multilateral organizations like APEC focus on areas such as education and AI that represent lower-hanging (but still important) policy fruit. China leverages the UN to sap any unilateral governance power the US may have, while smaller players like Singapore continue quiet, practical governance work outside the usual international organizations.
Taiwan’s Representative to the US says his country can play a role in advancing AI.
Taiwan’s contribution to the global AI revolution is already significant, with our semiconductor industry, led by TSMC, powering much of the world’s AI innovation. To bolster this momentum, Taiwan is investing $3 billion in AI infrastructure and supercomputing, reinforcing our ability to help the United States pursue a prosperous, secure and innovation-driven future.
India and Indonesia explore cooperation in AI.
Digital transformation is no longer an option, but a strategic necessity for Indonesia. In a move that reflects the spirit of collaboration and technological independence, the Indonesian government is exploring concrete partnerships with India in the areas of 5G and artificial intelligence (AI). The meeting between Communications and Digital Minister Meutya Hafid and Indian Ambassador to Indonesia, Sandeep Chakravorty, marked the beginning of a joint step towards an inclusive and sovereign digital future.
Singapore’s IMDA releases a consensus document following a gathering of AI researchers from the US, China, and other countries.
The 2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety aims to support research in this important space by bringing together AI scientists across geographies to identify and synthesise research priorities in AI safety. The result, The Singapore Consensus on Global AI Safety Research Priorities, builds on the International AI Safety Report (IAISR) chaired by Yoshua Bengio and backed by 33 governments. By adopting a defence-in-depth model, this document organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control).
South Korea’s Education Ministry speaks at APEC in Jeju on AI challenges.
Speaking at the opening plenary of the APEC Human Resources Development Working Group on Wednesday, Seok-Hwan Oh, Vice Minister of Korea’s Ministry of Education, emphasized the urgent need to reform education systems to keep pace with technological disruption.
“We are at a turning point,” Vice Minister Oh said. “Education must go beyond transmitting knowledge. It must connect learners, encourage critical thinking and promote adaptability.”
He highlighted Korea’s initiative to introduce AI-powered digital textbooks designed to personalize learning and equip students with problem-solving skills.
China’s UN Representative supports UN-led global AI governance.
AI, as a strategic technology leading the new round of scientific and technological revolution and industrial transformation, is profoundly reshaping people's work and life, said Fu Cong, China's permanent representative to the United Nations, at a side event of the Group of Friends for International Cooperation on AI Capacity-Building.
Fu said that in October 2023, China put forward the Global AI Governance Initiative, offering an approach to global AI governance -- AI governance should be discussed by all, promoted by all, and the benefits of AI shared by all.
"Capacity-building has long been a cornerstone of global AI governance," said Fu.
Cybersecurity
In short: Asia sits at the leading edge of AI-fueled cybersecurity and scam harms (recent reports and enforcement actions are listed below). Investing further in research in this area will be important to shape meaningful AI governance that moves from abstract harms (e.g. the end of humanity) to real harms affecting individuals and communities (e.g. non-consensual deepfake imagery).
The Safer Internet Lab and Google published reports from Singapore, Taiwan, Thailand, the Philippines, Vietnam, and Indonesia on online scam issues, including the use of AI to enhance these crimes.
Vietnam
Scammers leverage generative AI for deepfake video-call scams, using AI-generated voices and videos to impersonate trusted individuals.
Philippines
Filipino victims face increasingly sophisticated AI-enabled schemes: AI-powered phishing, voice-cloning scams, ransomware delivery, and deepfake videos to extract personal data or funds. AI-generated content allows highly personalized, hard-to-detect scam messages across calls, SMS, email, and social media.
Thailand
AI-assisted identity fraud has surged, notably via AI-powered face alteration during video calls to bypass biometric checks, tricking hundreds of victims in a single campaign.
Indonesia
Beyond traditional phishing, Indonesian fraudsters employ AI to replicate facial and audio features, creating deepfake videos for “Digital Pension” scams that defeat mandated biometric checks.
Taiwan
Scammers exploit AI-generated images, videos, and voices in one-page scam posts and fake celebrity endorsements on Facebook and LINE. Deepfake romance-investment syndicates have used AI-created likenesses of public figures and wealthy personas to build trust.
Singapore
AI underpins automated phishing via chatbots that produce official-sounding emails and messages at scale, and deepfake videos and voice cloning of government officials and celebrities to coerce victims into “safety account” transfers. Generative AI also powers mass-targeted social media ads for fake investment schemes, exploiting high trust in digital identity systems and messaging apps.
Taiwan’s National Cybersecurity Institute bolsters defenses against China-based hackers.
The initiative, outlined by the National Institute of Cyber Security in April 2025, will oversee key areas including bolstering societal resilience, defending supply chains and infrastructure, and ensuring the safe use of artificial intelligence, the Taipei Times newspaper reported.
Cambodia’s Ministry for Post and Telecommunications notes the importance of cybersecurity harmonization in the AI era.
Prof. Ou Phannarith, Director of the ICT Security Department at Cambodia’s Ministry of Post and Telecommunications (MPTC), emphasized a twofold approach to cybersecurity in the age of AI. “From Cambodia’s perspective, we’ve seen rapid evolution in AI, and we must respond accordingly,” he said. The first step entails building public awareness of how AI affects daily life, business operations, and the broader digital ecosystem. The second involves guiding national strategies, especially for policymakers, on how to address emerging cyber threats, including phishing, AI misuse, and digital vulnerabilities.
China’s CAC issued a notice on a three-month campaign to address AI harms. The campaign, called "Clear and Bright: Rectification of Abuse of AI Technology", will address six points:
1️⃣ Illegal AI Products: Providing unregistered AI services or unethical functions like biometric cloning ("one-click undressing") violates privacy and laws.
2️⃣ Tutorials and Product Sales: Teaching or promoting tools to create deepfake audio/video content and selling related illegal AI software.
3️⃣ Poor Training Data Management: Using copyrighted, fake, or illegally sourced data for AI training without proper data governance.
4️⃣ Weak Security Management: Lacking proper content moderation, security audits, and oversight of AI-generated content on social platforms.
5️⃣ Unimplemented Content Identification: Failing to clearly identify AI-generated synthetic content, causing misinformation and misleading the public (see the labeling sketch after this list).
6️⃣ Security Risks in Critical Sectors: Inadequate industry-specific safeguards for AI in healthcare, finance, and education, causing harmful misinformation and disruptions.
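For illustration only, here is a minimal sketch of what the "content identification" requirement in point 5 could look like in practice: attaching both a visible notice and machine-readable metadata to AI-generated output. The function name, label text, and metadata keys below are hypothetical and are not prescribed by the CAC notice.

```python
import json

# Hypothetical label text; the notice does not mandate specific wording.
EXPLICIT_LABEL = "AI-generated content / 人工智能生成内容"


def label_ai_content(text: str, model_name: str, provider: str) -> dict:
    """Attach a visible notice (explicit label) and machine-readable
    metadata (implicit label) to a piece of AI-generated text."""
    return {
        # Explicit label: shown directly to the end user alongside the content.
        "display_text": f"{text}\n\n[{EXPLICIT_LABEL}]",
        # Implicit label: metadata a platform can read when moderating
        # or re-publishing the content.
        "metadata": {
            "aigc": True,            # flag marking the content as AI-generated
            "generator": model_name,  # illustrative field names only
            "provider": provider,
        },
    }


if __name__ == "__main__":
    labeled = label_ai_content(
        "Sample synthetic text", model_name="demo-model", provider="demo-provider"
    )
    print(json.dumps(labeled, ensure_ascii=False, indent=2))
```

The point of the sketch is the dual-layer approach: a label users can see plus a flag downstream platforms can check, which is the gap the campaign's fifth point targets.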
Human Rights
In short: Undervalued issue areas in the AI discussion tend to be competition, rule of law, community agency, and impacts on democracy. While privacy tends to be a top policy concern, AI governance policymakers should rely more on these issue areas when formulating policy, as the bottom line for any technology adoption is quality of life, human dignity, and human agency.
South Korean rights groups submit views to the UN Human Rights Council on AI’s impact on good governance.
What are the opportunities and the challenges or risks of integrating AI into governance frameworks, particularly in terms of promoting and protecting human rights and upholding good governance principles?
The use of AI to provide personalized public services can effectively enhance the welfare benefits available to individuals, thereby contributing positively to the promotion of human rights. However, the potential for private companies with exclusive access rights to define governance rules may lead to a decline in democracy and exacerbate social inequalities.
The Center for Data Innovation published an article detailing how indigenous groups utilize AI and set governance standards, citing Te Hiku Media, a New Zealand Māori based group working on language preservation.
In New Zealand, for instance, Te Hiku Media–a charitable media organisation with a core focus on Māori language revitalisation–developed a Māori speech recognition model that not only preserves the language but also sets ethical standards for how AI can empower, rather than erase, marginalised cultures. Their work shows that the path forward lies not in regulation that constrains innovation, but in participation that expands inclusion.
The Center for AI and Digital Policy published its ranking of countries, the AI and Democratic Values Index, with Japan (#1) and South Korea (#3) at the top. Vietnam, Thailand, and Myanmar occupy the bottom ranks.
CAIDP AI Policy Recommendations 2025
Global support for the International AI Treaty
Prohibitions on AI systems that undermine human rights and democratic values
Human oversight of AI systems across the lifecycle, including a Termination Obligation
Implementation and enforcement of AI governance frameworks, such as the EU AI Act and the Hiroshima AI Process
Algorithmic transparency, including the ability to contest adverse outcomes
Establish a UN Special Rapporteur for AI and Human Rights
Establish liability rules for AI systems
Advocacy
Malaysia’s Personal Data Protection Department is conducting a public consultation on automated decision making and profiling until May 14.
Philippines Privacy Regulators are collecting public comments on biometric data collection until May 30.
Saudi Arabia opened comment on the Global AI Hub law until May 14.
Pakistan has an open consultation on its draft National AI Policy ongoing.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.