#23 Asia AI Policy Monitor
Singapore's Brand Protection Congress on AI, No AI in K-Pop, HK Employee Checklist for AI, North Korea Military AI in Drones, Singapore Issues Deepfake Warning
Thanks for reading this month’s newsletter along with over 1,800 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Intellectual Property
Our editor shared views on how IP Rights are Human Rights in the AI Era at the Singapore Brand Protection Congress.
Seth Hays, representing the NGO Digital Governance Asia, challenged the common anti-IP stance among civil society groups, arguing that this view is misguided, especially in the context of AI. Copyright and other IP protections remain essential for supporting creators and ensuring the fair use of content in AI development.
The Korea Music Copyright Association (KOMCA) has introduced a new procedure requiring artists to declare AI usage in song creation.
The new measure, which went into effect on March 24, mandates songwriters to confirm they have 100 percent contributed to writing the song without using AI. The KMCA confirmed that the criteria for not using AI does refer to "0 percent contribution."
Privacy
Hong Kong’s Privacy Regulator (PDPC) reports that less than 30% of Hong Kong firms have rules around use of AI.
Less than 30 per cent of organisations in Hong Kong have established guidelines for employees using artificial intelligence (AI), the city’s privacy commissioner has said, urging companies to avoid inputting sensitive data into such tools as much as possible.
Privacy Commissioner Ada Chung Lai-ling said on Sunday that further amendments to the Personal Data (Privacy) Ordinance were needed to reduce the risk of data leakage, after her office released guidelines aimed at helping companies regulate AI use among staff.
Hong Kong’s PDPC published the “Checklist on Guidelines for the Use of Generative AI by Employees”.
The Guidelines also provide practical tips on supporting employees in using Gen AI tools, which include:
Enhancing transparency of the policies or guidelines: Regularly communicate the policies or guidelines to employees and keep employees informed of any updates in a timely manner;
Providing training and resources for employees’ use of Gen AI tools: Educate employees on how to use Gen AI tools effectively and responsibly, including explaining the capabilities and limitations of the tools, providing practical tips and examples, and encouraging employees to read the privacy policies and terms of use of such tools, etc.;
Providing a support team: Set up a designated support team to assist employees in using Gen AI tools in their work, provide technical assistance, and address employees’ concerns; and
Establishing a feedback mechanism: Establish channels for employees to provide feedback to help the organisation identify areas for improvement and tailor internal policies or guidelines according to the circumstances.
South Korea’s PIPC conducted a seminar on data protection issues and open source AI development.
Many companies have stated that they are struggling with legal uncertainty that arises when using user data held by themselves or their clients for AI development. To this end, various suggestions have been made, such as providing clear legal guidelines for the legal use of user data, specific methodologies for processing anonymous and pseudonymized data, and establishing re-identification evaluation criteria for de-identified data.
Governance
Uzbekistan’s parliament reviewed a proposed AI Law and Privacy.
However, the government has also raised alarms about the risks posed by unregulated AI use. In particular, concerns have grown over privacy violations stemming from deepfake content and manipulated media. In 2024 alone, incidents involving fake AI-generated images and videos of public figures increased fiftyfold. The number of reported cases involving illegal use of AI-generated content rose from 1,129 in 2023 to 3,553 in 2024.
The Philippines is setting up a national deepfake task force.
The Presidential Communications Office and the Cybercrime Investigation and Coordinating Center have signed a partnership agreement to further strengthen the government’s campaign against fake news.
According to CICC Undersecretary Alex Ramos, the agreement includes the establishment of a multisectoral campaign to “empower the public and various schools, institutions, stakeholders to decisively combat proliferation of deepfakes and disinformation.”
Singapore’s Ministry of Trade and Industry published an advisory on export controls on advanced semiconductor and artificial intelligence (AI) technologies.
Businesses should be aware that engaging in illicit practices can lead to legal, operational and reputational consequences. Appropriate action, in accordance with Singapore’s laws, will be taken against companies or individuals in Singapore engaged in fraudulent or dishonest practices to evade export controls that they are subject to…
To mitigate the risk of inadvertent violations, businesses are encouraged to:
a) Implement a robust internal compliance programme which includes Know Your Customer (KYC) practices and end-user screenings to ensure that business transactions are made with legitimate customers or end-users that adhere to relevant export control regulations, and order screening procedures that consider potential red flags such as abnormal shipping routes, etc. For more information, refer to Singapore Page 2 of 2 Customs’ Strategic Trade Handbook, and Singapore Customs’ guidance on sanctioned lists and red flags; and
b) Engage appropriate legal expertise, where necessary, for international business activities involving controlled technologies.
Microsoft outlined efforts to prevent AI manipulation in Australia’s upcoming elections.
AI generated content such as deepfakes – convincing videos and audio made to look and sound like real people – threaten to spread disinformation, sow mistrust and undermine the democracy we value so highly.
This is why Microsoft has taken multiple steps to protect electoral processes as part of the company’s strategy to empower election stakeholders to defend democracy around the globe.
Philippines organizations call for legal protections against deepfakes.
Legislators must enact laws that will allow the government regulate social media and other platforms and to curb the spread of deepfakes, Scam Watch Pilipinas Co-Founder Jocel De Guzman said, as news personalities, business tycoons and celebrities continue to become victims of fake video and audio contents.
ScamWatch Pilipinas is an active partner of the Cybercrime Investigation and Coordinating Center (CICC), an attached agency of the Department of Information and Communications Technology.
During the Economic Journalists Association of the Philippines and San Miguel Corp’s annual business journalism seminar, De Guzman said the impact of deepfake, not just in the 2025 midterm elections, could be “widespread.”
Cybersecurity & Military AI
Singapore’s Cybersecurity Agency expanded the issuance of Cyber Essentials and Cyber Trust certification marks to include AI Security.
Organisations who use or plan to use AI can take reference from the expanded Cyber Essentials content on how to utilise AI securely. For example, under the “Assets” category, which focuses on the need for organisations to know their own software assets, it provides guidance on how an organisation can have visibility on third-party AI tools used by its employees but not provided by the organisation (also known as Bring Your Own AI). Organisations should mitigate the associated risks as any compromise could lead to leakage of confidential data.
As for Cyber Trust, an example of a risk scenario is one where an attacker exploits a weakness in an insecure Large Language Model (LLM) used by the organisation and injects malicious content as prompts to manipulate the behaviour of the LLM.
Taiwan indicates that China is using AI to stoke political division.
In a report to parliament, the security bureau said it had detected more than half a million pieces of "controversial messages" so far this year, mostly seen on social media platforms including Facebook and TikTok.
Beijing has targeted sensitive moments such as President Lai Ching-te's speech on China last month or chipmaker TSMC's announcement of new U.S. investment to launch what the report said was "cognitive warfare," adding such efforts were "designed to create division among our society."
MIT reports that security researchers identified suspected AI agents in Singapore and Hong Kong trying to access vulnerable servers.
The AI research organization Palisade Research has built a system called LLM Agent Honeypot in the hopes of doing exactly this. It has set up vulnerable servers that masquerade as sites for valuable government and military information to attract and try to catch AI agents attempting to hack in…
Since LLM Agent Honeypot went live in October of last year, it has logged more than 11 million attempts to access it—the vast majority of which were from curious humans and bots. But among these, the researchers have detected eight potential AI agents, two of which they have confirmed are agents that appear to originate from Hong Kong and Singapore, respectively.
Singapore’s government issued warnings about deepfake use in scams.
In this scam variant, scammers would impersonate high-ranking executives from companies that the victims work for through the alleged use of digital manipulation, and instruct victims to transfer funds from company accounts. Victims would receive unsolicited WhatsApp messages from scammers claiming to be executives from the company that the victims work for, inviting the victim to join a live-streamed Zoom video call with their high-ranking executives from their companies. It is believed that digital manipulation had been used to alter the appearances of the scammers to impersonate these high-ranking executives. In some cases, the video calls would also involve scammers impersonating MAS officials and/or potential “investors”.
North Korea claims to use AI for suicide drones.
Analysts have said the development of the technology was likely assisted by Russia, which North Korea has supported recently by sending its soldiers to help with Moscow's war in Ukraine.
The Foundation for the Defense of Democracy reports that DeepSeek is being deployed by China’s armed forces for non-combat related work.
The PLA has reportedly used DeepSeek’s latest AI models for a range of non-combat tasks, including in hospital settings and personnel management. According to the Central Theatre Command, which has jurisdiction over the defense of Beijing, PLA military hospitals have used DeepSeek to provide treatment plans for military doctors and produce data storage plans. The Nanjing National Defense Mobilization Office has also released a manual on using DeepSeek to assist in emergency evacuation planning and other non-combat-related tasks.
TechCrunch reports on information about use of AI for censorship in China.
The system appears primarily geared toward censoring Chinese citizens online but could be used for other purposes, like improving Chinese AI models’ already extensive censorship.
In the News & Analysis
Asian countries such as Japan, South Korea, Taiwan, Singapore and Malaysia play a critical role in the smuggling of China-bound advanced AI chips blocked by the US.
The first hurdle is obtaining the chips. To do this, you go to a “third country” where importing them remains legal. Japan, South Korea, and Taiwan are good candidates – they’re major shipping hubs near China where AI chip purchases are common, and they’re not subject to the new country caps on AI chip exports.
Once you’ve settled on a third country, you (1) create a shell company, (2) create a digital presence, including a website and email addresses, (3) fabricate financial records, and (4) establish a relationship with a local chip reseller who partners with a major AI chip distributor.
Japanese government officials are calling on Europe to help develop non-English/Chinese LLMs.
AI development was flagged as an area for Japan-EU cooperation by Motoki Kurita, the deputy director of Japan’s Ministry of the Economy Trade and Industry’s (METI) IT industries division.
“We think that the models for generative AI are skewed towards English and Chinese language models. So we believe we can work together on non-English and non-Chinese AI and we can share insights with the EU on data and spread that, expanding it to other regions that are non-English and non-Chinese speaking countries,” he told Euronews during a briefing in Tokyo.
AI security screening technology at an Australian event failed to identify guns.
The screening technology in place at the MCG — called Evolv Express — has come under fire in the United States over failures to detect weapons in schools.
Grok-enhanced X platform in India sparks controversy for obscenity and criticism of PM Modi.
And just like that, the floodgates opened. Indians bombarded Grok with everything – cricket gossip, political rants, Bollywood drama – and the bot took it all on, unapologetically and with some style. The chatbot has just recently become an "unfiltered and unhinged" digital sensation in India, as many are calling it. Just last year, Musk dubbed it the "most fun AI in the world!".
Advocacy & Events
Saudi Arabia opened comment on the Global AI Hub law until May 14.
Australia’s 5Rights Foundation and the Digital Futures for Children centre are delighted to invite you to celebrate the launch of the Children & AI Design Code, a pioneering new protocol developed by 5Rights for the design, development and deployment of AI systems that impact children. Tuesday 29 April | 17:00 – 18:15 CEST.
Japan’s Ministry of Land, Infrastructure, Transport and Tourism is conducting a public comment on autonomous driving regulations in urban areas until April 14.
Japan’s Financial Services Agency published an AI Discussion Paper for public comment.
Pakistan has an open consultation on its draft National AI Policy ongoing.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.