#28 Asia AI Policy Monitor
Vietnam Digital Tech Law on AI, Korean Special AI Act Amendments, China's NPC AI Law Proposal, Hong Kong Seeks Improved AI Governance, Singapore Launches AI Assurance Sandbox, and MORE!
Thanks for reading this month’s newsletter along with over 1,800 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Legislation
China’s National People’s Congress discusses a Comprehensive AI Act.
Artificial intelligence has injected new impetus into economic and social development, driving profound changes in how people produce, live, and govern. At the same time, it has brought a series of challenges, and calls to accelerate legislation in the field of artificial intelligence are growing. The 2025 legislative work plan of the Standing Committee of the National People's Congress lists legislative projects on the healthy development of artificial intelligence among its preliminary review projects. Relevant parties are required to speed up research and drafting work and arrange review as appropriate.
South Korea introduced a Special AI Act to supplement the existing act.
Artificial intelligence technology is a core strategic technology that not only affects the interests of specific companies but also determines the economic growth and national competitiveness of a country, and there is an increasing need to promote and foster the artificial intelligence industry at the national level.
Vietnam passed the Digital Technology Law, covering parts of AI.
The law also offers regulatory clarity for artificial intelligence, mandating human oversight and human-centric approaches and categorising AI systems into three distinct groups: high-risk, high-impact and non-high risk. Those flagged as high-risk will face stringent technical standards and monitoring from authorities.
Intellectual Property
Policy makers gather for an IAPP conference and discuss copyright and AI issues, including in Asia.
The research identified eight AI legal criteria commonly used across jurisdictions. The criteria included whether AI developers may legally train models on copyrighted material, fair-use privileges, exceptions for commercial and non-commercial text and data mining with and without opt-outs for rights holders, extended collective licensing provisions for rights holders, and transparency obligations.
Among the jurisdictions Fisher covered, China, France, Saudi Arabia and the United Arab Emirates are either considering or have some legal mechanism in place to prevent using copyrighted material in training AI. Israel is the only country that currently allows some form of fair use of copyrighted work for training AI. Canada, India, South Korea and the U.S. are exploring potential legislation to allow fair-use exceptions to varying extents.
The Korean Intellectual Property Office launches discussions on “Measures to Strengthen Patent Competitiveness of AI Innovation Companies.”
According to the World Intellectual Property Organization (WIPO)'s generative AI patent report published last year, generative AI patents increased 19-fold over the past 10 years, from 733 in 2014 to 14,000 in 2023. Korea ranks third in generative AI patent applications, behind China and the United States.
* China 38,210 cases, USA 6,276 cases, Korea 4,155 cases, Japan 3,409 cases, India 1,350 cases
A partial amendment introduced in the legislature to the Korean AI Basic Act includes a provision on copyright.
The current law stipulates basic matters for the development of artificial intelligence and the establishment of a trust foundation, but does not have specific provisions for the protection of creators, such as the obligation to disclose learning data for generative artificial intelligence.
Although cases of generative artificial intelligence using various creative works as learning data without permission are increasing, there is no institutional device in place for creators to check how the content they created is being used for artificial intelligence learning. This raises concerns that the rights of creators may be infringed, and regulations for protecting creators from generative artificial intelligence are urgently needed.
Accordingly, artificial intelligence business operators should endeavor to disclose information on training data, and if the copyright holder of a work requests confirmation of whether their work has been used as training data, a procedure should be established to confirm this. The amendment also allows a public-private consultative body to be formed so that AI business operators can voluntarily comply with the transparency obligation, seeking a balance between the development of artificial intelligence and the protection of creators' rights (newly established Articles 31 and 31-2 of the bill).
Privacy
Korea’s privacy regulator fined an AI company for data protection issues.
The Personal Information Protection Commission (Chairman Koh Hak-soo, hereinafter referred to as the “Personal Information Commission”) held its 14th plenary session on Wednesday, June 25, and decided to impose a total of 137.2 million won in fines and 13.2 million won in surcharges on two businesses* that violated the Personal Information Protection Act.
* ① Korea Accreditation Support Center: A non-profit organization that performs management system certification agency accreditation evaluation, quality evaluator training, etc.
② TELUS International AI, Ltd.: A subsidiary of Canadian telecommunications company TELUS, a global company that supports projects related to artificial intelligence (AI) learning data for corporate customers.
Singapore’s privacy regulator published guidance on anonymization of data - key for use in training AI.
The guide is designed to help organisations adopt anonymisation practices to safeguard personal data. It provides an overview of basic anonymisation concepts and outlines practical steps for organisations to kickstart their anonymisation journey, focusing on structured, textual, and non-complex datasets.
Hong Kong’s Privacy Commissioner published the Hong Kong Letter appealing to companies to adhere to high levels of AI governance.
The Privacy Commissioner pointed out that the use of generative AI by employees without proper guidance would not only pose risks to personal data privacy but may also compromise the organisation’s own interests.
Competition
Singapore’s competition regulator took enforcement action against a firm for posting fake AI generated reviews.
“This is the second fake review case that CCCS has uncovered, and the first case involving both a third-party platform and the use of AI to create these fake reviews. When businesses post fake reviews to boost their ratings and popularity, they poison the well of consumer trust. Such deceptive practices, also known as “dark patterns”, not only mislead consumers but also disadvantage honest competing businesses. We remain committed to take firm action against businesses engaging in such unfair practices.” said CCCS’s Chief Executive, Mr. Alvin Koh.
Cybersecurity
China’s Cyberspace Administration published its report on AI abuses and violations.
Since the launch of the "Clear and Bright - Rectification of AI Technology Abuse" special campaign in April 2025, the Central Cyberspace Affairs Office has focused on abuses such as AI face-swapping and voice cloning that infringe on public rights and interests, and missing AI content labels that mislead the public. Building on the first phase of key rectification tasks, it directed local cyberspace administration departments to step up the handling of illegal AI products, cut off those products' marketing and traffic channels, and urged key website platforms to improve technical security measures and accelerate the rollout of synthetic content labels. In the first phase, more than 3,500 illegal mini-programs, applications, AI agents, and other AI products were handled, more than 960,000 pieces of illegal and non-compliant information were removed, and more than 3,700 accounts were dealt with; all work has made positive progress.
Multilateral
Thailand hosts the 3rd UNESCO conference on AI Ethics.
Key themes include:
Human rights and AI – safeguarding personal data in the digital age
AI policy – setting transparent, accountable, and enforceable standards
AI in education – unlocking learning opportunities for all
Reducing inequality – ensuring inclusive AI for a fairer future
The future of work – equipping people for the AI-driven workforce
The G7 commits to AI and quantum computing at the most recent meeting.
The G7’s plans for AI start with the goal of using the technology for public good by keeping humans in the loop, an approach that has already been advocated for in the U.S., and will ideally ensure AI systems are designed and deployed responsibly.
Member nations also addressed the demands AI will require from data centers and how that will impact energy generation worldwide. The G7 acknowledged that developing nations stand to be left out from the global race to AI dominance.
“To fully realize the potential of AI for our publics and our partners, we commit to: Work together to accelerate adoption of AI in the public sector to enhance the quality of public services for both citizens and businesses and increase government efficiency while respecting human rights and privacy, as well as promoting transparency, fairness, and accountability,” the agreement says.
Governance
Singapore’s IMDA launches global AI assurance sandbox.
Expanded Global AI Assurance Sandbox will allow more companies to test real world applications, making AI safer
New adoption guide aims to simplify process for companies to evaluate and implement PETs
National certification for Data Protection Trustmark will raise data protection standards, on par with global standards
China’s Cyber Administration published guidance on Standardized Intelligent Society Governance.
The "Guidelines" aim to establish and improve a scientific and reasonable working mechanism for the research, formulation, implementation feedback, and optimization and improvement of intelligent society development and governance standards, and to build a standard system that covers the main social application scenarios of intelligent technology and effectively guarantees the benign and healthy development of the technology throughout its life cycle, so as to adapt to the needs of technological innovation, meet the needs of industrial development, support the construction of an intelligent society, and help modernize the national governance system and governance capabilities.
South Korea names AI Secretary.
Lee also named Ha Jung-woo, head of internet giant Naver's Future AI Center, as the new senior presidential secretary tasked with AI and future technology planning. Ha's work will involve policy related to nationwide adoption of AI, science and technology, population planning and climate change.
In the News & Analysis
A Korean firm launched an indigenous Korean language LLM.
Korean AI refers to AI that has been improved to best suit the Korean situation by learning intangible elements such as the Korean social context and the unique linguistic and cultural characteristics of the Korean language.
Rest of World says that looking at the US-China race in AI and “who is winning” overlooks important issues.
Models developed in China, such as DeepSeek and Qwen, do exhibit different values and have different content restrictions compared to those developed in the U.S.
For those affected by this race, there is increasing pressure to build and deploy not just good systems, but cheap, accessible AI. DeepSeek’s R1 model shifted narratives on open source in both the West and China, increasing the pressure to compete on open source; these cheap, powerful, yet compute-efficient models can be more easily deployed worldwide.
The Diplomat uncovers how China’s AI ecosystem can lock in Southeast Asian countries to their tech stack.
Chinese AI systems, built by major firms such as Huawei, Alibaba Cloud, Tencent, and SenseTime, are highly centralized and tightly integrated with cloud-based infrastructure. While regional centers outside China are necessary for Chinese actors to expand AI capacity and reach, the technical and intellectual makeups of these overseas facilities are essentially under the control of their headquarters. For instance, Huawei Cloud has built data centers in Thailand, Singapore, the Philippines, and Indonesia. The cloud backends of these facilities are configured, maintained, and periodically updated from China-based service centers.
Advocacy
The Bank of Thailand is conducting a consultation on risk management of AI systems until June 30.
China’s TC260 conducts public comments on guidelines for detecting synthetic AI content until July 6.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.