#13 AI Policy in Asia
Military, GenAI, Human Rights, AI Safety Institutes, Trust/Safety in Thailand, China, Australia, Malaysia, Taiwan, Japan, Singapore, South Korea and more...
Thanks for reading this month’s newsletter, along with over 1,600 other AI policy professionals across multiple platforms, to stay on top of the latest policies affecting the AI industry across the Asia-Pacific region. Don’t hesitate to contact our editors at seth@apacgates.com if we missed any big news in Asia’s AI policy!
Governance
Australia’s Minister for Industry and Science indicated that the country will continue its regulatory push on AI and child safety online, despite anticipated changes in US policy following Donald Trump’s win in the presidential election, which is expected to bring an industry-friendly approach with little to no regulatory oversight. The Minister said:
“The US may adopt in time a different approach to what the Biden administration had undertaken – we’ll wait and see and let that play out. But there are a lot of other countries that are thinking deeply about this and acting on it. We have a job we’ve said we’ll do for the public, and there’s an expectation … we will continue to do that, and we will. We will harmonise where we can and localise where we have to.”
Taiwan’s Ministry of Digital Affairs (MODA) initiated a collaboration with local governments to utilize AI.
MODA stated that the meeting’s theme was Innovative AI Applications in the Public Sector, featuring experts sharing practical applications of AI tools for writing and chart generation. Taipei Veterans General Hospital introduced its adoption of Voice Recognition AI to automatically create nursing records in the medical field. The Land Administration Department of Yilan County demonstrated the use of AI-Assisted Real Estate Registration Review, which quickly verifies identities and documents to prevent identity theft and document fraud. MODA’s Administration for Digital Industries discussed measures for procuring commercial AI services through joint supply contracts.
Thailand’s Ministry of Digital Economy and Society issued a Generative AI Governance Guideline for companies using the technology. The Guideline covers 5 parts:
1) Understanding Generative AI, laying a foundation so that those involved in the organization share consistent definitions, meanings, and related terminology.
2) The benefits and limitations of Generative AI, showing the perspective of practical application along with interesting use cases.
3) The risks of Generative AI, building an understanding of those risks along with guidelines for managing them appropriately in the organization’s actual use context.
4) Guidelines for applying Generative AI, building an understanding of both the structure and form of applications appropriate to the organization’s context.
5) Considerations for applying Generative AI with good governance, focusing on enabling organizations to balance the utilization of Generative AI against risk management, while encouraging relevant parties to participate appropriately in the various processes.
Military
The UN approved a resolution proposed by the Netherlands and South Korea on the implications of artificial intelligence (AI) in the military domain.
…States would be encouraged to pursue efforts at all levels to address related opportunities and challenges, including from humanitarian, legal, security, technological and ethical perspectives, by one of 14 drafts passed today in the First Committee.
China has developed an LLM based on Meta’s open-source Llama model, which reports indicate is being used by the PLA.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT"…
ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field…"
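For readers unfamiliar with the mechanics, “fine-tuning” here means continuing to train an open-source base model on a domain-specific corpus. Below is a minimal sketch of that workflow using the Hugging Face transformers library; ChatBIT’s actual base checkpoint, training data, and tooling are not public, so the model name, toy dataset, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch: supervised fine-tuning of an open Llama-family checkpoint
# for domain question-answering. Illustrative only: the checkpoint name, toy
# dataset, and hyperparameters are assumptions, not ChatBIT's actual setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # hypothetical (gated) base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Toy stand-in for a domain QA corpus; real fine-tuning would use many
# thousands of curated question-answer pairs.
examples = [
    {"text": "Question: What is model fine-tuning? "
             "Answer: Continued training of a base model on domain data."},
]

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True, max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # next-token objective
    return tokens

train_data = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qa-finetune-sketch",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=train_data,
)
trainer.train()  # standard supervised fine-tuning on the domain corpus
```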
Human Rights and Environment
Malaysia’s Johor state, just across the border from Singapore, is seeing a boom in data centers that is driving AI development in the region while taxing local resources.
“With Singapore’s moratorium, Johor was a natural recipient of these investments,” he said. “There’s access to power infrastructure, water availability, submarine cable landings, and abundant land. Because Malaysia was already prepared with the infrastructure, data centers found it easier to land in Johor.”
While Singapore ended the moratorium in 2022, its insistence on energy efficiency and sustainability has meant continued investment in Malaysia, according to research firm BMI. Johor alone is set to draw some $3.8 billion in investment in data centers this year, according to separate estimates by Malaysian bank Maybank. In June, the Johor government said nine data centers were complete, with at least 30 more projects in the pipeline.
Bangladesh’s Ministry of Foreign Affairs advisor advocated for human rights-centered use of AI.
In his address to the ministerial session, he called for responsible use of Artificial Intelligence (AI) in security and border management, emphasising that AI must respect human rights and be tailored to local contexts.
Trust, Safety, Cybersecurity
A recent article notes the rising debate about regulating deepfakes in Malaysia, covering how neighboring Singapore and South Korea have recently passed legislation around such genAI content:
Several high profile incidents have already occurred this year, with celebrities such as Datuk Seri Siti Nurhaliza Tarudin, athletes like Datuk Lee Chong Wei, and corporate figures like Petronas CEO Tan Sri Tengku Muhammad Taufik having their likenesses used in deepfakes promoting investment scams.
A recent Lawfare article sheds light on how AI will foster more disinformation, citing a case from Australia:
Much ink has been spilled on the use of generative artificial intelligence (AI) in influence operations and disinformation campaigns. Often the scenarios invoked hang along pretty clean lines: a known state actor, a clear target, a specific goal. The archetypal examples are campaigns like Russia’s Doppelganger or China’s Spamouflage, both of which the U.S. Department of Justice has traced back to specific government-linked entities with clear political aims.
These cases are the exceptions, however, not the rule. A recent case in Australia—in which unsubstantiated headlines across the country led people to believe a foreign state might be behind a bot campaign—demonstrates that in practice disinformation is often a far messier issue.
A report from Australia’s Cyber and Infrastructure Security Centre states that the country is susceptible to AI-driven malware attacks on critical infrastructure. Per officials with the Centre:
Rapid uptake of artificial intelligence is enabling more persuasive and individually targeted cyber attacks, complicating mitigation. AI-driven attacks will further complicate the cyber security environment within Australia. Threat actors are embracing, integrating and evolving the use of AI in their operations. AI is already facilitating the creation of adaptable malware and enabling more realistic and tailored social engineering attacks to manipulate targets. AI is lifting the capability of all cyber threat actors to conduct attacks at greater speed, scale and effectiveness, and at a rate that may outpace many system defence capabilities. Less skilled threat actors are leveraging the increased commercialisation and public availability of AI tools to deploy ransomware, create deep fakes or conduct low-effort, yet high-yielding social engineering campaigns. These can be highly convincing and difficult to distinguish from authentic interactions, making detection efforts increasingly challenging for organisations and individuals.
South Korea’s President has initiated a 7-month police crackdown on deepfake pornography (more on Korea’s Privacy regulator’s participation below):
President Yoon Suk Yeol quickly confirmed the rapid spread of explicit deepfake contents and ordered officials to “root out these digital sexual crimes.” Police are now on a seven-month special crackdown that is to continue until March 2025.
It cited a recently amended law that for the first time makes acts of watching or possessing deepfake porn illegal and punishable with up to three years in prison. The maximum punishment for those who produce or distribute deepfake porn contents was increased from five to seven years in prison.
Police have so far detained 506 suspects this year, 411 of them aged between 10 and 19.
Privacy
South Korea’s Privacy regulator contributed to the President’s crackdown on deepfake non-consensual intimate imagery with a plan covering 4 areas and 10 initiatives:
1. Strong and effective punishment
Strengthening punishment for false videos
Strengthening investigative response capabilities
Strengthening international judicial cooperation
2. Improving platform accountability
Strengthening platform operator obligations
Establishing a cooperative system with domestic and foreign platform operators
3. Rapid victim protection
Strengthening deletion support
Establishing a one-stop support system
Strengthening R&D and AI risk management for victim protection
4. Public Awareness
Strengthening customized sexual crime prevention education for each target
Raising public awareness
Multilateral
The forthcoming AI Safety Institute International Network will hold its first meeting in the US, and the Center for Strategic and International Studies (CSIS) published a piece asking 9 questions about how the network will feed into other international AI governance and safety initiatives. Members of the network from the Asia-Pacific include Japan, Singapore, South Korea, and Australia.
The AISI International Network marks a significant next step in global AI safety efforts. The network provides an opportunity to build international consensus on definitions, procedures, and best practices around AI safety; reach economies of scale in AI safety research; and extend U.S. leadership in international AI governance. The similarities between currently established AISIs in terms of size, funding, and functions provide a strong basis for cooperation, though network members must be aware of the different institutions in which different AISIs are housed.
While the Seoul Statement is a good start for multilateralizing cooperation between AISIs, network members must now decide how to turn intent into action. At the November convening in San Francisco, they should strive to set the network’s goals, mechanisms, and international strategy in preparation for the AI Action Summit in February 2025. In doing so, they must ask tough questions, including about priorities, leadership, and membership.
The AI Safety Institutes of Singapore and the UK signed an agreement to bolster AI governance and safety:
The new MoC will strengthen cooperation between the AI Safety Institutes (AISIs) of both countries. Key areas of collaboration include:
i. AI Safety Research: Enhancing joint efforts to advance the science of AI safety, focusing on developing safer AI systems and risk management.
ii. Global Norms: Collaborating on international AI safety standards and protocols, including through possible cooperation with the Network of AI Safety Institutes, ensuring a global approach to AI risk mitigation.
iii. Information Sharing: Expanding knowledge exchange between the two countries’ AI Safety Institutes to ensure that AI systems are developed and deployed in ways that are trustworthy and safe for global use.
iv. Comprehensive AI Testing: Joint development of safety testing frameworks that provide robust evaluations throughout the AI lifecycle.
China hosted a World Customs Organization meeting on the use of AI technology to facilitate cross-border trade and risk assessment.
Discussions also highlighted the tangible benefits of AI integration, such as greater risk management accuracy, reduced repetitive workloads, enhanced operational coverage, accelerated clearance times, and improved consistency in decision-making. Achieving these benefits, however, requires ongoing investments in specialized expertise, advanced computational resources, robust data analytics infrastructure, and well-defined policies.
A human-in-the-loop approach was also emphasized, which integrates human oversight within AI processes to ensure accountability and continuous improvement.
Furthermore, the mission acknowledged the synergistic potential of AI with other cutting-edge technologies like geospatial mapping, cloud computing, and Internet of Things (IoT), which together amplify benefits when aligned with strategic IT objectives.
In the news
At the recent APEC summit in Peru, Malaysia’s Prime Minister Anwar stated that the country will not get caught in the US-China competition around AI.
Japan released a plan to boost its AI and chip industries with JPY10 trillion in support for domestic chip manufacturers and AI talent development.
Google, Temasek, and Bain’s 2024 e-Conomy SEA report highlights Southeast Asia’s AI industry potential and user base.
Taiwan’s TSMC will stop producing advanced AI chips for Chinese customers in compliance with US export controls:
Taiwan Semiconductor Manufacturing Co (TSMC) has notified Chinese chip design companies that it is suspending production of their most advanced AI chips from Monday, the Financial Times reported, citing three people familiar with the matter.
Advocacy
Japan’s Fair Trade Commission (JFTC) opened a public comment period, running until 22 November, on a discussion paper on Generative AI Market Dynamics and Competition:
Given the rapidly evolving and expanding generative AI sector, the JFTC has decided to publish this discussion paper to address potential issues and solicit information and opinions from a broad audience. The topics outlined in this paper aim to contribute to future discussions without presenting any predetermined conclusions or indicating that specific problems currently exist. The JFTC seeks insights from various stakeholders, including businesses involved in different layers of generative AI markets (infrastructure, model, and application layers as described in Section 2), industry organizations, and individuals with knowledge in the generative AI field.
Sri Lanka’s National AI Strategy is open for consultation until 6 January 2025.