#10 Asia AI Policy for Small States, Military, IP, Privacy, Finance
Legislation, regulation and advocacy in AI policy across Asia
Intellectual Property
This National Law Review article covers the status of patent, copyright and AI issues across several jurisdictions, including Korea, Singapore, Japan, Taiwan, Hong Kong, China, Australia, Malaysia, the Philippines, India, Indonesia, and New Zealand.
Military AI
Singapore and the US held their 14th Strategic Security Policy Dialogue, with the readout noting on AI:
The two sides also discussed emerging areas of cooperation, including implementation of the Statement of Intent on Data, Analytics, and Artificial Intelligence Cooperation and progress on defense innovation.
The Second Responsible AI in the Military Domain (REAIM) summit was held in Seoul, South Korea this month. Sixty countries signed the “Blueprint for Action”, although news outlets noted China’s absence (and, less prominently, India’s). Signatories from the region include Japan, Australia, Mongolia, Papua New Guinea, Singapore, South Korea, the Philippines, Pakistan and Brunei. The following are salient points from the blueprint:
Acknowledge the following, which are not exhaustive, to ensure responsible AI in the military domain:
(a) AI applications should be ethical and human-centric.
(b) AI capabilities in the military domain must be applied in accordance with applicable national and international law.
(c) Humans remain responsible and accountable for the use and effects of AI applications in the military domain, and responsibility and accountability can never be transferred to machines.
(d) The reliability and trustworthiness of AI applications need to be ensured by establishing appropriate safeguards to reduce the risks of malfunctions or unintended consequences, including from data, algorithmic and other biases.
(e) Appropriate human involvement needs to be maintained in the development, deployment and use of AI in the military domain, including appropriate measures that relate to human judgment and control over the use of force.
(f) Relevant personnel should be able to adequately understand, explain, trace and trust the outputs produced by AI capabilities in the military domain, including systems enabled by AI. Efforts to improve the explainability and traceability of AI in the military domain need to continue.
Chinese defense scholars warned that AI may accelerate military crisis escalation:
Experts at the PLA-affiliated National University of Defense Technology, for instance, note that “the large-scale military application of artificial intelligence [will] further increase the uncertainty and uncontrollability of crisis outbreaks and escalations,” potentially leading to the eruption of wars. Scholars affiliated with the PLA Air Force concur, arguing that AI systems will “aggravate the uncontrollable degree” of crises.
Finance
The Hong Kong Monetary Authority published a circular regarding the use of AI in monitoring fraudulent activity. The circular advises the following:
(a) Sharing experience and success stories of AIs – in November 2024 the HKMA will organise an experience sharing forum with speakers from the industry and technology firms on how artificial intelligence is being deployed to enhance the effectiveness and efficiency of suspicious activity monitoring;
(b) Providing targeted guidance and support – the HKMA will establish a dedicated team, supported by an external consultant, to provide supervisory feedback and technical guidance to assist AIs in applying artificial intelligence in enhancing their monitoring processes, through the existing Fintech Supervisory Sandbox and Chatroom; and
(c) Creating a conducive environment for AML/CFT innovation – the HKMA welcomes the use of artificial intelligence in AML/CFT work, and will continue to gauge the interest of AIs in applying artificial intelligence to the monitoring of suspicious activity and provide suitable guidance to the industry where appropriate.
In its September Financial Stability Review, the Reserve Bank of Australia said it is working with the finance industry to monitor the use of AI. The RBA highlighted four types of risks:
Operational risk from concentration of service providers;
Herd behavior could aggravate the transmission of shocks to the financial system;
Advances in AI and the emergence of GenAI have already led to an increase in credible misinformation and scam content, such as deepfake images, videos or audio; and
AI models are complex and opaque, making it difficult to assess their reliability.
Multilateral
Singapore and Rwanda issued the AI Playbook for Small States. The guide had input from small states around the globe, including from Asia: Bhutan, Brunei, Cambodia, Fiji, Laos, New Zealand, Papua New Guinea and Timor-Leste. The Playbook notes the challenges small states face in AI and provides recommendations:
[Challenges are] …(a) Access to resources and funding; (b) Limitations of small domestic markets and hence, inability to tap on economies of scale or have a significant voice in international AI development; (c) Access to data, either within the country or from outside sources; and (d) Need for expertise and AI talent.
[Recommendations are] …
a) Grow AI adoption and development
Given that AI will alter the nature of many jobs today, workers must be upskilled to be ready for an AI-enabled economy. It is also important for small states to ensure adequate access to key infrastructural resources such as compute and data, while managing sustainability concerns due to the high resource drain from AI. Small states have limited resources and may need to prioritise their AI efforts for important sectors of their economies. It will be useful to have greater insight on how to focus efforts, as well as the role of government itself as a sector. In addition, micro, small and medium-sized enterprises (MSMEs) that comprise the majority of small states’ industries will likely face the greatest challenges in AI adoption. Small states, therefore, need support on how best to drive AI development for these MSMEs.
b) AI governance and safety
Building a safe and trusted AI ecosystem is multi-faceted. Small states may find a collective sharing of principles and frameworks that can be applied to governance of AI as well as the tools that can assist with this useful. Small states also recognise that with AI development accelerating, discussion on the different regulatory and governance approaches can be helpful.
c) Addressing the societal impact of AI
Small states recognise and can strongly identify with the importance of bridging the digital divide arising from AI. As AI technologies become increasingly integrated into various aspects of society, it is essential for the public to understand the potential benefits and risks associated with AI. Public literacy on AI can help individuals protect themselves from AI-generated harms.
Asian countries at the UN General Assembly made remarks on various aspects of AI policy, according to Digital Watch:
Tajikistan shared it was implementing a national strategy for digitalisation and proposed a UN resolution to highlight AI's role in creating socioeconomic opportunities. The Maldives emphasised that a robust ICT infrastructure and education are vital for a digital future. By expanding AI access in essential services such as healthcare and education, they aim to empower the next generation with the necessary skills for a competitive global economy. Mongolia and Uzbekistan emphasised challenges from the uncontrolled use of AI and supported UN resolutions promoting safe AI use for sustainable development and stronger international cooperation.
G20 engagement groups (including from Asia: China, India, Indonesia, Japan, Korea and Australia) from labor, civil society, think tanks and women’s groups (L20, C20, T20 and W20 respectively) issued the Sao Luis Declaration on AI. The Declaration focuses attention on the environmental and social impacts of AI, calling for support to bridge the AI divide among poorer countries and to alleviate discrimination, such as gender-based violence that can arise from AI tools, among other concerns.
Vietnam and the US conducted talks on AI cooperation while Vietnamese leaders joined the UN General Assembly in New York.
Leaders of the Quad (the US, Australia, India and Japan) issued the Wilmington Declaration Joint Statement, including provisions on AI and semiconductor supply chain resilience:
Through the Advancing Innovations for Empowering NextGen Agriculture (AI-ENGAGE) initiative announced at last year’s Summit, our governments are deepening leading-edge collaborative research to harness artificial intelligence, robotics, and sensing to transform agricultural approaches and empower farmers across the Indo-Pacific. We are pleased to announce an inaugural $7.5+ million in funding opportunities for joint research, and welcome the recent signing of a Memorandum of Cooperation between our science agencies to connect our research communities and advance shared research principles.
The Quad of the US, India, Japan and Australia will invest in the use of AI in biologics research:
The United States, Australia, India, and Japan look forward to launching the Quad BioExplore Initiative—a funded mechanism that will support joint AI-driven exploration of diverse non-human biological data across all four countries.
This project will also be underpinned by the forthcoming Quad Principles for Research and Development Collaborations in Critical and Emerging Technologies.
Human Rights, Employment, Environment
AI is making China’s fast fashion leaders dirtier, according to this Grist article:
But climate advocates and researchers say [Chinese fast fashion giant Shein’s] lightning-fast manufacturing practices and online-only business model are inherently emissions-heavy — and that the use of AI software to catalyze these operations could be cranking up its emissions. Those concerns were amplified by Shein’s third annual sustainability report, released late last month, which showed the company nearly doubled its carbon dioxide emissions between 2022 and 2023.
Bloomberg reports on the Philippines’ business process outsourcing industry’s AI woes.
Nikkei reports on Asia’s data center energy consumption, much of it driven by AI:
Economies across Asia are attempting to seize once-in-a-generation opportunities as supply chains shift away from China. But do they have enough clean energy to sustain economic growth and combat global warming while attracting investment in chips, artificial intelligence, data centers and other technologies?
Taiwan and South Korea boast the world's second- and third-largest semiconductor industries after the U.S., while Japan is working to regain its lost chip prowess. All three economies remain heavy users of fossil fuels, with Japan and Taiwan actually increasing their reliance on them since the 2011 Fukushima nuclear disaster...
Governance
Singapore’s Supreme Court issued a circular on the use of GenAI by court users. Generally, the Court advises the following:
The Court does not prohibit the use of Generative AI tools to prepare Court Documents, provided that this Guide is complied with.
This Guide does not change a Court User’s duty to continue to comply with the relevant legislation, rules, codes of conduct and Practice Directions. (a) Where the Court User is a lawyer, the lawyer’s duty to comply with the rules of professional conduct remains. Lawyers continue to have a professional obligation to ensure that materials they put before the Courts are independently verified, accurate, true, and appropriate. (b) Where the Court User is a Self-Represented Person, he or she is also responsible for ensuring that all information provided to the Court is independently verified, accurate, true, and appropriate.
The Court maintains a neutral stance on the use of Generative AI tools. It is important to emphasise that Generative AI is a tool, and any output generated should only be used on the basis that the Court User assumes full responsibility for the output. Unless specifically asked for by the Court, pre-emptive declaration of the use of Generative AI is not required, as the responsibility for any resulting content ultimately rests with the Court User.
Trust, Safety & Community
Australia’s Digital Platform Regulators Forum published a working paper on the impact of multimodal foundation models on digital platform regulation:
MFMs perform as supercharged AI creators. Give them a text prompt, and they can create an image to match. Feed them audio, and they might generate a corresponding video. Provide a picture, ask them to describe it, and they can provide a text description. These capabilities could open many opportunities for consumer and business adoption across various industries – from generating personalised content experiences to new ways of creating music and images.
Many of the risks associated with MFMs are similar to the limitations considered by DP-REG members in our examination of large language models (LLMs) – for example, the potential to produce unexpected outputs or outputs that are inaccurate or harmful. Although MFMs provide potential opportunities to consumers and business, they also have the potential to amplify risks. The ability to generate multiple types of content, such as image, audio and video also raise concerns about scams and deceptive practices, the spread of misinformation and disinformation, the generation of harmful content, and loss of control over personal information.
The US will host the first meeting of the international network of AI Safety Institutes in November; members include Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.
Korea launches an AI-powered criminal justice system portal:
The system, developed over 33 months, is the result of a collaborative effort by South Korea’s key criminal justice agencies: the Ministry of Justice, the Supreme Prosecutors’ Office, the National Police Agency, and the Korea Coast Guard.
This overhaul of the aging previous system marks a substantial push towards online and non-face-to-face services, laying the groundwork for a fully electronic criminal justice procedure.
At the heart of the new KICS is an AI-based intelligent case processing support function designed to expedite and enhance investigative work.
By analyzing crime details, keywords, and charge information, the system can provide investigators with relevant information from similar cases, including investigation reports, rulings, and court decisions.
Advocacy
Sri Lanka’s National AI Strategy is open for consultation until 6 Jan 2025. From the strategy:
Guided by seven core principles (inclusivity and responsibility, trustworthiness and transparency, human-centricity, adoption-focus and impact-orientation, agile and adaptive governance, collaboration and global engagement and sustainability and future-readiness), our strategy focuses on:
Strengthening key foundational enablers: data, skills, infrastructure, R&D and public awareness.
Accelerating AI utilization through public sector transformation and private sector stimulation.
Ensuring safe and trustworthy AI through iterative governance, responsible practices, and public engagement.
China’s Cyberspace Administration published draft rules for labeling of GenAI content, open for comment until 16 October.
China’s national standards platform is also publishing rules for comment until 13 November regarding standards for internet safety and generative AI content.
In the News
Southeast Asia’s GenAI start-up scene is analyzed in this report. Singapore leads in GenAI start-ups, followed by Vietnam.
The Center for Security and Emerging Technology published an English translation of the Guangdong and Beijing AI strategies and plans.
Hong Kong will unveil comprehensive rules for the use of AI in the financial industry in October.
This insightful article from the Boston Review dives into Taiwan’s history and how its colonial past shapes the island’s leading semiconductor industry fueling AI globally:
The dead weight of centuries of colonization has shaped and continues to haunt Taiwan’s semiconductor project. Even as the country’s ultra-successful chip industry has secured its place atop global supply chains, neoimperial entanglements remain unsettled. Today we are witnessing the consequences…
But the nation’s majority have borne the social costs of modernity. Today Taiwanese society is sharply stratified, and class inequality has ballooned. The labor movement has won modest labor law reforms in recent years…
And the semiconductor industry continues to take an acute ecological toll, from pollutants and chemical exposure to titanic consumption of water and energy.
The chips were supposed to ease geopolitical uncertainties, not produce them…The age of High Imperialism is dead, and neoliberalism is fading, but so far as these developments portend, imperial conflict looks to be dressing itself in new clothing.
ChinaTalk blog has a great set of interviews on the formation of AI sovereignty, tracing it back to Chinese government initiatives on data/internet sovereignty, and citing Japanese and Singaporean approaches to AI localization.
Harvard Business Review released an article on the top 50 AI hubs; India leads in Asia, followed by China.
Events
Digital Governance Asia staff will moderate a session at the UNDP’s Responsible Business and Human Rights Forum in Bangkok, Thailand, Sept 25. Register here.
The Internet Governance Forum (IGF) will be held in Riyadh, Saudi Arabia in December. Digital Governance Asia is supporting a session covering Asia’s Privacy and AI regulations. Register here.
RightsCon will come to Taipei, Taiwan in February 2025, covering all aspects of digital human rights issues.