#9 Asia's AI leg/reg, advocacy, analysis, human rights, military AI
Singapore election deepfakes; Australia on privacy and AI; Philippine job loss to AI; Korea conf. on military & AI; Asia Human Rights/Digital Divide; China multilateral cooperation and more...
Privacy
The Office of the Australian Information Commissioner (OAIC) released its 2025-2026 Strategic Plan. On AI, it reads:
We are continuing work to respond to the privacy risks arising from artificial intelligence (AI), including the effects of powerful generative AI capabilities being increasingly accessible across the economy. The release of these technologies publicly and their distribution at no cost to the user amplifies the scale of potential privacy impacts and reinforces the importance of the OAIC building awareness of privacy risks and regulated entities’ obligations. Robust privacy governance and safeguards are an important foundation for using this technology in a way that builds trust and confidence in the community and enables entities to take advantage of the opportunities of AI.
Intellectual Property
At the Australian Senate Select Committee’s fifth public hearing on Adopting AI, representatives from Amazon called for amendments to Australia’s Copyright Act to allow text and data mining exceptions similar to those in other countries, such as Singapore and Japan. The committee’s findings will be published this month.
Analysis
The Carnegie Endowment published a piece on China’s changing attitudes towards AI regulation as it balances safety concerns with technological competition and advancement:
There remain major open questions about the specific contours of China’s concerns over AI safety and what it intends to do about them. But the growing political and technical salience of these issues is significant for AI safety and governance globally. China is the key competitor for the United States in advanced AI, and that competition is a core dynamic shaping AI development globally. China’s leaders are acutely concerned with falling further behind the United States and are pushing hard to catch up in advanced AI. How China approaches building those frontier AI systems—the risks it sees and the safeguards it builds in—will influence the safety of systems built in China and around the world.
The Information Technology and Innovation Foundation issued a report arguing that China will soon lead in AI innovation. The report finds:
China leads in AI research publications and is competitive in generative AI, though U.S. publications have greater impact, with more citations and private-sector involvement.
Tsinghua University is a key hub for China’s top AI start-ups, including "AI tigers" Zhipu AI, Baichuan AI, Moonshot AI, and MiniMax, founded by faculty and alumni.
Chinese large language models are closing the performance gap with U.S. models, with some excelling in bilingual benchmarks.
While private AI investment in China is lower than in the U.S., foreign investment, particularly from Saudi Arabia’s Aramco, is increasing in China’s generative AI sector.
State-directed funds and financial aid effectively support promising firms in underinvested regions of China.
The World Economic Forum published ChatWTO: An Analysis of Generative AI and International Trade. Several of its conclusions and recommendations are pertinent to WTO members, particularly in Asia, including those concerned about job losses and the many different regulatory approaches taken at the national level. The report cites the US trade sanctions on China involving semiconductor equipment and microchips as a direct trade-in-goods implication of the AI industry.
Governance
Singapore’s Infocomm Media Development Authority supported the Singapore Computer Society in publishing the AI Ethics and Governance Body of Knowledge 2.0.
Australia published a voluntary AI safety standard for organisations. The ten guardrails it recommends are:
1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
2. Establish and implement a risk management process to identify and mitigate risks.
3. Protect AI systems, and implement data governance measures to manage data quality and provenance.
4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
5. Enable human control or intervention in an AI system to achieve meaningful human oversight.
6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
7. Establish processes for people impacted by AI systems to challenge use or outcomes.
8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
9. Keep and maintain records to allow third parties to assess compliance with guardrails.
10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
Democracy, Rule of Law, Human Rights, Workforce, Environment
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law opened for signature in September. It was drafted by Council of Europe members together with observer and non-member participant states, including, from the Asia-Pacific, Japan and Australia. The seven fundamental principles included in the treaty are:
► Human dignity and individual autonomy ► Equality and non-discrimination ► Respect for privacy and personal data protection ► Transparency and oversight ► Accountability and responsibility ► Reliability ► Safe innovation
Taiwan’s Ministry of Digital Affairs and the Freedom Online Coalition conducted a seminar on human rights online. The Department Director for MODA’s Democracy Network said:
In the age of artificial intelligence, trust is the key to ensuring that AI technologies can be effectively and safely utilized.
Australia’s Department of Home Affairs provides an insightful analysis of the potential impact of AI on democracy and the rule of law in its submission, earlier this year, to the government’s inquiry on adopting AI. The five areas it analyzes are:
1. foreign interference,
2. data and security integrity,
3. cybersecurity,
4. dissemination of harmful content, and
5. democracy and trust in institutions.
This ChinaTalk blog post examines why China will rely on nuclear power in a bid to make its compute energy consumption more “green”.
Multilateral
China and Russia set up a working group on AI cooperation and agreed to support efforts at BRICS, including the China-BRICS AI Development Center, announced earlier this year.
In a meeting with the Secretary General of the UN, China’s Xi Jinping pledged to continue support for strengthening AI governance and other UN initiatives.
At the Forum on China-Africa Cooperation, China hosted leaders from over 40 African countries, signed MOUs, and conducted meetings, as well as finalizing the Beijing Action Plan (2025-2027), which includes provisions on AI and cybersecurity:
The two sides believe that it is important to put equal emphasis on development and security, bridge the AI and digital divide… The two sides oppose drawing lines on an ideological basis or putting together exclusive blocs, and creating development barriers…
[The two sides will] strengthen international cooperation on capacity building of AI and promote exchanges in such areas as rules governing cross-border data flow, legitimate and safe application of new technologies, personal privacy protection, and internet laws and regulations within the international frameworks including the Global AI Governance Initiative, the China-Africa Initiative on Jointly Building a Community with a Shared Future in Cyberspace, and the Global Initiative on Data Security, if applicable so as to jointly advance rules-making for global digital governance.
The two sides will encourage the contact and communication between their national computer emergency response teams (CERT), carry out cross-border handling of cybersecurity cases, information sharing, and experience exchange, enhance cooperation on cybersecurity emergency response and make study trips.
In the News
Recent research maps the distribution of AI compute around the world. Asian countries account for major contributions but also show gaps; the researchers describe “AI Compute North” and “AI Compute South” regions, analogous to the Global North/Global South divide in economic development.
Bloomberg reports on the issues faced by the business process outsourcing (BPO) industry as companies incorporate genAI tools and cut jobs. The Philippines, where BPO accounts for 8% of GDP, may be particularly exposed to job losses.
IEEE published The Evolution of AI Governance, whose contributing authors include representatives from the National University of Singapore and AI Singapore.
The China Internet Civilization Conference was held in Chengdu in August. Li Shulei, a member of the Political Bureau of the Communist Party of China Central Committee and head of the Publicity Department of the CPC Central Committee, delivered a keynote speech about China’s role in promoting AI safety cooperation.
Cybersecurity, Trust/Safety & Community
Singapore's Ministry of Digital Development and Information proposed a bill targeting deepfakes around elections. Key features of the Bill:
From issuance of the Writ to the close of polling on Polling Day, the Bill proposes to prohibit the publication of digitally generated or manipulated OEA that realistically depicts a candidate saying or doing something that he or she did not in fact say or do. This prohibition will only apply to OEA depicting persons who are running as candidates for an election.
The Returning Officer (RO) can issue corrective directions to individuals who publish such content, social media services, and Internet Access Service Providers to take down offending content, or to disable access by Singapore users to such content during the election period. Failure to comply with a corrective direction is an offence. This is punishable by a fine, or imprisonment, or both on conviction.
The Bill will allow candidates to make a request to the RO to review content that may breach the prohibition and issue corrective directions. Candidates who have been misrepresented by such content can make a declaration to attest to the veracity of his/her claim.
It will be an illegal practice for candidates to knowingly make a false or misleading declaration in a request about the impugned content. The consequences of committing an illegal practice, such as a fine or the vacation of an election, are set out in the PEA and PrEA.
South Korea co-hosted the second Responsible Artificial Intelligence in the Military Domain (REAIM) Conference this month, along with Singapore, the Netherlands, the UK, and Kenya.
REAIM 2024 builds on the discussions and outcomes of the first REAIM summit in 2023, which sought to increase political awareness regarding this topic as a means of ultimately moving towards the development of international agreements on the application of AI in the military domain. Stakeholder delegates from governments, businesses, academia and civil society of over eighty countries attended the summit.
Since the first REAIM summit, a Global Commission on REAIM has been set up, which is a body of preeminent AI scholars that seeks to support fundamental norm development and policy coherence across the globe related to military AI. Furthermore, several regional follow-up meetings were held as part of the REAIM process, in Asia, Africa, Latin America, Europe and North America, and the Middle East and Central Asia.
South Korean investigators are cooperating with France and Interpol on the investigation of the Telegram messaging app, whose CEO was recently detained in France. The app has hosted the distribution of nonconsensual deepfake intimate images, which are garnering more attention around the country; for example, K-pop idol management companies are taking legal action against deepfake creators and distributors amid a deepfake sex crime crisis targeting women and girls.
38 North reports on North Korea’s AI development network, including the extent to which academic cooperation in particular may facilitate AI proliferation to the country, with implications for military use of AI and cybersecurity:
…[D]espite sanctions, North Korea persists in academic partnerships for potential AI research, often relying heavily on China. Given China’s prominent role in the global AI landscape, it is crucial to consider how its collaborations with North Korea may influence North Korea’s AI capabilities. Specifically, monitoring cooperation between Chinese universities with known relations with North Korea and its institutions is crucial in assessing the North’s AI research direction, as well as monitoring sanctions compliance. Universities could also establish internal compliance programs (ICP) to ensure that all students’ and faculty members’ activities meet sanctions and nonproliferation regulations. Furthermore, other countries’ academia could be exploited by North Korea, highlighting the need for enhancing due diligence in international collaborations.
Advocacy
Australia is also conducting a public consultation on the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings, open for four weeks and closing October 4.
Taiwan’s draft AI Basic Law is open for comment until September 13.
Singapore’s Cybersecurity Agency is conducting a public consultation on Securing AI Systems until September 15.
Australia’s Competition and Consumer Commission is conducting a public comment period until August 23 on various digital platform service issues, including AI.
China’s Ministry of Industry and Information Technology opened a public comment period, running until September 13, to collect use cases of AI in industrial development.
UNESCO is conducting a consultation on its paper on global approaches to AI regulation until September 19.
Events
Digital Governance Asia staff will moderate a session at the UNDP’s Responsible Business and Human Rights Forum in Bangkok, Thailand, Sept 25. Register here.
The Internet Governance Forum (IGF) will be held in Riyadh, Saudi Arabia in December. Digital Governance Asia is supporting a session covering Asia’s Privacy and AI regulations. Register here.
RightsCon will come to Taipei, Taiwan in February 2025, covering all aspects of digital human rights issues.