#8 Asia AI Policy Monitor
Developments in AI regulations in Australia, New Zealand, Korea, India, Taiwan, Vietnam, Singapore, Pacific Islands...
Competition
Korea’s Fair Trade Commission will examine the domestic and foreign genAI industry for competition issues through a survey that began earlier this month.
Intellectual Property
Korea’s Presidential Committee on IP published a report on AI and IP issues, calling for further stakeholder engagement and international cooperation on the issues, in particular around genAI content.
India’s Bombay High Court issued a judgement in favor of a plaintiff who sued a genAI company for infringing his right of publicity/personality in his image, name and voice.
Japanese manga artist-turned-lawmaker suggests that genAI providers set aside 1% of earnings to share with artists:
Akamatsu [the lawmaker] said Japan should be "cautious" about legal restrictions on AI, not just because of the party's pro-business policies, but also to protect creators.
Privacy
Hong Kong’s Privacy Commissioner for Personal Data (PCPD) published a guide to preventing deepfake fraud, including these 6 tips:
Be vigilant: Think twice before providing any personal data, verify the purpose of collection of such data and whether it is mandatory to provide them. Do not disclose personal data to others arbitrarily, avoid clicking or scanning suspicious links and QR codes, and do not log into any suspicious websites;
Keep an eye on your accounts and transaction records;
Password protection: Change the passwords of online banking accounts from time to time and enable two-factor authentication (if available). Never share passwords with anyone;
Smart use of social media and instant messaging apps: Minimise the sharing of biometric data;
Authenticate the identity of callers;
Fraud prevention information: Pay attention to the fraud prevention information published by the PCPD, the Police or relevant organisations.
Finance
Hong Kong Monetary Authority (HKMA) issued guidance for consumer protection in the use of genAI in the finance industry.
Additionally, the HKMA is supporting a genAI sandbox approach to boost the use of AI tools in the financial industry. Per the HKMA chief:
The new GenA.I. Sandbox is a pioneering initiative that promotes responsible innovation in GenA.I. across the banking industry. It will empower banks to pilot their novel GenA.I. use cases within a risk-managed framework, supported by essential technical assistance and targeted supervisory feedback. Banks are encouraged to make full use of this resource to unlock the power of GenA.I. in enhancing effective risk management, anti-fraud efforts and customer experience.
Trust and Safety
The Australian Office of the Information Commissioner issued a determination regarding Clearview AI’s practice of scraping facial biometric data of Australians from the internet.
The UNODC finalized the Cybercrime Convention, which will affect AI-enabled cybercrime including fraud, deepfake intimate images, and other criminal abuse material generated with genAI.
What we are thinking: Given Asia’s position as both a source and target of cybercrime, policymakers should follow accession to, and support for, the treaty.
India’s Ministry of Electronics and IT (MeitY) published guidance on preventing the use of deepfakes for misinformation:
Intermediary platforms [are] required to act expeditiously within the timelines prescribed under IT Rules, 2021, on grievances received…
Korea’s Ministry of Science and ICT is actively supporting the development and use of cybersecurity-related datasets and the deployment of AI in the sector. According to a ministry spokesperson:
AI deployment is not an option, but a must, for evolving cyber threats.
Taiwan’s Ministry of Digital Affairs shared details of how they are addressing AI-fueled fraud:
Vice President Ying-Dar Lin of NICS stated that “we need to use AI to combat AI.” According to AI detection results, approximately 16,000 fan page accounts have posted fraudulent advertisements….They post 5,000 to 10,000 fraudulent ads daily that last only 1-2 days, creating an illusion of diversity and popularity among audiences and exploiting echo chambers for free dissemination.
Singapore’s Cybersecurity Agency published a report on threats in 2023, noting the increased usage of genAI to enhance phishing through deepfake videos and audio:
Threat actors have weaponised AI to accelerate and scale up their malicious operations. The threat of AI-enabled attacks will only intensify as the technology improves, and it remains to be seen how threat actors will further exploit such technology for cyber-attacks on the horizon…
The Third Plenum of the CPC’s 20th Central Committee included a statement on AI requiring:
…instituting oversight systems to ensure the safety of artificial intelligence.
Rights, Democracy, Environment
The US State Department issued the AI and Human Rights Risk Management Profile.
What we are thinking: An interesting first step in this discussion; more can be expected at the intersection of Business and Human Rights, especially on supply chain due diligence, given Asia’s role as a node in AI infrastructure, development, deployment and use. Gaps include existing concerns around the low-cost labor used to train some models, as well as AI’s growing toll on environmental sustainability. Digital Governance Asia will moderate a session at the UNDP Responsible Business and Human Rights Forum APAC on the topic.
Singapore’s Minister for Digital Development and Information Josephine Teo indicated that the country may target rules against the use of deepfakes near elections, similar to rules imposed by South Korea earlier this year, and to calls made in the Philippines for next year’s elections.
A Microsoft data center under construction in India has been sued by locals over allegations of illegal dumping near the site, according to reporting by Rest of World.
What we are thinking: AI’s environmental impact, in particular its increased use of electricity and water, is a growing concern globally and in Asia; Microsoft itself doubled its electricity use from 2020 to 2023.
Multilateral
Japan and Vietnam signed an MOU on ICT cooperation, including AI.
Japan and Costa Rica signed a memorandum on ICT, including provisions to promote the Hiroshima AI Process, AI governance and digital infrastructure.
The UK and India Technology Security Initiative was launched by the prime ministers of both countries, covering emerging technologies such as AI. The initiative covers joint university research, support for existing multilateral AI governance efforts (including GPAI and the G20), and the formation of a joint Centre for Responsible AI.
The Second US-Singapore Critical and Emerging Technology Dialogue took place, including a large section on AI-focused joint research, standards setting and convening of AI Safety Institutes. Further to these meetings, the US-Singapore Digital Economy Cooperation Roadmap includes important cooperation throughout the region:
The United States and Singapore have committed to establishing a Smart Cities Program on AI in February 2025 through the Singapore-US Third Country Training Program (TCTP) to deliver capacity-building to ASEAN and Pacific Islands Forum members.
Advocacy
Hong Kong’s Intellectual Property Department issued a consultation paper on Copyright and AI, open for public comment until September 9.
Taiwan’s draft AI Basic Law is open for comment until September 13.
Vietnam’s draft Law on Digital Technology Industry (including, but not limited to AI) is open for public comment until September 2.
Singapore’s Cybersecurity Agency is conducting a public consultation on Securing AI Systems until September 15.
Australia’s Competition and Consumer Commission is conducting a public comment period until August 23 on various digital platform service issues, including AI.
China’s Ministry of Industry and Information Technology is holding a public comment period until September 1 on IoT-connected, smart, and autonomous vehicles.
Additionally, the ministry opened a public comment period until September 13 to collect use cases of AI in industrial development.
UNESCO is conducting a public comment regarding its research on AI and the Judiciary until September 5.
Additionally, UNESCO is conducting a consultation on its paper on global approaches to AI regulation until September 19.
To better understand the current AI governance environment, UNESCO has mapped the different regulatory approaches for AI. The consultation paper will be published as a policy brief to inform and guide parliamentarians in crafting evidence-based AI legislation.
The OECD (which includes Japan) is conducting a pilot survey of its International Code of Conduct for Organizations Developing Advanced AI Systems, based on the G7 Hiroshima Process, until September 6. The code of conduct for the G7 Hiroshima AI Process can be found here.
In the News
The New York Times reports on how China is leading in the deployment of autonomous vehicles in the city of Wuhan.
A recent report explores the problems arising from India’s deployment of facial recognition software in its massive railway system to fight crime. Implications for the rights to privacy, freedom of association, and movement are explored, among other concerns about the technology’s ability to detect emotions, micro-expressions, and even gaze direction.
Tech Policy Press has a great analysis out on the difference between Taiwan and India’s approach to disinformation - important in the context of AI-enabled disinformation:
These competing regulatory models illustrate a divide in technological governance as governments evolve strategies to deal with online harms. An important meta-question that regulators constantly grapple with is what is the appropriate level of intervention that state actors must exercise to ensure their public policy objectives.
The Taiwan example shows how state actors can be important stakeholders in directly combating online disinformation…
For countries with much larger and more complex information ecosystems like India, it is much more challenging to adopt a ‘whole of society’ approach. However, the kind of FCU response where a state-controlled body is the sole arbiter of truth for a society’s public discourse is deeply misguided…
404 Media published a great analysis of genAI content farm creation, which has flooded social media (e.g., Facebook) in the past few months. Much of the content is being made in India, Vietnam, and the Philippines, and viewed in the US.
China’s socialist chatbots may be doomed to failure. This points to inherent issues with LLMs: confabulation (hallucination) and other alignment problems.
What we are thinking: As we have written previously, countries across Asia have focused on developing LLMs, primarily to have high-quality products for local languages; but language and politics are never far apart.
Hong Kong’s Privacy Commissioner for Personal Data penned an op-ed for the South China Morning Post on the recently released AI Data Protection Framework published by her office.
China’s Cyberspace Administration published its 7th batch of genAI service providers.
A China-based threat actor network consisting of thousands of fake profiles on X was recently exposed by a cybersecurity firm:
Researchers believe the cluster of at least 5,000 unauthentic X accounts, dubbed the Green Cicada Network, is almost certainly controlled and coordinated by an artificial intelligence Large Language Model (LLM)-based system.
Government Policy
Australia’s Digital Transformation Agency released the “Policy for the responsible use of AI in government.” The document requires agencies using AI (except those in defence and security) to disclose use of AI in their services.
Australia’s parliament passed a bill amending the criminal code to include provisions against deepfake sexual abuse material.
New Zealand’s Ministry of Science, Technology and Innovation released a paper recommending approaches to work on AI, including several recommendations to the cabinet, such as implementing a strategic approach along the lines of OECD recommendations.
Korea established regulations for the National AI Council. Issue areas to be examined by the council include research and development, data center expansion, ethics, governance, and labor and economic impacts.
Analysis
The Center for Data Innovation published a report on the divergent views of Chinese and British experts on AI risk and collaboration:
Despite significant geopolitical differences, a series of interviews with AI experts in China and the United Kingdom reveals common AI safety priorities, shared understanding of the benefits and risks of open source AI, and agreement on the merits of closer collaboration—but also obstacles to closer partnerships. Fostering a closer relationship could help both countries achieve their objectives of developing innovative, safe, and reliable AI.
The AI Asia Pacific Institute published a report on the State of AI in the Pacific Islands:
Lessons from other regions point to the benefits of fostering digital literacy, developing comprehensive AI governance frameworks, and sharing resources and expertise. To address these needs, the report recommends the establishment of a Pacific Islands AI Technical Assistance Facility.
The Australian Institute of International Affairs published a report on The Indo-Pacific’s Artificial Intelligence Defence Innovation Race. In summarizing the strategies across the Indo-Pacific for AI and military use:
China is an exemplar of the guided innovation strategy. By percentage of GDP committed, China has the world’s largest national industrial plans and has influenced many to follow suit…
Singapore and Australia are less prescriptive. They cultivate—not steer—a defence AI innovation system albeit nested within national AI strategies…
South Korea and Taiwan use state-led S&T policy strategies. Both have very strong commercial information technology industries, use extensive industrial planning, have devised national AI strategies, and developed defence AI plans, but have left the defence AI innovation chain disjointed…
Japan has devised a range of AI national plans and programs primarily focussed on civilian industry and education. Defence AI has recently begun to be explored but budget allocations are small. On 2 July, the Japanese MoD released its first defence AI plans.
In a similar manner, India-devised national AI strategies mainly focussed on civilian industry and technology with consumer applications.
Concordia AI has translated and analyzed China’s Third Plenum statements and the associated study materials provided to party cadres regarding AI. Key points from the explanations include:
Motivations for creating AI safety oversight systems are explained in terms of responding to rapid AI development, promoting high-quality development, and participating in global governance.
AI safety oversight should involve “forward-looking prevention and constraint-based guidance,” which suggests an active and potentially precautionary approach.
The text argues against putting development ahead of governance. Instead, it suggests that both should go hand in hand, progressing at the same time.
The section is supportive of AI governance efforts globally, referencing China’s Global AI Governance Initiative, the UK’s Global AI Safety Summit, EU AI safety legislation, and American AI safety standards.
The law firm Baker McKenzie published the APAC AI Governance Regulatory Primer, with great information on the state of play regarding AI rules and regulations within the region.