Asia AI Policy #5
Seoul AI Safety Summit, Deepfake Regs, and the latest AI policy on privacy, IP, cybersecurity, the environment, labor and human rights from Australia, Korea, Japan, the Philippines, China, Singapore and Indonesia.
AI and Privacy
Korea's PIPC to release guidelines on use of publicly available personal information in AI training and services.
AI and Intellectual Property
Tokyo District Court rules against AI inventorship in the DABUS case, holding that an AI cannot be named as the inventor of a patent - another strike against Thaler in his global challenge to AI and IP rules. The judge called on the Diet to debate the issue, anticipating more AI-created IP in the future.
Korea's Ministry of Science and ICT announces plans to address copyright issues raised by AI, part of a new Master Plan for the Digital Era.
AI, Trust, Cybersecurity
Deepfake audio of Philippine President Marcos went viral in April. The recording appeared to be of the President issuing orders to soldiers to confront Chinese naval aggression in the South China Sea. Lawmakers called for an investigation by the Department of Information and Communications Technology.
Taiwan's Supreme Court ruled that deepfake pornography violates the Personal Data Protection Act in a case against a vlogger who produced images of hundreds of celebrities.
Australian Government to ban deepfake pornography as part of new measures to address violence against women.
Korea to mandate watermarks for deepfakes as part of its new Master Plan for the Digital Era.
India’s Election Commission advised social media companies to prevent the wrongful use of deepfakes during the current election.
Australia, Fiji and South Korea, as members of the Global Online Safety Regulators Network, published a framework to address online harms, including deepfakes. The four areas for collaboration are:
Regulatory tools, including risk assessment and transparency reporting: Members will share methodologies and evaluation practices and work to develop common metrics.
User complaints functions and related systems: Members will share evidence to help identify and compare trends across regions, including issues of compliance.
Information requests to industry: Members will explore opportunities to coordinate the types of questions asked of industry to help reduce the compliance burden and produce more comparable global data.
Safety measures: Members will share experiences of good practice to identify a common set of reasonable steps services can take to address specific harms and risk factors.
Multilateral
The Seoul AI Summit - a follow-up to last year's UK AI Safety Summit at Bletchley Park - concluded with the Seoul Declaration.
Japan and the EU held the second Digital Partnership Council meeting on digital regulation, including provisions to enhance cooperation between the EU AI Office and Japan's AI Safety Institute.
At an OECD meeting, Japan convened a group of 49 countries called the Hiroshima AI Process Friends Group. Japan will establish a GPAI Center in Tokyo and support technical solutions to counter AI-generated disinformation. Countries from the region in the group include Australia, India, Japan, South Korea, Laos, Singapore, New Zealand and Thailand.
Japan and Denmark signed a memorandum of cooperation on safe, secure and reliable AI based on the Hiroshima AI Process.
South Korea joins Singapore, New Zealand and Chile in the Digital Economy Partnership Agreement (DEPA), which contains a notable chapter on AI.
China and France agreed to work together on AI safety following a meeting between Macron and Xi in May. China committed to joining the AI Summit to be organized by France in 2025, and invited France to join the High Level AI Governance Meeting to be held by China this year.
China and the US held talks on AI in Geneva this month. The Chinese side cited US export restrictions as a barrier to AI development and advocated for the UN as a channel for global AI governance.
Analysis
The Global Opinion on AI (GPO-AI) survey reveals that populations in Asia's developing economies are much more positive towards AI, while the region's developed economies are more skeptical than the global average. Six Asian countries (China, India, Pakistan, Indonesia, Japan, Australia) were among the 21 surveyed globally.
Asia is also generally more aware of deepfakes as a concept than the rest of the world.
Tech Policy Press continued its reporting on the use and regulation of technology during India's current nationwide elections.
Asia Business Law Journal published a comparative guide on AI rules across India, Japan, South Korea, Taiwan and the Philippines.
In the News
Indonesia's Kominfo minister announced that the government is reviewing AI governance regulation under UNESCO's Readiness Assessment Methodology. The minister said:
“A horizontal approach will be through regulations in the Information and Electronic Transactions Law, the Personal Data Protection Law and the Circular Letter of the Minister of Communication and Information concerning AI Ethics. Meanwhile, the vertical approach is sectoral, such as in the financial and health sectors.”
Australian Privacy Commissioner Carly Kind advised caution and welcomed more tools to address privacy and AI, saying:
“There’s a sense that we’re not using AI right now, we’re missing out on an opportunity, which is squeezing out the time we need to think about what does it look like in a good way and how do existing laws and regulations apply.”
The Future of Privacy Forum’s Asia-Pacific Office launched a report on Generative AI Governance Frameworks in the Asia-Pacific region at the Seoul AI Summit this month.
Microsoft asks China-based AI staff to relocate, reflecting the geopolitics of AI talent.
Former Pakistan Prime Minister Khan uses genAI to reach voters from jail.
CNA covers the use of AI in e-commerce in Southeast Asia, warning of AI-fueled fraud.
Japan’s AI Strategy Council to start discussions on Generative AI Bill this summer.
Singapore’s new Prime Minister Lawrence Wong made the following statements at the Seoul AI Safety Summit:
“Broadly classifying all generative AI as ‘high-risk’, or regulating AI systems on the condition that they must not cause any harm, can be overly restrictive and will inevitably lead to less innovation…”
Korea's President Yoon said at the same summit:
“Korea can also help shape the rules of the road in conformity with the foundational values we collectively hold dear, such as human rights, the rule of law and freedom of speech…”
Asian Language LLMs
Taiwan's government-funded LLM, the Trustworthy AI Dialogue Engine (TAIDE), was updated; it focuses on traditional Chinese characters and local Taiwanese languages and dialects.
China's Cyberspace Administration released a chatbot trained on Xi Jinping Thought - an outright example of LLMs being used for political propaganda.
Singaporean authors are not interested in having a government-supported Singaporean LLM train on their works.
Japan's Fugaku-LLM is a Japanese-language LLM trained on the Fugaku supercomputer.
What we are thinking: As anticipated at the beginning of the year, more Asian-language LLMs will be developed with the support of governments around the region.
AI Compute in Asia
Microsoft announced a new initiative to invest USD 2.2 billion in Malaysia for cloud and AI. Other reports indicate future data centers in Thailand.
Rest of World reported on the “culture clash” at Taiwanese chipmaker TSMC's new Arizona fab.
A Singaporean minister addresses the need for green AI compute. The country has over 100 data centers and seeks to be a regional AI hub; the minister noted future energy-efficient chip technology as a solution.
Korea to unveil an investment of over USD 7 billion in chips to challenge global leaders.
Advocacy
Singapore's Ministry of Law and IP Office opened a public comment period, running until May 19, on exceptions relating to technical barriers that prevent copyrighted material from being used in “computational analysis” (e.g. data training).
China's National Technical Committee 260 (TC260) is conducting a public comment period until June 2 on technical standards for data security in genAI pre-training and optimization.
Korea's Privacy Commission (PIPC) opened a public comment period, running until June 7, on requirements to notify data subjects of automated decision-making and allow them to reject it in certain circumstances.
A public comment period is open until June 21 on Australia's statutory review of the Online Safety Act 2021, including questions on how to address emerging harms caused by generative AI, deepfakes and algorithmic bias.
Events
June 5, DRAPAC, Introduction to Standards Developing Organizations, Online
May 31, AI Singapore, ATxSummit, Singapore