#31 Asia AI Policy Monitor
Korean court rules on AI privacy violations; Australia agentic AI warning report; Korean privacy violations in AI training data; India to host 2026 AI Impact Summit; Japan AI copyright suit and MORE!
Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Intellectual Property
Japan’s largest newspaper sues AI firm for copyright violation.
The filing claims that Perplexity accessed 119,467 articles on Yomiuri’s site between February and June of this year, based on an analysis of its company server logs. Yomiuri alleges the scraping has been used by Perplexity to reproduce the newspaper’s copyrighted articles in responses to user queries without authorization.
Australia’s Productivity Commission is seeking comments by Sept 15 on its AI inquiry, which includes numerous recommendations such as text and data mining (TDM) exceptions to the Copyright Act for AI training.
Is there a case for a text and data mining exception? Another option is to expand the existing ‘fair dealing’ regime, which provides certain exceptions to the requirement to obtain permission from the copyright holder (box 1.6). Currently, there is no exception that covers AI model training per se (The University of Notre Dame Australia 2024). However, depending on the case, a different exception could apply. For example, AI models built as part of research could fall within the scope of the ‘research or study’ exception.
In response, Australia’s arts community pushes the Labor party to take a stand against AI theft of creative works.
Arts, creative and media groups have demanded the government rule out allowing big tech companies to take Australian content to train their artificial intelligence models, with concerns such a shift would “sell out” Australian workers and lead to “rampant theft” of intellectual property.
The Albanese government has said it has no plans to change copyright law, but that any changes must consider the effects on artists and news media. The opposition leader, Sussan Ley, has demanded that copyrighted material not be used without compensation.
Privacy
Hong Kong’s privacy regulator stresses current rules protect against deepfake pornography.
The head of Hong Kong’s privacy watchdog has said there is no immediate need to amend the law to specifically target the creation of AI-generated deepfake pornography, stressing that existing legislation is sufficient to handle offences.
Korea’s privacy regulator released guidance on the use of personal data in generative AI.
Personal Information Protection Commission Chairman Koh Hak-soo said, “We expect that this clear guide will resolve legal uncertainty in the field and systematically reflect the perspective of personal information protection in the development and use of generative artificial intelligence.” He added, “Going forward, the Personal Information Protection Commission will establish a policy foundation to ensure that the two values of ‘privacy’ and ‘innovation’ can coexist.”
Australia’s privacy regulator examines the case of a medical imaging company’s sharing of data with an AI company.
Ultimately, the Commissioner was satisfied that the patient data shared with Annalise.ai was de-identified sufficiently that it was no longer personal information for the purposes of the Privacy Act. The Commissioner therefore ceased the inquiries, but decided to publish this report in the public interest to inform the community of the outcome of the inquiries and as a case study of good privacy practice. It is still open to the Commissioner to commence an investigation of I-MED with respect to these or other practices, and this case study should not be taken as an endorsement of I-MED’s acts or practices or an assurance of their broader compliance with the APPs.
Australia’s privacy regulator issued its strategic plan for 2025-26, including on emerging tech issues.
The OAIC will protect and uphold privacy and information access rights when dealing with new and emerging technologies with high impact, including:
Facial recognition technology and forms of biometric scanning
new surveillance technologies such as location data tracking in apps, cars and other devices
the preservation of both privacy and information access rights in government use of artificial intelligence and automated decision making.
Korean court finds online service provider violated the privacy of chat program users by ingesting their conversations for AI training without consent.
On June 12, the Seoul Eastern District Court’s Civil Division partially ruled in favor of 246 plaintiffs who sued Scatter Lab, the developer of the AI chatbot Lee Luda, over personal data leaks. The court awarded damages ranging from 100,000 won ($72) to 400,000 won each, depending on the severity of the privacy violations.
The court found that 26 victims whose personal information had been exposed were entitled to 100,000 won for mental distress. Another 23 people whose sensitive personal information was leaked were granted 300,000 won. The court ordered 40 victims who suffered both types of breaches to receive 400,000 won each.
Finance
The Reserve Bank of India published its report on AI and finance.
The challenge with regulating AI is in striking the right balance, making sure that society stands to gain from what this technology has to offer, while mitigating its risks. Jurisdictions have adopted different approaches to AI policy and regulation based on their national priorities and institutional readiness. In the financial sector, AI has the potential to unlock new forms of customer engagement, enable alternate approaches to credit assessment, risk monitoring, fraud detection, and offer new supervisory tools. At the same time, increased adoption of AI could lead to new risks like bias and lack of explainability, as well as amplifying existing challenges to data protection, cybersecurity, among others.
Governance
Australia’s Digital Transformation Agency published rules on government use of AI.
The standard provides requirements and recommendations following three key phases of the AI system life cycle: Discover, Operate and Retire. The practices described at each phase of the lifecycle ensure the system is ethical, effective, and aligned with regulation from inception to decommissioning.
Under Discover, AI systems are conceptualised, designed, and prepared for deployment. The standard highlights the following elements for systems to meet quality thresholds.
Design: Define the system's purpose, objectives, and scope. This includes ethical risks, biases, fairness, government policies, human oversight and accountability structures.
Data: Identify the data make-up for building and using the system and ensure quality, privacy, and security measures are implemented. Apply governance practices to maintain compliance and manage AI bias.
Train: Create, adapt and select the AI’s algorithms and models, including their calibration, training, and context.
Evaluate: Evaluate the accuracy, reliability, and robustness of the AI. Conduct adversarial testing to identify risks and ensure compliance with guidelines.
Saudi Arabia launches national AI index to measure government readiness for AI.
The index is part of SDAIA's broader efforts as the national reference for data and AI in the Kingdom, overseeing their regulation, development, and application.
Australia published a report on cybersecurity issues raised by agentic AI.
The report, Risk analysis tools for governed LLM-based multi-agent systems, outlines failure modes that arise when multiple AI agents interact, including:
inconsistent performance of a single agent derailing complex processes
cascading communication breakdowns
shared blind spots and repeated mistakes
groupthink dynamics
coordination failures.
Trust, Safety and Human Rights
Australia’s Human Rights Commission calls for an AI act to prevent bias and discrimination.
The Human Rights Commission has consistently advocated for an AI act, bolstering existing legislation, including the Privacy Act, and rigorous testing for bias in AI tools. Finlay said the government should urgently establish new legislative guardrails.
“Bias testing and auditing, ensuring proper human oversight review, you [do] need those variety of different measures in place,” she said.
There is growing evidence that there is bias in AI tools in Australia and overseas, in areas such as medicine and job recruitment.
Cybersecurity
China’s TC260 published a standard, “Cybersecurity Technology — Artificial Intelligence Computing Platform Security Framework.”
China’s Cyber Administration is continuing a two-month “Clear and Bright” campaign covering AI-generated content and related social media information management.
Implement the important thoughts of General Secretary Xi Jinping on building a cyber power, and thoroughly implement the spirit of the 20th National Congress of the Communist Party of China and the Third Plenary Session of the 20th Central Committee of the Communist Party of China. Through special operations, we will focus on rectifying the chaos of "self-media" publishing false information, and severely crack down on prominent problems such as malicious speculation to mislead the public, distorting facts through various means, not marking to pass off fake information as real, and false information in professional fields. We will supervise website platforms to establish and improve functional mechanisms such as technical identification and discovery, information source labeling, and professional qualification certification, further standardize the operation of "self-media", and continuously create a clear and clean cyberspace.
China’s Cyber Administration called NVIDIA to discuss H20 security issues.
China’s cybersecurity regulator has summoned Nvidia representatives to discuss the security risks of artificial-intelligence chips it sells in China.
The Cyberspace Administration of China wants Nvidia to explain the “backdoor security risks” associated with its H20 chips sold in China and submit relevant documents, it said Thursday.
NVIDIA claims its H20 chips are not compromised, in rebuttal to the Chinese government claims (above).
In response, a Nvidia spokesperson told CNBC that “cybersecurity is critically important to us. NVIDIA does not have ‘backdoors’ in our chips that would give anyone a remote way to access or control them.”
Nvidia on Tuesday similarly rejected Chinese accusations that its AI chips include a hardware function that could remotely deactivate the chips, also known as a “kill switch.”
Multilateral
India will host the next AI Impact Summit in February 2026.
APEC Digital Ministers released a statement on AI and digital issues.
We recognize that as digital transformation advances, ensuring safety, security, accessibility, trustworthiness and reliability are key to realizing the benefits of digitalization for all. We emphasize the importance of developing robust policy and risk management strategies to protect business, individuals and workers from a range of digital threats and to strengthen trust and confidence in digital and AI ecosystems so that the opportunities offered by the latest ICT and digital technologies are fully leveraged. In line with these objectives, we support continued work among member economies to enhance trust, safety, fairness and confidence in the use of ICT and digital technologies such as AI, as well as the dissemination of knowledge and information. We will also continue our cooperation on facilitating the flow of data and strengthening consumer and business trust in digital transactions.
Additionally, APEC convened more than 200 policymakers, technologists and standards experts in Incheon for the APEC AI Standards Conference to drive convergence in AI governance and technical alignment across the region.
Participants discussed AI use cases across sectors and examined emerging frameworks to guide testing, conformity assessment and implementation and examined real-world applications of AI standards in areas such as ethics, human-AI interaction, risk management and environmental impact.
“Standardization is not just a technical process, it is a foundation for sustainable innovation and inclusive growth,” concluded Dr Kang Byung-Goo, Chair of the APEC Sub-Committee on Standards and Conformance (SCSC), which oversees the initiative.
News & Analysis
An NGO claims that recent research on domestic violence decisions by courts in several Pacific Island countries identifies bias in judicial decision-making that could be challenged and remediated with AI.
Over the past decade, the center has manually reviewed more than 6,000 sentencing decisions, of which approximately 3,000 fit the research methodology and form the study’s core dataset. This analysis uncovered judicial bias in 52% of cases—contributing to reduced accountability for perpetrators and retraumatization for survivors.
“Our data has informed judicial directives in Fiji and legislative reforms in the Solomon Islands and Vanuatu,” says Jyoti Diwan, ICAAD’s director of data analytics and insights. “We’ve also supported institutionalization of our methodology within women’s rights organizations, and developed the Pacific region’s only gender-based violence case law dataset, visualized through our TrackGBV Pacific Dashboard.”
A research firm published the State of AI Safety in Singapore report.
Domestic Approach
• Singapore relies on voluntary frameworks and targeted legislation instead of a broad or national AI-specific law. The Model AI Governance Framework, first issued in 2019 for traditional AI and updated in 2024 for generative AI, provides broad voluntary guidelines for industry, while legislation is targeted and focuses on specific AI risks, such as new penalties for AI-generated election deepfakes. There is no clear move toward a national AI law at the moment.
• Policy instruments emphasize downstream testing and assurance rather than model-level controls. Initiatives such as the “Starter Kit for Safety Testing of LLM Applications” and the “Global AI Assurance Sandbox” provide deployers with dedicated test cases and specific guidance on how to test different components of generative AI applications for safety risks. Because testing and evaluations are less well-explored at the application level than at the model level globally, Singapore’s focus on deployment testing positions the country to fill an important gap in global AI safety practice.
The Economist reports that in Southeast Asia, Singapore and Malaysia benefit the most from the AI race.
Of all the countries in South-East Asia, Singapore and Malaysia benefit most from the AI race. Singapore, with its well-governed, stable economy, has deftly handled its relationships with America and China. In 2023 it awarded four new data-centre tenders: two to American firms (Equinix and Microsoft) and two to Chinese ones (GDS Holdings and a group led by Bytedance). It now hosts 60% of South-East Asia’s data-centre capacity.
Advocacy
Indonesia is conducting a public consultation on its AI strategy until August 22, send comments here; docs here.
Australia’s Productivity Commission is seeking comments by Sept 15 on its AI inquiry, which includes numerous recommendations such as TDM exceptions to the Copyright Act for AI training.
India is calling for proposals for the next AI Impact Summit in 2026.
India’s Securities and Exchange Board (SEBI) conducted a public comment on AI and securities this month.
SEBI has prescribed reporting requirements for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used by Stock Exchanges, Clearing Corporations, Depositories, Intermediaries and Mutual Funds. The intent of the circulars was to create an inventory of the AI / ML landscape in the Indian financial markets to gain an in-depth understanding of the adoption of such technologies in the markets and to ensure preparedness for any AI / ML policies that may arise in the future.
Taiwan’s Fair Trade Commission has comments open for Gen AI and Competition rules until Sept 7.
The Paper specifies the market structure and market characteristics of generative AI, and describes the development of the hardware supply chain, model creation, and the deployment of AI applications in Taiwan. Additionally, the Paper outlines the four primary categories regulated by competition law, including unilaterally abusing market dominance, concerted action, market concentration, false advertising, and other unfair competition practices. Furthermore, the Paper examines the potential competition issues that may arise from generative AI, with specific questions listed in each section, in order to help the public focus on specific issues and provide responses and opinions.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.