#34 Asia AI Policy Monitor
China AI Safety Framework 2.0; US Film Studios Sue Chinese AI firm; Asian voices on AI at the UN General Assembly & more!
Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Events
Be sure to join us in Seattle at AI Week next month!
Join a conversation with the editor of the Asia AI Policy Monitor newsletter on the latest trends in legislation and regulation of AI in the Asia-Pacific region, along with other public policy professionals, and interested stakeholders.
Register here! https://luma.com/cdrnidyq
Intellectual Property
Indian movie stars threaten legal action against video platforms over AI content that infringes their personality rights.
In India, Bollywood stars are asking judges to protect their voice and persona in the era of artificial intelligence. One famous couple’s biggest target is Google’s video arm YouTube.
Abhishek Bachchan and his wife Aishwarya Rai Bachchan, known for her iconic Cannes Film Festival red carpet appearances, have asked a judge to remove and prohibit creation of AI videos infringing their intellectual property rights. But in a more far-reaching request, they also want Google ordered to have safeguards to ensure such YouTube videos uploaded anyway do not train other AI platforms, legal papers reviewed by Reuters show.
Tech majors urge Australia to relax copyright rules to aid AI.
In its submission, Google warned AI companies may divert major resources from Australia to other markets in the Asia-Pacific if the government does not relax copyright laws to make it easier for them to train their models.
It said regulatory uncertainty in Australia over copyright, tax and other laws could see businesses “operating at the cutting edge of AI” hesitate to commit significant resources to the country.
US film majors sue a Chinese AI firm (with Singaporean subsidiary) for copyright infringement in California court.
Privacy
Privacy regulators from Australia, New Zealand, Hong Kong, Macao and South Korea joined counterparts from around the world to issue a joint statement on building trustworthy data governance frameworks that encourage the development of innovative and privacy-protective AI.
Highlighting data protection authorities’ leading role in shaping data governance to address AI’s evolving challenges, we commit to the following:
1. To foster our shared understanding of lawful grounds for processing data in the context of AI training in our respective jurisdictions. Clear standards and requirements should be developed to ensure that AI training data is processed lawfully, whether based on consent, contractual necessity, legitimate interest, or other legal justifications. In doing so, attention should be paid to various relevant factors, including the specific purposes of AI development, the characteristics of the requisite data, the reasonable expectation of data subjects, and associated risk mitigation strategies.
2. To exchange information and establish a shared understanding of proportionate safety measures based on rigorous scientific and evidence-based assessments and tailored to diversity of use cases. The relevance of these measures should be regularly updated to keep pace with evolving AI data processing technologies and practices.
3. To continuously monitor both the technical and societal implications of AI and to leverage the expertise and experience of Data Protection Authorities and other relevant entities, including NGOs, public authorities, academia, and businesses, in AI-related policy matters when possible.
4. To reduce legal uncertainties and secure space for innovation where data processing is essential for the development and deployment of AI. This may include institutional measures, such as regulatory sandboxes, as well as tools for sharing best practices…
Governance
South Korea’s President argues for easing AI regulations.
Business leaders at the meeting called for deregulation in artificial intelligence (AI), autonomous driving, and robotics industries, to which the president expressed general agreement. For example, autonomous vehicle companies must blur pedestrians' faces in street view data before using it to train AI. The president said, “What's wrong with AI seeing faces? We all walk the streets seeing others' faces,” and added, “Isn't prohibiting the use of original footage for training—because of the risk of misuse—like saying, 'Since maggots might form, let's destroy the jar and just buy food instead?'”
On using public data for AI training, the president said, “In principle, public assets created with taxpayers’ money should be disclosed as much as possible.” He continued, “National exam questions and answers are already publicly available. Guidelines restricting their use should be revised to allow it.” Current laws require explicit consent from copyright holders.
Cybersecurity
China’s TC260 standards-setting body issued guidance on cybersecurity for genAI pre-training data, to be implemented in November.
The national standard “Security Specification for Generative Artificial Intelligence Pre-training and Optimization Training Data for Cybersecurity Technology” is under the jurisdiction of TC260 (National Technical Committee for Cybersecurity Standardization), and the competent authority is the National Standards Committee.
The G7 issued a statement on cybersecurity and AI.
Implications for Safety, Soundness, Compliance, and Supervision
AI’s potential to both mitigate and amplify cyber risks directly affects regulated firms and supervisory authorities.
• Operational Risk and Resilience: Adversarial AI may increase exposure to outages, data breaches, and fraud.
• Human Oversight: Weak human oversight may delay incident detection or response.
• Model Risk: Poorly trained or governed AI models may behave unpredictably or degrade over time.
• Supply Chain Risk: AI systems often rely on third-party libraries, datasets, or cloud services. If these are compromised, they can introduce backdoors or vulnerabilities into cybersecurity defenses, amplifying risks across interconnected systems.
• AI Literacy: Lack of institutional expertise can compromise effective deployment and oversight.
Google genAI tools fuel disinformation in India.
With Google’s new Nano Banana editor, you can drape yourself in a vintage saree, turn into a pocket-sized 3D figurine, or even hug your younger self. The same tool can also convert an Indian politician’s saree into hijab, or conjure George Soros into photos of political leaders. In India’s hyper-polarised social feeds, that shift from fun to dangerous disinformation takes a single prompt.
China’s cybersecurity standards committee adopted rules on genAI security responses.
This Practice Guide describes the classification and grading methods for security incidents of generative AI services, as well as the management measures and technical methods for the security emergency response process of generative AI services. It is applicable to generative AI service providers and relevant departments in carrying out security emergency response activities.
China bans use of Nvidia AI chips.
Meanwhile, in response to China's chip ban, Nvidia's CEO and founder, Jensen Huang, said during a press conference in the U.K. that he was disappointed by the development. The news caused the vendor's share price to drop by nearly 3%.
Some see the move by China's internal regulatory agency as either strategic or a bargaining tool.
China also released its AI Security Governance Framework 2.0.
Officials at the National Internet Emergency Center stated that the release of version 2.0 of the Framework aligns with the global trend of AI development, coordinates technological innovation with governance practices, and continuously deepens consensus on AI security, ethics, and governance. It promotes the formation of a secure, trustworthy, and controllable AI development ecosystem, and builds a collaborative governance framework across borders, fields, and industries. It will also help advance AI security governance cooperation under multilateral mechanisms, promote the universal sharing of technological achievements worldwide, and ensure that human society shares the benefits of AI development.
Trust & Safety
Australia’s federal court issues a decision in a case brought by the e-Safety Commissioner regarding deepfake platforms.
On 15 November 2022, the respondent contravened s 75 of the Online Safety Act 2021 (Cth) by posting on the [named website] a moving visual image, that appeared to be Depicted Person 1, without Depicted Person 1’s consent, depicting the person’s genital area and breasts and engaged in a sexual act of a kind not ordinarily done in public in circumstances in which an ordinary reasonable person would reasonably expect to be afforded privacy.
Hong Kong may adopt deepfake porn laws.
Speaking to select media on Monday, Tang said that the Security Bureau would launch a public consultation next year, aiming to pass legislative amendments by the end of the current administration’s term in June 2027.
UNGA80 - Asian Perspectives on AI
The Geneva Internet Platform has provided great reporting on countries’ focus on technology at the UN General Assembly; we repeat some of their reporting from Asian countries here:
AI and other technologies should adhere to the principles of people-centred development, technology for good and equitable benefits, and require improving relevant governance rules and strengthening global governance cooperation. (China)
Support is expressed for efforts to develop a governance framework to manage responsible use of AI for development. (Solomon Islands)
AI’s transformative force can aid conflict prevention, peacekeeping, and humanitarian actions, but early, constructive, and inclusive multilateral engagement is essential. However, AI requires guardrails so that it can be harnessed responsibly. (Singapore)
In the News & Analysis
A new Japanese political party will incorporate an AI leader to help make decisions for the party.
Details about the AI are yet to be decided, including when and how it will be implemented, said the 25-year-old student at Kyoto University, who will nominally be the party’s leader.
The AI will not dictate political activities of party members but will focus on decisions such as distribution of its resources among members, for example, said Okumura, who recently won a party contest to succeed Ishimaru.
CSET published a finding on international efforts for AI governance, including from Asia, specifically Japan and Singapore.
[AI summary]
It includes Japan’s Ministry of Internal Affairs and Communications and METI’s “AI Guidelines for Business” (2024) as one of the frameworks in its harmonization analysis.
It also includes Singapore’s PDPC “Model AI Governance Framework (Second Edition)” (a key ASEAN-referenced document) in its dataset of 52 guidance documents.
Nature publishes a peer-reviewed paper on China’s DeepSeek R1 model, concluding that no information from OpenAI was used to train it, and that training cost a modest quarter million USD.
General reasoning represents a long-standing and formidable challenge in artificial intelligence (AI). Recent breakthroughs, exemplified by large language models (LLMs) and chain-of-thought (CoT) prompting, have achieved considerable success on foundational reasoning tasks. However, this success is heavily contingent on extensive human-annotated demonstrations and the capabilities of models are still insufficient for more complex problems. Here we show that the reasoning abilities of LLMs can be incentivized through pure reinforcement learning (RL), obviating the need for human-labelled reasoning trajectories. The proposed RL framework facilitates the emergent development of advanced reasoning patterns, such as self-reflection, verification and dynamic strategy adaptation. Consequently, the trained model achieves superior performance on verifiable tasks such as mathematics, coding competitions and STEM fields, surpassing its counterparts trained through conventional supervised learning on human demonstrations. Moreover, the emergent reasoning patterns exhibited by these large-scale models can be systematically used to guide and enhance the reasoning capabilities of smaller models.
Rest of World reports on China’s push for data center dominance in the rest of the world, despite bans in the US and Europe. Our editor reviewed these cloud policies in Asia earlier this year for the Center for European Policy Analysis.
Barred from many Western markets by U.S.-led restrictions, Huawei is doubling down on developing countries for its cloud and artificial intelligence businesses. The move has put Huawei on a collision course with American tech firms as the Trump administration steps up efforts to limit Chinese influence in the AI industry.
Advocacy
Singapore opened a public consultation on the use of genAI in the legal profession until Sept 30.
GenAI presents significant opportunities for legal professionals to enhance productivity and service delivery, offering new approaches to traditional workflows and service delivery models. However, many legal professionals have expressed uncertainty about adopting GenAI due to concerns over hallucinations, client confidentiality, and the lack of technical expertise to implement and manage associated risks.
Indonesia conducted consultations on its AI Road Map and Ethics Guide. Views can be sent here: kerjal.aikita@mail.komdigi.go.id and the documents found here.
The Public Consultation of the White Paper on the National Artificial Intelligence Roadmap and the Draft Guidelines for Artificial Intelligence Ethics is intended to obtain responses and input from relevant stakeholders to enrich the material of the White Paper on the Artificial Intelligence Roadmap and the Draft Guidelines for Artificial Intelligence Ethics, so that a comprehensive and accurate study is produced to support Artificial Intelligence in Indonesia.
India is calling for proposals for the next AI Impact Summit in 2026.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.