#35 Asia AI Policy Monitor
NIST on China's DeepSeek; India Deepfake and Rights of Personality; Vietnam AI Act Draft; Australia Cyber Risk, Supply Chain & AI; India GenAI Content Comments
Thanks for reading this month’s newsletter along with over 2,000 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Events
Be sure to join us at Seattle AI Week on October 29 in Bellevue, WA, at 11:30am!
Join a conversation with the editor of the Asia AI Policy Monitor newsletter on the latest trends in legislation and regulation of AI in the Asia-Pacific region, along with other public policy professionals and interested stakeholders.
Register here! https://luma.com/cdrnidyq
Legislation
Vietnam’s Draft AI Act is under consultation.
In 2025, Vietnam's National Assembly released a comprehensive draft Artificial Intelligence Law (Luật Trí tuệ Nhân tạo), the first of its kind in Southeast Asia.
The draft law would establish a risk-based regulatory framework for AI systems, aligning with EU and OECD approaches, while emphasizing national sovereignty, ethical AI, and AI-driven economic transformation.
Four-tier risk model:
Banned: manipulation, social scoring, mass biometric surveillance, harmful deepfakes.
High-risk: healthcare, finance, education, justice, public services.
Medium & low risk: lighter oversight but must remain transparent.
Transparency & labeling: Required for all AI-generated or altered content.
National AI Commission: Chaired by the Prime Minister — coordinating strategy, oversight, and ethics.
AI Impact Assessments (AIIAs): Mandatory for high-risk and foundation models.
General-purpose AI oversight: Foundation and large models must document training data, safety, and IP compliance.
Innovation support:
Launch of National AI Development Fund and AI Voucher Program for SMEs.
Public–private partnerships and open-source contributions encouraged.
Infrastructure & sovereignty: National GPU cloud, AI data hubs, and local-data storage rules reinforce digital independence.
Ethical governance: A National AI Ethics Framework anchors fairness, transparency, and accountability.
Intellectual Property
Indian court issues a decision on AI deepfakes and the right of personality.
In a significant move against fake videos, the Delhi High Court directed Meta, Google, and others to remove such videos of journalist Sudhir Chaudhary. The Court noted the misinformation and harm that can be caused by false AI-generated content, and ordered the platforms to take quick action.
Australian publishers press government NOT to adopt proposed copyright exceptions for AI training.
The government must resist being “seduced” into watering down copyright laws concerning AI or risk losing Australia’s culture, according to a News Corp executive.
News Corp Australasia executive chairman Michael Miller warned of the devastating impact artificial intelligence had already had and would have on the creator industry.
“The tech revolution’s gold rush, its first ‘big steal’, was built on the free use of other people’s quality and trusted work, and that should never have been allowed to happen,” he said in his Melbourne Press Club address on Wednesday.
Further pushback from copyright holders in Australia continued at a Senate hearing.
At a hearing for the national cultural policy inquiry in Canberra, examining the impact of AI on Australia's creative landscape, senators lambasted the Productivity Commission and accused it of “waving the white flag” on protecting artists.
Liberal senator Sarah Henderson condemned the commission and accused it of “abandoning creative industries” by writing in its interim report that it is not “realistic” to stop Australian data being used to train generative AI models overseas.
“You, I would put to you, are waving the white flag rather than standing up for our creative industries; you’re saying very clearly it’s not realistic that you could stop this. Copyright in this country is worth protecting,” Henderson said.
“Where is the benefit to Australian artists in having their work scraped by AI?”
At the same time, AI companies encourage copyright changes for AI training in Australia.
Australia’s AI infrastructure opportunity does not hinge on copyright reforms to allow text and data mining for AI training, according to OpenAI’s vice-president of global affairs Chris Lehane.
Speaking at SXSW Sydney ahead of meetings with government officials as work to develop an AI capability plan gathers pace, Mr Lehane said the ChatGPT maker would be “in Australia one way or the other”.
But he warned that countries that are unwilling to make changes risk sacrificing the opportunity to build cutting-edge frontier models and would ultimately have to settle for fine-tuned versions of existing models.
Competition
The Competition Commission of India (CCI) published a report on AI.
Competition law remains a key instrument for addressing AI-driven anticompetitive practices, and global regulatory responses to AI-driven competition issues are evolving to meet emerging and potential challenges. Because competition law is sector- and technology-agnostic, it can address several AI-driven anti-competitive practices. India has adopted a balanced and forward-looking strategy, reflected in the Competition (Amendment) Act, 2023, which introduces provisions for hub-and-spoke cartels, deal-value thresholds, and settlement and commitment mechanisms, thereby enabling the CCI to address new-age market challenges effectively. Complementary efforts such as the Digital Personal Data Protection Act (DPDPA), 2023 and MeitY's other initiatives reflect India's approach of blending legal reforms, policy initiatives, and stakeholder involvement to build globally competitive and vibrant digital markets, including those powered by AI.
Multilateral
The US Center for AI Standards and Innovation (CAISI), housed within NIST, released an evaluation of China's DeepSeek models, including the R1 model that made headlines earlier this year.
Performance: DeepSeek lags behind the best U.S. reference models. The best U.S. model outperforms the best DeepSeek model (DeepSeek V3.1) across almost every benchmark, and the gap is largest for software engineering and cyber tasks, where the best U.S. model solves 20-80% more tasks. V3.1, DeepSeek's most recent model, nevertheless outperforms DeepSeek's earlier R1 models.
Cost: DeepSeek models cost more to use than comparable U.S. models. One U.S. reference model cost 35% less on average than the best DeepSeek model to perform at a similar level across all 13 performance benchmarks tested.
Agent hijacking: DeepSeek models are far more susceptible to agent hijacking attacks than frontier U.S. models. Agents based on DeepSeek's most secure model (R1-0528) were, on average, 12 times likelier than evaluated U.S. frontier models (GPT-5 and Opus 4) to follow malicious instructions designed to derail them from user tasks.
Jailbreaking: DeepSeek models are far more susceptible to jailbreaking attacks than U.S. models. DeepSeek's most secure model (R1-0528) complied with 94% of overtly malicious requests that used common jailbreaking techniques, compared with 8% for U.S. reference models.
Censorship: DeepSeek models advance Chinese Communist Party (CCP) narratives much more frequently than U.S. models. On a dataset of politically sensitive questions for the CCP, DeepSeek models echoed, on average, four times as many inaccurate and misleading CCP narratives as U.S. reference models.
Adoption: The release of DeepSeek R1 has driven adoption of PRC models across the AI ecosystem. Downloads of DeepSeek models on model-sharing platforms have increased nearly 1,000% since January.
Governance
Australia published a recommended AI policy template for organisations to follow.
The template sets out a recommended structure and includes example content to help you get started.
You should:
align the statements with your organisation’s core values and mission
define roles and responsibilities that match your existing organisational and governance structures
modify terminology to align with your organisation’s internal language
seek feedback to ensure the AI policy is fit for purpose.
China's Cyberspace Administration issued guidance on government use of AI systems in public services.
…fully implement General Secretary Xi Jinping's important thinking on building China into a cyber power; fully and accurately apply the new development philosophy; coordinate high-quality development and high-level security; adhere to systematic planning and intensive development; put people first, standardize applications, build and share together, collaborate efficiently, act safely and prudently, and strive for practical results; promote in an orderly way the deployment, application, and continuous optimization of artificial intelligence large-model technologies, products, and services in the government sector; give full play to the advantages of large models in complex semantic understanding and reasoning, multimodal content generation, and knowledge integration and analysis; provide efficient assistance to staff and convenient services to the public and enterprises; promote innovation and development in government affairs; improve governance efficiency; optimize service management; and support scientific decision-making.
Cybersecurity, Trust & Safety
India’s MeitY published draft guidelines on synthetic AI content.
Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods, depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud. Globally and domestically, policymakers are increasingly concerned about fabricated or synthetic images, videos, and audio clips (commonly known as deepfakes) that are indistinguishable from real content and are being blatantly used to:
Produce non-consensual intimate or obscene imagery;
Mislead the public with fabricated political or news content;
Commit fraud or impersonation for financial gain; and
Undermine trust in legitimate information ecosystems.
Australia published guidance on AI and supply chain risk.
Artificial intelligence (AI) and machine learning (ML) systems allow organisations to improve their efficiency in many areas. These systems can help inform decisions, streamline processes and improve customer experience. For an explanation of the AI-related terminology used in this guidance, refer to the Australian Signals Directorate’s (ASD) publication, Convoluted layers: An artificial intelligence primer.
Adopting AI or ML systems also brings unique supply chain risks, which can threaten the cyber security of an organisation if not securely managed. The use of pre-trained open-source models and datasets from public websites can make these risks more pronounced.
This guidance is intended for organisations and staff that deploy or develop AI or ML systems and components. This could range from entirely outsourcing an AI system where an organisation only provides the training data, to in-house AI development. This guidance aims to:
highlight the importance of AI and ML supply chain security
address key risks and mitigations that should be considered when developing or procuring an AI system.
Education
An Australian university uses AI to wrongly accuse students of using AI.
It took six months for Australian Catholic University (ACU) to clear the 22-year-old of any wrongdoing, but by that point she believes the damage was done.
While the university investigated, Madeleine’s academic transcript was marked “results withheld”.
Madeleine is convinced that incomplete document was part of the reason she was not offered a graduate position.
“It was really difficult to then get into the field as a nurse because most places require you to have a grad year,” she said.
“I didn’t know what to do. Do I go back and study? Do I just give up and do something that’s not nursing in a hospital?”
Environment
Singapore signed power agreements with Malaysia for up to 3GW of clean energy to meet rising electricity demand from data centers.
Singapore has signed two cross-border power supply agreements with its neighbour Malaysia that could give it access to up to 3 gigawatts of low-carbon generation capacity, according to joint statements issued on Friday.
Singapore has granted conditional approval to Sembcorp Utilities Pte Ltd, in partnership with Malaysia’s Sarawak Energy Berhad, to import around 1 GW of low-carbon electricity from the state of Sarawak, according to Singapore’s Ministry of Trade and Industry and Malaysia’s Energy Transition and Water Transformation Ministry.
Labor
Indian AI firms disrupt call center job market.
India bets AI will create enough new opportunities to offset job losses
AI tools supplant jobs built on routine tasks in call centers, customer service
IT training centers shift focus to AI skills amid rising demand
In the News & Analysis
The Observer Research Foundation (ORF) publishes recommendations for a US-India AI Taskforce.
The AI ecosystems of the U.S. and India have several complementary strengths. Building on them, the U.S.–India Taskforce on AI will advance AI innovation and adoption through a series of targeted actions over its two-year term. Key short-term objectives include promoting and advocating for:
Upskilling the AI workforce by expanding education and training programs to equip communities with the skills to thrive in the new AI economy.
Support for AI startups in both countries, building and reinforcing bilateral bridges between investor and entrepreneur communities.
Bilateral data-sharing arrangements, particularly in the health and fintech sectors as initial areas of focus.
AI best practices, voluntary standards, and implementation tools that underpin trusted cross-border AI adoption.
In the longer term, the Taskforce could also consider promoting and advocating for:
Enhanced collaboration on responsible AI for defense, leveraging existing partnerships to advance research.
Industrial automation with AI to improve modernization and efficiency as well as enhance sustainability in manufacturing.
AI connectivity and telecommunication partnerships to secure 5G and accelerate 6G R&D to support future technological advancements.
Energy solutions to enable AI transformation, driving innovation through the development of AI energy efficiency metrics.
Rest of World covers childhood and AI in China.
Adoption of AI models in education and tutoring has been especially fast, as the Chinese government pushes to accelerate the country’s technological progress against the U.S., and with anxious parents willing to try anything to help their children succeed.
Advocacy
Vietnam's draft AI Act is under consultation until October 20.
Indonesia conducted consultations on its AI Road Map and Ethics Guide. Views can be sent to kerjal.aikita@mail.komdigi.go.id and the documents can be found here.
The public consultation on the White Paper on the National Artificial Intelligence Roadmap and the Draft Guidelines for Artificial Intelligence Ethics is intended to gather responses and input from relevant stakeholders to enrich both documents, so that a comprehensive and accurate study is produced to support artificial intelligence in Indonesia.
India’s MeitY is taking comments on rules for synthetic AI content.
Feedback and comments on the draft rules may be submitted on a rule-by-rule basis by email to itrules.consultation@meity.gov.in, in MS Word or PDF format, by 6 November 2025.
India is calling for proposals for the next AI Impact Summit in 2026.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.



