#41 Asia AI Policy Monitor
🇰🇷 Scraping ruled copyright infringement | 🇮🇳 Copyright licensing overhaul | 🇸🇬 Agentic AI framework | 🇵🇭 xAI investigation | 🇭🇰 Grok privacy probe | 🇨🇳 Deep-synthesis crackdown | 🇲🇾 ASEAN AI safety
Thanks for reading this month’s newsletter along with over 2,200 other AI policy professionals!
Do not hesitate to contact our editor if we missed any news on Asia’s AI policy at seth@apacgates.com.
Intellectual Property
A South Korean court ruled that scraping a real estate database constitutes copyright infringement, implicating certain scraping practices used by AI companies.
A South Korean court has ruled that unauthorized crawling (data extraction) of Naver's real estate database (DB) constitutes copyright infringement. The judgment clarifies that even if a party did not create the original data, the organization and processing of data with significant labor and cost can be protected under copyright law.
India published a proposal for a licensing scheme between copyright holders and the AI industry for training data.
As an alternative, the Committee proposes a hybrid model under which:
AI developers receive a blanket licence for the use of all lawfully accessed content for training purposes, without requiring individual negotiations;
Royalties become payable only upon commercialisation of the AI tools, with rates set by a government-appointed committee and subject to judicial review.
A centralised mechanism handles royalty collection and distribution, aiming to reduce transaction costs, provide legal certainty, and support equitable access for both large and small AI developers.
Privacy
Australia’s privacy regulator advised the government on best practices in automated decision-making (ADM).
Findings from the Commissioner’s review of the 23 agencies include:
The use of ADM is permitted under legislation for all agencies.
All agencies publish IPS-related information on their websites.
4 agencies (17%) disclosed the use of ADM in decision-making in their IPS.
2 agencies (9%) were identified as ‘likely to be using ADM’ via external sources but had not disclosed use in their IPS information.
Whether or not ADM was in use by the remaining 17 agencies (74%) could not be confirmed using external sources or IPS information.
Legislation
Kazakhstan passed its Digital Code, covering AI.
The Code is a direct response to the rapid evolution of technologies such as artificial intelligence (AI), big data, and blockchain, which had previously outpaced the existing sectoral regulations. By creating a holistic framework, Kazakhstan aims to position itself as a leading digital hub in Central Asia and the broader Eurasian region, aligning its domestic standards with international best practices like those of the OECD and the European Union.
Governance
The South Korean government published protections for youth and women against digital sex crimes, including emerging AI use cases.
They plan to actively cooperate in establishing laws and systems that enable support for victims related to this, along with technical and managerial protective measures to block the creation and distribution of deepfake sexual crime materials that exploit AI.
The Philippine government opened an investigation into xAI over illicit genAI images.
In compliance with President Ferdinand Marcos Jr.’s order to ensure a safe digital space for every Filipino, the DICT reported that an inter-agency technical assessment, led by the Cybercrime Investigation and Coordinating Center (CICC) and the National Telecommunications Commission (NTC), in coordination with relevant government agencies, found that Grok had previously been associated with the generation of non-consensual, sexually explicit, and manipulated images, which prompted regulatory actions in several other jurisdictions including Indonesia and Malaysia. These concerns raised serious implications for digital safety, privacy, and the protection of vulnerable sectors, especially women and children.
Singapore published its model AI governance framework for agentic AI.
The Model AI Governance Framework (MGF) for Agentic AI gives organisations a structured overview of the risks of agentic AI and emerging best practices in managing these risks. If risks are properly managed, organisations can adopt agentic AI with greater confidence. The MGF is targeted at organisations looking to deploy agentic AI, whether by developing AI agents in-house or using third-party agentic solutions. Building on our previous model governance frameworks, we have outlined key considerations for organisations in four areas when it comes to agents…
Hong Kong’s privacy regulator is investigating xAI’s Grok over indecent materials.
The Office of the Privacy Commissioner for Personal Data (PCPD) noted that artificial intelligence (AI) chatbot Grok can be used to generate indecent or malicious photos and videos. This issue has raised concerns in various jurisdictions. The PCPD is also concerned about the matter and is proactively contacting the relevant organisation to understand the situation.
The PCPD reminds members of the public that when providing personal data to AI chatbots to generate AI content, they must comply with the requirements of the Personal Data (Privacy) Ordinance (PDPO) and the relevant Data Protection Principles. Improper or malicious use of AI chatbots to generate indecent or malicious photos or videos may contravene the requirements of the PDPO and may constitute other criminal offences.
South Korea has requested that X set child-safety guardrails for its AI chat service Grok.
The Korean government has requested X (formerly Twitter), which provides the artificial intelligence chatbot Grok service worldwide, to establish safety measures to protect youth. The Korea Communications Commission (Chairman Kim Jong-cheol) announced on the 14th that it made the request due to growing social concerns over the recent spread of sexually exploitative material and non-consensual sexual images through social media.
Last year, Australia’s Productivity Commission published its report on the impact of digital technologies, including AI.
Emerging technologies like artificial intelligence (AI) could transform the global economy and speed up productivity growth. The Productivity Commission considers that multifactor productivity gains above 2.3%, and labour productivity growth of about 4.3%, are likely over the next decade, although there is considerable uncertainty. But poorly designed regulation could stifle the adoption and development of AI. Australian governments should take an outcomes-based approach to AI regulation – using our existing laws and regulatory structures to minimise harms (which the Australian Government has committed to do in its National AI Plan) and introducing technology-specific regulations only as a last resort.
China’s CAC published its latest list of deep-synthesis providers.
Article 19 of the “Regulations on the Administration of Deep Synthesis in Internet Information Services” clearly stipulates that deep synthesis service providers with public opinion attributes or social mobilization capabilities shall complete the filing, modification, and cancellation procedures in accordance with the “Regulations on the Administration of Algorithm Recommendation for Internet Information Services.” Deep synthesis service technology supporters shall also follow these procedures. Deep synthesis service providers and technology supporters who have not yet completed the filing procedures are urged to apply for filing as soon as possible.
Multilateral
Italy and South Korea sign an agreement on technology, including on AI.
To this end, they commended the results of the ROK-Italy Business Forum, jointly organized by the two sides in Seoul on 5 September 2025, focusing on four key domains: advanced industries (including artificial intelligence and industrial automation), energy transition and circular economy, infrastructure and transportation (including aerospace and automotive), and bio industry.
ASEAN’s AI Safety Network will be based out of Malaysia.
The secretariat for the Asean Artificial Intelligence (AI) Safety Network will be based in Kuala Lumpur, says digital minister Gobind Singh Deo.
He said this will reaffirm Malaysia’s commitment to supporting the region’s digital priorities and building trusted regional digital governance.
“The Asean AI Safety Network will serve as a regional platform to strengthen cooperation on capacity-building, regulatory preparedness and safeguard measures, ensuring that innovation in AI continues to advance while risks and misuse are effectively addressed,” he said in a statement, Bernama reported.
China is investigating the purchase of a major Chinese AI firm, Manus, by the US tech giant Meta.
The Chinese government consistently supports enterprises in conducting mutually beneficial transnational operations and international technological cooperation in accordance with laws and regulations. It should be noted that enterprises engaging in overseas investment, technology export, data transfer, cross-border mergers and acquisitions must comply with Chinese laws and regulations and follow legal procedures. The Ministry of Commerce, together with relevant departments, will conduct an assessment and investigation into the consistency of this acquisition with relevant laws and regulations concerning export controls, technology import and export, and overseas investment.
New Zealand and Australia issued guidance for small businesses to protect against AI security risks.
More small businesses are using AI through applications, websites and enterprise systems hosted in the public cloud, such as OpenAI’s ChatGPT, Google Gemini, Anthropic’s Claude, and Microsoft Copilot. According to data from the Department of Industry, Science and Resources (DISR), AI adoption in Australia is rising every year.
Cloud-based AI gives affordable access to advanced tools without heavy upfront investment. These tools help automate tasks, provide insights and improve customer experience.
As AI becomes part of small business operations, understanding the related cyber security risks is essential. Small businesses must take proactive steps to protect data, customer privacy and business systems. Having strong cyber security practices is crucial to reducing risks in an evolving and complex emerging technology space.
This guidance – authored by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) in collaboration with the New Zealand National Cyber Security Centre (NCSC-NZ) and the Council of Small Business Organisations Australia (COSBOA) – explains the key cyber security risks of small business adopting cloud-based AI technologies and how to mitigate them. While traditional threats such as phishing, ransomware and insider threats are still relevant, this guide focuses on important cyber security risks related to AI.
In the News & Analysis
The Future of Free Speech published a report on free speech protection in LLMs, ranking Korea highest in Asia, and China lowest in the world.
By contrast, China was the weakest performer, with a regulatory framework that amounts to a state-imposed regime of strict control over AI-generated content. These measures impose ideological, technical, and political constraints, requiring AI systems to conform to “socialist core values,” censorship norms, and national security priorities through anticipatory censorship and political oversight.
The Republic of Korea ranks fourth in our assessment. It has fallen behind other developed countries in protecting freedom of expression, a trend that extends into the AI context. The strict application of defamation laws has curtailed online speech, including AI-generated content. The new AI Basic Act, modeled after the EU’s, aims to balance regulation and risk but does not always succeed in practice.
India ranked fifth. In the absence of a dedicated AI law, generative AI is governed through existing legislation. While the current framework promotes access and participation, it also risks over-removal of lawful speech, selective enforcement against alleged harmful content, and fragmented protections. India’s case highlights both the challenges and opportunities of aligning national priorities with a human rights baseline.
Rest of World reports that India is encouraging more licensing deals for AI training of copyrighted content.
With the world’s largest population, India has leverage that few other countries have. It is the second-biggest market for OpenAI’s ChatGPT after the U.S. It is one of the fastest-growing markets for Perplexity’s AI search engine, and the largest user base for WhatsApp and Facebook, where Meta is rolling out its AI tools. Microsoft, Google, and Amazon recently announced some $67 billion in AI infrastructure investments in the country.
India is therefore justified in demanding payment for its copyrighted data. Tech companies “will have to fit those payments into their deployment models — or give up this massive, lucrative market, and all of the scale advantages that being part of it confers,” James Grimmelmann, a professor of digital and information law at Cornell University, told Rest of World.
India’s linguistic diversity is another reason why AI companies need to treat the country differently, Grimmelmann said. The government is keen to develop multilingual large language models that can cater to the specific needs of businesses and individuals, which means companies need local data that belongs to local creators.
IAPP provides a view into the preparations for the upcoming India AI Impact Summit.
AI investors are going to want to see real value out of AI sooner rather than later. As a result, we are starting to see indications that 2026 may finally be the year of more-nuanced-than-general AI. While general AI platforms will still be an important part of the AI development stack, creating specific applications for easy AI adoption will likely gain more traction. We saw this earlier this month at the Consumer Electronics Show, where NVIDIA and OpenAI announced the development of 13 new models. Each model focuses on different uses, ranging from self-driving vehicles to improved health care to advanced speech recognition….
Undergirding the foundations of the AI Impact Summit, India has emphasized the importance of discussing the impact AI will have on the Global South. While bias has long been a known risk for the AI community, these flags have often been raised about the harm brought to an individual or a group of individuals.
Advocacy
China has a consultation open until Jan 25 on anthropomorphic AI.
Uzbekistan issued a public consultation on AI ethics guidelines.
Rights and obligations of developers and implementers of artificial intelligence systems
Developers and implementers of AI systems have the following rights in accordance with current legislative acts:
protect their intellectual property in accordance with the procedure established by law;
patent innovative technologies and algorithms;
work under fair wages and decent working conditions;
Singapore’s Monetary Authority opened a consultation on AI and Risk Management until Jan 31.
The Monetary Authority of Singapore (MAS) is proposing to introduce Guidelines on Artificial Intelligence (AI) Risk Management (the “Guidelines”) to enhance management of AI risks in financial institutions (FIs), and set out MAS’ supervisory expectations relating to AI risk management in the financial sector. The Guidelines focus on oversight of AI risk management in FIs, key AI risk management systems, policies and procedures, key AI life cycle controls, as well as capabilities and capacity needed for the use of AI.
UN’s WSIS+20 UNGA side events are open for submission of ideas.
The OECD is opening consultations on Global AI Governance.
Your contributions will inform global AI strategies and policies for AI in government through OECD reports and knowledge products, with top submissions featured in the OECD.AI Policy Observatory through a new repository on AI in Government.
📥 Submit here by 27 February: https://oecd.ai/wonk/call-ai-in-gov
China’s TC260 opened a consultation on cybersecurity practice guidelines, including standards for cleansing AI training data.
To effectively address the new risks and challenges brought about by the rapid development and application of artificial intelligence technology, comprehensively improve the security level of artificial intelligence applications in various industries, and ensure the high-quality development of artificial intelligence, the Secretariat has organized the compilation of four draft guidelines for cybersecurity standards and practices, including the “General Principles of Artificial Intelligence Application Security Guidelines,” the “Artificial Intelligence Application Security Guidelines for Broadcasting, Television and Online Audiovisual,” the “User Security Guidelines for Using Artificial Intelligence,” and the “Security Guidelines for Cleaning Artificial Intelligence Training Data.”
The Asia AI Policy Monitor is the monthly newsletter for Digital Governance Asia, a non-profit organization with staff in Taipei and Seattle. If you are interested in contributing news, analysis, or participating in advocacy to promote Asia’s rights-promoting innovation in AI, please reach out to our secretariat staff at APAC GATES or Seth Hays at seth@apacgates.com.