#12 October 2 of 2
Privacy, Safety, Education, Finance, Environment in AI policy in Australia, China, Korea, Singapore, India, New Zealand
Thanks for reading this month’s newsletter along with over 1,200 other AI policy professionals globally across multiple platforms. If we missed any big news in Asia’s AI policy, do not hesitate to contact our editors at seth@apacgates.com!
Privacy
Australia’s privacy regulator (OAIC) published guidance on privacy and training generative AI models, and guidance on privacy and the use of commercially available AI tools. Top 5 recommendations on training genAI models (a minimal data-screening sketch follows the list):
Developers must take reasonable steps to ensure accuracy in generative AI models, commensurate with the likely increased level of risk in an AI context, including through using high quality datasets and undertaking appropriate testing. The use of disclaimers to signal where AI models may require careful consideration and additional safeguards for certain high privacy risk uses may be appropriate.
Just because data is publicly available or otherwise accessible does not mean it can legally be used to train or fine-tune generative AI models or systems. Developers must consider whether data they intend to use or collect (including publicly available data) contains personal information, and comply with their privacy obligations. Developers may need to take additional steps (e.g. deleting information) to ensure they are complying with their privacy obligations.
Developers must take particular care with sensitive information, which generally requires consent to be collected. Many photographs or recordings of individuals (including artificially generated ones) contain sensitive information and therefore may not be able to be scraped from the web or collected from a third party dataset without establishing consent.
Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this was not a primary purpose of collection, they need to carefully consider their privacy obligations. If they do not have consent for a secondary, AI-related purpose, they must be able to establish that this secondary use would be reasonably expected by the individual, taking particular account of their expectations at the time of collection, and that it is related (or directly related, for sensitive information) to the primary purpose or purposes (or another exception applies).
Where a developer cannot clearly establish that a secondary use for an AI-related purpose was within reasonable expectations and related to a primary purpose, to avoid regulatory risk they should seek consent for that use and/or offer individuals a meaningful and informed ability to opt-out of such a use.
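Purely to illustrate the kind of “additional steps” the OAIC describes, such as deleting personal information from a dataset before use, here is a minimal, hypothetical Python sketch of screening a training corpus. The patterns and function names are invented for illustration; real compliance work would need far more than regex matching (e.g. named-entity recognition, provenance checks and human review).

```python
import re

# Hypothetical sketch only: screen a training corpus for obvious personal
# information before use, holding flagged records for human review.
# The patterns below are illustrative and far from exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
AU_PHONE = re.compile(r"(?:\+61|\b0)[23478]\d{8}\b")  # common Australian formats

def likely_contains_personal_info(text: str) -> bool:
    """Crude heuristic: flag records containing an email address or phone number."""
    return bool(EMAIL.search(text) or AU_PHONE.search(text))

def screen_corpus(records: list[str]) -> tuple[list[str], list[str]]:
    """Split a corpus into (retained, held_for_review) before training."""
    retained, held = [], []
    for record in records:
        (held if likely_contains_personal_info(record) else retained).append(record)
    return retained, held

if __name__ == "__main__":
    corpus = [
        "Solar generation rose 12% year on year.",
        "Contact Jane at jane.doe@example.com or 0412345678.",
    ]
    keep, review = screen_corpus(corpus)
    print(f"retained {len(keep)} records, held {len(review)} for review")
```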
Australia’s recommendations on privacy and commercially available AI tools:
Privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information). When looking to adopt a commercially available product, organisations should conduct due diligence to ensure the product is suitable to its intended uses. This should include considering whether the product has been tested for such uses, how human oversight can be embedded into processes, the potential privacy and security risks, as well as who will have access to personal information input or generated by the entity when using the product.
Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI, including ensuring that any public facing AI tools (such as chatbots) are clearly identified as such to external users such as customers. They should establish policies and procedures for the use of AI systems to facilitate transparency and ensure good privacy governance.
If AI systems are used to generate or infer personal information, including images, this is a collection of personal information and must comply with APP 3. Entities must ensure that the generation of personal information by AI is reasonably necessary for their functions or activities and is only done by lawful and fair means. Inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes), where it is about an identified or reasonably identifiable individual, constitutes personal information and must be handled in accordance with the APPs.
If personal information is being input into an AI system, APP 6 requires entities to only use or disclose the information for the primary purpose for which it was collected, unless they have consent or can establish the secondary use would be reasonably expected by the individual, and is related (or directly related, for sensitive information) to the primary purpose. A secondary use may be within an individual’s reasonable expectations if it was expressly outlined in a notice at the time of collection and in your business’s privacy policy.
As a matter of best practice, the OAIC recommends that organisations do not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved (a minimal redaction sketch follows below).
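As a purely illustrative companion to that last point, the hypothetical sketch below redacts obvious personal information from text before it reaches a public generative AI tool. `call_public_genai_tool` is a stand-in, not any real API, and regex redaction would miss many kinds of personal data; this shows the shape of the workflow, not a compliance solution.

```python
import re

# Hypothetical sketch: redact obvious personal information before sending
# text to a publicly available generative AI tool. The patterns and the
# stubbed tool call are invented for illustration.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"(?:\+61|\b0)\d{9}\b"),
}

def redact(text: str) -> str:
    """Replace each match of each pattern with a neutral placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

def call_public_genai_tool(prompt: str) -> str:
    # Stand-in for an external tool call, shown only to locate the
    # redaction step in the workflow.
    return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com, ph 0412345678."
    print(call_public_genai_tool(redact(raw)))
```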
Intellectual Property
Tech Policy Press published an article cataloguing the economic and political state of play in Asia on the issue of copyright infringement and AI training in Australia, India, Japan, South Korea, Hong Kong and China, written by this newsletter’s editor.
Environment
Taiwan’s Premier Cho Jung-tai said in a recent interview that the country needs to reconsider its no-nuclear policy, given the energy needs of AI and chip manufacturing, saying the country is “very open” to new nuclear technologies. Currently, Taiwan relies heavily on fossil fuels for energy production.
Education
China’s Beijing city government issued guidance on the use of AI in schools. The guide sets out regulations for 29 typical scenarios in six key areas of educational practice and is to be updated annually. Although unrelated to this guidance, it is worth noting UNESCO’s 2019 guidance on AI and education, the “Beijing Consensus on Artificial Intelligence and Education”. One of its recommendations is to ensure that AI empowers teachers and teaching, supporting them in their responsibilities rather than replacing them. It also states that adequate capacity building needs to be in place to prepare teachers to work effectively in AI settings.
Finance
Australia’s Securities and Investments Commission (ASIC) urged financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence (AI) in recent guidance. The findings of the report are:
Use of AI
FINDING 1: The extent to which licensees used AI varied significantly. Some licensees had been using forms of AI for several years and others were early in their journey. Overall, adoption of AI is accelerating rapidly.
FINDING 2: While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing exponentially. This can present new challenges for risk management.
FINDING 3: Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers.
Risk management and governance
FINDING 4: Not all licensees had adequate arrangements in place for managing AI risks.
FINDING 5: Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias.
FINDING 6: AI governance arrangements varied widely. We saw weaknesses that create the potential for gaps as AI use accelerates.
FINDING 7: The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use – in some cases, governance and risk management lagged the adoption of AI, creating the greatest risk of consumer harm.
FINDING 8: Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks.
Trust & Safety, Cybersecurity
South Korea’s Ministry of Science and ICT issued exceptions to personal data processing rules to allow scam callers’ voices to be used in real-time fraud detection systems, which may use AI.
IndiaAI, under India’s Ministry of Electronics and IT (MeitY), launched the CyberGuard AI Hackathon as part of its mission to democratize AI, promote India’s AI leadership, and ensure ethical AI use. Through the IndiaAI Application Development Initiative (IADI), the hackathon aims to foster AI innovation in cybersecurity, address the growing threat of cybercrime, and drive socio-economic transformation across sectors.
Singapore’s Infocomm Media Development Authority (IMDA) issued website blocking orders against 10 websites that contained strategic misinformation about Singapore (hostile information campaigns), including generative AI content:
Investigations found that the majority of the articles published on this website were likely to have been written with AI tools. This website also published commentaries on socio-political issues, including one that falsely alleged that Singapore had allowed other countries to conduct their biological warfare research activities here.
Legislation and Governance
Singapore indicated a review of national security regulations to allow for preemptive website blocking, following the incident (above) in which 10 websites were blocked under the Broadcasting Act:
There are currently no provisions in the Foreign Interference (Countermeasures) Act 2021 to pre-emptively act against websites (whether inauthentic or not). For example, an Account Restriction Direction, which is an anticipatory direction, can only be given to a provider of a social media service and/or electronic service but not websites. The Government is reviewing the Act to see how this can be addressed.
Singapore’s IMDA announced the release of the Generative AI Sandbox 2.0, slated for December:
The GenAI Sandbox 2.0 is expected to provide approximately 15 GenAI solutions across these three solution categories and benefit over 300 SMEs across all sectors. These solutions were jointly curated by a panel of industry users and technical experts (in partnership with SGTech), based on suitability for SMEs.
Multilateral
17 data protection authorities (including Australia, New Zealand and Hong Kong in Asia) concluded an agreement on data scraping and privacy:
To effectively protect against unlawful scraping, organizations should deploy a combination of safeguarding measures, and those measures should be regularly reviewed and updated to keep pace with advances in scraping techniques and technologies.
While artificial intelligence (AI) is used by some sophisticated data scrapers to evade detection, it can also represent part of the solution, serving to enhance protections against unlawful scraping.
The obligation to protect against unlawful scraping applies to both large corporations and Small and Medium Enterprises (SMEs). There are lower-cost measures that SMEs can implement, with assistance from service providers, to meet this obligation.
Where social media companies (SMCs) and other organizations contractually authorize scraping of personal data from their platforms, those contractual terms cannot, in and of themselves, render such scraping lawful; however, they can be an important safeguard.
Organizations who permit scraping of personal data for any purpose, including commercial and socially beneficial purposes, must ensure, without limitation, that they have a lawful basis for doing so, are transparent about the scraping they allow, and obtain consent where required by law.
Organizations should also implement adequate measures, including contractual terms and associated monitoring and enforcement, to ensure that the contractually authorized use of scraped personal data is compliant with applicable data protection and privacy laws.
When an organization grants lawful permission for third parties to collect publicly accessible personal data from its platform, providing such access via an Application Programming Interface (API) can allow the organization greater control over the data, and facilitate the detection and mitigation of unauthorized scraping (a minimal sketch of such an API gate follows these excerpts).
SMCs and other organizations that use scraped data sets and/or use data from their own platforms to train AI, such as Large Language Models, must comply with data protection and privacy laws as well as any AI-specific laws where those exist. Where regulators have made available guidelines and principles on the development and implementation of AI models, we expect organizations to follow that guidance.
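To make the API point in those excerpts concrete, here is a minimal, hypothetical Python sketch of serving publicly accessible records through a key-gated, rate-limited endpoint rather than leaving them open to page scraping. The key names, limits and fields are invented; a production system would add real authentication, logging and abuse detection on top.

```python
import time

# Hypothetical sketch: expose public data through a contracted, quota-limited
# API so that authorized collection is observable and bulk scraping surfaces
# as quota violations. All keys, limits and fields are illustrative.
RATE_LIMIT = 100        # requests allowed per key per window
WINDOW_SECONDS = 3600   # one-hour window
AUTHORIZED_KEYS = {"partner-key-123"}  # keys issued under contract

_usage: dict[str, list[float]] = {}

def fetch_public_profile(api_key: str, user_id: str) -> dict:
    """Serve a public record only to contracted callers within quota."""
    if api_key not in AUTHORIZED_KEYS:
        raise PermissionError("unknown API key")
    now = time.time()
    recent = [t for t in _usage.get(api_key, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        # A burst past quota is one signal of possible bulk scraping.
        raise RuntimeError("quota exceeded for this key")
    recent.append(now)
    _usage[api_key] = recent
    # Return only the fields the platform chooses to expose via the API.
    return {"user_id": user_id, "display_name": "public name"}

if __name__ == "__main__":
    print(fetch_public_profile("partner-key-123", "u42"))
```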
G7 (inclusive of Japan) finance ministers issued a communique, including the impact of AI:
We remain committed to advancing our discussion on how to leverage AI in a safe, secure, and trustworthy way to increase productivity and growth while minimising the risks to the financial system and the wider economy. Following up on our Stresa shared policy agenda, we set up a High-Level Panel of Experts to identify the opportunities and challenges for economic and financial policymaking arising from the development and use of AI and to prepare a Report for the G7. The Panel is focusing on the implications of AI for policymakers on areas deemed at the core of the G7 Finance Track, including macroeconomic impact, the potential use of AI by governments and financial agencies, financial stability considerations, implications for skills of the labour force, and environmental sustainability. We look forward to the Panel’s assessment of how to harness the benefits of AI while mitigating the associated risks. We welcome the Panel Chair's update on the ongoing work and look forward to the Report on AI and Economic and Financial Policymaking.
BRICS countries (including, from Asia: China, India, Iran and the UAE) met in Russia this month. Chinese leader Xi Jinping mentioned AI initiatives in his remarks to the group:
We should build a BRICS committed to innovation, and we must all act as pioneers of high-quality development. As the latest round of technological revolution and industrial transformation is advancing at an accelerated speed, we must keep pace with the times and foster new quality productive forces. China has recently launched a China-BRICS Artificial Intelligence Development and Cooperation Center. We are ready to deepen cooperation on innovation with all BRICS countries to unleash the dividends of AI development.
China’s Foreign Minister Wang Yi highlighted the AI Capacity-Building Action Plan for Good and for All, focused on Global South countries, at the UN Summit of the Future last month. The goals of the project are:
1. Promote AI and Digital Infrastructure Connectivity
Improve the global layout and interoperability of AI and digital infrastructure, actively assist all countries, especially those in the Global South, to develop AI technologies and services, and help the Global South truly access AI and keep up with the pace of AI advancements.
2. Empower Industries Through the AI Plus Application
Explore ways for AI to empower the real economy across all fields, chains and scenarios to advance the empowering application of AI in areas such as industrial manufacturing, traditional agriculture, green transition and development, climate change response, and biodiversity conservation, and build robust and diverse ecosystems that enable the sound development of AI for the greater good based on local realities.
3. Enhance AI Literacy and Strengthen Personnel Training
Actively promote the application of AI in education, carry out exchange and training of AI professionals, increase the sharing of expertise and best practices, promote AI literacy among the public, protect and strengthen the digital and AI rights of women and children, and share AI knowledge and experience.
4. Improve AI Data Security and Diversity
Jointly promote the orderly and free cross-border flow of data in accordance with the law, explore the possibility of the establishment of a global data-sharing platform and mechanism, and protect personal privacy and data security. Promote equality and diversity in AI data sets to eliminate racism, discrimination, and other forms of algorithmic bias, and promote, protect, and preserve cultural diversity.
5. Ensure AI Safety, Reliability and Controllability
Uphold the principles of fairness and nondiscrimination, and support the establishment of global, interoperable AI risk assessment frameworks, standards and governance system under the framework of the U.N. that take into account the interests of developing countries. Conduct joint risk assessment on AI R&D and applications, actively develop and improve technologies and policies to address AI risks, and ensure that the design, R&D, use and application of AI contribute to the well-being of humanity.
India’s Prime Minister Shri Narendra Modi made a pitch for a global framework for the use of digital technology and the ethical use of artificial intelligence (AI) at the opening ceremony of the UN International Telecommunication Union (ITU) event this month, saying security cannot be an afterthought in an interconnected world:
The Prime Minister reiterated the importance of establishing a global framework for digital technology. He emphasized that this topic was raised by India during its G-20 Presidency and urged global institutions to recognize its significance for global governance. “The time has come for global institutions to accept the importance of global governance”, PM Modi said.
New Zealand joined the UK’s Bletchley Declaration on AI Safety (signatories from Asia include Australia, China, India, Indonesia, Japan, the Philippines, Korea, and Singapore). The full declaration can be read here, with some excerpts below:
Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential…
Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits….
In the News
South Korea launched its National AI Research Hub as part of its bid to become one of the “AI G3”, the top three AI countries in the world:
The National AI Research Hub, which will form one of the three pillars of national AI development alongside the National AI Committee and the AI Safety Research Institute, aims to foster domestic industry-academia-research collaborations and engage in global joint research projects. A budget of 94.6 billion won has been allocated for the hub until 2028, underscoring the government's commitment to AI advancement.
Nvidia’s Jensen Huang was in India this month, inking deals to sell AI chips and promote the AI industry:
"In the future, India is going to be the country that will export AI," Huang said, by contrast with its role in software exports. "You have the fundamental ingredients - AI, data and AI infrastructure, and you have a large population of users."
"India is already world-class in designing chips, India already develops AI," Huang said. "Instead of being an outsourcer and a back office, India will become an exporter of AI."
"Today, India as part of Nvidia's revenue is small," Huang said. "But our hopes are large."
Advocacy
Japan’s Fair Trade Commission opened a public comment period until 22 November on Generative AI Market Dynamics and Competition:
Given the rapidly evolving and expanding generative AI sector, the JFTC has decided to publish this discussion paper to address potential issues and solicit information and opinions from a broad audience. The topics outlined in this paper aim to contribute to future discussions without presenting any predetermined conclusions or indicating that specific problems currently exist. The JFTC seeks insights from various stakeholders, including businesses involved in different layers of generative AI markets (infrastructure, model, and application layers as described in Section 2), industry organizations, and individuals with knowledge in the generative AI field.
Sri Lanka’s National AI Strategy is open for consultation until 6 Jan 2025.
Australia’s Treasury is seeking public comment on a discussion paper on AI and the Australian Consumer Law (ACL) until 12 November:
We are seeking views on how the ACL applies to AI-enabled goods and services, including:
how the existing principles apply
the remedies available to consumers where things go wrong
the mechanisms for allocating liability among manufacturers and suppliers.
China’s Ministry of Industry and Information Technology (MIIT) opened a public comment period on 198 LLM technical requirements until 12 November.
China’s TC260 opened comment on watermarking of genAI content until 13 November.
China’s national standards platform is also publishing rules for comment until 13 November regarding standards for internet safety and generative AI content.