On May 30, 2024, Singapore’s AI Verify Foundation and Infocomm Media Development Authority published their Model AI Governance Framework for Generative AI (GenAI Framework).
Was the GenAI Framework Subject to Public Consultation?
Yes, a draft of the GenAI Framework was open for public consultation from January 16 to March 15, 2024. Our blog post on this can be found here.
Does the GenAI Framework Cover AI That Is Not Generative AI?
No, an earlier model governance framework was issued for traditional (non-generative) AI. Our blog post on this can be found here.
What Does the GenAI Framework Address?
- Accountability – This calls for all parties across the multi-layered AI development chain to be responsible to end-users. A useful parallel can be drawn with today’s cloud and software development stacks, where initial practical steps can be taken by the various players across the tech stack.
- Data – Data is a core element of AI model development, so the quality of the data fed to the model matters, and it should come from trusted data sources. Where the use of data for model training is potentially contentious, such as personal data and copyright material, it is also important to give businesses clarity, ensure fair treatment and do so in a pragmatic way.
- Trusted Development and Deployment – Notwithstanding the limited visibility that end-users may have, meaningful transparency around the baseline safety and hygiene measures undertaken is key. This involves industry adopting best practices in development and evaluation and, over time, “food label”-type transparency and disclosure.
- Incident Reporting – Establishing regulatory notification structures and practices can help facilitate timely remediation of any incidents and support the continuous improvement of AI systems more broadly.
- Testing and Assurance – Third-party testing and assurance can play a complementary role in developing common and consistent standards around AI testing, ensuring quality and consistency and, ultimately, building trust with end-users. With independent verification already adopted in the finance and healthcare domains, similar processes can be adopted for AI even as an emerging field.
- Security – Generative AI introduces the potential for new threat vectors against the models themselves. This goes beyond security risks inherent in any software stack. While this is a nascent area, existing frameworks for information security need to be adapted and new testing tools developed to address these risks.
- Content Provenance – Transparency about where and how content is generated is necessary to counter misinformation and fraud. Technical solutions such as digital watermarking and cryptographic provenance must be considered in the right context.
- Safety and Alignment Research & Development (R&D) – Accelerated R&D investment is required to improve model alignment with human intention and values. Singapore hopes to achieve this alongside global cooperation among AI safety R&D institutes.
- AI for Public Good – Democratizing AI access, improving public sector adoption, as well as upskilling workers and developing systems sustainably, will steer AI towards outcomes for the public good.
Is the GenAI Framework a Mandatory Regulatory Requirement in Singapore?
No, this is a model governance framework which businesses developing or deploying generative AI can adapt for use. The document itself encourages stakeholders to view the relevant issues set out in the GenAI Framework “in a practical and holistic manner”, noting that “[n]o single intervention will be a silver bullet.”
Businesses are therefore advised to tailor the relevant good practices offered in the GenAI Framework to their own unique characteristics, such as the particular use case for the generative AI system, the nature of their business and the associated risks arising therefrom, or other circumstances.
Additionally, reliance on the GenAI Framework does not absolve a company from having to comply with other applicable laws including but not limited to intellectual property, data protection, online safety, consumer protection, competition, contract, tort and sectoral regulations.
Should you require any advice or assistance, feel free to reach out to your usual firm contact.
Disclaimer: The views and opinions expressed here are of the author(s) alone and do not necessarily reflect the opinion or position of Squire Patton Boggs or its clients. While every effort has been made to ensure that the information contained in this article is accurate, neither its author(s) nor Squire Patton Boggs accept responsibility for any errors or omissions. The content of this article is for general information only and is not intended to constitute or be relied upon as legal advice.