SymphonyAI has contributed to the Singapore government’s request for feedback
Many countries and jurisdictions have been working on measures governing the implementation of generative AI. The EU recently passed the AI Act, a landmark law that ensures safety and compliance with fundamental rights while boosting innovation, and the US announced an executive order on the safety and security of AI in 2023. Along these lines, the AI Verify Foundation (AIVF) and Singapore’s Infocomm Media Development Authority (IMDA) sought views on their Proposed Model AI Governance framework for Generative AI, to which SymphonyAI is contributing.
What is Singapore’s Proposed Model AI Governance framework for Generative AI?
Singapore’s Proposed Model AI Governance framework for generative AI seeks to expand upon the existing Model Governance Framework on traditional AI released in 2019 and updated in 2020.
The aim of the framework is to address concerns surrounding the use of generative AI while facilitating innovation. Seeking to take a balanced approach, the AIVF and IMDA have invited key stakeholders such as policymakers, the research community, and industry to submit insights on the proposed nine dimensions.
In the AIVF and IMDA’s own words, they are as follows:
- Accountability – Putting in place the right incentive for different players in the AI system development lifecycle to be responsible to end users
- Data – Ensuring data quality and addressing potentially contentious use of training data in a pragmatic way, as data is core to model development
- Trusted Development and Deployment – Enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure
- Incident Reporting – Implementing an incident management system for timely notification, remediation, and continuous improvements, as no AI system is foolproof
- Testing and Assurance – Providing external validation and added trust through third-party testing and developing common AI testing standards for consistency
- Security – Addressing new threat vectors that arise through generative AI models
- Content Provenance – Transparency about where and how content is generated enables end users to consume online content in an informed manner
- Safety and Alignment R&D – Accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values
- AI for Public Good – Responsible AI includes harnessing AI to benefit the public by democratizing access, improving public sector adoption, upskilling workers, and developing AI systems sustainably.
The goal is to consider the nine dimensions of the framework ‘in totality, to enable and foster a trusted ecosystem.’
SymphonyAI contributes to Singapore’s proposed Model AI Governance framework
SymphonyAI is building the leading enterprise AI SaaS company for digital transformation across the most critical and resilient growth verticals, including retail, consumer packaged goods (CPG), finance, manufacturing, media, and IT/enterprise service management.
With many leading enterprises as clients, the company submitted feedback to the Singapore government to help shape an appropriate framework for generative AI within the country, hoping to establish a blueprint for market-leading generative AI.
Supporting the nine elements put forward, SymphonyAI’s financial services division opted to provide feedback on three key areas – data, testing and assurance, and AI for the public good.
SymphonyAI already takes a robust approach to the sensitive data it handles for customers fighting financial crime, and the company’s intention was to highlight how it builds sufficient safeguards to mitigate unintended consequences for customers and end consumers, practices that Singapore can learn from and adapt to its own requirements.
SymphonyAI’s feedback on data
Regarding the topic of data, SymphonyAI endorses the principle that data is a core element of model development, particularly in the context of financial crime prevention.
The company emphasized the significance of using trusted data sources, such as transaction records and digital customer behaviors, to detect patterns that indicate potential financial crimes or risks, while also underscoring the critical need to protect the personally identifiable data that financial crime prevention requires.
One way to address this inherent tension between data access and privacy could be the use of privacy-enhancing technologies (PETs) to enable cross-sector data sharing while safeguarding privacy. The Singapore government could help facilitate access to datasets for advanced detection methods using generative AI and predictive AI, which would help track illicit transactions.
To enable this, a balance could be struck between copyright and data access, allowing end users to benefit from relevant information sources, potentially through ‘fair use’ provisions for third-party data.
The framework’s Trusted Development and Deployment dimension proposes a ‘food label’ concept for data governance, disclosing the sources of data used in generative AI outputs. SymphonyAI also emphasized the importance of collaboration between stakeholders, alongside the creation of sandboxes for testing novel risk-detection approaches involving government, industry, and other related institutions. The company supports both elements as ways to foster innovation and create trust among stakeholders.
SymphonyAI’s feedback on testing and assurance
In its feedback on the testing and assurance dimension of the proposal, SymphonyAI financial services stressed the importance of independent testing to ensure the accuracy and fairness of models, particularly in financial crime detection. Third-party reviews are essential for verifying that systems are effective, both in design and execution, in meeting regulatory standards and in identifying, monitoring, and disrupting criminal activities in banking, payments, and loans.
The company endorsed standardizing testing methods involving model owners, organizations that apply the technology (such as SymphonyAI), and end users such as banks, payments firms, and other related sectors. This approach builds trust among consumers, regulators, and the technology deployers accountable for risk management.
SymphonyAI suggested involving educational institutions such as universities to test generative AI applications alongside industry testing, providing an independent voice that builds trust and develops adaptive thinking on governance models. This collaboration would give researchers access to trusted data and sandboxes to explore the intended and unintended consequences of generative AI.
SymphonyAI’s feedback on AI for public good
The Singapore government’s framework notes that generative AI has transformative potential to benefit communities. This is undoubtedly the case with financial crime prevention, where SymphonyAI is using the technology to combat bad actors. It is also a necessary use of the technology, considering criminals are already adopting generative AI to steal money from consumers and businesses.
SymphonyAI provided feedback in this area to explain how it is reducing the social harms of money laundering and terrorism financing using generative AI, and to express its desire for the industry to adopt the technology en masse. Encouraging the framework to treat the finance industry as an early-adoption use case, the company explained how generative AI can transform the productivity and effectiveness of financial crime investigations (via innovations such as Sensa Investigation Hub and SensaAI for sanctions).
Alongside this, SymphonyAI suggested the idea of cross-sector partnerships on literacy about generative AI, helping raise awareness of criminal threats (such as scams) within communities and enabling cooperation in preventing financial crime.
To enable this endeavor, the company once again highlighted the importance of data access and sharing among institutions, including the use of sandboxes in an anti-financial crime context.
Conclusion
SymphonyAI financial services welcomed the opportunity to offer feedback to the Singapore government’s Proposed Model AI Governance framework for Generative AI.
Legislation and regulatory guidance are necessary to keep pace with technological advancements and foster industry adoption of generative AI. Increasing awareness and facilitating informed decisions regarding the use of AI in financial services while ensuring consumer and business confidence in data protection will be paramount.
The ideal approach is to provide data access across industries and to educational institutions such as universities, operating within sandbox environments to explore the intended and unintended consequences of generative AI.