Discover how SymphonyAI uses AI responsibly in SensaAI for Sanctions
In part one of this series – “What is responsible AI and how can it address concerns within financial services?” – we covered the basics of responsible AI.
Responsible AI at SymphonyAI
SymphonyAI practices responsible AI throughout the development of its new financial crime prevention products. These products embody the main principles outlined in part one: accountability, transparency, privacy, security, and reliability and safety.
SymphonyAI’s five responsible AI principles.
SensaAI for Sanctions is one piece of software that SymphonyAI is delivering responsibly: an AI overlay that allows financial institutions to upgrade their existing sanctions screening solution.
SensaAI for Sanctions significantly improves screening match accuracy. The software uses generative AI to extract and classify relevant entities from the unstructured free text found in payment messages. This provides context that would otherwise be unavailable, so common sources of false positives, such as a person’s name being confused with a place or a vessel, can be avoided. The extracted entities are then passed to a predictive AI model, which performs an exhaustive check across all the available information (name differences, date of birth, addresses, etc.).
As with any AI solution, explainability is very important. In SensaAI, each prediction is returned alongside a probability of a match and a human-readable explanation of why the AI model came to its conclusion. Proofs of concept using the AI overlay have seen an 80% reduction in false positives while retaining 100% of true positives.
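As a hedged illustration of the extract-then-score pipeline described above (the function names, labels, and scores below are invented for this sketch, not the SensaAI API), the idea is that classifying each extracted entity prevents a place or vessel name from being matched against a person on a watchlist, and every score is returned with a plain-language explanation:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    text: str
    label: str  # e.g. "PERSON", "PLACE", "VESSEL"

def extract_entities(message: str) -> list[Entity]:
    """Toy stand-in for the generative-AI extraction step.

    A real system would use a language model; here a lookup table
    stands in so the example is self-contained."""
    known = {"Mersin": "PLACE", "MV Mersin": "VESSEL", "John Mersin": "PERSON"}
    return [Entity(t, lbl) for t, lbl in known.items() if t in message]

def score_match(entity: Entity, watchlist_entry: dict) -> tuple[float, str]:
    """Toy stand-in for the predictive matching model: returns a
    probability of a true match plus a human-readable explanation."""
    if entity.label != watchlist_entry["type"]:
        return 0.05, (f"'{entity.text}' was classified as a {entity.label}, "
                      f"but the watchlist entry is a {watchlist_entry['type']}; "
                      "likely a false positive.")
    return 0.90, (f"'{entity.text}' is a {entity.label}, matching the "
                  "watchlist entry type; escalate for review.")

message = "Payment for cargo via Mersin to John Mersin"
entry = {"name": "Mersin", "type": "PERSON"}
for ent in extract_entities(message):
    p, why = score_match(ent, entry)
    print(f"{ent.text} ({ent.label}): p={p:.2f} - {why}")
```

In this toy run, the place name "Mersin" scores low against the person entry (a false positive avoided), while "John Mersin" scores high and is escalated with its explanation attached.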
By increasing the quality of alerts going to investigators, we can reduce investigator fatigue and help prevent mistakes in the investigation process. Because the AI model is not a rules-based system designed and fully understood by humans, the level of scrutiny and governance must be correspondingly high.
Below, we outline SymphonyAI’s five principles of responsible AI and explain how they are met in the design and deployment of SensaAI for Sanctions and other AI overlays.
Accountability
Accountability is a cornerstone of responsible AI development and deployment. As AI systems become increasingly integrated into our daily lives and critical decision-making processes, it is essential to establish clear lines of responsibility and ownership.
Accountability ensures that there are mechanisms in place to address potential mistakes, biases, or harmful outcomes resulting from AI systems. It involves identifying and holding accountable the individuals, teams, or organizations responsible for the design, implementation, and oversight of AI technologies. By fostering a culture of accountability, a company can promote ethical practices, encourage continuous improvement, and build trust between AI developers and end-users. This also involves creating frameworks for auditing AI systems, implementing checks and balances, and establishing legal and regulatory standards.
Ultimately, accountability in AI helps to mitigate risks, protect individuals and society from potential harm, and ensures that AI technologies are developed and used in alignment with human values and societal norms.
Within SensaAI for Sanctions
Model performance – To justify putting an AI model into production, the decision maker must see that model’s performance on a blind test, measured by relevant metrics such as false-positive reduction while keeping false negatives close to zero. These results are included in the SensaAI model documentation.
Traceability – For each prediction, we know which model was used to make that prediction and what training data was used to train that model.
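The blind-test metrics described in the bullets above can be sketched in a few lines. This is a minimal, hypothetical illustration of how one might compute them, assuming a labelled test set where 1 marks a true match and 0 a historical false positive; it is not SymphonyAI's evaluation code:

```python
def screening_metrics(labels, predictions):
    """labels/predictions: 1 = true match (escalate), 0 = false positive (close).

    Returns the two go-live metrics: how many true positives the model
    retained, and what fraction of false positives it eliminated."""
    tp_total = sum(labels)
    fp_total = len(labels) - tp_total
    tp_retained = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp_closed = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    return {
        "true_positive_retention": tp_retained / tp_total,
        "false_positive_reduction": fp_closed / fp_total,
    }

# Hypothetical blind test set: 2 true matches, 8 historical false positives.
labels =      [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]  # model still escalates one FP

m = screening_metrics(labels, predictions)
print(m)  # retention 1.0 (no missed true positives), reduction 7/8
```

The governance point is that retention must stay at 1.0 on the blind test before the model is accepted; false-positive reduction is only valuable subject to that constraint.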
Transparency
Transparency is a vital part of responsible AI. It involves making the inner workings, decision-making processes, and data sources of AI algorithms accessible and understandable to stakeholders, including users, regulators, and customers.
Transparency helps to build trust by allowing for scrutiny and validation of AI systems, ensuring that they operate as intended and without hidden biases or unintended consequences. It enables users to understand how decisions are made, fostering informed consent and empowering individuals to challenge or appeal automated decisions when necessary. Alongside this, transparency facilitates collaboration among researchers, developers, and policymakers, which accelerates innovation and the development of best practices.
By promoting openness about AI capabilities and limitations, transparency helps manage expectations and prevents misuse of AI systems. It also supports efforts to identify and address potential issues, biases, or discriminatory outcomes, contributing to the overall fairness and reliability of the technology.
Within SensaAI for Sanctions
Model white boxing – Understanding what features are being used by the model and how it is using them, especially if it’s relying heavily on certain features, is a crucial part of model acceptance. Again, this is included in the SensaAI model documentation.
Model management – Controls are provided for determining which model is in production at any time and how much impact it’s having. Phased go-lives and phased model changes are recommended and supported. The ability to roll back to a previous model is important if something is found to be wrong with a challenger model.
Prediction explanations – SymphonyAI explains each prediction that the model makes in understandable text, which allows employees to understand why the AI model has come to the conclusion it has in each individual case.
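The model-management controls above (phased go-lives, knowing which model is live, and rollback) can be sketched as a simple registry. The class and model names below are invented for illustration and do not reflect SymphonyAI's implementation:

```python
import random

class ModelRegistry:
    """Toy registry: tracks the live (champion) model, routes a fraction
    of traffic to a challenger during a phased go-live, and supports
    rollback to the previous champion if a problem is found."""

    def __init__(self, champion: str):
        self.champion = champion
        self.challenger = None
        self.challenger_share = 0.0
        self.history = [champion]  # promotion trail, for rollback

    def phased_golive(self, challenger: str, share: float):
        """Route `share` of alerts to the challenger model."""
        self.challenger, self.challenger_share = challenger, share

    def promote(self):
        """Challenger becomes champion once validated."""
        self.history.append(self.challenger)
        self.champion, self.challenger = self.challenger, None
        self.challenger_share = 0.0

    def rollback(self):
        """Revert to the previous champion."""
        self.history.pop()
        self.champion = self.history[-1]
        self.challenger, self.challenger_share = None, 0.0

    def route(self) -> str:
        """Pick which model version scores the next alert."""
        if self.challenger and random.random() < self.challenger_share:
            return self.challenger
        return self.champion

registry = ModelRegistry("sanctions-model-v1")
registry.phased_golive("sanctions-model-v2", share=0.10)  # 10% of traffic
registry.promote()
registry.rollback()  # v2 misbehaves in production: revert to v1
print(registry.champion)  # sanctions-model-v1
```

Because every alert is scored through `route()`, each prediction can be stamped with the model version that produced it, which is the traceability property noted under accountability.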
Reliability and safety
Ensuring the reliability and safety of AI systems is paramount in responsible AI development. Reliability refers to the consistency and dependability of AI performance across various scenarios and over time. It involves rigorous testing, validation, and ongoing monitoring to ensure that AI systems function as intended and produce accurate, consistent results. Safety focuses on preventing harm to users and mitigating any potential risks. This includes considering potential unintended consequences during design and deployment, implementing fail-safes, and establishing protocols for human oversight and intervention when necessary. Together, these practices ensure that AI continues to perform dependably over time.
Safe AI systems are crucial for building trust and encouraging widespread adoption of the technology. They help minimize errors, reduce the risk of accidents, and ensure that AI can be depended upon in critical applications such as healthcare, transportation, and finance. By prioritizing safety, AI systems not only perform well but also align with behavioral standards and societal expectations.
Within SensaAI for Sanctions
Ongoing validation – A proportion of alerts that would otherwise be auto-closed by the model is randomly held out from the auto-closure process and investigated by humans to validate that they are indeed false positives. The results are reported on, and any true positives found are flagged.
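The holdout control above amounts to randomly diverting a slice of auto-closable alerts to human review. A minimal sketch, with an invented function name and an assumed 5% holdout rate chosen purely for illustration:

```python
import random

def split_holdout(auto_closable_alerts, holdout_rate=0.05, seed=42):
    """Randomly hold out a fraction of alerts the model would auto-close.

    Returns (auto_closed, held_out_for_review). The held-out alerts are
    investigated by humans; any confirmed true positive is a model miss
    and must be flagged."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    auto_closed, held_out = [], []
    for alert in auto_closable_alerts:
        (held_out if rng.random() < holdout_rate else auto_closed).append(alert)
    return auto_closed, held_out

alerts = [f"alert-{i}" for i in range(1000)]
auto_closed, held_out = split_holdout(alerts)
print(len(auto_closed), len(held_out))  # roughly 950 / 50
```

Because the holdout is random, the validated false-positive rate on the reviewed slice is an unbiased estimate of the rate across everything the model auto-closed.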
Privacy
Privacy within AI is essential. Because these systems often rely on vast amounts of data, including personal and sensitive information, protecting privacy is crucial. This involves implementing robust data protection measures, ensuring compliance with regulations, and adopting key principles (data minimization, limiting access, etc.). Privacy-preserving techniques, such as federated learning and differential privacy, can help balance the need for data-driven insights with individual privacy rights.
Within SensaAI for Sanctions
Compliance – SymphonyAI ensures that the company adheres to all local privacy regulations such as GDPR.
Certifications – SymphonyAI has achieved certifications – ISO/IEC 27001 and SOC 2 Type 2 for cloud-based delivery – that highlight the organizational safeguards in place within the company.
Security
Security focuses on protecting AI systems and their associated data from unauthorized access, manipulation, and other cybercrime. This is achieved by implementing strong encryption, access controls, and regular audits. As AI becomes more prevalent in critical infrastructure and decision-making processes, security grows increasingly important in preventing misuse or exploitation. By prioritizing privacy and security, responsible AI development helps maintain public trust, protect individual rights, and safeguard against potential threats.
Within SensaAI for Sanctions
Certifications – SymphonyAI has achieved certifications – ISO/IEC 27001 and SOC 2 Type 2 for cloud-based delivery – that highlight the organizational safeguards in place within the company.
Business continuity – SymphonyAI has achieved ISO 22301, the international standard for Business Continuity Management Systems (BCMS).
Why responsible AI is critical for businesses
Having a responsible AI policy benefits businesses because of its substantial knock-on effect on customer trust.
As the World Economic Forum notes, ‘AI systems designed with responsibility in mind can significantly enhance customer trust and brand reputation.’ Brand engagement is likely to increase when customers know that AI is being used ethically, which sustains profits over the long term.
Alongside this, research from Bain & Company has found that businesses with a comprehensive, responsible approach to AI accelerate and amplify the value they get from the technology, potentially doubling their profit.
By implementing use cases rapidly and responsibly, companies can enjoy positive innovations and sophisticated applications that future-proof their business as the finance industry evolves. This in turn puts them in an excellent position with regulators, bringing a significant competitive advantage; businesses that adopt a responsible AI policy now will be best placed later.
Conclusion
Responsible AI in financial services is crucial for balancing innovation with ethical considerations. As AI becomes more prevalent in areas like personalized financial advice, risk assessment and fraud detection, financial institutions must prioritize transparency, fairness, and accountability.
By implementing responsible AI practices, the industry can enhance efficiency and customer experience while mitigating risks associated with bias, privacy, and security concerns. This approach fosters trust among customers and regulators while also contributing to a more inclusive and stable financial ecosystem. As the sector evolves, and new innovations such as agentic AI become more common, collaboration among all stakeholders is essential to ensure AI remains a positive and responsible force in finance.