It is now well understood that, to make Artificial Intelligence broadly useful, it is critical that humans can interact with and have confidence in the algorithms being used. This observation led to the notion of explainable AI (sometimes called XAI), initially a loosely defined concept that required explanations (of some type) for the algorithms in use. The concept has since matured to require that the output of an algorithmic decision also be justifiable, accessible, and error-aware. This is AI Transparency.
Right now, amidst the COVID-19 pandemic, many algorithms will be giving wrong answers because their training domain is no longer valid and therefore no longer represents current reality. So, like Socrates, we should be able to understand these systems well enough to know what they do not know.
So what do explainable, justifiable, accessible, and error-aware look like in practice?
Let’s take an example. Imagine a holistic AI system within a bank that examines customer behavior and generates alerts based on certain activities that require action by the bank, for example:
- Customer may be the victim of fraud
- Customer may be committing fraud
- Customer may be engaging in money laundering
- Customer is likely to default on a loan
- Customer is likely to churn
- There is an opportunity to sell product “X” to this customer
How can the AI approach be made transparent to the business and the regulator? Let’s take AML (anti-money laundering) alerts as an example.
Explainable – what approach is used? What does the algorithm do? What information does it use? How does its accuracy compare against the old rules-based system that the regulator is already comfortable with? Does it capture the historic high-risk cases (SARs, Suspicious Activity Reports) in a backtest?
Justifiable – what information does it present to justify a decision? Is that information understandable to an analyst and to the regulator?
Accessible – exactly how does the model make a decision for this type of customer behavior? What about other types of behavior? How does the model make decisions across all behavior types that exist in the customer base? What about behavior it has never seen before? What types of behavior never generate alerts?
Error-aware – how accurate is a given decision? How many misclassifications occur? What types of behavior or data lead to uncertainty and error? How do you make sure you are not missing anything important?
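To make the "explainable" and "error-aware" questions concrete, here is a minimal backtest sketch comparing a new alert model against the legacy rules on historic outcomes. The data frame is a tiny synthetic stand-in; the column names (is_sar, legacy_alert, model_score) and the 0.8 threshold are assumptions for illustration, not a real schema.

```python
# Minimal backtest sketch: does the new model capture historic SARs, and at
# what false-positive cost, compared with the legacy rules-based system?
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(42)
n = 10_000
history = pd.DataFrame({
    "is_sar": rng.random(n) < 0.01,        # hypothetical: case became a SAR
    "legacy_alert": rng.random(n) < 0.05,  # hypothetical: legacy rules fired
    "model_score": rng.random(n),          # hypothetical: new model's score
})

y_true = history["is_sar"].astype(int)
legacy_pred = history["legacy_alert"].astype(int)
model_pred = (history["model_score"] >= 0.8).astype(int)  # illustrative cut-off

for name, pred in [("legacy rules", legacy_pred), ("new model", model_pred)]:
    tn, fp, fn, tp = confusion_matrix(y_true, pred).ravel()
    print(
        f"{name}: SAR recall = {recall_score(y_true, pred):.2f}, "
        f"precision = {precision_score(y_true, pred):.2f}, "
        f"false positives = {fp}, missed SARs = {fn}"
    )
```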
If all these questions can be readily answered, then we have a transparent AI system. There has been good progress in explaining and justifying AI decisions, with even black-box models such as neural networks having some level of justification via algorithms such as SHAP (SHapley Additive exPlanations).
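As a sketch of what SHAP-style justification looks like for an individual alert, the example below uses the open-source shap package on a gradient-boosted model trained on synthetic data. The model choice, features, and data are illustrative stand-ins, since the article does not specify them.

```python
# Sketch of per-decision justification with SHAP on a synthetic alert model.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for customer features and historic alert outcomes.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(8)]
X = pd.DataFrame(X, columns=feature_names)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer returns additive per-feature contributions for each decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# For one alert, list which features pushed its score up or down; this is the
# kind of evidence an analyst or regulator can review.
for name, contrib in sorted(
    zip(feature_names, shap_values[0]), key=lambda kv: -abs(kv[1])
):
    print(f"{name}: {contrib:+.3f}")
```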
Where many systems miss the mark is in accessibility and error-awareness, which require a meta-analysis of the decision system itself. Enter topological data analysis (TDA). The great advantage of TDA is that it enables an unbiased, global analysis of the entire decision system. It is an unsupervised technique, so it is not skewed by pre-existing bias and assumptions. Its aim is to map all the patterns that exist within the data and present them intuitively, via visualizations that let human analysts understand the global set of patterns. In this case, the set of patterns is all customer behavior types that exist in the data set. These patterns can then be overlaid with business metrics, one of which could be examples where the model has made an incorrect decision, thus answering the question: “what types of behavior lead to uncertainty and error?”
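As a rough sketch of what building such a behavior map can look like, the example below uses the open-source KeplerMapper implementation of the Mapper construction on synthetic behavioral features. The library, projection, cover, and clustering parameters are all assumptions for illustration, not a prescribed pipeline.

```python
# Sketch of mapping the space of customer behavior with a Mapper-style graph.
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

# Synthetic stand-in for per-customer behavioral features
# (transaction statistics, KYC attributes, etc.).
rng = np.random.default_rng(0)
behavior = rng.normal(size=(5000, 12))

mapper = km.KeplerMapper(verbose=0)

# Lens: project to 2D; each node of the resulting graph is a microcluster of
# customers with very similar behavior, linked where clusters overlap.
lens = mapper.fit_transform(behavior, projection=PCA(n_components=2))
graph = mapper.map(
    lens,
    behavior,
    cover=km.Cover(n_cubes=15, perc_overlap=0.4),
    clusterer=DBSCAN(eps=1.5, min_samples=5),
)

# Interactive HTML view of the whole behavior space.
mapper.visualize(graph, path_html="behavior_map.html")
```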
The image below shows an example of applying this type of analysis to the decisions made by an AML transaction monitoring system (TMS). Each small circle in the diagram represents a microcluster of customers with very similar behavioral traits (based on transactional data, KYC, and other available data). A given circle might contain hundreds or thousands of customers that form a tight behavioral group. Circles are connected to other circles if they share some similar behaviors; these connections map the entire space of customer behavior. The map is then colored by the existing decisions made by the TMS (a rules-based system). Blue dots indicate areas where the system generates alerts that are never escalated, i.e. false positives. Red dots show areas where at least one alert led to an escalation (true positives). Grey dots show areas of behavior where no alerts are ever generated, so these parts of the population are never investigated.
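Continuing the sketch above, coloring the map reduces to tagging each microcluster (node) with the legacy TMS outcomes of its members. The alert and escalation flags below are simulated; in practice they would come from the case-management system.

```python
# Color each node of the behavior map by the legacy TMS outcomes of its
# members, following on from the kmapper graph built above.
import numpy as np

# Hypothetical per-customer flags, aligned with the rows of `behavior`.
alerted = rng.random(len(behavior)) < 0.05                 # TMS raised an alert
escalated = alerted & (rng.random(len(behavior)) < 0.1)    # alert was escalated

node_color = {}
for node_id, members in graph["nodes"].items():  # node id -> member row indices
    members = np.asarray(members)
    if escalated[members].any():
        node_color[node_id] = "red"    # at least one true positive
    elif alerted[members].any():
        node_color[node_id] = "blue"   # alerts fired, none ever escalated
    else:
        node_color[node_id] = "grey"   # never alerted, never investigated
```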
This view of the data ultimately enables answers to all of the hard questions around accessibility and error-awareness, and enables new strategies to be deployed. In AML, the big challenge is unknown unknowns: criminal adversaries continually adapt their tactics to evade detection. This map immediately indicates areas where the system may have blind spots, where it is accurate, and where it is generating false positives.
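One simple heuristic for surfacing candidate blind spots from such a map, sketched under the same assumptions as above, is to flag large never-alerted (grey) microclusters that border escalated (red) ones. This is an illustrative reading of the map, not the only strategy it supports.

```python
# Rank candidate blind spots: large grey nodes adjacent to red nodes.
from collections import defaultdict

# Build an undirected adjacency from kmapper's one-way link lists.
adjacency = defaultdict(set)
for a, neighbours in graph["links"].items():
    for b in neighbours:
        adjacency[a].add(b)
        adjacency[b].add(a)

candidates = []
for node_id, color in node_color.items():
    if color != "grey":
        continue
    if any(node_color.get(n) == "red" for n in adjacency[node_id]):
        candidates.append((node_id, len(graph["nodes"][node_id])))

# Review the largest un-investigated clusters bordering known risk first.
for node_id, size in sorted(candidates, key=lambda kv: -kv[1])[:10]:
    print(f"potential blind spot: node {node_id}, {size} customers")
```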
This top-down view, combined with the ability to drill up and down through scales and compare feature importance across different subgroups of behavior, is incredibly powerful. It enables human analysts to understand their data and decision systems with a new level of clarity. This is AI Transparency.