Designing AI for the Enterprise

12.04.2017 | By Mark Speyers
We are in an interesting time in the development of artificial intelligence (AI). We’ve moved beyond thinking about AI as just a technology that fuels robotics to understanding what has always been true: AI is a field in and of itself that enables computers to complete tasks that previously would have required human intelligence. With this progress, we’ve seen AI percolate into the enterprise—humans and intelligent machines are increasingly intertwined in the workplace.

However, AI is not here to replace humans; it is here to enhance the human experience. When we think about humans being empowered by AI rather than handing tasks off to cold, unfeeling machines, we reframe the problem, and the result is a broader design space for systems that are more humane than what we usually think of as artificial intelligence.

At Ayasdi, we are focused on developing AI for the enterprise. More specifically, we are enabling our customers to realize greater value from the vast amounts of data routinely collected in modern enterprises. Humans did not evolve to be great at discovering insights in large, complex datasets, and the amount of work required to extract value is staggering. This is a perfect scenario for human-machine collaboration.

From a design perspective this requires a delicate balance. Without enough context, you have a black box that erodes confidence and trust. Too much detail results in confusion. Designing AI for the enterprise requires us to anticipate what information to provide, at what point in time, and in what form—all in support of empowering end-users to realize the benefits of AI quickly and move toward their ultimate goal: completing the task at hand.

When designing for AI technologies, many of the established design and user-experience best practices, of course, still hold true. We strive to use familiar, accurate industry vernacular to minimize cognitive load and reduce opportunities for confusion. Likewise, we apply visual-design and information-visualization best practices to communicate, organize, and illuminate key insights while minimizing visually distracting interface elements that impede comprehension.

Specific to large, complex data and AI, however, the Ayasdi design team has established two key design principles that we carry with us through all of our work, regardless of application or domain. These design principles speak to the experiences of our customers that we have found to be the significant barriers to the adoption of AI technologies in the enterprise.

Design to Build Trust

AI has a reputation for being a black box and trust is one of the first things to be challenged. A lack of trust can lead users down a rathole of investigation that is unrelated to their task at hand: Is the algorithm right? Was the original source data right? Can we validate what we see with other systems we’re more familiar with?

Trust comes from understanding, and understanding develops through justification. Justification is particularly important when AI surfaces information that is new or unexpected to the user, and it must be established before people feel comfortable acting on that information—the very value they are seeking from AI. The concept of justification is core to our technology, but representing it to the user requires balancing the explanatory detail needed to build trust against the risk of overwhelming them.
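One way to picture this balance is to pair every model output with a ranked justification, surfacing only the strongest drivers by default. The sketch below is purely illustrative—the `justify` function, its factor names, and the contribution scores are hypothetical, not part of any real Ayasdi API:

```python
# Hypothetical sketch: pairing a model output with a ranked justification.
# All names and values here are illustrative, not a real product interface.

def justify(prediction, contributions, top_k=3):
    """Return a prediction plus its strongest drivers.

    contributions: dict mapping factor name -> signed contribution score.
    Only the top_k drivers are surfaced by default; the rest sit behind a
    "show more" affordance, so detail builds trust without overwhelming.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "prediction": prediction,
        "top_drivers": ranked[:top_k],
        "hidden_driver_count": max(0, len(ranked) - top_k),
    }

result = justify(
    prediction="high readmission risk",
    contributions={"age": 0.42, "prior_admissions": 0.35,
                   "blood_pressure": 0.08, "zip_code": 0.03},
    top_k=2,
)
```

The design choice is in the default: the user sees the two or three factors a domain expert would expect to matter, and can drill into the rest only if their trust demands it.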

For example, physicians come with years of experience diagnosing patients and can often predict what might be ailing a patient based on gut instinct. (This instinct is often tied to factors like race, age range, and socio-economic status.) If AI software excludes these details from the results of its analysis, or fails to explain why the weighting of these factors differs from what they expect, physicians won't trust the results—there's a mismatch with their expertise.

This is true in any industry. Domain experts require justification beyond a set of statistical metrics; uncovering and incorporating the contextual, supporting information they require is core to what the design team does. If the user experience does not bring a new user into the fold in a way that earns trust in the outputs, the impact will be constrained. Establishing trust by helping users make sense of data is how we empower them to use Ayasdi to solve their most complex challenges.

Use Data to Engage, Not Distract

A common misconception is that more data is always better. A more nuanced view is that more data is better for finding an optimal solution, but not for understanding and interpreting results. Showing results progressively, rather than all up front, helps users process the information and builds understanding. As designers, we aim to provide just the right amount of information, leveraging the richness of the data to raise the user's interest while decreasing their cognitive load.
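Progressive disclosure like this can be sketched as results organized into tiers that the user reveals one at a time. The tier names and payload below are invented for illustration, not drawn from any actual product:

```python
# Hypothetical sketch of progressive disclosure: reveal analysis results
# one tier at a time instead of all up front. Tier names are illustrative.

TIERS = ["summary", "key_drivers", "full_statistics"]

class ProgressiveResult:
    def __init__(self, payload):
        self.payload = payload        # dict keyed by tier name
        self.revealed = ["summary"]   # start with the least-detailed view

    def view(self):
        """Return only the tiers the user has asked for so far."""
        return {tier: self.payload[tier] for tier in self.revealed}

    def show_more(self):
        """Reveal the next tier of detail, if any remains."""
        nxt = len(self.revealed)
        if nxt < len(TIERS):
            self.revealed.append(TIERS[nxt])
        return self.view()

r = ProgressiveResult({
    "summary": "3 customer segments found",
    "key_drivers": ["trade volume", "portfolio concentration"],
    "full_statistics": {"silhouette": 0.61, "n": 12840},
})
```

A first glance shows only the summary; each request for more detail adds a tier, so the full statistics never arrive before the user is ready to interpret them.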

This principle is deeply tied to the previous one about trust but still important enough to warrant its own attention. In the same way that a lack of trust may lead users down a rathole of investigation, too much information can distract, overwhelm, or even worse, result in the wrong conclusion. When faced with too much information, misinterpretation can lead users down the wrong path, waste their time, or bring about spurious findings. Even quants in financial services can be overwhelmed (although our experience indicates their threshold is higher).

A well-designed experience needs to show the right data and provide the right context at the right time. 

For example, financial services analysts might know that certain high-volume investment customers are willing to take on more risk for greater returns. However, algorithms analyzing those customers might not segment them as risk-tolerant, based on the many other variables analyzed across high-dimensional datasets. For business analysts to trust AI-driven customer segmentation, it's critical that they are presented with the exact reasons behind each segment. Highlighting the most relevant, business-interpretable drivers of a segment, while minimizing potentially counter-intuitive descriptive statistics, often helps analysts interpret the machine-augmented analysis.

When it comes to designing AI for the enterprise, it's not just about AI giving the right answer. It's about AI empowering users to understand and act on the right answer. We accomplish this through the user experience—a force multiplier for the deployment of AI. Solving the UX challenge facilitates organizational change, accelerates adoption, and reduces friction. At Ayasdi, we're committed to empowering enterprise customers to derive ever-increasing value from AI technologies by continuing to explore and define what it means to design a great user experience for AI, starting by incorporating design early in the process.
