How excited or scared should we be about generative AI?

04.14.2023 | Charmian Simmons

In the last six months, hundreds of millions of people have experienced first-hand the power of generative AI. Offerings such as OpenAI's ChatGPT and GPT-4 have opened AI and machine learning to the masses, letting people interact naturally with search engines and information repositories and create new content, including text, code, images, audio, video, and simulations.

With innovation come risks and concerns. The public has certainly started trialling the technology and discovered some of its immediate shortcomings: poor learning habits, plagiarism, missing source references, hallucinated or fabricated answers, bias in models, disputes over content ownership, and lapses in ethical standards. It seems like a growing list! Yet are we being too critical at the onset of a potentially revolutionary innovation? And can we weather the storm to see the best it can deliver?

While generative AI may not solve all the world’s problems, it has the power to turn long-standing business pain points across various industries into efficient, scalable, and personalized problem-solving capabilities. We need to get through the teething issues first to realize the plentiful options in front of us and how bright the future is with this technology.

What is generative AI?

Generative AI refers to a category of AI techniques that can generate new and original data and content. These techniques use large language and deep learning models to learn the patterns and structures of existing data and then use this knowledge to create new data and content.
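The "learn the patterns, then generate" idea can be illustrated with a deliberately tiny sketch: a bigram Markov chain in Python. This is a toy stand-in of my own, not a large language model – real generative AI uses deep neural networks trained on vastly more data – but the learn-then-sample loop is the same in spirit.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Learn which word tends to follow which -- a toy stand-in for the
    pattern-learning that large language models do at far greater scale."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Create 'new' text by repeatedly sampling a learned continuation."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the model learns the patterns and the model generates new content"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output recombines fragments of the training text into sequences that never appeared verbatim – the same basic property, at miniature scale, that lets large models produce novel content.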

Some of the key benefits of generative AI in a business context include:

  • greater access to knowledge
  • contextualized content over time
  • automation of repetitive or time-consuming tasks typically performed by humans
  • efficiencies in dealing with large amounts of data quickly and accurately
  • personalization of content.

The risks and challenges of generative AI

Generative AI can have several risks, of which the most prominent are:

  • Bias, if a model is trained on biased datasets and therefore reproduces that bias in generated content
  • Misuse if it is not governed properly to avoid fake, malicious, and harmful content that may unnecessarily spread misinformation
  • Legal and ethical issues over ownership, copyright, privacy, and data protection
  • Unintended consequences

The challenges of generative AI can be widespread – applying broadly across multiple industries – or specific to a sector or use case. The most common widespread challenges include:

Security and privacy:

  • Generated content could be used to breach security or invade privacy
  • Individuals’ ability to request data removal may be limited
  • Collected data may lack mandated age consent
  • The right to be forgotten may be disregarded, since models cannot easily unlearn data
  • Systems may not conform with data privacy laws and regulations

Information management and leakage:

  • The tendency of current generative AI models to hallucinate responses, requiring human verification and monitoring
  • When using public models, the potential for employees to disclose confidential information on clients or business strategy
  • Copyright of text/code
  • Interpretability and transparency, e.g., how outputs will be used, source origination traceability and referencing

Specific sector challenges are nuanced and can include:

  • Appetite for innovation and the ability to integrate with legacy technologies
  • Data source referencing
  • Lack of local regulatory, supervisory guidance and acceptability for the use of such innovation, especially for highly regulated sectors
  • Ensuring generated content is unique, accurate, truthful, and fit for purpose.

Why the excitement and is it worth it?

The excitement around generative AI has undoubtedly come from the immediate results of ‘how to’ or ‘create’ style requests, the succinct manner in which responses are crafted, and its capacity for automation. And the more it is queried and explored, the more thought is being given to what generative AI is good for today. In a wider business context, many industries can’t fully answer this question yet. In some, though, such as financial services and AML compliance, companies are seeing the potential and getting closer to an answer while balancing risks against benefits, establishing baselines, and defining outcome measures for success.

So, is it worth it? Yes. We are no strangers to navigating the negative and positive impacts of innovation – we’ve done it before with big innovations such as the internet and text messaging/smartphones. Generative AI is another significant evolution. Our past is helping us understand our future – we know the risks and challenges of generative AI and, more widely, of machine learning and of open-source and proprietary models, and we are working to solve them. As long as we are realistic and don’t overestimate what generative AI can do in a given use case, we’ll see faster adoption, shorter model learning timeframes, and quicker human-validated outputs that foster greater trust in the technology. We’ll also realize the benefits faster, especially in use cases where generative AI automates repetitive and/or time-consuming tasks, freeing workers to do the things humans are good at!

Is generative AI a threat? Not necessarily. We should be cautious given the risks and challenges noted above, but we should be even more excited about the opportunities it brings. It’s better viewed as augmented intelligence: models that scale content generation and help make business processes more efficient.

For example, a few successful use cases of generative AI in financial crime detection and prevention are:

  • Digital investigators that use a combination of proprietary and open-source generative AI models to collate relevant data into meaningful case-file information on a customer, a transaction, or groupings of behavioral patterns, or to help with ID verification in know your customer (KYC) processes
  • Digital writers who prepare suspicious activity report (SAR) narratives using data sources held internally in an organization, formatted to the template of the specific filing country
  • Adverse media agents who use focality models with generative AI to produce consolidated adverse media results on counterparties for due diligence activities
  • AML analysts and digital workers collaborating to optimize transaction monitoring cases by refining parameters and thresholds to increase efficiency and pattern detection

While the success of these examples depends on implementation and model learning, their impact is felt in spades: reducing time spent on repetitive and time-consuming tasks, improving process and procedure efficiency, and refocusing resources on higher-risk, higher-skill tasks that require attention.

Taking control of the impacts of generative AI

This is the responsibility of those adopting the technology. Though not easy to design and solve upfront, the impacts can be managed through a variety of mechanisms that fit a company’s risk appetite, innovation culture, and digital optimization journey. For example:

  • establishing guidelines on using generative AI, including which sources can be used, what content can be generated in your company, what constitutes harmful content, and what constitutes successful content
  • a model governance and data ethics framework to govern input data feeds and output uses
  • baselining information that is already considered misinformation in high-risk areas
  • defining and applying toxicity filters
  • a framework for trust and accountability.
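As an illustration of the toxicity-filter mechanism, here is a minimal Python sketch of an output-side filter. The blocklist, scoring method, and threshold are placeholder assumptions of mine; a production system would use a trained classifier governed by the kind of framework described above.

```python
# Placeholder blocklist -- a real deployment would replace this crude
# word matching with a trained toxicity classifier and policy review.
BLOCKED_TERMS = {"slur_placeholder", "threat_placeholder"}

def toxicity_score(text):
    """Crude score: the fraction of words appearing on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in BLOCKED_TERMS)
    return flagged / len(words)

def apply_filter(generated_text, threshold=0.0):
    """Withhold generated content whose score exceeds the policy threshold."""
    if toxicity_score(generated_text) > threshold:
        return "[content withheld pending human review]"
    return generated_text
```

The key design point is that the filter sits between the model and the end user, so content exceeding the policy threshold is routed to human review rather than published automatically.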

We can be confident that as generative AI continues to mature, the identifiable risks, challenges, and impacts will actively be thought through, addressed, and shared as a community. Balancing these with the benefits of creativity, personalization, productivity, predictive insights, and scalability will serve us well as we embrace a new technology revolution.

To learn more about how AI can enrich your financial crime pain points, contact SymphonyAI Sensa-NetReveal.

About the author

Charmian Simmons is a financial crime and compliance expert at SymphonyAI Sensa-NetReveal. She has over 20 years of experience in the financial sector across risk management, financial crime, internal controls, and IT advisory. She is responsible for providing practitioner expertise, thought leadership, and analyzing key policy/regulatory/cultural/technology drivers transforming the compliance market. Charmian is CAMS, CRMA, CDPSE, and CISA certified.
