Using AI? The Rules of Effective Compliance Still Apply


Artificial intelligence (AI) has emerged as a powerful tool in compliance and technology. From automating routine tasks to analyzing large data sets, AI is revolutionizing how organizations work and adhere to regulatory standards.

Using AI in compliance effectively means embracing its capabilities while also understanding its risks and limitations.

As MCO’s CEO Brian Fahey says, AI is a dominant topic across all industries, including compliance. He notes that AI has been part of compliance software for some time, particularly natural language processing (NLP) used to interpret the text of regulations. Fahey expects the use of NLP to continue, while the use of Generative AI and Large Language Models to help compliance will accelerate.

AI in compliance can drive efficiency and effectiveness 

  • AI systems can quickly process large amounts of data, significantly reducing the time and effort required for routine tasks

  • AI can identify patterns and anomalies that might indicate potential compliance issues

  • AI can reduce false positives

  • AI can detect security threats and malicious activity

  • AI can reduce costs by reducing the need for manual work and allowing firms to allocate resources more effectively and focus on strategic initiatives

  • AI can minimize the risk of human error and ensure that compliance procedures are consistently applied across the firm
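
To make the pattern- and anomaly-detection point above concrete, here is a minimal, illustrative sketch: a simple z-score check that flags transactions deviating sharply from the historical mean. The data, threshold, and function name are hypothetical, and production surveillance systems use far more sophisticated models; this only shows the basic idea.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Hypothetical trade amounts: one outsized trade among routine ones
trades = [100, 105, 98, 102, 97, 5000, 101, 99]
print(flag_anomalies(trades))  # prints [5], the index of the 5000 trade
```

A real system would also weigh context (counterparty, timing, instrument) rather than a single statistic, which is where the false-positive reduction mentioned above comes in.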



AI in compliance carries new and evolving risks 

Any evolving technology brings with it a level of risk. But because artificial intelligence is relatively new and its capabilities are growing at breakneck speed, the potential risk can be even more significant: AI is transforming compliance while also creating new types of risk.

  • Decisions made by AI can be opaque and lack the nuanced understanding that is part of human judgment—a key element when interpreting both regulations and employee behaviors 
  • Results can show bias or model drift if the data used is biased or incorrect or if circumstances have changed and the modeling hasn’t caught up 
  • Reliance on AI can create a false sense of security 
  • Technology built on AI that was rushed to market can have performance and quality issues 
  • AI systems can be targets for cyberattacks, potentially compromising sensitive employee and compliance data 

The authors of the MIT Center for Information Systems Research briefing “Building AI Explanation Capability for the AI-Powered Organization” point out that “AI model results are not definitive. Treating them as such can be risky, especially if they are being applied to new cases or contexts.”

 

Solid data remains critical to effective compliance   

“AI is a tool to get things done. To use it properly and generate value, organizations need the right capabilities — including a good understanding of data.” 

—AI Is Everybody’s Business | MIT Center for Information Systems Research 

 

AI systems are only as good as the data they process. If they ‘learn’ from skewed, incomplete or biased data, their results and conclusions will be just as problematic.

Fahey also reinforces the importance of data, noting that the most challenging part is often getting all the right information consolidated in one place.  

 
Whether you’re using AI or not, the rules of effective compliance still apply 

Regulators may scrutinize organizations that rely heavily on AI for compliance, especially if there are concerns about transparency and accountability.  

As Keith Pyke, MCO’s Director of Solution Sales, points out during the webinar Maximizing Control Effectiveness: Identifying and Overcoming Gaps in Compliance Surveillance, regulators won’t be happy with “black box” compliance. They will want to understand why firms took a particular course of action. Can you explain it? Can you show the data behind it?  

 

Regulations are also evolving 

Given the rapid pace of development and deployment of artificial intelligence capabilities across business and society, governments and regulators are focusing on developing AI policy and guidelines. 

  • On August 1, 2024, the European Union’s Artificial Intelligence Act came into force, providing a landmark framework that classifies AI applications based on potential levels of risk.  
  • In June of 2024, the US and Singapore conducted a joint Roundtable on Artificial Intelligence discussing AI principles and objectives, safety and developing standardized risk management frameworks across jurisdictions.  

Artificial intelligence capabilities will only continue to grow exponentially. Taking a measured and strategic approach to implementing them will help compliance teams maximize the benefits and minimize the risks.

MCO helps firms achieve streamlined and defensible employee, transactional and third party compliance and effective oversight of compliance obligations. Interested in learning more? Contact us for a demo today!