
New AI Regulations, New Mindset

As we enter a new era of AI regulation, financial institutions can prepare for stricter oversight by building a trusted data foundation.

Simon Axon
June 24, 2024 · 3 min read

Banks and financial institutions were early pioneers in the use of AI. Today, they continue to lead, exploring new use cases aimed at revolutionizing service delivery and operational efficiency. But as regulators develop new rules for AI, including generative AI, the industry is facing new questions about how to meet future obligations. In the face of an evolving regulatory landscape, financial services providers need to prepare now for stricter oversight. This will require a new mindset—one focused on AI governance and transparency—to support future compliance while promoting growth, innovation, and differentiation. 

Job number one: Understand risk levels 

Jurisdictions vary widely in their approaches to AI regulation, and progress is uneven. The EU was the first to establish a comprehensive regulatory framework with its Artificial Intelligence Act, which takes a risk-based approach to balancing protection for citizens, guidance for businesses, and support for innovation. Meanwhile, the U.S. is adopting a more decentralized, “principles-based” model, and other nations are pursuing a patchwork of strategies and timetables.

Even for institutions that aren’t subject to the EU’s regulations, job number one will be to identify the risk profile of each individual AI model in use and planned across the business. 

The EU AI Act defines four levels of risk: unacceptable, high, limited, and minimal. AI use cases in the unacceptable risk category are prohibited, while those falling under minimal risk carry no special requirements. In practice, most AI use cases in financial services will fall under high or limited risk. High-risk AI includes anything that could present a significant risk to health, safety, or fundamental rights. In financial services, use cases like credit scoring and insurance risk profiling will likely be considered high risk. By contrast, AI used to power anti-fraud applications, parse customer comments, or automate routine tasks is largely regarded as limited risk.
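To make that first job concrete, the sketch below shows one way an institution might record its AI use cases against these risk tiers in a simple model inventory. It is a minimal illustration only: the use cases, owners, and tier assignments are assumptions for the example, not legal classifications under the Act.

```python
from dataclasses import dataclass
from enum import Enum

# EU AI Act risk tiers (labels only; classifying a real use case
# requires legal and compliance review).
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    owner: str
    risk_tier: RiskTier

# Hypothetical inventory of AI use cases at a bank.
inventory = [
    AIUseCase("credit_scoring", "retail_lending", RiskTier.HIGH),
    AIUseCase("insurance_risk_profiling", "insurance", RiskTier.HIGH),
    AIUseCase("fraud_detection", "payments", RiskTier.LIMITED),
    AIUseCase("customer_comment_parsing", "customer_service", RiskTier.LIMITED),
]

# High-risk models carry the heaviest obligations, so surface them first.
high_risk = [uc.name for uc in inventory if uc.risk_tier is RiskTier.HIGH]
print("High-risk use cases requiring full documentation:", high_risk)
```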

Job number two: Build transparency

Financial institutions must also build transparency around AI to meet regulatory obligations and earn customers’ trust. To demonstrate the fairness of AI decisions, organizations must use high-quality datasets for training and inference. They must develop robust, accurate models and reliable methods for tracking drift. It’s also critical to ensure that personal data is kept private and secure when used by AI models. In addition, AI regulations will overlap with other regulations, including the Digital Operational Resilience Act (DORA), as well as duty-of-care and ethical considerations for both individual and business customers.
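As one illustration of drift tracking, the sketch below computes the population stability index (PSI), a common way to compare a model’s score distribution at training time with what it sees in production. The data and the rule-of-thumb threshold are illustrative assumptions, not requirements set by any regulation.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI): a simple, widely used drift measure
    comparing the training-time score distribution ('expected') with the
    distribution observed in production ('actual')."""
    # Bin edges are taken from the training distribution.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Illustrative check: scores drifting upward between training and production.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.50, 0.10, 10_000)
live_scores = rng.normal(0.55, 0.12, 10_000)
psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}")  # values above ~0.25 are often treated as material drift
```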

All of this boils down to one question: How well do you know your AI models? Supervisory authorities (e.g., the European Central Bank) will have the power to ask to review specific AI models and the decisions they make. They’ll want to see the data a given model was trained on and the inputs it used to make decisions. They’ll ask for evidence of human oversight and model management, and they’ll request automated logs showing performance. Given the “black box” nature of AI decision-making, the ability to present this information will be essential to complying with AI regulations.
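To show the kind of evidence involved, here is a minimal sketch of a structured decision log that captures the model version, inputs, outcome, and human reviewer for each decision. The field names and the log_decision helper are hypothetical; real audit logging would follow the institution’s own standards and retention policies.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Write one auditable JSON record per AI-assisted decision.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO, format="%(message)s")

def log_decision(model_id: str, model_version: str, features: dict,
                 decision: str, reviewer: Optional[str] = None) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "features": features,        # inputs the model actually used
        "decision": decision,        # outcome communicated to the customer
        "human_reviewer": reviewer,  # evidence of human oversight, if any
    }
    logging.info(json.dumps(record))

# Example: a credit-scoring decision reviewed by an underwriter.
log_decision("credit_scoring", "2.3.1",
             {"income": 48000, "debt_ratio": 0.31},
             decision="approved", reviewer="underwriter_017")
```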

Job number three: Develop a strong data foundation

Financial institutions are accustomed to transparency and governance in the context of standard banking regulations, but it’s more challenging with AI because of the volume, variety, and velocity of the data involved. For AI to be trusted, the data must be trusted. To deliver AI that’s fair, safe, and secure, an organization must start with a trusted data foundation and a culture that nurtures data fluency.

Teradata’s robust, flexible, and scalable analytics and data platform provides the building blocks for Trusted AI. It harmonizes data across the enterprise, providing a single version of the truth and allowing users to query data across disparate sources. It provides the ability to prepare, store, and share reusable data features for model building. It also includes features for monitoring, maintaining, refreshing, and retiring models at enterprise scale. All these capabilities not only enable fast and effective deployment of AI at scale but also provide the transparency and governance needed to meet strict regulatory requirements and build trust with customers.
 
As we enter a new era of AI regulation, the financial institutions that prepare now for stricter AI oversight will be the ones that benefit later. An ounce of AI governance today will prevent a pound of regulatory remediation tomorrow. By adopting a data-centric mindset and building a strong data foundation, financial institutions will be well positioned to embrace AI’s opportunities while increasing transparency with regulatory agencies and customers alike. If you’re interested in learning how Teradata can help you leverage AI regulation to strengthen your company’s reputation, please get in touch.
 


About Simon Axon

Simon Axon leads the Financial Services Industry Strategy & Business Value Engineering practices across EMEA and APJ. His role is to help our customers drive more commercial value from their data by understanding the impact of integrated data and advanced analytics. Prior to his current role, Simon led the Data Science, Business Analysis, and Industry Consultancy practices in the UK and Ireland, applying his diverse experience across multiple industries to understand customers' needs and identify opportunities to leverage data and analytics to achieve high-impact business outcomes. Before joining Teradata in 2015, Simon worked for Sainsbury's and CACI Limited.

