
Regulating artificial intelligence in the financial sector

The world's first comprehensive framework for artificial intelligence (hereinafter "AI") – Regulation (EU) 2024/1689 of the European Parliament and of the Council, or the Artificial Intelligence Act (hereinafter the "AI Act") – took effect on 1 August 2024 and is directly applicable in all Member States of the European Union (hereinafter the "EU"). The AI Act harmonises the rules for the use of artificial intelligence in various areas, including financial services, by laying down a uniform legal framework across the EU. It does so in recognition of the fact that the use of artificial intelligence enhances business process automation and productivity, optimises resources, improves customer experience and data quality, increases data processing speed, and boosts overall competitiveness.

The AI Act uses a risk-based approach and divides all artificial intelligence systems (hereinafter "AI systems") into four different risk categories:

  • unacceptable risk: AI systems that pose an unacceptable risk to fundamental rights and EU values are prohibited under Article 5 of the AI Act;
  • high risk: a set of requirements and obligations applies to AI systems that pose a high risk to health, safety, and fundamental rights. These systems are classified as "high-risk" systems in accordance with Article 6 of the AI Act in conjunction with Annexes I and III of the AI Act;
  • risk of opacity: transparency obligations established in Article 50 of the AI Act apply to AI systems that pose a risk of opacity;
  • minimal risk or no risk: AI systems that pose a minimal risk or no risk are not regulated. However, their providers and deployers can voluntarily adhere to codes of conduct of good practice (Article 95 of the AI Act).
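
As a purely illustrative aid, the Python sketch below models these four tiers as a simple lookup. The example use cases and their tier assignments are assumptions made for demonstration, not classifications taken from the Regulation; the actual classification of any system always depends on Articles 5, 6, and 50 and the Annexes of the AI Act.

```python
# Purely illustrative: the AI Act's four risk tiers as a simple lookup.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "requirements and obligations (Article 6, Annexes I and III)"
    TRANSPARENCY = "transparency obligations (Article 50)"
    MINIMAL = "unregulated; voluntary codes of conduct (Article 95)"

# Hypothetical examples of how financial-sector use cases might map to tiers;
# each real system must be assessed against the Regulation itself.
EXAMPLE_CLASSIFICATION = {
    "creditworthiness scoring of natural persons": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.TRANSPARENCY,
    "spam filtering of incoming e-mail": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value}")
```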

The AI Act establishes rules on the development, trade, deployment, and use of AI systems, thereby reducing the potential risks AI systems pose in the area of financial services. These rules include:

  • uniform rules for the placing on the market, the putting into service, and the use of AI systems in the EU;
  • prohibitions of certain harmful AI practices;
  • requirements for high-risk AI systems and obligations for their providers, deployers, and other operators;
  • transparency obligations for certain AI systems;
  • uniform rules for general-purpose AI models;
  • market surveillance and enforcement mechanisms, including coordination between the EU and the Member States;
  • measures to support innovation, particularly for small and medium-sized enterprises and start-ups.

Financial market participants subject to AI usage requirements

The AI Act also provides for a series of harmonised rules and tasks to be implemented at the national level, so that the requirements referred to therein are introduced into national legal acts by 2 August 2026. In Latvia, the Law on Digital Operational Resilience and the Use of Artificial Intelligence in the Financial Market came into force on 1 October 2025. In accordance with Article 2 of Regulation (EU) 2024/1689, the AI usage requirements set out in the AI Act apply to the following financial market participants:

  • alternative investment fund manager;
  • issuer of asset-referenced tokens;
  • ancillary insurance intermediary;
  • insurance undertaking;
  • insurance intermediary;
  • central counterparty;
  • central securities depository;
  • trade repository;
  • data reporting service provider;
  • electronic money institution;
  • investment firm;
  • investment management company;
  • crowdfunding service provider;
  • account information service provider;
  • credit institution;
  • credit rating agency;
  • crypto-asset service provider;
  • critical benchmark administrator;
  • payment institution;
  • reinsurance undertaking;
  • reinsurance intermediary;
  • private pension fund;
  • trading venue;
  • securitisation repository;
  • electronic money institution within the meaning of Section 1, Clause 21, Sub-clause "b" of the Law on Payment Services and Electronic Money;
  • financial holding company, mixed financial holding company, parent financial holding company in a Member State, EU parent financial holding company, parent mixed financial holding company in a Member State, EU parent mixed financial holding company;
  • payment institution within the meaning of Section 1, Clause 2, Sub-clause "b" of the Law on Payment Services and Electronic Money;
  • small and non-interconnected investment firm;
  • small private pension fund;
  • credit union.

Explanation of the AI system definition

The Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 include an explanation of the AI system definition within the meaning of Article 3(1) of the AI Act.

Article 3(1) of the AI Act defines an AI system as follows:

"AI system" means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of an AI system covers a wide range of systems. The determination of whether a software system constitutes an AI system should rely on its particular architecture and functionality as well as the elements of the definition referred to in Article 3(1) of the AI Act.
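
To make those definitional elements easier to track, the following Python sketch records them as a simple checklist. The element names are paraphrased from Article 3(1); whether a concrete system satisfies each element is a legal assessment, and the credit-scoring example at the end is hypothetical.

```python
# Illustrative checklist of the Article 3(1) definitional elements.
from dataclasses import dataclass

@dataclass
class AiSystemChecklist:
    machine_based: bool           # a machine-based system
    varying_autonomy: bool        # designed to operate with varying levels of autonomy
    may_adapt: bool               # may exhibit adaptiveness after deployment
    has_objectives: bool          # operates for explicit or implicit objectives
    infers_from_input: bool       # infers from the input it receives how to generate outputs
    generates_outputs: bool       # predictions, content, recommendations, or decisions
    influences_environment: bool  # outputs can influence physical or virtual environments

    def all_elements_present(self) -> bool:
        """True only if every definitional element is satisfied."""
        return all(vars(self).values())

# Hypothetical example: a credit-scoring model assessed against the elements.
scoring_model = AiSystemChecklist(True, True, True, True, True, True, True)
print("Meets all definitional elements:", scoring_model.all_elements_present())
```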

Explanation of prohibited artificial intelligence practices

Prohibited practices are explained in detail in the Commission Guidelines on prohibited artificial intelligence practices. For instance, financial credit scoring systems used by creditors or credit information agencies to assess the creditworthiness of customers fall outside the scope of Article 5(1)(c) of the AI Act if the assessment complies with consumer protection regulations and ensures fair treatment of consumers.

Examples of AI system and solution classifications in the financial sector 

High risk:

In the financial sector, many artificial intelligence solutions, such as creditworthiness assessment, fraud detection, customer behaviour prediction, and automated advisory services, are considered high-risk. These systems are subject to strict requirements regarding data governance, transparency, documentation, human oversight, and safety. In addition, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons' access to financial resources or essential services, such as housing, electricity, and telecommunication services.

AI systems intended to be used for risk assessment and pricing in relation to natural persons for health and life insurance can also have a significant impact on persons' livelihoods and, if not duly designed, developed, and used, can infringe their fundamental rights and lead to serious consequences for people's lives and health, including financial exclusion and discrimination.

Low risk:

AI systems provided for by EU law for the purpose of detecting fraud in the offering of financial services and for prudential purposes to calculate the capital requirements of credit institutions and insurance undertakings should not be considered high-risk.

AI systems specifically intended for gathering financial intelligence or carrying out administrative tasks related to analysing information in the area of anti-money laundering should not be classified as high-risk AI systems.

Supervision of the AI Act in the area of financial services

The AI Act complements the existing legal acts on financial services. This means that even in cases where financial market participants use AI systems, these solutions must comply with both the sector-specific regulatory requirements and the requirements established in the AI Act.

Latvijas Banka monitors the compliance with the AI usage requirements pursuant to Article 74(6) of Regulation (EU) 2024/1689. This applies to high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by the EU legal acts on financial services, insofar as the placing on the market, putting into service, or the use of the AI system is in direct connection with the provision of those financial services.

Latvijas Banka provides the European Central Bank with information which is obtained from monitoring compliance with artificial intelligence usage requirements and could be useful for the prudential supervision of credit institutions.

Obligations of financial market participants

The AI Act focuses on early risk identification, strengthened accountability, swift action, and public trust in AI technology. According to Article 73 of the AI Act, providers of high-risk AI systems will be obliged to report serious incidents to the national competent authorities. To ensure compliance with the requirements of the AI Act in the financial sector, financial market participants must inform Latvijas Banka about incidents and all significant changes, including any circumstances that may negatively affect the future operations of the financial entity or any changes in the information they have provided to Latvijas Banka.

Article 73 of the AI Act "Reporting of serious incidents" specifies the following reporting deadlines:

  • reporting a widespread infringement within 2 days;
  • reporting within 2 days in the event of a threat to the management and operation of critical infrastructure;
  • reporting within 10 days in the event of the death of a person;
  • reporting within 15 days in the event of a threat to personal health;
  • reporting within 15 days in the event of an environmental threat;
  • reporting within 15 days in the event of a threat to property.

The period for the reporting referred to in the first paragraph shall take account of the severity of the serious incident.

Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial report that is incomplete, followed by a complete report.

Following the reporting of a serious incident pursuant to the first paragraph, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action.
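
For illustration only, the Python sketch below turns the reporting windows listed above into a simple deadline lookup. The category labels are shortened paraphrases and the function is a hypothetical helper; the authoritative time limits are those set out in Article 73 itself.

```python
# Illustrative lookup of the maximum reporting windows listed above.
from datetime import date, timedelta

REPORTING_DEADLINE_DAYS = {
    "widespread infringement": 2,
    "threat to critical infrastructure": 2,
    "death of a person": 10,
    "threat to personal health": 15,
    "environmental threat": 15,
    "threat to property": 15,
}

def report_due_by(category: str, became_aware: date) -> date:
    """Latest date by which the serious-incident report must be submitted."""
    return became_aware + timedelta(days=REPORTING_DEADLINE_DAYS[category])

print(report_due_by("death of a person", date(2026, 9, 1)))  # 2026-09-11
```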

The approach of Latvijas Banka and the European Union is also aligned with international initiatives, including the OECD's AI Incidents Monitor and Common Reporting Framework. Although the reporting rules will only become applicable in August 2026, the draft guidance and reporting template can already be downloaded; they will help AI system providers prepare for the new requirements in good time. The guidance clarifies definitions, offers practical examples, and explains how the new requirements relate to other legal acts.

Application deadlines

The AI Act came into effect on 1 August 2024 and will apply in full as of 2 August 2026, by which date the requirements referred to therein must also be introduced into national legal acts. However, its implementation and application will occur gradually:

  • the application of Chapter I "General provisions – subject matter, scope, definitions, and AI literacy" and Chapter II "Prohibited AI practices" commenced on 2 February 2025;
  • the application of Section 4 "Notifying authorities and notified bodies" of Chapter III "High-risk AI systems", Chapter V "General-purpose AI models", Chapter VII "Governance", Chapter XII "Penalties" (excluding Article 101 "Fines for providers of general-purpose AI models"), and Article 78 "Confidentiality" commenced on 2 August 2025;
  • Paragraph 1 of Article 6 "Classification rules for high-risk AI systems" and the corresponding obligations provided for in the AI Act (a third-party conformity assessment) shall be applied as of 2 August 2027;
  • by 2 August 2027, providers (pursuant to Article 111 "AI systems already placed on the market or put into service and general-purpose AI models already placed on the market") must take the necessary measures to comply with the obligations established in the AI Act regarding general-purpose AI models placed on the market before 2 August 2025;
  • by 31 December 2030, compliance with the AI Act shall be achieved for the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X to the AI Act and placed on the market or put into service before 2 August 2027;
  • providers and deployers shall take the necessary steps to comply with the requirements of the AI Act and obligations regarding high-risk AI systems intended to be used by public authorities (pursuant to Article 111) by 2 August 2030;
  • the AI Act shall apply to operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026 only if, as from that date, those systems are subject to significant changes in their designs (pursuant to Article 111).
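
As a purely illustrative summary of this timeline, the Python sketch below lists which milestones have passed on a given day. The labels are shortened paraphrases of the bullet points above, not authoritative citations of the AI Act.

```python
# Illustrative lookup over the application milestones listed above.
from datetime import date

APPLICATION_MILESTONES = [
    (date(2025, 2, 2), "Chapters I and II (general provisions, prohibited AI practices)"),
    (date(2025, 8, 2), "Chapter III Section 4, Chapters V, VII, XII and Article 78"),
    (date(2026, 8, 2), "the AI Act in general"),
    (date(2027, 8, 2), "Article 6(1) classification rules; pre-2025 general-purpose AI models"),
    (date(2030, 8, 2), "high-risk AI systems used by public authorities (Article 111)"),
    (date(2030, 12, 31), "AI components of large-scale IT systems (Annex X)"),
]

def applicable_on(day: date) -> list[str]:
    """Milestones whose application date is on or before the given day."""
    return [label for milestone, label in APPLICATION_MILESTONES if milestone <= day]

print(applicable_on(date(2026, 9, 1)))
```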

Support for innovation in financial services

The AI Act includes requirements for supporting AI innovation in accordance with the goal set by the European Council for the EU – to promote a human-centric approach to AI and to become a global leader in developing safe, trustworthy, and ethical artificial intelligence.

Artificial intelligence is one of the driving forces behind innovation, with the potential to reshape the financial services sector and the services it provides. Artificial intelligence solutions not only provide better methods for processing data and enhancing customer experience but also simplify, accelerate, and redefine traditional processes, making them more efficient. For companies intending to introduce innovative financial services based on artificial intelligence technology, Latvijas Banka provides the opportunity to test them in its Regulatory Sandbox before offering the new products on the financial market.
