Implementing the AI Act: What financial institutions need to know
Published on 10 December 2025
AI is reshaping financial services, enabling new capabilities in fraud detection, credit scoring, customer support, operational efficiency and other areas. It also introduces challenges linked to governance, risk management and accountability, which become increasingly important as the EU Artificial Intelligence Act begins to apply. To support the sector, the ABBL and the CSSF organised a dedicated session on 9 December, gathering nearly 200 participants at the Chamber of Commerce for a practical discussion on implementation.
Summary
AI will shape the future of our financial sector. Our role as ABBL is to help members implement the right safeguards so that innovation remains safe, efficient and rooted in trust.
Ananda Kautz
Member of the Management Board of the ABBL
Innovation must go hand in hand with governance
CSSF experts outlined how extensively AI is already used across supervised entities, including process automation, information search, AML monitoring, cybersecurity and customer support. Throughout the session, speakers emphasised that while AI offers considerable potential, institutions must ensure that risks are properly identified, assessed and managed, with appropriate transparency and human oversight.
Institutions remain responsible for the AI systems they deploy, whether developed internally or provided by third parties. This applies to both traditional machine-learning applications and emerging generative AI tools.
Understanding the AI Act: risk-based rules and phased applicability
The session offered a structured overview of the AI Act’s requirements and timelines, including aspects that may evolve under the recently proposed Digital Omnibus Package, which is expected to refine several provisions in the coming months. Key points highlighted included:
- the Act’s risk-based approach, with strict obligations for high-risk systems such as creditworthiness and credit scoring models
- early obligations already applicable, including rules on prohibited practices and AI literacy
- requirements relating to documentation, model robustness, monitoring, transparency and human oversight
- areas where further clarification at EU level is expected, including the definition of an AI system and the framework for incident reporting
Participants also discussed interactions between the AI Act and existing EU financial services legislation, and how supervisory practices may evolve as guidance is published.
Integrating AI into existing ICT and operational risk frameworks
Speakers encouraged financial institutions to build on their existing ICT and DORA frameworks, rather than approaching AI as a standalone discipline. Discussions emphasised the importance of:
- clearly assigning risk ownership
- multidisciplinary project teams combining IT, data science, business, legal, compliance and risk management
- documenting model behaviour and detecting drift
- maintaining governance processes that are proportionate to the scale and importance of each use case
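One of the points above, documenting model behaviour and detecting drift, can be illustrated with a common statistical check. The sketch below computes a Population Stability Index (PSI), one widely used way to compare a feature's current distribution against the distribution seen at validation time; the function name, sample data and alert threshold are illustrative assumptions, not anything prescribed by the AI Act or the speakers.

```python
import math

# Illustrative drift check: Population Stability Index (PSI) compares a
# recent sample against a baseline captured at model validation time.
# Names, data and the 0.25 threshold below are assumptions for illustration.

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a baseline sample and a recent sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.9]
recent = [0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 1.0]
psi = population_stability_index(baseline, recent)

# A common (purely illustrative) rule of thumb: PSI above 0.25 warrants review.
if psi > 0.25:
    print(f"Drift alert: PSI = {psi:.2f}")
```

In a governance framework, a check like this would be run on a schedule, logged as part of the model documentation, and escalated to the assigned risk owner when thresholds are breached.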
Institutions were encouraged to begin with an AI use-case inventory, including materiality and risk classification.
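A use-case inventory of the kind suggested above could take many forms; the following is a minimal sketch of one possible entry structure. The field names, risk tiers and example entries are assumptions for illustration only; the AI Act itself determines which systems are high-risk (creditworthiness assessment being one example cited during the session).

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers loosely mirroring the AI Act's risk-based approach.
class AIActRiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AIUseCase:
    name: str
    description: str
    owner: str                  # clearly assigned risk ownership
    third_party_provider: bool  # institution remains responsible either way
    materiality: str            # e.g. "low" / "medium" / "high"
    risk_tier: AIActRiskTier

inventory = [
    AIUseCase(
        name="Credit scoring model",
        description="Scores retail loan applications",
        owner="Head of Retail Credit Risk",
        third_party_provider=False,
        materiality="high",
        risk_tier=AIActRiskTier.HIGH_RISK,  # creditworthiness is high-risk
    ),
    AIUseCase(
        name="Internal document search assistant",
        description="Generative AI search over internal policies",
        owner="Head of IT",
        third_party_provider=True,
        materiality="medium",
        risk_tier=AIActRiskTier.MINIMAL_RISK,
    ),
]

# High-risk entries attract the heaviest documentation and oversight duties.
high_risk = [uc.name for uc in inventory if uc.risk_tier is AIActRiskTier.HIGH_RISK]
print(high_risk)  # ['Credit scoring model']
```

Even a simple structure like this makes it possible to sort the inventory by materiality and risk tier, so that governance effort stays proportionate to each use case, as the speakers recommended.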
National implementation and supervisory expectations
The draft Luxembourg AI Act implementation law foresees a multi-authority structure, with the CSSF acting as the Market Surveillance Authority for AI systems deployed by supervised financial entities, and the CNPD acting as the central coordinating authority.
Some elements of the national draft go further than the minimum EU requirements, meaning institutions should monitor developments closely as the legislative process evolves.
Speakers highlighted the role of the CSSF Innovation Hub, which provides a channel for institutions to ask questions, share challenges and maintain an open dialogue with the supervisor as they prepare for compliance.
Tools to support implementation
Participants were introduced to the European Commission’s AI Act Single Information Platform, which brings together:
- the AI Act Explorer
- the AI Act Compliance Checker
- the AI Act Service Desk
These resources will be updated as further implementing and delegated acts are published.
A strong focus on trust, skills and collaboration
In her closing remarks, Galina Miroshnichenko emphasised the need for collective readiness across Luxembourg’s financial sector.
“The opportunity for the financial sector is clear, but so is the need for proportionate governance, testing and human oversight. Through the ABBL AI Working Group, we are building shared understanding, mapping use cases, discussing challenges and engaging with authorities on behalf of our members.”
The session also underlined the importance of developing both technical and non-technical skills to support responsible AI adoption.
How ABBL supports its members
The ABBL continues to help members prepare for AI adoption and compliance through:
- the ABBL AI Working Group (40+ institutions);
- analysis of use cases and risk considerations;
- training initiatives in partnership with the House of Training;
- collaboration with Luxembourg’s AI ecosystem, including CNPD, Luxinnovation, AI Factory, REMI initiative, academia and the fintech/regtech community;
- ongoing dialogue with the CSSF, including through the Innovation Hub.
Members who wish to stay involved or seek clarification are invited to contact ABBL’s Member Relations team.
Galina Miroshnichenko
Adviser – Payments & Digital, ABBL