AI-enabled fraud: evolving techniques and sector vigilance
Published on 23 February 2026
Artificial intelligence is increasingly used by cybercriminals to refine and scale existing fraud schemes. A recent Luxemburger Wort article largely reflected the ABBL's assessment of this evolving threat: AI does not create a completely new form of cybercrime. It strengthens what already exists, particularly social engineering, by making scams more credible, more personalised and easier to automate.
A qualitative shift in fraud techniques
At European level, payment fraud amounted to around €4.2 billion in 2024, according to the ECB. While the fraud rate remains low compared to overall transaction volumes, the qualitative shift is clear.
A growing share of losses stems from the manipulation of customers themselves. AI enhances this dynamic by enabling:
- highly convincing phishing emails written in flawless language
- cloned voices used in impersonation or “CEO fraud” schemes
- deepfake videos promoting fraudulent investment opportunities
Spear-phishing campaigns can now also be automated and deployed at scale. A familiar voice or recognisable face can no longer be treated as reliable proof of authenticity. Trust in digital identity is increasingly being targeted.
Acceleration of existing patterns
The impact of AI is best understood as an accelerator of existing social engineering techniques rather than a fundamentally new threat category.
AI is reshaping the fraud landscape primarily by accelerating and professionalising existing social engineering techniques. The challenge lies in preserving trust in digital interactions while continuously adapting our defensive capabilities.
Arnaud Clément
Head of Payments and Innovation, ABBL
A coordinated response
Luxembourg’s financial institutions continue to invest in real-time transaction monitoring, anomaly detection and strengthened identity verification processes. AI itself is increasingly deployed as a defensive tool.
The regulatory framework further reinforces resilience through:
- Strong Customer Authentication under PSD2
- Verification of Payee
- the Digital Operational Resilience Act (DORA), applicable since January 2025
Within the ABBL, dedicated working groups on cybersecurity, fraud and digitalisation facilitate structured exchanges between member institutions. This coordination supports a coherent and proactive response to emerging risks.
Prevention and awareness
AI-enabled fraud relies heavily on psychological pressure and perceived legitimacy. Technology alone cannot eliminate this risk.
Basic safeguards remain essential: financial decisions should not be taken under pressure, unusually high returns should be questioned, offers should be verified through official channels, and authentication credentials must never be shared.
Through the Fondation ABBL pour l’éducation financière, the ABBL co-initiated Luxembourg’s first national awareness campaign against online fraud. Strengthening public awareness remains a key component of the overall response.
AI will continue to shape both financial innovation and criminal tactics. Anticipation, coordination and education remain essential to safeguarding trust in the digital financial ecosystem.