Cutting through the noise
Financial services has more AI hype per square metre than any other industry. Every conference, every vendor pitch, every LinkedIn post promises that AI will revolutionise banking, insurance, and asset management.
Some of it is true. Much of it is premature. And a surprising amount of it is just rebranding existing analytics as "AI" to justify higher licence fees.
This article focuses on what UK financial services firms are actually deploying in 2026, what results they are seeing, and how the FCA's evolving approach to AI governance affects what you can and cannot do.
1. Fraud detection and prevention
The problem: Financial fraud is getting more sophisticated. Social engineering, synthetic identities, account takeover attacks, and payment fraud all continue to grow. Traditional rule-based detection systems catch known fraud patterns but miss novel ones.
What AI does: Machine learning models analyse transaction patterns in real time, identifying anomalies that rule-based systems miss. They consider hundreds of variables simultaneously - transaction amount, timing, location, device, recipient history, behavioural patterns - and score each transaction for fraud risk.
The real impact: AI-powered fraud detection typically reduces fraud losses by 40 - 60% compared to rule-based systems. Just as importantly, it reduces false positives by 50 - 70%. False positives matter because every legitimate transaction you block is a frustrated customer and a potential lost relationship.
What is working in the UK: Several UK challenger banks have moved entirely to AI-based fraud detection, processing millions of transactions per day with sub-second decision times. Traditional banks are increasingly layering AI on top of existing systems, using it to re-score transactions that rule-based systems flag.
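The scoring idea above can be illustrated with a minimal sketch. This is not how a production system works - real deployments use trained models over hundreds of variables - but it shows the core pattern: each signal contributes to a combined risk score, and the score drives an allow / review / block decision. All feature names, weights, and thresholds here are invented for illustration.

```python
# Illustrative transaction risk scoring. Each feature contributes a weighted
# signal (scaled 0..1), and the combined score drives the decision.
# Weights and thresholds are invented for illustration only.

FEATURE_WEIGHTS = {
    "amount_vs_typical": 0.35,  # how far the amount deviates from the customer's norm
    "new_device": 0.25,         # 1.0 if the device has never been seen before
    "unusual_hour": 0.15,       # 1.0 if outside the customer's usual active hours
    "new_recipient": 0.25,      # 1.0 if paying this recipient for the first time
}

def fraud_score(features: dict) -> float:
    """Combine per-feature signals into a 0..1 risk score."""
    return sum(FEATURE_WEIGHTS[name] * min(max(value, 0.0), 1.0)
               for name, value in features.items())

def decision(score: float) -> str:
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "review"
    return "allow"

tx = {"amount_vs_typical": 0.9, "new_device": 1.0,
      "unusual_hour": 0.0, "new_recipient": 1.0}
print(decision(fraud_score(tx)))  # a high-risk combination of signals
```

The point of the sketch is the false-positive trade-off: the "review" band is what lets you investigate borderline transactions instead of blocking legitimate customers outright.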
2. Credit risk assessment
The problem: Traditional credit scoring relies on a limited set of data points - credit history, income, employment status, existing debts. This works for people with established credit histories, but it poorly serves thin-file customers and does not adapt quickly to changing circumstances.
What AI does: Incorporates a broader range of data points - transaction patterns, spending behaviour, income stability over time, even the timing and consistency of bill payments - to build a more nuanced picture of creditworthiness.
The real impact: AI credit models typically improve prediction accuracy by 15 - 25% compared to traditional scorecards. They are particularly effective at identifying creditworthy customers that traditional models reject (the "false negatives" that represent lost lending opportunities).
The regulatory angle: The FCA cares deeply about fairness in credit decisions. AI credit models must be explainable - you need to be able to tell a declined applicant why they were declined, in terms they can understand. Black-box models that produce a score without explanation are not acceptable in the UK regulatory environment.
Building credit models that are both accurate and explainable requires genuine expertise in AI strategy and a thorough understanding of the regulatory framework.
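One way to reconcile accuracy with explainability is an additive scorecard: because the score is a sum of per-feature contributions, the largest negative contributions double as the decline reasons the FCA expects you to give. The sketch below illustrates the idea - the feature names, weights, and cutoff are invented, and real models are trained and validated, not hand-set.

```python
# Illustrative additive scorecard. The score is a sum of contributions, so the
# biggest negative contributions can be reported back as decline reasons.
# Features, weights, and the cutoff are invented for illustration.

WEIGHTS = {
    "months_of_stable_income": 0.8,
    "missed_bill_payments": -2.5,
    "credit_utilisation": -1.2,
    "years_of_credit_history": 0.5,
}
REASON_TEXT = {
    "missed_bill_payments": "recent missed bill payments",
    "credit_utilisation": "high use of existing credit limits",
    "months_of_stable_income": "limited evidence of stable income",
    "years_of_credit_history": "short credit history",
}
CUTOFF = 0.0

def assess(applicant: dict):
    """Return (approved, reasons); reasons are the two biggest negative drivers."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= CUTOFF:
        return True, []
    worst = sorted(contributions, key=contributions.get)[:2]
    return False, [REASON_TEXT[f] for f in worst]

approved, reasons = assess({
    "months_of_stable_income": 0.2,  # inputs pre-scaled to comparable ranges
    "missed_bill_payments": 0.8,
    "credit_utilisation": 0.9,
    "years_of_credit_history": 0.1,
})
print(approved, reasons)
```

A declined applicant here receives plain-language reasons tied directly to the model's arithmetic - exactly the property that pure black-box models lack.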
3. Customer onboarding - KYC and AML
The problem: Know Your Customer and Anti-Money Laundering checks are a regulatory requirement, but they are also a significant friction point in the customer journey. Manual KYC processes take days, require multiple document submissions, and lose customers at every step.
What AI does:
- Document verification - Reads and validates identity documents, cross-referencing against databases and checking for forgery indicators
- Facial matching - Compares selfies against ID photos with high accuracy
- Sanctions and PEP screening - Automated checks against sanctions lists, politically exposed persons databases, and adverse media
- Risk scoring - Assigns a risk level to each customer based on the totality of information gathered
- Ongoing monitoring - Continuously screens customers against updated sanctions lists and monitors for changes in risk profile
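The risk-scoring step can be sketched as a simple tiering function that combines the automated check results above into a risk level and an enhanced due diligence (EDD) flag. The rules here are invented for illustration - in practice they are set by a firm's financial crime framework, not hard-coded.

```python
# Illustrative KYC risk tiering: combine automated check results into a risk
# level plus an enhanced-due-diligence (EDD) flag. Rules invented for illustration.

def kyc_risk_tier(checks: dict) -> tuple:
    """checks: results of document, facial, sanctions, and PEP screening."""
    if checks["sanctions_hit"]:
        return "PROHIBITED", True        # always escalate to compliance
    if checks["pep_match"] or checks["adverse_media"]:
        return "HIGH", True              # enhanced due diligence required
    if not checks["document_verified"] or not checks["face_match"]:
        return "HIGH", True              # identity not established automatically
    return "LOW", False                  # eligible for straight-through onboarding

tier, needs_edd = kyc_risk_tier({
    "sanctions_hit": False, "pep_match": False, "adverse_media": False,
    "document_verified": True, "face_match": True,
})
print(tier, needs_edd)
```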
The real impact: AI-powered onboarding reduces KYC processing time from days to minutes for straightforward cases. High-risk cases are flagged for enhanced due diligence by human compliance teams. Drop-off rates during onboarding typically fall by 30 - 50%.
4. Portfolio management and investment
The problem: Analysing markets, identifying opportunities, and managing portfolio risk across thousands of securities and instruments generates more data than human analysts can process.
What AI does:
- Market analysis - Processes news, earnings reports, economic data, and alternative data sources to identify signals
- Portfolio optimisation - Continuously rebalances portfolios based on risk parameters, market conditions, and client objectives
- Sentiment analysis - Monitors social media, news, and analyst commentary to gauge market sentiment
- Scenario modelling - Runs thousands of scenarios to stress-test portfolios against various market conditions
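The scenario-modelling idea is essentially Monte Carlo simulation: generate thousands of hypothetical return paths and look at the tail. A minimal sketch, with portfolio weights, return, and volatility assumptions invented purely for illustration:

```python
# Illustrative stress test: simulate 10,000 one-year return outcomes for a
# two-asset portfolio and report the 5th-percentile result. All return and
# volatility assumptions are invented for illustration.
import random

random.seed(42)  # reproducible illustration

WEIGHTS = {"equities": 0.6, "bonds": 0.4}
ANNUAL = {"equities": (0.06, 0.18), "bonds": (0.02, 0.06)}  # (mean, st. dev.)

def simulate_portfolio_return() -> float:
    return sum(w * random.gauss(*ANNUAL[asset]) for asset, w in WEIGHTS.items())

outcomes = sorted(simulate_portfolio_return() for _ in range(10_000))
worst_5pct = outcomes[len(outcomes) // 20]  # 5th percentile, value-at-risk style
print(f"5th percentile annual return: {worst_5pct:.1%}")
```

Real systems run correlated multi-factor scenarios across whole books of instruments, but the structure - simulate, sort, read off the tail - is the same.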
The real impact: AI-assisted portfolio management does not guarantee better returns (nothing does), but it consistently improves risk management and reduces the lag between market events and portfolio adjustments. Wealth managers using AI tools report that they can manage 30 - 40% more client portfolios without sacrificing quality of oversight.
5. Claims processing
The problem: Insurance claims involve receiving documentation, verifying coverage, assessing damage, detecting fraud, and calculating settlements. It is document-heavy, repetitive, and slow.
What AI does: Reads and categorises claim documents. Extracts key information (date of incident, type of loss, claimed amount). Cross-references against policy terms. Flags potential fraud indicators. For straightforward claims, calculates settlement amounts automatically.
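The extraction step can be illustrated with simple patterns. Production systems use trained document models rather than regular expressions, but this sketch (field names and formats invented for illustration) shows the triage idea - pull out the structured fields so an adjuster never has to retype them.

```python
# Illustrative claim-document field extraction. Regexes stand in for the
# trained document models a real system would use; formats invented for illustration.
import re

def extract_claim_fields(text: str) -> dict:
    fields = {}
    date = re.search(r"date of incident[:\s]+(\d{1,2}/\d{1,2}/\d{4})", text, re.I)
    amount = re.search(r"claimed amount[:\s]+£?([\d,]+(?:\.\d{2})?)", text, re.I)
    loss = re.search(r"type of loss[:\s]+([^\n]+)", text, re.I)
    if date:
        fields["incident_date"] = date.group(1)
    if amount:
        fields["claimed_amount"] = float(amount.group(1).replace(",", ""))
    if loss:
        fields["loss_type"] = loss.group(1).strip()
    return fields

doc = "Type of loss: water damage\nDate of incident: 04/03/2026\nClaimed amount: £2,450.00"
print(extract_claim_fields(doc))
```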
The real impact: AI-powered claims processing reduces average settlement times by 40 - 60% for straightforward claims. Complex claims still require human assessment, but the AI handles the initial triage and data extraction, so adjusters spend their time on analysis rather than administration.
Cost savings: One UK insurer reported a 25% reduction in claims processing costs after implementing AI-powered triage. The savings came from faster processing (fewer follow-up calls from policyholders), reduced fraud payouts, and better resource allocation.
6. Regulatory compliance
The problem: Financial services regulation is voluminous, complex, and constantly changing. Staying compliant requires monitoring regulatory updates, assessing their impact on your business, updating policies and procedures, and producing reports for regulators.
What AI does:
- Regulatory monitoring - Tracks changes from the FCA, PRA, FOS, and other relevant bodies
- Impact assessment - Analyses new regulations against your current policies to identify gaps
- Reporting automation - Generates regulatory reports from your data, formatted to regulatory specifications
- Communication monitoring - Scans employee communications for compliance breaches (with appropriate governance)
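The communication-monitoring item above can be sketched as a phrase scan with escalation. Real monitoring combines NLP models with a governance process for handling false positives; the phrase list here is invented for illustration only.

```python
# Illustrative communication scan: flag messages containing restricted phrases
# for compliance review. Phrase list invented for illustration - real systems
# use NLP models plus a governance process, not a static keyword list.

RESTRICTED_PHRASES = [
    "guaranteed returns",
    "keep this off the record",
    "delete this message",
]

def flag_message(message: str) -> list:
    """Return the restricted phrases found, if any (case-insensitive)."""
    lower = message.lower()
    return [p for p in RESTRICTED_PHRASES if p in lower]

hits = flag_message("This product offers guaranteed returns - keep this off the record.")
print(hits)
```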
The real impact: Compliance teams using AI tools report spending 30 - 40% less time on monitoring and reporting, freeing capacity for advisory and strategic compliance work.
Automating regulatory compliance workflows requires careful integration with your existing systems - risk management platforms, client databases, communication archives. Building these automation workflows properly is critical because errors in compliance reporting carry serious consequences.
7. Market analysis and trading
The problem: Financial markets generate enormous volumes of data. Price movements, order flow, economic indicators, corporate actions, geopolitical events - the information landscape is too vast and too fast for human analysts to process comprehensively.
What AI does: Processes multiple data streams simultaneously to identify patterns, correlations, and anomalies. Natural language processing analyses earnings calls, central bank statements, and news in real time. Machine learning models identify trading signals based on historical patterns.
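At its simplest, the sentiment component works by scoring the language in headlines and filings. A toy lexicon-based sketch (the word lists are invented; production pipelines use trained language models):

```python
# Illustrative lexicon-based sentiment scoring of market headlines.
# Word lists invented for illustration; real pipelines use trained models.

POSITIVE = {"beats", "upgrade", "record", "growth", "strong"}
NEGATIVE = {"misses", "downgrade", "loss", "warning", "weak"}

def headline_sentiment(headline: str) -> int:
    """Positive score suggests bullish wording, negative suggests bearish."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(headline_sentiment("Retailer beats forecasts on strong holiday growth"))  # bullish
print(headline_sentiment("Insurer issues profit warning after weak quarter"))   # bearish
```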
The real impact: AI does not replace experienced traders and analysts, but it dramatically improves their information processing capacity. Firms using AI-powered market analysis tools report that analysts can cover 2 - 3x more securities with the same depth of analysis.
8. Customer service automation
The problem: Financial services customer service is complex. Customers ask about account balances, transaction disputes, product features, regulatory requirements, and more. Getting answers wrong has regulatory implications.
What AI does: AI-powered agent development creates customer service tools that can handle routine enquiries - balance checks, transaction histories, product information, branch details - while securely routing complex queries to qualified human agents.
The critical difference: Unlike e-commerce chatbots, financial services AI must operate within strict guardrails. It cannot give investment advice, make product recommendations that constitute regulated advice, or access account data without proper authentication. Building AI customer service for financial services requires deep understanding of the regulatory boundaries.
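The guardrail logic can be sketched as a routing layer that sits in front of the model: anything that looks like regulated advice goes to a human, and account data is never discussed without authentication. Keyword lists and rules are invented for illustration - real systems use intent classifiers, not substring matching.

```python
# Illustrative guardrail routing. Routine intents go to the bot; anything
# resembling regulated advice is escalated; account queries require
# authentication first. Keywords and rules invented for illustration.

ADVICE_KEYWORDS = {"should i invest", "which fund", "recommend a product", "best mortgage"}
ACCOUNT_KEYWORDS = {"balance", "transactions", "statement"}

def route(query: str, authenticated: bool) -> str:
    q = query.lower()
    if any(k in q for k in ADVICE_KEYWORDS):
        return "human_agent"         # regulated advice: never answered by the bot
    if any(k in q for k in ACCOUNT_KEYWORDS):
        return "bot" if authenticated else "authenticate_first"
    return "bot"                     # routine enquiry

print(route("What's my balance?", authenticated=False))
print(route("Should I invest in this ISA?", authenticated=True))
```

The design point is that the guardrails live outside the model: the routing decision is deterministic and auditable even if the conversational layer is not.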
The real impact: UK banks using AI customer service report handling 50 - 65% of enquiries without human intervention. Customer satisfaction scores have generally improved because the AI handles routine queries instantly, while human agents have more time for complex issues.
The FCA's approach to AI governance
The FCA has been increasingly vocal about its expectations for AI use in financial services. Here is what you need to know:
Key principles:
- Accountability - Firms must have clear governance around AI decision-making. Someone - a named individual - must be responsible for the outcomes of AI systems.
- Explainability - Particularly for consumer-facing decisions (credit, insurance pricing, claims), firms must be able to explain how AI reached its conclusions.
- Fairness - AI must not produce discriminatory outcomes, even unintentionally. Firms must test for and monitor bias in AI systems.
- Data quality - The FCA expects firms to ensure that the data feeding AI systems is accurate, complete, and appropriate.
- Resilience - AI systems must be robust, with fallback procedures for when they fail.
- Consumer Duty alignment - All AI implementations must be assessed against the Consumer Duty's requirement to deliver good outcomes for customers.
Practical implications:
- You need an AI governance framework before you deploy AI in customer-facing applications
- Model risk management processes must cover AI systems
- Regular bias testing and fairness monitoring are expected, not optional
- Audit trails must capture AI decision-making processes
- Board-level understanding of AI risks is expected
Building an AI strategy for financial services
The financial services firms getting the most value from AI share common traits:
- They start with specific problems - Not "let's use AI" but "our claims processing takes too long and costs too much"
- They address governance first - Before building anything, they establish the framework for responsible AI use
- They invest in data quality - AI is only as good as the data it processes. Most firms need to clean up their data before AI can deliver value.
- They build incrementally - Pilot, measure, adjust, scale. Not big-bang transformation.
- They maintain human oversight - AI augments human decision-making rather than replacing it.
If your firm is exploring AI but struggling to identify where to start, or you need help building a governance framework that satisfies regulatory expectations, contact us. We work with financial services firms to develop AI strategy that is both commercially valuable and regulatorily sound.
Need help with this?
Bloodstone Projects helps businesses implement the strategies covered in this article. Talk to us about our services.
Get in touch