News

New developments for AI in UK financial services

27 January 2026
""
""
wechat x linkedin
hogan-lovells-logo
Share by email
Enter email
Enter Subject
Cancel
Send
News
New developments for AI in UK financial services
Chapters

  • Chapter 1 – The FCA’s Mills Review into the long-term impact of AI on retail financial services
  • Chapter 2 – Treasury Committee report
  • Chapter 3 – AI Champions
  • Chapter 4 – UK Jurisdiction Taskforce consultation
  • Chapter 5 – Future developments

At the start of 2026, there have been a number of developments relating to the potential impact of artificial intelligence (AI) on the UK's financial services sector – including, in particular, the launch of a review by the Financial Conduct Authority (FCA) into the long-term impact of AI on retail financial services. Other developments include criticism of the UK government and regulators, as well as steps to try to improve legal certainty in this area. We can expect to see further developments during 2026.

This article sets out the main developments and the AI-related issues that are likely to be of particular interest this year.

The main developments are:

  • The launch of the FCA’s Mills Review into the long-term impact of AI on retail financial services.
  • The publication of a report by the House of Commons Treasury Committee on “Artificial Intelligence in Financial Services”.
  • The appointment by the UK government of two “AI Champions”.
  • The launch of a consultation by the UK Jurisdiction Taskforce.

Each is considered in more detail below.

While not specific to financial services, it is also noteworthy that the UK Information Commissioner’s Office (ICO) published a report on agentic AI on 8 January 2026, addressing the data protection and privacy risks that the technology presents and setting out practical considerations for relevant stakeholders. This will be covered in a separate, subsequent Our Thinking article.

Chapter 1

The FCA’s Mills Review into the long-term impact of AI on retail financial services


On 27 January 2026, the FCA launched the Mills Review, a review into how advances in AI could transform retail financial services. 

The process has begun with a Call for Input Engagement Paper, in which the FCA is posing questions based on four inter-related themes:

  1. Future evolution of AI technology - How AI could evolve in the future, including the development of more powerful, autonomous and agentic systems, assessing the whole AI value chain.
  2. Future impact of AI on markets and firms - How these developments could affect markets and firms, including changes to competition and market structure and UK competitiveness.
  3. Future consumer trends - The impact on consumers, including how AI could improve outcomes, create new risks, change behaviours, and alter demand for, and provision of, financial services.
  4. Future regulatory approach - How financial regulators may need to evolve to continue ensuring that retail financial markets work well.

The Engagement Paper also gives some indication of the FCA’s thinking on some of the main impacts of AI. Issues that will be considered as part of the Review include the following:

  • AI may cause market power to shift from financial services firms towards those AI firms that control consumer interfaces, own consumer data and design AI agents. This could move value chains beyond the FCA’s regulatory perimeter (which itself is a point of interest for the FCA) – or such AI firms may decide to enter the regulated sector themselves. Drawing on its own work on the potential competition impacts of Big Tech entry, and the Competition and Markets Authority’s assessment of AI foundation models, the FCA says it will look at how AI could strengthen the advantages of firms with large amounts of data or control over key digital platforms, and what this might mean for competition in financial services.
  • Consumers could increasingly be interacting with financial services through AI-mediated interfaces rather than directly with firms. If a ‘proxy world’ of AI agents develops, competition within the market could shift significantly. Some firms may thrive, while others may struggle to adapt, as firms increasingly need to appeal to AI agents programmed to optimise for price, value, risk or fit rather than brand loyalty or marketing.
  • New and evolving risks must be expected. The use of AI may enable more sophisticated forms of financial crime, fraud and manipulation, presenting both firms and regulators with new challenges in detecting, preventing and mitigating harms. The Review will consider how supervisory approaches and regulatory technology capabilities may need to evolve to meet these new challenges, and how AI can assist regulators just as it assists firms.
  • As firms improve the way they use and present data, regulators will be able to interpret information more quickly and act sooner. This may change the balance between preventative supervision and reactive supervision. The Review will also consider the consequences for enforcement and redress mechanisms, which may need to handle cases where harms scale rapidly, involve autonomous systems, or require analysis of complex, evolving technical evidence. The FCA says it is likely to have to enhance its enforcement toolkit as a result of AI.
  • Wholesale markets and broader societal impacts (e.g. employment effects) are outside the scope of the Review. Nevertheless, the Engagement Paper recognises that developments in these areas may indirectly influence retail financial services, and so they will be considered where relevant. For example, the widespread availability of AI investment tools could increase retail participation in capital markets, or disintermediate existing financial services firms.
  • The FCA will be looking to the progress of others – both financial services (FS) regulators and non-FS regulators – and will consider best practice in other regulatory frameworks.

Future regulatory changes?

The notes to the FCA’s related press release say that the FCA does not plan to introduce AI-specific regulation, but will continue to rely on its existing, principles-based regulatory framework while considering how regulators need to evolve as AI becomes more embedded in financial services.

The Engagement Paper also states that the FCA does not seek to generate “new prescriptive rules” and that it “do[es] not aim – it would be premature – to explicitly recommend major changes to regulation or law.” However, it also states that the Review will consider the future regulatory approach, including whether existing frameworks – such as the Consumer Duty, SM&CR, Operational Resilience and the Critical Third Parties regime – remain flexible and sufficiently outcomes-focused for an AI-enabled future, and how quickly articulation of how they apply in an AI-enabled world might be desired.

The FCA also notes the interest of the Treasury Committee in the current regulatory approach. See further below for more information on this.

Next steps

The deadline for comments is 24 February 2026.

Feedback will shape a series of recommendations to be reported to the FCA Board in summer 2026, informing how the FCA can guide and respond to AI-driven transformation. The FCA says that this will culminate in an external publication.

Chapter 2

Treasury Committee report


The Treasury Committee published a paper on “Artificial intelligence in financial services” on 20 January 2026.

The Treasury Committee is a committee of MPs appointed by the House of Commons. In 2025, it launched an inquiry to examine the opportunities and risks posed by AI for the UK financial services sector. The Committee had been told that the financial services sector was substantially outpacing other sectors in AI adoption.

The Committee heard written and oral evidence from financial services representatives, academics, regulators and HM Treasury and has now published its report.

The Committee concluded that AI and wider technological developments could bring considerable benefits to consumers. However:

“the Financial Conduct Authority, the Bank of England and HM Treasury are not doing enough to manage the risks presented by AI. By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm”

The report is relatively short and does not provide an overall view of the evidence received on most of the main issues. Nevertheless, there are some specific points that are worth noting:

  • The regulators (PRA and FCA) thought that the existing regulatory framework offered sufficient protection for consumers and financial stability against the risks posed by AI. The Treasury Committee was sceptical about this. However, the FCA did say that it will produce a code of practice for the use of AI in automated decision making. The Treasury Committee’s report also pre-dated the publication of the FCA’s Mills Review on AI (see further above).
  • The Committee appeared to be concerned that there was insufficient clarity about who was accountable for any harm caused to consumers – both in terms of which senior managers within a firm were responsible and whether the firm itself or a third party (e.g. the developer) was responsible.
  • The Bank of England and FCA do not conduct AI-specific cyber or market stress testing. The Bank of England acknowledged that this may be something that would be considered in future stress tests.
  • HM Treasury has not yet designated anyone as a critical third party (CTP) under the new regime introduced in 2023. Under that regime, HM Treasury is empowered to designate a third party service provider as “critical” if it considers that a failure in, or disruption to, the provision of the services could threaten the stability of, or confidence in, the UK financial system. If a provider is designated as a CTP, the regulators are given the power to make rules that apply to it, gather information from it and take enforcement action against it. The Committee noted that the financial services sector is reliant on large US technology firms for AI and cloud services, and questioned why no-one has yet been designated as a CTP under the new regime.

The Committee made the following recommendations:

  • By the end of 2026, the FCA should publish comprehensive, practical guidance for firms on (a) the application of existing consumer protection rules to their use of AI, and (b) accountability and the level of assurance expected from senior managers under the Senior Managers and Certification Regime for harm caused through the use of AI.
  • To build firms’ readiness for AI-driven market shocks, the Bank of England and the FCA must conduct AI-specific stress testing.
  • By the end of 2026, HM Treasury must designate the major AI and cloud providers as CTPs for the purposes of the CTP Regime.

While these recommendations are not binding on the government or the regulators, the report of the Committee may move AI-related issues higher up their agenda. Many of the issues raised by the Treasury Committee will be within the scope of the Mills Review, insofar as those issues relate to retail financial services.

It is, in any event, expected that HM Treasury will designate its first CTPs during the course of 2026 – although this does not necessarily mean that AI providers will be among the first to be designated. Based on previous HM Treasury publications, the process of designating CTPs is likely to take around six months, and will involve consultations with the regulators and the provider who is being considered for CTP designation.

Chapter 3

AI Champions


The UK government has appointed two industry figures, described as “AI Champions”, to spearhead the rollout of AI in financial services.

This step had been recommended in the independent AI Opportunities Action Plan that was published in 2025.

The two AI Champions are Harriet Rees (Group Chief Information Officer of Starling Bank) and Dr Rohit Dhawan (Head of AI & Advanced Analytics at Lloyds Banking Group). They will report directly to the Economic Secretary to the Treasury.

The role of the AI Champions is described as being to turn rapid AI adoption into safe, scalable growth. Their tasks will include:

  • identifying where innovation can move faster;
  • tackling barriers holding firms back;
  • ensuring that firms can take the opportunities AI presents - improving customer outcomes, boosting productivity and competitiveness, and maintaining trust, resilience and strong consumer protection - while reinforcing the UK’s position as a global hub for financial services, technology and investment; and
  • helping to ensure the UK’s policy, regulatory, and investment environment is fit for purpose as AI becomes increasingly important in global competition.

The AI Champions will engage with industry, regulators and other stakeholders to provide HM Treasury Ministers and officials with “advice and guidance on areas of potential growth for AI in FS and what action could be taken to seize the opportunities”.

The appointment of the AI Champions commenced on 20 January 2026 and concludes in September 2026 (with the possibility of an extension).

Chapter 4

UK Jurisdiction Taskforce consultation


The UK Jurisdiction Taskforce (UKJT) has launched a public consultation on the question of liability for AI harms.

The UKJT is an industry-led taskforce made up of experts in the legal and technology sectors which aims to clarify key questions regarding the legal status of, and basic legal principles applicable to, crypto assets, distributed ledger technology, smart contracts and associated technologies under English law.

On the question of AI, the UKJT has prepared a draft of what it describes as an authoritative Legal Statement on liability for non-deliberate AI harms under English law – and it is seeking input on the content of that Legal Statement.

Points of interest from the Legal Statement include the following:

  • There is a suggested definition of AI – namely “a technology that is autonomous”. The UKJT describes this definition as being technology-agnostic and as seeking to capture the key characteristics of AI that are novel and therefore give rise to perceived legal uncertainty.
  • The focus is on determining whether liability arises in a situation where there is no relevant governing contract – that is, under the law of negligence.
  • Consideration of (and requests for views on) the following questions:
    • Does the principle of vicarious liability – under which a person can be liable for wrongs committed by another person (e.g. an employer being liable for the acts of an employee) – apply to loss caused by AI?
    • In what circumstances can a professional be liable for using or failing to use AI in the provision of their services? (Note that the paper considers that, in some situations, a professional could be liable for not using AI.)
    • If AI used in the provision of professional services produces erroneous output, is the professional liable for loss resulting from the error?
    • Can a person ever be liable for harms caused by AI in the absence of fault? The paper considers that this is unlikely, other than in a situation where the AI system is incorporated into a tangible product (for which the Consumer Protection Act 1987 imposes strict liability).
    • Does liability attach to false statements made by an AI chatbot?
  • In order for there to be common law liability, there must be causation – i.e. it must be shown that the negligence caused the loss in question. The paper considers the challenges that arise in showing causation in the context of AI.

The consultation closes on 13 February 2026 and the UKJT says it will publish a final version of the Legal Statement as soon as possible thereafter.

As has been the case with previous UKJT publications (such as its paper on the legal status of digital assets), it is likely that the UKJT report on AI will prove to be an important resource for policy makers, regulators and the courts.

Chapter 5

Future developments


AI is likely to continue to be a focus for policy makers and regulators in the UK during 2026. 

The FCA has now set out a clear articulation of the issues that it will be considering in the context of retail financial services.

In addition to the concerns outlined above regarding the risks of AI, the UK government and regulators will also be looking at the position of the EU, where most of the provisions of the EU AI Act are likely to come into effect later this year.  

Other jurisdictions' approach to AI will also be of interest.  For example, the FCA has already entered into a strategic partnership on AI with the Monetary Authority of Singapore (MAS), a jurisdiction that is already very active in this area.  Take a look at the Singapore section of our AI Hub for the latest regulatory developments.

It was reported in summer 2025 that the UK government intends to introduce a comprehensive bill to regulate AI, to address concerns about issues including safety and copyright.  There is no current indication of the timetable for any such bill or indeed whether such a bill is still intended.

 

Authored by Dominic Hill and Virginia Montgomery.

Contacts

  • Michael Thomas, Partner, London
  • John Salmon, Partner, London
  • Lydia Savill, Partner, London
  • Mark Orton, Senior Associate, London
  • Anahita Patwardhan, Senior Associate, London
  • James Sharp, Senior Associate, London

Related topics

  • Artificial Intelligence
  • Insurance / re-insurance
  • Financial Services Securities and Markets Regulation
  • FinTech

Related countries

  • United Kingdom

© 2026 Hogan Lovells. All rights reserved. "Hogan Lovells" or the “firm” refers to the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses, each of which is a separate legal entity. Attorney advertising. Prior results do not guarantee a similar outcome.