The start of 2026 has seen a number of developments relating to the potential impact of artificial intelligence (AI) on the UK's financial services sector – including, in particular, the launch of a review by the Financial Conduct Authority (FCA) into the long-term impact of AI on retail financial services. Other developments include criticism of the UK government and regulators, and steps to improve legal certainty in this area. Further developments can be expected during 2026.
This article sets out the main developments and the AI-related issues that are likely to be of particular interest this year.
The main developments are:
- the launch by the FCA of the Mills Review into how advances in AI could transform retail financial services;
- the publication by the Treasury Committee of its report on artificial intelligence in financial services;
- the appointment by the UK government of two "AI Champions" to spearhead the rollout of AI in financial services; and
- the launch by the UK Jurisdiction Taskforce of a public consultation on liability for AI harms.
Each is considered in more detail below.
While not specific to financial services, it is also noteworthy that the UK Information Commissioner’s Office (ICO) published a report on agentic AI on 8 January 2026, addressing the data protection and privacy risks that the technology presents and setting out practical considerations for relevant stakeholders. This will be covered in a separate, subsequent Our Thinking article.
On 27 January 2026, the FCA launched the Mills Review, a review into how advances in AI could transform retail financial services.
The process has begun with a Call for Input Engagement Paper, in which the FCA is posing questions based on four inter-related themes:
The Engagement Paper also gives some indication of the FCA’s thinking on some of the main impacts of AI. Issues that will be considered as part of the Review include the following:
The notes to the FCA’s related press release say that the FCA does not plan to introduce AI-specific regulation, but will continue to rely on its existing, principles-based regulatory framework while considering how regulators need to evolve as AI becomes more embedded in financial services.
The Engagement Paper also states that the FCA does not seek to generate “new prescriptive rules” and that it “do[es] not aim – it would be premature – to explicitly recommend major changes to regulation or law.” However, it also states that the Review will consider the future regulatory approach, including whether existing frameworks – such as the Consumer Duty, the SM&CR, Operational Resilience and the Critical Third Parties regime – remain sufficiently flexible and outcomes-focused for an AI-enabled future, and how quickly an articulation of how they apply in an AI-enabled world might be desired.
The FCA also notes the interest of the Treasury Committee in the current regulatory approach. See further below for more information on this.
The deadline for comments is 24 February 2026.
Feedback will shape a series of recommendations to be reported to the FCA Board in summer 2026, informing how the FCA can guide and respond to AI-driven transformation. The FCA says that this will culminate in an external publication.
The Treasury Committee published a paper on “Artificial intelligence in financial services” on 20 January 2026.
The Treasury Committee is a committee of MPs appointed by the House of Commons. In 2025, it launched an inquiry to examine the opportunities and risks posed by AI for the UK financial services sector. The Committee had been told that the financial services sector was substantially outpacing other sectors in AI adoption.
The Committee heard written and oral evidence from financial services representatives, academics, regulators and HM Treasury and has now published its report.
The Committee concluded that AI and wider technological developments could bring considerable benefits to consumers. However:
“the Financial Conduct Authority, the Bank of England and HM Treasury are not doing enough to manage the risks presented by AI. By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm”
The report is relatively short and does not provide an overall view of the evidence received on most of the main issues. Nevertheless, there are some specific points that are worth noting:
The Committee made the following recommendations:
While these recommendations are not binding on the government or the regulators, the report of the Committee may move AI-related issues higher up their agenda. Many of the issues raised by the Treasury Committee will be within the scope of the Mills Review, insofar as those issues relate to retail financial services.
It is, in any event, expected that HM Treasury will designate its first CTPs during the course of 2026 – although this does not necessarily mean that AI providers will be among the first to be designated. Based on previous HM Treasury publications, the process of designating CTPs is likely to take around six months, and will involve consultations with the regulators and the provider who is being considered for CTP designation.
The UK government has appointed two industry figures, described as “AI Champions”, to spearhead the rollout of AI in financial services.
This step had been recommended in the independent AI Opportunities Action Plan that was published in 2025.
The two AI Champions are Harriet Rees (Group Chief Information Officer of Starling Bank) and Dr Rohit Dhawan (Head of AI & Advanced Analytics at Lloyds Banking Group). They will report directly to the Economic Secretary to the Treasury.
The role of the AI Champions is described as being to turn rapid AI adoption into safe, scalable growth. Their tasks will include:
The AI Champions will engage with industry, regulators and other stakeholders to provide HM Treasury Ministers and officials with “advice and guidance on areas of potential growth for AI in FS and what action could be taken to seize the opportunities”.
The AI Champions’ appointments commenced on 20 January 2026 and run until September 2026 (with the possibility of an extension).
The UK Jurisdiction Taskforce (UKJT) has launched a public consultation on the question of liability for AI harms.
The UKJT is an industry-led taskforce made up of experts in the legal and technology sectors which aims to clarify key questions regarding the legal status of, and basic legal principles applicable to, crypto assets, distributed ledger technology, smart contracts and associated technologies under English law.
On the question of AI, the UKJT has prepared a draft of what it describes as an authoritative Legal Statement on liability for non-deliberate AI harms under English law – and it is seeking input on the content of that Legal Statement.
Points of interest from the Legal Statement include the following:
The consultation closes on 13 February 2026 and the UKJT says it will publish a final version of the Legal Statement as soon as possible thereafter.
As has been the case with previous UKJT publications (such as its paper on the legal status of digital assets), it is likely that the UKJT report on AI will prove to be an important resource for policy makers, regulators and the courts.
AI is likely to continue to be a focus for policy makers and regulators in the UK during 2026.
The FCA has now set out a clear articulation of the issues that it will be considering in the context of retail financial services.
In addition to the concerns outlined above regarding the risks of AI, the UK government and regulators will also be looking at the position of the EU, where most of the provisions of the EU AI Act are likely to come into effect later this year.
Other jurisdictions' approaches to AI will also be of interest. For example, the FCA has already entered into a strategic partnership on AI with the Monetary Authority of Singapore (MAS) – Singapore being a jurisdiction that is already very active in this area. Take a look at the Singapore section of our AI Hub for the latest regulatory developments.
It was reported in summer 2025 that the UK government intends to introduce a comprehensive bill to regulate AI, to address concerns about issues including safety and copyright. There is no current indication of the timetable for any such bill or indeed whether such a bill is still intended.
Authored by Dominic Hill and Virginia Montgomery.