I spend most of my days talking to data leaders across personal and general lines. Most of the time, I am the one doing the learning. But across the carriers I talk to, a pattern keeps coming up.
Fragmented data estates. Pricing decisions made on incomplete customer pictures. A single customer view that has been on the roadmap for years and keeps getting pushed back.
This is not a 2026 problem. It has been building through mergers that were integrated commercially but not technically, through short-term fixes that quietly became permanent, through AI ambitions that moved faster than the foundations beneath them could support.
The gap is not the AI. It never was.
1. Agentic AI is on every roadmap. The foundations are (mostly) not ready for it.
Building an AI agent has never been easier. What used to take months can now be done in days, or in some cases a few clicks. That sounds like progress, and in some ways it is. But it also means the barrier to building something that looks impressive has dropped to almost nothing, and that is where it gets complicated.
Most carriers are sitting on years of unstructured data, claims notes, call transcripts, customer correspondence, that has barely been touched and that modern tooling can now genuinely work with. The harder gap is in the structured, well-governed foundations that agentic workflows actually depend on.
A fraud scoring agent reviews a claim and flags it as high risk. The claim gets delayed. But the agent was pulling from a duplicate customer record, one tied to a different policy on a different system, with a different claims history attached to it. The actual customer has a clean record. Nobody catches it straight away because nobody can trace which record the agent used or why it reached that conclusion. The customer escalates. A handler unpicks it manually. What looked like an AI problem was a data problem all along.
Before any agent goes near a live workflow, the question worth asking is simple: if this agent makes a wrong decision at scale, what does it touch and how do we know? I do not see that question on most roadmaps.
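One practical way to make that question answerable is to log, for every agent decision, exactly which records it read. The sketch below is a minimal illustration, not any specific platform's API; the field names and record IDs are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    """Auditable record of a single agent decision.

    Field names are illustrative, not from any specific platform.
    """
    agent: str
    decision: str
    source_record_ids: list          # every record the agent read
    reached_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

decision_log: list = []

def log_decision(agent: str, decision: str, source_record_ids: list) -> AgentDecision:
    """Append a decision so 'which record did the agent use?' is answerable later."""
    entry = AgentDecision(agent, decision, source_record_ids)
    decision_log.append(entry)
    return entry

# Example: a fraud-scoring agent flags a claim and cites the exact records it read,
# so a handler can later spot that the customer record came from the wrong system.
log_decision(
    agent="fraud_scorer_v1",
    decision="flag_high_risk",
    source_record_ids=["CUST-0042-SYS-B", "CLAIM-9917"],  # traceable inputs
)
```

In the fraud scenario above, a log like this is the difference between unpicking the mistake in minutes and unpicking it over weeks.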
2. Motor is in a soft market, but the pressure underneath is building.
Premium rates in private motor fell through much of 2025 after two years of steep increases. For consumers, welcome news. For insurers, a margin problem that data has to help solve.
Claims costs have not followed the same direction. Repair costs, supply chain pressures, and rising theft continue to bite. Pricing accuracy matters more in a soft market. The carriers who can segment their book with precision, knowing where they are underpricing risk and where they are losing good customers unnecessarily, are in a stronger position than those working from an incomplete picture.
The data to do this more accurately already exists through telematics. Granular driving behaviour, time of day, road type, real risk rather than proxy risk. The technology has been there for years. What has not worked is the customer side of it. Consent flows that ask too much too early, pricing outputs that customers cannot interrogate or trust, products positioned as a niche offering for young drivers rather than a mainstream data strategy.
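To make "real risk rather than proxy risk" concrete, here is a deliberately toy sketch of how behavioural signals might combine into a relative risk multiplier. The weights, road categories, and thresholds are all invented for illustration; real telematics pricing models are far richer and are calibrated against loss data.

```python
# Toy telematics risk signals. All weights below are illustrative, not actuarial.

NIGHT_HOURS = set(range(0, 5))          # 00:00-04:59, typically higher observed risk
ROAD_RISK = {"motorway": 0.8, "a_road": 1.0, "urban": 1.3, "rural": 1.2}

def trip_risk(hour: int, road_type: str, harsh_brakes_per_100km: float) -> float:
    """Combine behavioural signals into a relative risk multiplier (1.0 = baseline)."""
    score = ROAD_RISK.get(road_type, 1.0)
    if hour in NIGHT_HOURS:
        score *= 1.4                    # night-driving loading (illustrative)
    score *= 1.0 + 0.05 * harsh_brakes_per_100km
    return round(score, 3)

# A late-night urban trip with frequent harsh braking scores well above a calm
# afternoon motorway trip, which is exactly the segmentation a postcode cannot see.
risky = trip_risk(hour=2, road_type="urban", harsh_brakes_per_100km=4)
calm = trip_risk(hour=14, road_type="motorway", harsh_brakes_per_100km=0)
```

The point of the sketch is explainability as much as accuracy: every factor in the multiplier is something a customer, or a regulator, could be shown.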
There is a regulatory layer to this too. The FCA's scrutiny of data-driven pricing models means carriers cannot simply deploy richer data without thinking carefully about fairness, explainability, and how pricing decisions get justified. That is not a reason to avoid it. It is a reason to build the governance around it properly from the start.
The carriers who figure out both sides of that problem, the customer proposition and the regulatory framework, will have a pricing edge that is genuinely hard to replicate. A soft market is a reasonable time to build it. When conditions tighten again, and they will, that advantage will show.
3. Home has a climate data problem that is getting harder to ignore.
UK property insurers paid out a record £6.1 billion in claims in 2025, according to the ABI. The average home claim rose 15% year on year to £6,000. The average flood payout reached £30,000, up 60% on the previous year. Flooding and storm events that would once have been modelled as rare are happening with enough regularity that "tail risk" no longer feels like the right frame.
Some carriers are moving from postcode-level flood assessment to building-footprint level, factoring in elevation, surface water drainage, and proximity to flood defences. It is a meaningful shift. Most are still working from assumptions that were built for a different climate and have not been seriously revisited.
The external data to do this better does exist. Providers supply granular flood risk and climate exposure data at property level. The harder problem is internal. Most carriers simply don’t have enough historical loss data in the right areas to validate those external sources against. A carrier with limited flood claims history in a given region has no internal baseline to calibrate from. So the data is available in principle and incomplete in practice, and that gap is what makes building a credible climate pricing model harder than the technology conversation tends to suggest.
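The calibration gap described above can be made concrete with a simple check: external property-level flood scores can only be validated in regions where the carrier's own loss history is deep enough to form a baseline. The threshold and regional claim counts below are entirely illustrative.

```python
# Sketch of the calibration gap: external flood data exists everywhere, but it
# can only be validated where internal loss history is deep enough.
# All figures are illustrative.

MIN_CLAIMS_FOR_BASELINE = 30   # below this, no credible internal baseline

internal_flood_claims = {       # region -> count of historical flood claims held
    "yorkshire": 210,
    "cumbria": 85,
    "east_anglia": 4,           # too sparse to validate external scores against
}

def can_validate(region: str) -> bool:
    """True only where internal history is deep enough to check external scores."""
    return internal_flood_claims.get(region, 0) >= MIN_CLAIMS_FOR_BASELINE

validatable = [r for r in internal_flood_claims if can_validate(r)]
# east_anglia drops out: the external data is available in principle,
# but there is nothing internal to calibrate it against.
```

This is the "available in principle, incomplete in practice" gap in miniature: the model-building conversation tends to assume the validation step, and the validation step is exactly where sparse regions fall out.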
4. Consolidation has made the single customer view harder, not easier.
Every acquisition and merger makes commercial sense on paper. The data integration almost never keeps pace.
What I keep seeing is carriers running two or three policy administration systems, acquired at different times, connected rather than unified. Motor in one system, home in another. The customer holding both products is not always the same record across them. Sometimes they are duplicated. Sometimes the data conflicts. And because nobody owned the problem at the point of acquisition, it gets inherited.
Pricing suffers. You are working from an incomplete risk picture. Retention suffers too, because you cannot see clearly which customers hold multiple products and what they are actually worth to retain. Any AI ambition sitting on top of that is working with one hand tied behind its back.
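The duplicate-record problem is easy to see in miniature: the same customer held in two policy systems under slightly different details, with two different IDs. The sketch below just normalises and compares a crude match key; real entity resolution uses probabilistic matching across many attributes, and every name and ID here is invented.

```python
# Deliberately simple illustration of cross-system duplicates. Real entity
# resolution is probabilistic; this just normalises a few fields and compares.

def match_key(name: str, dob: str, postcode: str) -> tuple:
    """Crude match key: lower-cased name, DOB, postcode with spaces stripped."""
    return (name.strip().lower(), dob, postcode.replace(" ", "").upper())

motor_system = {"cust_id": "M-1001", "name": "Jane Smith ", "dob": "1984-03-02", "postcode": "ls1 4ap"}
home_system  = {"cust_id": "H-7342", "name": "jane smith",  "dob": "1984-03-02", "postcode": "LS1 4AP"}

same_customer = (
    match_key(motor_system["name"], motor_system["dob"], motor_system["postcode"])
    == match_key(home_system["name"], home_system["dob"], home_system["postcode"])
)
# Two different cust_ids, one person. Without this link, pricing sees half the
# risk picture and retention sees half the relationship.
```

The hard part is not the matching logic; it is owning the merged record afterwards, deciding which system wins on conflicts, and keeping the link maintained as both books change.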
A true customer 360 has been the goal for years. M&A activity has made it harder to reach, and the honest answer is that closing the gap requires integration work that is unglamorous and slow and tends to lose out to things that are easier to put on a slide.
5. Getting to production is where most projects actually die.
A working prototype can be built in days. That is genuinely true, and it was not the case even two or three years ago. But a prototype is not a production system, and in a regulated environment the distance between the two is significant.
Claims decisions, pricing calls, fraud flags. Each one needs more than a working model. Data lineage, so every input can be traced back to its source. Explainability, so when a regulator or a customer asks how a decision was reached, someone can actually answer. Model governance, security controls, and audit trails that hold up under scrutiny. Not once, in a workshop. Every time, in production, under pressure.
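The lineage requirement above can be enforced with a very simple gate: every input a model consumes must trace to a documented system of record, or the decision cannot be defended later. The catalogue and field names below are illustrative, not any specific tool.

```python
# Sketch of a lineage gate: a model input with no documented source means the
# resulting decision cannot be traced back or explained. Catalogue is illustrative.

DATA_CATALOGUE = {   # input field -> documented system of record
    "claim_amount": "claims_core",
    "policy_start": "policy_admin_a",
    "prior_claims": "claims_core",
}

def lineage_gaps(model_inputs: list) -> list:
    """Return inputs with no documented source. Non-empty means not production-ready."""
    return [f for f in model_inputs if f not in DATA_CATALOGUE]

gaps = lineage_gaps(["claim_amount", "prior_claims", "device_fingerprint"])
# "device_fingerprint" has no recorded source, so the model fails the lineage
# check before it goes anywhere near a live claims or pricing workflow.
```

Trivial as the check looks, it is the shape of the hardening work: the gate has to run every time, in production, not once in a workshop.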
Most organisations underestimate how much of that challenge sits in the data layer rather than the model itself. The model can be good. If the data feeding it is not well-governed and well-documented, you still cannot stand behind it with confidence when it matters. That is where the real work is:
Not the build. The hardening.
And the cost of skipping it is well documented. The FCA issued £176 million in fines in 2024, the majority related to governance and management control failures. Consumer Duty applies directly to AI-driven decisions in insurance. A wrong decision in claims or pricing that cannot be clearly explained is not just a customer service problem. The regulatory framework to act on it already exists, and acting on it is rarely cheap.
Conclusion
The carriers taking the data foundations seriously now are building something that will be difficult for others to close later. Not because the technology is complex, but because this kind of work requires sustained attention over time, and it rarely makes it onto a highlight reel.
The gap is years of decisions that prioritised speed over foundations. Mergers closed without data strategies. Tactical fixes that nobody went back to revisit. Roadmaps built around what looked impressive rather than what was ready. That is what makes it hard to fix. And it is also what makes fixing it properly worth doing.
When it goes wrong the cost is rarely just the fine. It is the handlers pulled off other work to unpick decisions nobody documented. The customer who was wrongly declined and tells ten people. The internal review that takes three months and finds the root cause was a duplicate record from an acquisition four years ago. None of that shows up on a demo. It shows up when the AI actually needs to work, in production, under pressure, when something goes wrong and someone needs to explain why.
References
1. ABI, Adverse weather pushes property insurance payouts to £6.1 billion in 2025, February 2026. https://www.abi.org.uk/news/news-articles/2026/2/adverse-weather-pushes-property-insurance-payouts-to-6.1-billion-in-2025/
2. FCA, 2024 Fines, Financial Conduct Authority. https://www.fca.org.uk/news/news-stories/2024-fines
Author
Cem Zekai
Strategic Account Executive - Insurance