Blog — Advancing Analytics

3 Teams. 3 Challenges. 1 Question: What are the real challenges?

Written by Gavita Regunath | Mar 2, 2026 2:55:59 PM

Last week, we got the team together in our London office and ran an AI hackathon - and something interesting happened when we stopped talking about tech stacks and model tuning, and started talking about ROI instead.

We spend a lot of time in insurance, financial services, and other regulated sectors. In these spaces, the pattern is always the same: huge enthusiasm for AI, but not much clarity on where to begin.

So we tried something different. We split into three (biscuit-based) teams - Biscuit Bandits, Custard Cream Collective, and The Hobnobs - and asked each one to tackle a real, recurring client pain point. They had one day to build something that could show value within four weeks. Not a vision deck, or a future roadmap, but a working prototype with a measurable return.

Here’s how it went.

The Setup: Pain First, Tech Second

Every team had to answer the same four questions before opening a single tool:

  • What’s the business problem?
  • Where does the friction or failure actually sit?
  • What does it cost to ignore?
  • If we fix this one thing, what impact can we measure?

Only when those answers were clear could anyone touch code. Because the biggest mistake we see in AI delivery isn’t choosing the wrong model - it’s solving the wrong problem. Teams spend six months building something impressive that quietly saves the business £10K a year. We’re not doing that.

What Made This Hackathon Different

Instead of asking, “What AI should we build?”, we flipped the conversation. We looked at the manual processes clients lose the most time and money to, and built around those first. Because our work largely involves regulated industries, everything had to be designed with the realities of compliance in mind: explainability baked in, transparent audit trails, and human‑in‑the‑loop checks where they actually matter.

We also kept the scope disciplined. Every idea had to be something that could run as a four‑week pilot - installed in week one, trained in weeks two and three, live in week four. That forced everyone to focus on solving a real problem, not writing a case study.

And when it came to technical choices, we avoided the trap of reaching for whatever sounded impressive. Some problems needed orchestration. Others needed retrieval. Some just needed structured automation. The tech served the problem, not the other way around.

The Biggest Lesson

By the end of the day, one theme was impossible to ignore: the best AI projects rarely start with technology. They start with a frustrated human. Someone stuck doing the same repetitive, manual task day after day, fully aware of where the process breaks, how much time it wastes, and what 'better' would actually look like.

Those people don’t need a demo of the latest model. They need the bottleneck removed - and that’s our bread and butter. AI is the hammer; the problem is the loose nail. And if you can’t explain which nail you’re hitting - and why it matters - you’re not ready to build anything.

What’s Next

Over the next few weeks, we’ll share the details of what each team built. For now, the takeaway is the approach: most AI projects fail long before a model is deployed - not because teams lack skill, but because they start by choosing a hammer and then go searching for nails.

We did it the other way around: we found the nails first, then built the right tools. Watch this space.