Scaling Generative AI: Productionising LLMs with Databricks LLMOps

Prototypes are easy - production is where it counts. With 42% of organisations lacking generative AI expertise and fragmented toolchains creating friction, productionising and scaling LLMs is a challenge.
 
This session dives into Databricks LLMOps - covering evaluation, registry, pricing, and serving strategies. Learn how to deploy and scale generative models with confidence, clarity, and control.
 
Attendees will gain access to our resource, 'The LLM Production Deployment Checklist' - a practical guide that maps the complete journey from prototype to production LLM, detailing exactly what needs to happen at each stage.
Date and time

18 November, 2025

4-4:30pm UTC+01:00 | 11-11:30am ET

Venue

Online via StreamYard

Generative AI is evolving fast - but scaling it is another story. While 42% of organisations cite a lack of expertise in deploying generative models, many more struggle with fragmented toolchains and operational overhead. Moving from prototype to production isn't just a technical challenge - it's a strategic one.
 
We’ll explore the realities of scaling LLMs, share best practices in LLMOps, and walk through real-world deployment patterns that work. You’ll also get a closer look at Databricks’ Model Gateway and serving strategies that simplify productionising AI at scale.
 
If you're looking to lead the charge in AI deployment, this session will help you do it with clarity and confidence.
 

What You’ll Learn

  • Challenges in Deploying and Scaling LLMs: Understand the common pitfalls and how to avoid them.
  • LLMOps Best Practices: Learn how to evaluate models, manage registries, and optimise pricing.
  • Model Gateway & Serving Strategies: Explore scalable approaches to model deployment and inference.
  • Real-World Deployment Patterns: See how leading teams are operationalising AI across industries.

What You'll Get

Attendees will gain exclusive access to our resource, The LLM Production Deployment Checklist. This includes:

  • The benefits and best practices of LLMOps, so your LLMs can be moved into production securely and efficiently, with the ability to scale.
  • A 4-stage checklist for deploying an LLM, with a detailed breakdown of everything needed at each stage.
  • All based on the workflow we use when deploying LLMs into production with clients.

Why Attend?

  • Get practical guidance on scaling generative AI
  • Learn how Databricks simplifies MLOps and LLMOps

Speakers

Dr Gavi Regunath, Chief AI Officer

Gavi is a hands-on AI leader and recognised Databricks MVP, specialising in designing and delivering GenAI and ML solutions on Databricks. Gavi works directly with technical decision-makers to turn prototypes into operational services the business can trust.

Terry McCann, CEO

Terry is known for a pragmatic, outcome-focused approach to Databricks adoption. Terry has helped hundreds of organisations move from ambition to actual results on Databricks, always with a clear eye on business value.
