
Resources

Welcome to the Advancing Analytics Library of good stuff. 

We’ve worked so hard to build up our expertise, we’re happy to shout it from the rooftops. We’re proud of everything we know about data, and we love sharing our knowledge. After all, exchanging good ideas is the way we all learn.

 


Recent Blogs

Here, you’ll find a library of blogs, articles, podcasts and videos on topics ranging from Data Science to Data Engineering and from AI to DataOps. Fill your boots. And your mind.

Must-Attend Data Conferences for CDAOs in Retail

Why attend events? Unless you are new around here, you will be aware that at Advancing Analytics we attend a lot of events! The data world is fast-paced and keeping up is incredibly difficult. I personally use events as a way to stay up to date with the latest trends in Data & AI that will impact our customers the most. When we work with industry leaders in RCPG I am regularly asked, "As a CDAO/CTO/CDO of a leading Retail / CPG business, what events should I attend?". I hope this list helps summarise the events I personally look forward to. Have I missed one? Please add a comment.

Which events should you attend? As a Chief Data and Analytics Officer (CDAO), Chief Data Officer (CDO), or Chief Technology Officer (CTO), staying abreast of the latest trends, technologies, and best practices is paramount. Attending industry events and conferences is an excellent way to network, learn from peers, and keep your organisation at the cutting edge. Here’s a list of key events that should be on your radar.

Gartner Data & Analytics Summit
Location: Florida, London & Global (check the event website for your local event)
Date: US typically March, London typically May
Why Attend: This summit is a powerhouse of insights for data and analytics leaders. It covers a broad spectrum of topics, from advanced analytics, artificial intelligence, and machine learning to data governance and data literacy. Networking opportunities with industry leaders and peers are abundant. One caveat: if you are from a deep development background, this one is going to be too high level for you. Sometimes I find the technical sessions at Gartner miss the mark for me. High-level stuff is great!
Website: https://www.gartner.com/en/conferences

Big Data London
Location: London, Hammersmith
Date: September
Why Attend: Honestly, Big Data London is my favourite event of the year! Big Data LDN is the UK’s leading data and analytics event, featuring a lineup of speakers from top-tier companies and data & AI vendors. The event covers all aspects of data analytics, from governance and quality to advanced analytics and data science. There is a great mix of content aimed at leaders and developers alike, and each year it gets better. We have spoken and sponsored for the last 5 years and will continue to do so. Get to sessions early as they always sell out!
Website: https://www.bigdataldn.com/

NRF: Retail’s Big Show
Location: New York, NY, USA
Date: January
Why Attend: The National Retail Federation’s annual event is a cornerstone for anyone in the retail industry. It covers the latest trends in retail technology, consumer behaviour, and data analytics. It’s a must-attend for RCPG professionals looking to stay ahead in the competitive retail landscape. You can expect sessions from the world's most iconic brands on how they are using technology to improve the customer experience, with a great blend of thought leadership and technical deep dives. NRF focuses on a lot more than just data & AI. Plus, in 2024 you could eat a pizza made by a robot!
Website: https://nrfbigshow.nrf.com/

Shoptalk
Location: Las Vegas, NV, USA & Europe
Date: March (US), June (Europe)
Why Attend: Shoptalk is one of the largest retail conferences, bringing together industry leaders to discuss the future of retail. It covers a wide range of topics, including data analytics, customer experience, and digital transformation. RCPG professionals will find valuable insights into consumer trends, omni-channel strategies, and innovative data applications. Sometimes this one can feel like a replay of NRF Big Show. There is a version for both the US and Europe. Again, you can expect sessions on retail thought leadership here, less focused on the developer. You will leave Shoptalk with 100 new ideas.
Website: https://shoptalkeurope.com

Groceryshop
Location: Las Vegas, NV, USA
Date: September
Why Attend: Groceryshop focuses on the future of grocery retail, bringing together leaders from established and startup companies. It covers a broad range of topics including data analytics, digital transformation, and consumer trends. It's a key event for RCPG professionals aiming to stay ahead in the grocery sector, offering insights into the latest technologies and strategies for enhancing the consumer experience.
Website: https://groceryshop.com/

Databricks Data + AI Summit
Location: San Francisco, CA, USA
Date: June
Why Attend: This is one for anyone who already uses Databricks or is considering how a cloud-scale analytics platform might improve their data operations. Hosted by Databricks, this summit is essential for anyone serious about data and AI. It covers the latest advancements in data engineering, data science, and machine learning. For RCPG professionals, the summit offers deep dives into how to harness the power of data to drive innovation and efficiency in retail and consumer goods. Each year there are RCPG summits; think of this as a conference within a conference. Focused on RCPG, this typically includes a keynote, deep dives and panel sessions by Databricks and RCPG leaders.
Website: https://www.databricks.com/dataaisummit

Other notable conferences: Chief Data & Analytics Officer Exchange (Miami, FL), a style of event run all over the world that gives CDAOs a chance to connect; and ODSC US / Europe, a Data Science conference aimed more at technical developers, but with a lot of good content.

What have we missed? Let us know and we will update the list.

Can GraphRAG finally solve the Kendrick & Drake dispute?

Following on from Part 1, where I introduced GraphQL and its popularity, let’s take that a step further and look at how we can include a graph database like Neo4j in a RAG implementation using OpenAI’s GPT-4o, and how it improves the contextual RAG responses.

Introduction
In the vibrant landscape of hip-hop, feuds are not just battles of words but also of narratives and public perceptions. The feud between Kendrick Lamar and Drake is a prime example, filled with subtexts, direct and indirect messages, and a broad influence across fans and media. To dissect this complex interplay, we turn to GraphRAG (Graph Retrieval Augmented Generation), which combines the analytic precision of graph databases with the intuitive understanding of language models.

Setting Up the Analysis Environment
Firstly, setting up the right tools is essential for any data-driven analysis. For this exploration, we utilise Neo4j, a graph database that excels in handling connected data, alongside LangChain's capabilities to interface with OpenAI's models:

import os
from langchain_community.graphs import Neo4jGraph

os.environ["OPENAI_API_KEY"] = "your_openai_api_key"  # Replace with actual key
os.environ["NEO4J_URI"] = "your_neo4j_uri"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "your_neo4j_password"

graph = Neo4jGraph()

Theoretical Background
Graph theory is adept at representing complex systems of relationships and interactions through nodes and edges, providing a structured way to visualise and analyse relationships. GraphRAG leverages this structure, enhancing the retrieval capabilities typically found in RAG systems by integrating the contextual depth that graphs provide. This dual approach enables a more nuanced understanding and retrieval of information, perfect for dissecting the Kendrick-Drake narrative.

Methodology

Data Collection
Our analysis begins with the meticulous collection of data points (lyrics, tweets, interviews, anything where Kendrick and Drake reference each other, either directly or indirectly). These data points are then loaded and preprocessed using LangChain tools:

from langchain_community.document_loaders import WikipediaLoader
from langchain.text_splitter import TokenTextSplitter

raw_documents = WikipediaLoader(query="Kendrick Lamar").load()
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = text_splitter.split_documents(raw_documents[:3])

Graph Construction
We translate this prepared textual data into a structured graph format, employing LLMs to automatically recognise and encode relationships between entities mentioned in the data:

from langchain_openai import ChatOpenAI
from langchain_experimental.graph_transformers import LLMGraphTransformer

llm = ChatOpenAI(temperature=0, model_name="gpt-4")
llm_transformer = LLMGraphTransformer(llm=llm)

graph_documents = llm_transformer.convert_to_graph_documents(documents)
graph.add_graph_documents(graph_documents, baseEntityLabel=True, include_source=True)

Let’s have a look at a section of the database as a graph.

Hybrid Retrieval for RAG
With the graph constructed, our next step involves a hybrid retrieval approach. This method enhances the retrieval of contextually relevant information by combining vector search over unstructured text with structured graph data:

from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

vector_index = Neo4jVector.from_existing_graph(
    OpenAIEmbeddings(),
    search_type="hybrid",
    node_label="Document",
    text_node_properties=["text"],
    embedding_node_property="embedding"
)
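The retrieval chain in the next section references a retriever, a structured_retriever and a _search_query that the original implementation defines but which aren't reproduced in this excerpt. As a rough sketch only, and not the exact implementation, a vector retriever plus a simple graph lookup over the entities created above might look like this (the Cypher pattern and the entity-matching strategy are assumptions for illustration):

# Minimal sketch: assumes the `graph` and `vector_index` objects created above.
retriever = vector_index.as_retriever()

def structured_retriever(question: str) -> str:
    """Return 'source - REL -> target' facts for entities whose id appears in the question."""
    # Rough approximation: a production version would extract entity names with
    # an LLM instead of matching against the raw question text.
    rows = graph.query(
        """
        MATCH (e:__Entity__)-[r]->(t)
        WHERE toLower($q) CONTAINS toLower(e.id)
        RETURN e.id AS source, type(r) AS rel, t.id AS target
        LIMIT 25
        """,
        params={"q": question},
    )
    return "\n".join(f"{row['source']} - {row['rel']} -> {row['target']}" for row in rows)

def hybrid_context(question: str) -> str:
    """Combine graph facts with vector-search snippets into a single context string."""
    snippets = [doc.page_content for doc in retriever.invoke(question)]
    return structured_retriever(question) + "\n\n" + "\n".join(snippets)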
Results
To analyse the feud, we pose queries directly related to the artists' interactions, using the graph to retrieve detailed contexts and understand the underlying dynamics:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

template = """Answer the question based only on the following context:
{context}

Question: {question}
Use natural language and be concise.
Answer:"""
prompt = ChatPromptTemplate.from_template(template)

chain = (
    RunnableParallel(
        {
            "context": _search_query | retriever,
            "question": RunnablePassthrough(),
        }
    )
    | prompt
    | llm
    | StrOutputParser()
)

result = chain.invoke({"question": "How do you win a beef in rap?"})
print(result)

Response: 'Winning a beef in rap often involves delivering more impactful diss tracks, gaining public and critical support, and effectively countering accusations. In the context of the Drake and Kendrick Lamar feud, critics and social media users have generally cited Lamar as leading or winning due to his strong responses and the support from major outlets like Pitchfork, The Ringer, and Rolling Stone.'

We can even delve into the sub-context of who Kendrick Lamar is. Take this, for example:

print(structured_retriever("Who is k.dot?"))

We can see the association through the model's response:

K.Dot - RELEASED -> Y.H.N.I.C. (Hub City Threat: Minor Of The Year)
K.Dot - RELEASED -> Training Day
K.Dot - RELEASED -> C4
Kendrick Lamar - ALIAS -> K.Dot

Now for the big question: "Can you empirically prove that Kendrick Lamar won the beef against Drake?" Let's have a look at GraphRAG's input and response.

chain.invoke(
    {
        "question": "Can you empirically prove that Kendrick Lamar won the beef against Drake?",
        "chat_history": [
            (
                "How do you win a beef in rap?",
                "Winning a beef in rap often involves delivering more impactful diss tracks, gaining public and critical support, and effectively countering accusations. In the context of the Drake and Kendrick Lamar feud, critics and social media users have generally cited Lamar as leading or winning due to his strong responses and the support from major outlets like Pitchfork, The Ringer, and Rolling Stone."
            )
        ],
    }
)

"Yes, critics from major outlets like Pitchfork, The Ringer, and Rolling Stone have generally cited Kendrick Lamar as the winner of the beef against Drake. Additionally, social media users and music critics have also considered Lamar the winner, indicating strong public and critical support for his position in the feud." — GraphRAG

Conclusion
GraphRAG provides a deep and nuanced view into the layers of the Kendrick and Drake feud, uncovering not just what was said, but the broader context of why it was said, thus enabling a clearer understanding of their interactions. While it may not solve their feud, it does offer a powerful tool for analysing similar conflicts in the music industry and beyond.
Future Applications: GraphRAG in Business and Legal Document Analysis
GraphRAG's capabilities are broadly applicable across various business domains where complex data relationships need to be deciphered quickly and accurately. For instance, in sectors like finance and healthcare, GraphRAG can enhance decision-making by providing comprehensive insights through the analysis of interconnected data points. This could include identifying trends from financial reports or understanding patient outcomes from medical records, all organised in an intuitive graph-based format. This approach not only streamlines data processing but also enriches the context and depth of the information retrieved, aiding strategic business decisions.

In the legal field, GraphRAG introduces a revolutionary way to handle and interpret dense legal documents. By building knowledge graphs that map out the relationships between cases, statutes, and legal principles, the tool dramatically reduces the time required for legal research and enhances the precision of legal advice. Lawyers can leverage this technology to quickly find relevant case law and statutory references, ensuring a comprehensive understanding of all pertinent legal texts. The implementation of GraphRAG within legal practices particularly highlights its potential to manage the complexity of legal language and the intricate network of legal rulings. As such, this technology not only boosts the efficiency of legal research but also provides deeper analytical insights, potentially transforming the traditional methodologies of legal analysis and impacting future legal proceedings and outcomes.

Ready to Transform Your Business with GraphRAG?
At Advancing Analytics, we specialise in implementing advanced analytical solutions that drive decision-making and innovation. If you're interested in exploring how GraphRAG can enhance your data analytics capabilities or want to integrate the technology into your existing systems, we're here to help. Contact Us Today to schedule a consultation and discover how our expertise can empower your team to uncover actionable insights from complex data. Let's unlock the potential of your data together.

IDAHOBIT Day: A Beacon for Equality and the Prelude to Pride Season

Introduction
The International Day Against Homophobia, Biphobia, and Transphobia (IDAHOBIT) is observed annually on 17th May. This date holds historical significance as it commemorates the day in 1990 when the World Health Organisation (WHO) removed homosexuality from the International Classification of Diseases. IDAHOBIT serves as a powerful reminder of the ongoing struggle against discrimination and the importance of celebrating diversity and freedom.

Founded in 2004, IDAHOBIT was established to raise awareness of the violence, discrimination, and repression experienced by LGBT+ individuals globally. It is a day that unites people across the world to stand against prejudice and to promote the rights and well-being of the LGBT+ community. IDAHOBIT is not just a day of reflection but also a call to action, where individuals and institutions are urged to commit to creating a society where everyone can live freely and authentically.

'No one left behind'
In 2024, the theme is "No one left behind". In the UK, although many people in the British Gay & Lesbian communities enjoy greater rights and acceptance than ever before, the wider LGBTQI+ community still faces significant legal and societal barriers in their daily lives. Only as one community can we all move forward. As the may17.org website says, "this year's IDAHOBIT theme is a call for unity: only through solidarity for each other will we create a world without injustice, where no one is left behind".

IDAHOBIT also marks the beginning of Pride Season, a period of vibrant celebrations, parades, and events that mark the LGBTQI+ community's history, achievements, and diversity. However, Pride began as, and still is, a protest, striving for the rights of everyone in our community and around the world. As the rainbow flags start to appear on streets and homes around the UK, the 17th May reminds us that there is still so much more to do.

At Advancing Analytics, we have a diverse team where sexual orientation, gender identity and gender expression are an important part of bringing our whole selves to the workplace. One of our core values is "come as you are". This reflects the importance of bringing diverse identities, thoughts and viewpoints into the business, and our LGBTQI+ employee community is an important part of that.

The Rise of GraphQL

Introduction: With a worldwide industry market cap of $7.1 trillion and substantial investments totalling $22.3 billion, let’s delve into the basics of GraphQL and why you should be including it in your infrastructure. Long before the transformer paper kicked off the recent hype behind LLMs, 2012 saw another technology created in the depths of Facebook: GraphQL. It was originally designed to describe the capabilities of data models for client-server applications. For the next three years it stayed internal to Facebook, until development of the open standard began in 2015; it was then relicensed in 2017. Two short years later, the GraphQL Foundation was formed.

What is it?
GraphQL is a query language for APIs, and a runtime for fulfilling those queries with your existing data. Unlike traditional REST API strategies that require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request. Apps using GraphQL are fast and stable because they control the data they get, not the server. At its core, GraphQL enables declarative data fetching, where a client can specify exactly what data it needs from an API. Instead of multiple endpoints that return fixed data structures, a GraphQL server exposes a single endpoint and responds with precisely the data a client asked for.

Example GraphQL Query:
Consider a scenario where you need to fetch specific user information and related posts. A GraphQL query for this might look like:

query {
  User(id: "Chris Durow") {
    name
    bio
    posts {
      title
      content
    }
  }
}

Using this query, this is what my response would look like:

{
  "data": {
    "User": {
      "name": "Chris Durow",
      "bio": "AI Consultant with a cool shed.",
      "posts": [
        {
          "title": "The Rise of GraphQL",
          "content": "An intro to what GraphQL is and its possible applications"
        },
        {
          "title": "How GraphQL can supercharge your RAG implementation",
          "content": "A look into integrating GQL into your RAG implementation to supercharge those responses!"
        },
        {
          "title": "Let's talk domain specific LLMs",
          "content": "An introduction to how you can harness open source domain specific LLMs to help with legal tasks"
        }
      ]
    }
  }
}
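To make the request/response shape above concrete, here is a minimal sketch of how a client might send that query over HTTP with Python. The endpoint URL is a placeholder and the fields simply mirror the example above; most GraphQL servers accept a POST with a JSON body containing a query string:

import requests

# Placeholder endpoint: substitute your own GraphQL server's URL.
GRAPHQL_ENDPOINT = "https://example.com/graphql"

query = """
query {
  User(id: "Chris Durow") {
    name
    bio
    posts {
      title
      content
    }
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": query}, timeout=30)
response.raise_for_status()

# A single round trip returns exactly the fields requested, nothing more.
data = response.json()["data"]
print(data["User"]["name"])
for post in data["User"]["posts"]:
    print(post["title"])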
Why use it?
GraphQL offers several advantages over traditional REST APIs. It allows clients to request exactly the data they need, nothing more and nothing less. This efficiency in data fetching not only speeds up applications but also reduces bandwidth usage. GraphQL APIs are strongly typed, and the type system ensures that the API only performs operations that are possible, providing a form of built-in documentation. Here are the four main value propositions of GraphQL:

1. Efficient and Flexible: GraphQL allows clients to request only the data they need. This can lead to faster network requests and better app performance. Additionally, GraphQL is highly flexible, allowing clients to request data from multiple sources and aggregate it in a single query.
2. Strong Typing and Validation: GraphQL schemas provide strong typing and validation of data, ensuring that clients can only request valid data and reducing the risk of runtime errors.
3. Client-Driven Development: GraphQL allows clients to drive the development of the API, as clients can specify exactly what data they need and how it should be returned. This can lead to a more collaborative and iterative development process between front-end and back-end teams.
4. API Consolidation: GraphQL provides the ability to combine data from multiple sources in a single query, which can make it easier to consolidate multiple APIs into a single GraphQL API, reducing complexity and improving performance.

What scenarios would I use it in?
1. Single-Page Applications (SPAs): SPAs benefit immensely from GraphQL's ability to aggregate data from multiple sources with a single API call. This reduces the need to maintain multiple endpoints and simplifies development.
2. Microservices Architecture: In a system where different microservices own different pieces of data, GraphQL acts as a unified facade that simplifies client interaction with these services.
3. Real-Time Applications: GraphQL subscriptions support real-time updates to the client, which are essential for applications like live chat, real-time feeds, and collaborative platforms.

Conclusion
GraphQL’s capability to fetch precisely what’s needed and nothing more, its efficient handling of real-time data, and the ease of integrating with modern architectures make it a compelling choice for modern web and mobile applications. As developers seek more efficiency and better performance from their applications, GraphQL is increasingly becoming the go-to technology for API development. Don’t hesitate to get in touch if you want to chat about how LLMs can revolutionise your processes.

Partnering with dbt Labs: enhancing our analytical engineering...

Advancing Analytics are thrilled to announce that we have partnered with dbt Labs, the company behind the premier analytics engineering product: dbt. dbt is fast becoming the new standard for the data transformation workflows that power and populate our curated layers.

Why dbt?
dbt Labs’ platform enables organisations to easily build Analytics Engineering best practice into the processes and operations throughout their business, improving data trust through consistency and shipping data products faster, thereby providing value quicker. We have partnered with dbt Labs for several reasons. First and foremost, dbt is a leading platform for analytics engineering, offering a range of tools and capabilities for building and deploying data models. Additionally, the platform is user-friendly and intuitive: it covers the full lifecycle of a data product and provides value to teams. Furthermore, dbt allows teams to develop data products faster, work from the same set of assumptions, deploy with confidence, and do all of this in a way that is secure and well-governed. What's even more exciting is that dbt can be integrated with Microsoft Fabric and Databricks, two of our other valued partners. We truly believe this partnership will drive innovation in the field of Analytics Engineering and help us continue delivering high-quality services to our customers.

Why is this important for our clients?
By partnering with dbt Labs, we are able to offer our customers access to the latest and greatest in Analytics Engineering tools. This means that we can provide more advanced and sophisticated solutions to meet their business needs. Additionally, dbt’s platform is built for companies looking to democratise analytics across their organisation, enabling data experts and domain experts to work together to build analytics into their daily operations. We are excited to work with dbt Labs and even more excited to bring value to our customers. Stay tuned for more exciting updates! To read more about dbt, have a look at their website.

LLMOps With PromptFlow

Imagine stepping into a world where artificial intelligence (AI) and machine learning (ML) aren't just about complex algorithms and code but about making things run smoother, safer, and on a bigger scale. Welcome to the world of PromptFlow, a tool that's not merely cutting edge—it's redefining the edge itself within the Azure Machine Learning & AI studio.

What is LLMOps?
LLMOps stands for Large Language Model Operations, a crucial framework that ensures the smooth operation of large language models from their development phase to their deployment and ongoing management. It involves crafting effective prompts for accurate responses, deploying these complex models seamlessly into production, and continuously monitoring their performance to guarantee they remain effective, safe, and within ethical boundaries. By integrating LLMOps principles, organisations can harness the full potential of their large language models, ensuring they're not only powerful but also responsibly managed.

LLMOps vs MLOps
MLOps is all about managing the entire life cycle of machine learning models, from cradle to grave. We're talking data preparation, model training, deployment, monitoring, and even retraining when necessary. It's a comprehensive approach to ensuring your ML models are performing at their best, no matter what stage they're in. LLMOps, on the other hand, is a bit more specialised. It's focused specifically on the life cycle of large language models (LLMs). These bad boys are trained on massive amounts of text data and can be used for all sorts of cool stuff, like generating text, translating languages, and even answering questions in a way that's actually informative (imagine that!). So, while MLOps is more of a catch-all for ML model management, LLMOps is tailored specifically for the needs of those massive language models. But hey, whether you're dealing with MLOps or LLMOps, the goal is the same: ensuring your models are running smoothly and delivering top-notch performance.

Implementing LLMOps with PromptFlow & GitHub Actions
With the Azure ML CLI and YAML configurations for GitHub Actions, automating PromptFlow workflows becomes a breeze. Think of it as setting up a series of dominoes; once you tip the first one, everything else follows smoothly. Here’s how it goes:

Kick things off by checking out the repository.
Get the Azure ML CLI extension onboard—this is like adding a turbocharger to your Azure CLI.
Log into Azure with a Service Principal, using secrets as your backstage pass.
Set up Python—choose your version, like picking the right gear for a road trip.
Install PromptFlow dependencies—think of it as packing your bag with everything you need for the journey.
Run PromptFlow—start your engines and begin the adventure, logging everything along the way.
Set the run name—it's like naming your vessel for the voyage.
Show off the current run name—a little bragging about your ship's name.
Display PromptFlow results—time to look at the snapshots from your trip.

But why stop there? It's time to move on to evaluating results and registering models. It's somewhat like reaching your destination, checking if everything's as you expected, and then making your mark.

Taking it further: a second action could then assert the evaluation results and do the model registration.

Start with checking out the repository again—returning to base.
Install that Azure ML CLI extension once more—because why fix what isn't broken?
Log back into Azure—hello again, old friend.
Set up Python—still got to have the right gear.
Configure Azure CLI with your subscription—like choosing the right map for the journey ahead.
Install those dependencies again—can't embark without your essentials.
Assert evaluation results—execute a Python script to check whether the evaluation results meet a predefined quality threshold, and set an output variable based on the assertion result (a sketch of such a script follows this list).
Register the PromptFlow model—if the assertion in the previous step is true (indicating the model meets the quality criteria), register the model using the configurations defined in your YAML file.
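As an illustration of that assert step, here is a rough sketch of the kind of Python script the workflow might call. The metrics file path, the metric key and the threshold are all assumptions for the example; writing to the file named by GITHUB_OUTPUT is the standard GitHub Actions mechanism for passing an output variable to later steps:

import json
import os
import sys

# Assumed inputs for this sketch: a metrics JSON produced by the PromptFlow
# evaluation run, a metric name, and a minimum acceptable score.
METRICS_FILE = "eval_results/metrics.json"   # hypothetical path
METRIC_NAME = "gpt_groundedness"             # hypothetical metric key
QUALITY_THRESHOLD = 3.5                      # hypothetical threshold

with open(METRICS_FILE) as f:
    metrics = json.load(f)

score = float(metrics[METRIC_NAME])
passed = score >= QUALITY_THRESHOLD
print(f"{METRIC_NAME}={score} (threshold {QUALITY_THRESHOLD}) -> {'PASS' if passed else 'FAIL'}")

# Expose the result to later workflow steps as an output variable.
github_output = os.environ.get("GITHUB_OUTPUT")
if github_output:
    with open(github_output, "a") as f:
        f.write(f"eval_passed={str(passed).lower()}\n")

# Fail the job outright if the quality bar isn't met.
sys.exit(0 if passed else 1)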
Follow up with a final action that automates the process of deploying and testing a PromptFlow workflow to Azure Machine Learning as an online managed endpoint (a sketch of the final invoke step follows this list).

Check out the repository.
Install the Azure ML CLI extension.
Azure login.
Set up Python.
Set the default subscription.
Create a unique hash: generate a short hash to ensure the endpoint name is unique.
Create and display a unique endpoint name: modify the endpoint name to include the generated hash for uniqueness and display it.
Set up the online endpoint: create an online endpoint using the YAML configuration.
Update the deployment configuration: dynamically update the deployment YAML with specific Azure resource identifiers.
Set up the deployment: create an online deployment on the previously created endpoint, directing all traffic to it.
Retrieve and store the principal ID: obtain the principal ID of the managed identity associated with the endpoint and store it for later use.
Assign an RBAC role to the managed identity: assign the "Azure Machine Learning Workspace Connection Secrets Reader" role to the managed identity to grant the necessary permissions.
Wait for the role assignment to propagate: pause the workflow for 5 or 10 minutes to ensure the role assignment has taken effect.
Check the status of the endpoint: verify the operational status of the online endpoint.
Check the status of the deployment: retrieve and display logs for the deployment to ensure it is operating as expected.
Invoke the model: test the endpoint by invoking it with a sample request.
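For that last step, a minimal sketch of invoking an online managed endpoint from Python is shown below. The scoring URI, key, and request payload are placeholders, and the exact input schema depends on your flow; in the workflow itself this is typically done from the CLI, but the same call can be made directly over HTTPS:

import requests

# Placeholders: take these from the endpoint created in the previous steps.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<endpoint-key>"

# Hypothetical sample request; the schema must match your flow's inputs.
payload = {"question": "Which regions saw the biggest sales uplift last quarter?"}

response = requests.post(
    scoring_uri,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    timeout=60,
)
response.raise_for_status()
print(response.json())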
With these three action examples you can achieve a full LLMOps solution, but you must also consider best practices as well as scaling up to meet future needs.

LLMOps Best Practices:
1. Crafting Your Team’s Blueprint - Think of this as assembling your superhero squad, where everyone from data scientists to security gurus knows their mission. And don’t forget to build a treasure chest (a.k.a. a documentation repository) where all your maps and tools are safely kept.
2. Centralise Data Management - Data is your goldmine, so manage it like a pro. Use a system that can handle the load without breaking a sweat, set rules to keep the gold polished (quality and privacy, anyone?), and keep track of your treasure’s versions for easy comebacks.
3. Automated Deployment and Continuous Monitoring - Make deploying your models as easy as waving a wand by using automation. Keep an eye on your creations with smart monitoring and let CI/CD pipelines be your fast track to improvement.
4. Fortifying Your Castle - Implement Security Measures - Security is key. Only let the trusted knights into your castle, put up magical shields around your data, and follow the kingdom’s laws (hello GDPR and CCPA) to protect the realm.
5. The Quest for Understanding - Promote Explainability - Use the powers of Explainable AI to peel back the curtain on your model’s decisions, hunt down biases like dragons, and foster a kingdom where transparency reigns supreme.
6. Embracing the Journey of Knowledge - Stay curious, collect insights like precious gems, and always be ready to adapt. Your models are living entities, growing and changing with the landscape.
7. The Ethical Compass - Adhere to Data Ethics - Navigate the high seas with honour. Follow the stars of ethical guidelines, ponder the impact of your actions, and keep a logbook to chart your course and hold yourself accountable.

PromptFlow is fantastic at deploying models for real-time scoring, but as your needs grow, so should your tools. Large-scale LLM applications might require additional mechanisms for scaling and load balancing across multiple instances of the Azure OpenAI Service. Enter Azure API Management for scaling and balancing, ensuring your application runs smoothly, no matter the load. Let’s explore the following example, illustrated by Andre Dewes in his Smart load balancing for OpenAI endpoints and Azure API Management blog post. This sample leverages Azure API Management to implement a static, round-robin load balancing technique across multiple instances of the Azure OpenAI Service. The key benefits of this approach include:

Support for multiple Azure OpenAI Service deployments behind a single Azure API Management endpoint.
Abstraction of Azure OpenAI Service instance and API key management from application code, using Azure API Management policies and Azure Key Vault.
Built-in retry logic for failed requests between Azure OpenAI Service instances.

By integrating Azure API Management, PromptFlow users can achieve scalability and high availability for their LLM applications, ensuring seamless operation even under heavy load.

In a nutshell, PromptFlow is not just a tool; it's a revolution in LLMOps, making the management of large language models not just possible but efficient, secure, and scalable. Its seamless integration with Azure DevOps and GitHub, coupled with its adaptability across different project stages, makes it a premier solution for anyone looking to enhance their AI operations.

Are you struggling with taking your LLM application into production? Reach out to discover the expansive potential of LLMs for your enterprise. To learn more about implementing LLMs in your enterprise, download our LLM Workshop flyer or get in touch with us today.

Comparison of Prompt Flow, Semantic Kernel, and LangChain for AI...

Introduction
The world of AI is moving super fast! It's like we're on a speedy train, watching the landscape of artificial intelligence change and grow every day. With all this growth, we've got some cool tools popping up to help us develop even smarter large language model (LLM) apps powered by generative AI. Recently, I looked into the options within Azure, and as always there is more than one option. I started my journey in the world of LLM app development using LangChain. I then came across Prompt Flow and Semantic Kernel (SK). These are pretty much the three big players when it comes to LLM app development. Think of them as the superheroes of the LLM app development world, each with their own special powers. But why do we have so many of these tools, and why do they seem to be doing the same thing? Well, even though they aim to make applications that are smarter and more helpful, each one has its own unique functionalities. Let's dive in and explore what makes LangChain, Semantic Kernel, and Prompt Flow stand out in the bustling world of AI. It's going to be a fun ride, so buckle up!

Introduction to LangChain
LangChain is a framework that simplifies integrating LLM capabilities into your application. It supports models by OpenAI, Cohere, HuggingFace, Anthropic and many more. The essence of LangChain is captured in its name, which combines "Lang" (short for language) with "Chain," reflecting the framework's core functionality of linking various components to harness LLMs for creating sophisticated language-based applications. The framework is built to support a wide array of systems, tools, and services, positioning LangChain as a versatile and comprehensive solution for developing applications that leverage language models. It provides developers with a suite of features for tasks like document handling, code generation, data analysis, debugging, and integrating with databases and various data sources.

Pros:
Accessible Learning Curve: Easy for beginners to grasp and start using quickly.
Comprehensive Features: Packed with out-of-the-box plugins for a wide range of AI applications.
Ideal for Prototyping: Encourages quick experimentation and real-world application testing.
Consistent Updates: The community around LangChain is active, and the framework is regularly updated to incorporate the latest developments in LLMs and best practices in AI.

Cons:
Documentation Variability: Can be inconsistent, with some documentation being outdated or unclear.
Overcomplication Concerns: Sometimes seen as adding unnecessary complexity to simpler tasks.
Production Reluctance: Due to its expansive nature and frequent updates, some developers may be cautious about using LangChain in production environments.

For those interested in exploring LangChain further, extensive documentation is available to help you dive in. Additionally, the project's GitHub repository offers a collection of example code, demonstrating how to build applications using LangChain effectively.

Introduction to Semantic Kernel (SK)
Semantic Kernel is an open-source SDK from Microsoft that makes it easy to build AI agents that can interact with a wide range of LLMs as well as call existing application functions. It is available in Python, C#, and Java. Semantic Kernel revolves around a Kernel that handles events using plugins and functions to complete a task. It also includes a Planner that can automatically decide which plugins and functions to use given a prompt.
Pros:
Microsoft Support and .NET Integration: Offers reliability and seamless integration within the Microsoft ecosystem.
Streamlined Prompt Management: Simplifies the creation and management of AI interactions.
Versatility in Application: Suitable for both experimentation and deployment in real-world scenarios.
Plugin Compatibility: Allows for easy transformation of SK plugins into OpenAI plugins.

Cons:
Adaptation Requirement: Needs regular updates to keep pace with AI advancements.
Resource and Documentation Scarcity: Limited support materials can hinder learning and application.
Integration Learning Curve: Presents a steeper learning curve due to its unique approach.
Plugin and Extension Availability: Offers a narrower range of plugins and extensions compared to competitors.

To get started with SK, Microsoft has published a GitHub repo where you can find notebooks in Python, C# and Java, along with pre-built functions and plugins.

Semantic Kernel vs LangChain
Semantic Kernel is very similar to LangChain; however, LangChain now has more features than Semantic Kernel in practical scenarios and has a far bigger community behind it. Choosing between LangChain and Semantic Kernel depends on specific project requirements, preferred programming languages, and the desired level of integration and flexibility. LangChain is well-suited for projects that benefit from its wide range of out-of-the-box tools and active developer engagement. In contrast, Semantic Kernel may be the better choice for projects within the .NET ecosystem or those that require a lightweight framework with strong planning and memory management capabilities.

Introduction to Prompt Flow
Prompt Flow is a toolkit designed to streamline the development of AI apps powered by Large Language Models (LLMs), from prototyping, testing and evaluation through to deployment and monitoring. It makes prompt engineering much easier and enables developers to build high-quality LLM applications. It's available as an open-source project complete with its own SDK and Visual Studio Code extension. However, its static nature means we often need something more, like LangChain or Semantic Kernel, to bring dynamic orchestration into the mix. For instance, in a customer service scenario, a static system might provide generic answers to frequent questions without considering the specific details of a customer's issue. On the other hand, a dynamic orchestration powered by LangChain or Semantic Kernel would analyse the customer's query in real time, pull specific data related to their account or issue from a claims database, and generate a response tailored to their unique situation.

Example of Dynamic Orchestration:

Without Dynamic Orchestration:
Customer: "I'm having issues with my product warranty claim."
Response: "Here's our warranty policy [link]. Please follow the steps mentioned for claims."

With Dynamic Orchestration (LangChain/Semantic Kernel):
Customer: "I'm having issues with my product warranty claim."
AI System: Analyses the query, identifies the product and customer account from the database, checks the status of the warranty claim.
Response: "We see that your claim for [Product Name] is currently under review. It was submitted on [Date], and the average processing time is [X] days. You'll receive an update by [Estimated Date]."
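As a rough illustration of what that dynamic path can look like in code, here is a minimal LangChain sketch. The claims lookup is a stand-in for a real database call and the model name is just an example; the point is that the chain fetches customer-specific data before the LLM drafts the reply:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

def lookup_claim(query: str) -> str:
    # Hypothetical claims-database lookup; in reality this would query your systems.
    return "Claim 1234 for Acme Widget, submitted 2024-05-01, status: under review"

prompt = ChatPromptTemplate.from_template(
    "Customer query: {question}\nClaim record: {claim}\n"
    "Write a short, specific reply based only on the claim record."
)
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# The dict fans the incoming question out to the lookup and the prompt in parallel.
chain = (
    {"claim": RunnableLambda(lookup_claim), "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("I'm having issues with my product warranty claim."))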
Pros:
Enhanced Coherence: Ensures AI outputs are both coherent and relevant.
Improved Control: Allows users to direct conversations or content generation towards specific goals.
Flexibility: Adaptable across various applications, from creative writing to solving technical issues.
Orchestrated Executable Flows: Simplifies complex workflows through a visual graph, enhancing AI output effectiveness.
Enhanced Debugging and Collaboration: Facilitates teamwork and iterative improvements on executable flows.
Dynamic Prompt Variation: Enables experimentation with prompt variants to optimise AI interactions.
One-Click Deployment: Easy to deploy the application to a real-time endpoint.

Cons:
Complexity in Design: Demands careful planning and understanding of AI model interpretations.
Potential for Derailment: Risks off-topic or irrelevant AI responses without proper management.

To get started with Prompt Flow, have a look at the prompt flow samples and explore the step-by-step tutorials.

Semantic Kernel or LangChain in Prompt Flow
The good news is that it is possible to integrate LangChain and Semantic Kernel into Prompt Flow to build a powerful AI system.

LangChain Integration: By incorporating LangChain into Prompt Flow, you can leverage its modular approach to customise and extend the AI’s abilities. LangChain supports a broad range of language models and tools, enabling the creation of complex, multi-step processes that can adapt to various tasks and user needs. For example, a content creation platform can utilise LangChain to dynamically chain together processes for researching, drafting, and refining written content, based on user prompts and feedback.

Semantic Kernel Integration: Semantic Kernel's integration brings structured decision-making and AI-driven task automation into Prompt Flow. It allows for the creation of intelligent agents that can understand context, make decisions, and execute tasks by interacting with databases, APIs, and other system components. An eCommerce support chatbot enhanced with Semantic Kernel, for instance, could automatically handle inquiries, access user purchase histories, initiate returns, or escalate complex issues to human agents, all within a seamless conversation flow.

Real-World Applications: Case Studies
To truly grasp the utility of these tools, let's consider two hypothetical scenarios: an AI-driven customer support system and a content generation platform for digital marketing. These examples illustrate how Prompt Flow, LangChain, and Semantic Kernel can be applied to solve real-world problems, from improving customer service interactions to automating content creation across various platforms.

Case Study 1: Automated Customer Support
Scenario: A tech company wants to improve its customer support system by integrating an AI that can handle inquiries through chat.

Prompt Flow Application: The company designs a series of prompts to guide the AI in handling common customer issues, such as password resets or troubleshooting service outages. This ensures that the AI maintains coherence and stays on topic throughout the interaction. However, the static nature of prompt flow might limit the AI's ability to handle unexpected or complex queries without manual intervention or reprogramming.

LangChain Application: By utilising LangChain, the company chains together different tasks like understanding the customer's issue, searching the knowledge base for solutions, and generating human-like responses. LangChain's modularity allows the AI to adapt to a wide range of inquiries by dynamically selecting the appropriate chain of tasks for each situation. This flexibility makes the customer support system more robust and capable of handling complex issues without human intervention.
Semantic Kernel Application: The company uses SK to build an AI agent that can interact with both the customer and the company's systems. SK's event handling and automatic decision-making capabilities allow the AI to understand the context of customer inquiries, decide the best course of action, and even perform tasks like resetting passwords or initiating service diagnostics. The integration with .NET and support for various LLMs make SK an ideal choice for companies already invested in Microsoft technologies.

Case Study 2: Content Generation for Digital Marketing
Scenario: A digital marketing agency wants to automate the creation of content for various platforms, requiring nuanced tone and style adjustments.

Prompt Flow Application: The agency uses a detailed prompt flow to guide the AI in generating content that aligns with different brand voices and marketing goals. This approach ensures that each piece of content is coherent and on-brand. However, the need for extensive and detailed prompt flows for each brand voice and style can become labour-intensive.

LangChain Application: The agency leverages LangChain to chain together tasks for analysing the target audience, generating content ideas, and writing drafts in specified tones and styles. LangChain's ability to integrate different LLM capabilities and plugins allows for more dynamic content generation, making it easier to adapt to various brands and marketing strategies without extensive manual input.

Semantic Kernel Application: By utilising SK, the agency creates an AI agent capable of understanding the nuances of different brand voices and marketing objectives. The Kernel's event handling and the Planner's decision-making processes enable the AI to automatically adjust its writing style and content focus based on the brand's guidelines and the campaign's goals. SK's clean prompt implementation and plugin system allow for seamless integration into the agency's existing workflows.

Conclusion
Prompt Flow is excellent for maintaining coherence and control in predictable scenarios but may require adjustments for complex or unexpected inquiries. LangChain excels in applications requiring flexibility and the integration of multiple tasks or data sources, making it ideal for dynamic and complex scenarios. Semantic Kernel offers robust event handling and decision-making capabilities, suitable for scenarios where integration with existing systems and nuanced control over AI behaviour are critical.

Choosing between Prompt Flow, LangChain, and Semantic Kernel depends on the specific needs and challenges of your project. Whether it's the flexibility and orchestration capabilities of LangChain, the integration and streamlined management offered by Semantic Kernel, or the coherent and controlled environment facilitated by Prompt Flow, each tool has its place in the AI development toolkit. Understanding their strengths and limitations is key to leveraging the right tool for the right task in the ever-evolving world of artificial intelligence.

An Ultimate Guide to Databricks Unity Catalog

Databricks Unity Catalog (UC) has gained significant attention lately, with Databricks making huge investments and shifting to make it the default choice for all new Databricks accounts.

What is Databricks Unity Catalog?
Unity Catalog is Databricks’ governance solution and serves as a unified system for managing data assets. It acts as a central storage repository for all metadata assets, accompanied by tools for governing data, access control, auditing, and lineage. Unity Catalog streamlines the security and governance of data by providing a central place to administer and audit data access. It maintains an extensive audit log of actions performed on data across all Databricks workspaces and has effective data discovery capabilities. Essentially, it brings all your Databricks workspaces together, offering fine-grained management of data assets and access. This not only streamlines operations by reducing maintenance overheads but also accelerates processes and increases efficiency and productivity.

Why Databricks Unity Catalog?
Databricks gained popularity for introducing the Lakehouse architecture, establishing a resilient platform for data storage and processing. The Lakehouse concept relies on the Delta file format as its driving force, effectively tackling data management challenges within Lakehouse platforms. However, there remained gaps in data discovery and governance, as well as a lack of fine-grained security controls within these Lakehouse data platforms. Databricks introduced Unity Catalog to bridge these gaps and to eliminate the need for external catalogs and governance tools. Crucially, Databricks is making these capabilities available by default as part of its platform, potentially offering a huge win for customers who adopt Unity Catalog.

Key Features of Databricks Unity Catalog
The key features of Databricks Unity Catalog revolve around data discovery, governance, access, lineage, and auditing. Let's explore these features:

Data Discovery
Unity Catalog offers a structured way to manage metadata, with enhanced search capabilities, while ensuring security is based on user permissions. It allows tagging and documenting data assets, offers a comprehensive search interface, and utilises lineage metadata to represent relationships within the data. Users can explore data objects in Unity Catalog through Catalog Explorer, based on their permission levels. They can use languages like SQL and Python to query and create datasets, models, and dashboards from the available data objects. Users can also check field details, read comments, and preview sample data, along with reviewing the full history and lineage of the objects.

Data Governance
Unity Catalog acts as a central repository for various data assets, such as files, tables, views, and volumes. It incorporates a data governance framework and maintains an audit log of actions performed on data stored within a Databricks account.

Data Access
Unity Catalog provides a centralised platform to manage data access policies that are applied across all relevant workspaces and data assets. The access control mechanisms use identity federation, allowing Databricks users to be service principals, individual users, or groups. In addition, SQL-based syntax or the Databricks UI can be used to manage and control access based on tables, rows, and columns, with attribute-level controls coming soon.
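To make that concrete, here is a small, hypothetical sketch of managing access with that SQL syntax from a Databricks notebook. The catalog, schema, table and group names are all placeholders; granting privileges on securable objects and using dynamic views for column masking are the standard Unity Catalog mechanisms:

from pyspark.sql import SparkSession

# In a Databricks notebook `spark` already exists; getOrCreate() returns that session.
spark = SparkSession.builder.getOrCreate()

# Hypothetical names throughout: catalog "main", schema "sales", group "data_analysts".
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `data_analysts`")

# Column-level control via a dynamic view: mask emails for anyone outside a privileged group.
spark.sql("""
CREATE OR REPLACE VIEW main.sales.orders_restricted AS
SELECT
  order_id,
  region,
  CASE WHEN is_account_group_member('sales_admins') THEN customer_email
       ELSE '#####' END AS customer_email,
  amount
FROM main.sales.orders
""")

# Review what has been granted on the table.
spark.sql("SHOW GRANTS ON TABLE main.sales.orders").show()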
Fine-grained privileges operate across the various levels of the metastore's three-level namespace, with inheritance occurring downward in the namespace hierarchy. You can control who can access things like clusters, DLT pipelines, SQL warehouses, and notebooks at the workspace level using Access Control Lists (ACLs). Admin users and users with ACL management privileges can manage these ACLs. Through Unity Catalog's access control management, you enhance security by eliminating direct access to the underlying data storage, thereby adding an additional layer of protection.

Data Lineage
Unity Catalog provides end-to-end data lineage for all workloads, giving visibility into how data flows and is consumed. Data lineage has become vital in understanding data movement, tracking and monitoring jobs, debugging failures and tracing transformation rules. The lineage feature provides a comprehensive view of both upstream and downstream dependencies, including the data type of each field. Users can easily follow the data flow through different stages, gaining insights into the relationships between tables and fields. The lineage tracks the individual fields within a dataset, not just the table level. This enables users to examine the transformations applied at the field level and understand the components of the field sources, which makes the feature extremely valuable and effective. As data lineage holds critical information about the data flow, it uses the same governance and security model to restrict access based on users' privileges.

Data Auditing
Unity Catalog automatically captures user-level audit logs and records data access activities. These logs encompass a wide range of events associated with the catalog, such as creating, deleting, and altering various components within the metastore, including the metastore itself. Additionally, they cover actions related to storing and retrieving credentials, managing access control lists, handling data-sharing requests, and more. The built-in system tables let you easily access and query the account's operational data, including audit logs, billable usage details, and lineage information.

Key Components of Databricks Unity Catalog

Unity Catalog Architecture
Unity Catalog has a metastore, similar to Hive metastores, but this layer of abstraction allows more efficient and improved categorisation of data assets. The metastore serves as the primary container for objects in Unity Catalog, holding crucial information about data assets such as tables, databases, and other data objects, making it easier to manage and discover data. The metadata contains essential information such as schema definitions, data types, data source locations, and the permissions governing access to these data assets. There is a well-defined and efficient upgrade path for migrating from the Hive metastore to the Unity Catalog metastore, simplifying the adoption of Unity Catalog. This streamlined process ensures a smooth migration, minimising disruptions and maximising the benefits of Unity Catalog features immediately. Unity Catalog is restricted to having only one metastore per region. A key factor to keep in mind about the metastore is the location of its underlying storage container. This is significant because various catalogs across different business areas may need to access the centrally hosted container.
To enable Unity Catalog in Azure Databricks, an Azure AD Global Administrator is required initially. The initial Databricks account admin will be the Azure AD Global Administrator, and additional account admins can then be assigned without specific Azure AD roles. The Azure Databricks account must be on the Premium plan. Unity Catalog adopts a three-level namespace structure for the various types of data assets in the catalog.

Cloud Storage
To efficiently handle connections between Databricks and cloud storage such as Azure Data Lake Storage, Databricks recommends configuring access to cloud storage exclusively through Unity Catalog. The following concepts were introduced to manage these relationships:

Storage credentials serve as the credential that provides access to the storage account, such as Azure Data Lake Storage. A storage credential uses either an Azure managed identity (recommended) or a service principal.

External locations consist of a reference to a storage credential and a path within the storage account. They provide the authorisation for access to the specified storage path using the storage credential.

Managed storage locations serve as the default storage location for both managed tables and volumes. They represent the locations in the storage account linked to a metastore, catalog, or schema. This allows you to easily secure your data and storage accounts, as it adds another layer of protection.

Access Connector for Azure Databricks
Configuring a managed identity for Unity Catalog in Azure involves creating an access connector specifically for Azure Databricks. By default, the access connector comes with a system-assigned managed identity; however, you have the option to link a user-assigned managed identity. Following this, the managed identity must be given appropriate access to the storage account, i.e. Storage Blob Data Contributor/Owner.

Unity Catalog Object Model
The primary flow of data objects begins from the metastore to tables or volumes. All data objects are referenced using the three-level namespace of catalog, schema, and asset, in the format catalog.schema.asset. Let's explore each data object:

Metastore
Serves as the top-level container for metadata, employing a three-level namespace to organise the data. To utilise Unity Catalog, a workspace must be linked to a Unity Catalog metastore. A separate metastore is needed for each region and should be assigned to the Databricks workspaces operating in that region.

Catalog
This functions as the first layer in the object hierarchy and three-level namespace. It organises the data assets and contains schemas (databases), tables, views, volumes, models, and functions.

Schema (Database)
This forms the second layer of the object hierarchy and three-level namespace. It contains the tables and views held in the schema. Schemas are also referred to as databases.

Tables
This forms the third layer of the object hierarchy and three-level namespace. Tables can be managed or external and contain rows and columns of data. Managed tables allow Unity Catalog to manage the data lifecycle and file layout, and this is the default table type. When managed tables are dropped, the underlying data files are deleted. Data is stored in the root storage location by default, but a specific storage location can be provided at the time of creation. External tables, in contrast, do not give Unity Catalog the ability to manage the data lifecycle and file layout. Therefore, when external tables are dropped, the underlying data files are not deleted.
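As a brief, hedged illustration of that difference, creating the two table types from a notebook might look like the sketch below. The catalog, schema and storage path are placeholders, and the external path is assumed to be covered by an already-configured external location:

# Assumes a Databricks notebook where `spark` is provided and Unity Catalog is enabled.

# Managed table: Unity Catalog owns the data lifecycle; dropping it deletes the files.
spark.sql("""
CREATE TABLE IF NOT EXISTS main.sales.orders_managed (
  order_id BIGINT,
  region   STRING,
  amount   DECIMAL(10, 2)
)
""")

# External table: data lives at a path governed by an external location;
# dropping the table leaves the underlying files in place.
spark.sql("""
CREATE TABLE IF NOT EXISTS main.sales.orders_external (
  order_id BIGINT,
  region   STRING,
  amount   DECIMAL(10, 2)
)
LOCATION 'abfss://data@mystorageaccount.dfs.core.windows.net/sales/orders'
""")

# Both are addressed with the same three-level namespace.
spark.table("main.sales.orders_managed").printSchema()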
Data is stored in the metastore's root storage location by default, but a specific storage location can be provided when the table is created. External tables, in contrast, do not give Unity Catalog control over the data lifecycle and file layout, so when an external table is dropped, the underlying data files are not deleted.

Views

This forms the third layer of the object hierarchy and three-level namespace. A view is a read-only object derived from one or more tables and views in the metastore. Users can create dynamic views, enabling row- and column-level permissions. Views are similar to SQL views in that the data is not persisted in storage; they serve as virtual snapshots that dynamically produce output when executed, providing a flexible and efficient way to interact with real-time, up-to-date data without persisting it.

Volumes (still in Public Preview as of January 2024)

Volumes provide a new way to load and link external object storage into Unity Catalog. While they store objects in the data storage directory, they behave differently from registered tables. Volumes can be managed or external and act as a conceptual data volume in cloud storage. Unlike tables, volumes offer the flexibility to store a wide variety of file types: structured and semi-structured data such as CSVs and XML, and unstructured data such as PDFs, images, video, and audio. They resemble the traditional storage mounts in Databricks, but with stronger security and controls; traditional mounts were accessible to anyone in the workspace and were therefore not very secure, whereas volumes follow the same access and governance policies as schemas and tables.

Functions (still in Public Preview as of January 2024)

This forms the third layer of the object hierarchy and three-level namespace. Registering a custom function to a schema enables specific custom logic to be reused across the entire environment.

Models

This forms the third layer of the object hierarchy and three-level namespace. A model is a machine learning model registered in the MLflow Model Registry. Users need specific privileges to create models within catalogs and schemas.
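Before wrapping up, here is a brief sketch of how the dynamic views and volumes described above might look in practice. The is_account_group_member() function is built into Databricks SQL; every object and group name in the sketch is hypothetical.

```python
# A minimal sketch: a dynamic view with row-level filtering, and a volume for
# arbitrary files. Run from a Databricks notebook; all names are hypothetical.

# Dynamic view: rows are filtered according to the querying user's group
# membership, using the built-in is_account_group_member() function.
spark.sql("""
    CREATE OR REPLACE VIEW sales.crm.customers_regional AS
    SELECT customer_id, name, region
    FROM sales.crm.customers
    WHERE is_account_group_member('admins')
       OR (is_account_group_member('uk_team') AND region = 'UK')
""")

# Volume: a governed location for non-tabular files (CSVs, PDFs, images, ...).
spark.sql("CREATE VOLUME IF NOT EXISTS sales.crm.raw_files")
# Files in the volume are addressed under /Volumes/<catalog>/<schema>/<volume>/.
```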
References https://docs.databricks.com/en/data-governance/unity-catalog/index.html#the-unity-catalog-object-model https://docs.databricks.com/en/data-governance/index.html https://docs.databricks.com/en/security/auth-authz/access-control/index.html https://docs.databricks.com/en/data-governance/unity-catalog/data-lineage.html https://docs.databricks.com/en/lakehouse/collaboration.html https://docs.databricks.com/en/data-governance/unity-catalog/audit.html

The 4 Best Vector Database Options for your LLM Projects

As the potential of Generative AI and Large Language Models (LLMs) continues to grow at a frightening pace, it can be hard to know where to get started and to get your head around all the tools needed for a successful LLM project! One key tool in any successful implementation is a vector database: these databases are used to efficiently store and retrieve vector representations of the text or other data used with your model. I'm going to walk you through what I think are the 4 best options available, but first we should probably answer a few basic questions: what is an LLM? What is a vector database? How do they work?

What is an LLM?

Put simply, an LLM is a type of artificial intelligence model designed to understand and generate human-like language. Trained on massive datasets with billions of words, these models wield deep learning magic to excel at tasks such as text completion, summarisation, translation, and question answering. They go beyond their NLP predecessors thanks to their generative capability, producing human-like text based on context, which makes them a fantastic tool across a broad spectrum of applications, from content creation to natural language conversations.

What is a Vector Database?

A vector database is a specific type of database that indexes and stores vector embeddings for fast retrieval and similarity search. In this context, a "vector" is a mathematical representation of an object or data point in a multi-dimensional space. These databases are optimised for tasks where the relationships and similarities between data points are crucial. In addition to numerical vectors, vector databases are increasingly relevant for storing and managing vector representations of textual data generated by LLMs, expanding their applications into the realm of natural language understanding.

Tokenisation and Embedding

Tokenisation is the process of breaking down blocks of text into smaller units, typically words or subwords, to facilitate the analysis of language. Embedding, on the other hand, involves representing these tokens as vectors in a high-dimensional space, capturing semantic relationships between words. These embedded tokens form the basis of an LLM's understanding of language. Their contextual nature, influenced by surrounding tokens, enables LLMs to process input text, capture semantic relationships, and generate coherent and contextually relevant responses across a wide range of natural language processing applications.
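To make embedding a little more concrete, here is a minimal sketch using the open-source sentence-transformers library. This is just one common way to turn text into vectors, not something any of the databases below require, and the model name is simply a popular example.

```python
# A minimal sketch: turning text into embedding vectors with the open-source
# sentence-transformers library (pip install sentence-transformers).
# The model name is just one common example.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Vector databases store embeddings for similarity search.",
    "LLMs generate human-like text from context.",
]
embeddings = model.encode(docs)  # one fixed-length vector per document
print(embeddings.shape)          # e.g. (2, 384) for this particular model
```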
Vector Databases

Right, on to the good stuff! Let's take a look at the 4 best vector databases for use in LLM projects!

Azure AI Search

Up first, Azure AI Search, formerly known as Azure Cognitive Search! This absolute powerhouse of a vector database is a fully managed, cloud-based, AI-powered information retrieval platform from Microsoft Azure. It is a fantastic option because it lets you add powerful search capabilities to your LLM project without the need for extensive infrastructure management. One of the key benefits is scalability: you can easily index and search through whatever volume of data your business has and support high-traffic loads. Not only is it highly scalable, but as part of the Azure offering it integrates seamlessly with the other services in your Azure AI project, making it incredibly easy to implement right off the bat. Finally, it offers state-of-the-art search capabilities, using hybrid retrieval that combines vector and keyword search to bring you better, faster results. Overall, Azure AI Search is an incredible option for an LLM project that requires powerful search capabilities and scalability without extensive infrastructure management.

Pinecone

Pinecone is another fully managed, cloud-based vector database designed for efficiently storing, indexing, and querying high-dimensional vector data. Specifically, Pinecone focuses on providing a robust solution for similarity search in large datasets. As another strong option, Pinecone offers many of the same benefits as Azure AI Search in terms of scalability and infrastructure management, and it also offers hybrid search for fast and relevant results. It is, however, cloud agnostic, so it can be used with Microsoft Azure, AWS, and Google Cloud, making it a great choice for multi-cloud solutions. In a sentence: Pinecone is a great option for an LLM project that requires similarity search across large datasets and needs to be cloud-agnostic.

Chroma

Chroma is an open-source vector database designed for storing and retrieving vector embeddings. As with all the vector databases in this list, it wouldn't be one of 'the best' if its search were slow or its results poorly ranked, so naturally Chroma's search is lightning fast and returns excellent results. One of Chroma's key strengths, however, is its simplicity: it is very easy to use, and it is pretty much just a case of pip installing it, importing the library and you are good to go! With just a few lines of code you can begin adding your text documents to a collection, which will automatically handle tokenisation, embedding and indexing for you, making it super easy to integrate into any LLM project (see the short sketch after this section). Being open-source has its benefits, in that it is often a lot cheaper to host as you do not need to pay for a managed service; however, that comes with the overhead of managing the infrastructure yourself if you want to use it at scale. If you are working on an LLM project that requires a simple and easy-to-use vector database that can be self-hosted, look no further than Chroma.

Weaviate

Weaviate is another open-source vector database, created by SeMI Technologies, designed to handle high-dimensional vector data efficiently and provide a platform for building applications that involve searching and analysing complex data structures… such as an LLM! Not to sound like a broken record, but Weaviate again offers rich vector search, easy development, and high performance. It also brings a number of modules with out-of-the-box support for vectorisation and allows you to pick from a wide variety of well-known neural search frameworks using Weaviate integrations. While it is open-source and gives you the option to self-host and manage it yourself, Weaviate also has a cloud offering that can run serverless in Weaviate Cloud, or 'bring your own cloud' and run inside Azure, AWS, or GCP. Finally, Weaviate is a superb option for any LLM project that requires a high-performance vector database with out-of-the-box support for vectorisation and a wide variety of neural search frameworks.
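As promised in the Chroma section above, here is a minimal sketch of the 'few lines of code' it takes to get going; the collection name, documents, ids, and query are all made up.

```python
# A minimal sketch: getting started with Chroma (pip install chromadb).
# The collection name, documents, ids and query below are all made up.
import chromadb

client = chromadb.Client()  # in-memory client; persistent clients are also available
collection = client.create_collection("blog_posts")

# Chroma handles embedding and indexing with its default embedding function.
collection.add(
    documents=[
        "Unity Catalog centralises governance for Databricks data assets.",
        "Vector databases make similarity search over embeddings fast.",
    ],
    ids=["doc1", "doc2"],
)

# Query with natural language; the most similar documents come back first.
results = collection.query(query_texts=["How do I search embeddings quickly?"], n_results=1)
print(results["documents"])
```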
Conclusion

All of the vector databases outlined above are incredibly powerful tools that will take your LLM project to the next level. The main difference between them is the hosting options, and whether you prefer open-source or fully managed solutions. Packing a huge punch in terms of the number of features, the power of the tool, and the huge integration potential, you have Azure AI Search, with the only real downside being that you are locked in to using Microsoft Azure services if you really want to get the most out of it. Weaviate and Pinecone make for a nice middle ground, offering managed solutions that are cloud agnostic without too many compromises. Last, and by no means least, you have Chroma, a brilliant, easy-to-use tool that is completely open-source and can be up and running in a matter of minutes. No matter which tool you choose, using a vector database will bring a huge amount of benefit to your LLM projects, and if you want to know more about how to use them, why not check out this blog on the 10 reasons why you need to implement RAG!

Downloads

Welcome to our serious side. Less chatty, more professional. Less opinions, more facts. Less theories, more practice.

This is where you’ll find the lowdown on how our solutions and services can transform the fortunes of your business.

 

Media

We can’t resist the call of the camera or leave a microphone sitting there all on its own. We’re brimming with ideas, so if anyone asks for an opinion, we’re already hitting record. As we like to say, our videos and podcasts are way more entertaining than presentations about data have any right to be.


Youtube Channel

We're not only huge fans of knowledge transfer, but we also pride ourselves on being data nerds. Our YouTube channel is a library of recent tech news, deep dives into topics, and overviews of relevant updates, hosted by Simon Whiteley and the Advancing team.

Totally Skewed Podcast

Hello and welcome to Totally Skewed, the podcast devoted to everything Data and AI.

We are looking to put the world to rights, cut through the hype, give you an opinionated view of the state of tech, and try to see if we are, indeed, all totally skewed!

This podcast is run by Terry, Tori and Craig.

Data Science in Production Podcast

Welcome to the home of “Data Science in Production”. We aim to give you content on a variety of topics relating to deploying and managing Data Science in Production. Once a month we look at how developers are getting their models into production and the tools and techniques they use, and we feature interesting production stories.
