
Building Gen AI Maturity for Project Teams

 

The Push for Gen AI Adoption

Presently there is a large push to embrace Generative AI (Gen AI) and drive better usage within development teams. Eager leaders, sold on the benefits of Gen AI, want to harness its power to drive organisational efficiency and get more value out of their project teams. Yet despite acknowledging the potential of Gen AI, many are yet to see any tangible value realised, or are unsure how best to implement Gen AI in a project delivery context.

Contributing to this is a wide disparity within project teams: some people don't recognise or understand the value of Gen AI in the context of their work, while others have fully integrated Gen AI into their ways of working.

Looking back to the 90s, when internet search engines first gained popularity, users initially attempted to find information using full sentences and overly verbose language, often yielding suboptimal results. Some users quickly learned the art of concise keywords, unlocking the full potential of search engines. As knowledge about effective search techniques spread, a greater number of people began to achieve better results. This evolution mirrors today's landscape with LLM prompting: just as with early search engines, there's a spectrum of proficiency in using Gen AI, with some team members fully harnessing its capabilities while others are still learning the ropes.

So despite all the narrative, and ever more tooling flooding the market, the question remains: how do we actually achieve better Gen AI adoption within project teams, and do so in a way that genuinely drives value for these teams?

 

An Incremental Approach to Gen AI Maturity

I believe the answer to Gen AI adoption lies in understanding the current Gen AI maturity of the team and tailoring a strategy accordingly.

There’s a lot of noise in the industry, either around the latest tooling people should use or pointing to the most insidious examples of unchecked “vibe coding” as a cautionary tale against any further Gen AI adoption. Whilst interesting, in my opinion these discussions miss the important point about Gen AI adoption: most project teams and people just want to work more efficiently, in a scalable way that doesn’t compromise quality. When adopted correctly within project teams, Gen AI can be a powerful productivity accelerator and a conduit for collaboration.

Instead of chasing the capabilities of the top 5% of Gen AI adopters, or dismissing Gen AI entirely because it doesn’t instantly boost team productivity, we should evaluate the current Gen AI maturity level of the project team and identify the maturity level necessary for their success. The following scale shows how Gen AI maturity may be graded for a given project team:

*An example of this is given in the prompt quality best practices section below

Example Interactions for Each Level of LLM Maturity

The following diagrams illustrate what a project team's interactions with Gen AI may look like in practice. Note that more mature teams may still have more basic interactions with Gen AI from time to time, especially when the ad hoc nature of a task doesn't warrant the time investment to optimise a Gen AI workflow. These examples are intended to show possible ways-of-working patterns for teams that have invested in adopting Gen AI at different stages of maturity:

 

Best Practices for Improving Prompt Quality

In 2023, researchers from the Mohamed bin Zayed University of AI published a list of 26 guiding principles (linked here) for improving prompt quality, which showed a 57% increase in the quality of Gen AI outputs when querying the top models of the time.

From my own recent experimentation, and taking inspiration from the guiding principles in the study, I’ve condensed a simplified list of best practices which I’ve found to significantly improve the quality of Gen AI outputs (a worked example follows the list):

  1. Make use of personas, e.g. "You are an AI Data Engineering Lead on a project that needs to produce a condensed summary of the platform for the Business Analyst to review."
  2. Define the task clearly using action verbs such as "Summarise", "Compare", "Refactor", "Design", "Generate", "Explain", "Translate", "Convert", e.g. "Refactor this dbt model to improve performance and maintainability. Include Jinja best practices and describe any trade-offs."
  3. Provide specific inputs, e.g. the sanitised code or text that should be modified.
  4. Add constraints or preferences, e.g. "Avoid using macros. Assume Unity Catalog is enabled. Minimise cross-database joins."
  5. Use modular requests, e.g. one prompt to summarise key points from a statement of work and a different prompt to draft a project plan. Avoid very broad prompts such as "redesign my whole data platform". This is the core principle behind today's agentic workflows.
  6. Be specific about the output format, e.g. "Generate the output in YAML config using valid dbt formatting."
  7. Provide specific examples of the expected output, e.g. "Here is an example of the YAML config I would like you to adhere to during generation." Including worked examples in the prompt like this is also known as "few-shot prompting".
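To make these practices concrete, here is a minimal sketch of how they might be composed into a single, focused request. It assumes the OpenAI Python SDK purely for illustration; the model name, persona and dbt snippet are placeholders, and the same structure applies to whichever model or chat interface your team uses.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. Model name and dbt snippet are
# illustrative placeholders only.
from openai import OpenAI

client = OpenAI()

# Practice 1 - persona: set the role in the system message.
system_msg = (
    "You are an AI Data Engineering Lead producing review-ready artefacts "
    "for a Business Analyst."
)

# Practice 3 - specific input: the sanitised code to be modified.
model_sql = "select * from {{ ref('stg_orders') }}"

# Practices 2, 4, 6, 7 - action verb, constraints, output format and an
# example of the expected output, composed into one focused request.
user_msg = f"""Refactor this dbt model to improve performance and maintainability.

Constraints:
- Avoid using macros.
- Assume Unity Catalog is enabled.
- Minimise cross-database joins.

Generate the output as valid dbt YAML config, following this example:
models:
  - name: stg_orders
    description: Staged orders data

Model to refactor:
{model_sql}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever model your team uses
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)
print(response.choices[0].message.content)
```

Practice 5, modular requests, follows the same pattern: instead of one broad prompt, chain two focused calls, feeding the output of one into the next. A hedged sketch, reusing the client above (the input file is hypothetical):

```python
def ask(prompt: str) -> str:
    """Send a single focused prompt and return the model's reply."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption, as above
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Two narrow requests instead of one broad one.
sow_text = open("statement_of_work.txt").read()  # hypothetical input file
summary = ask(f"Summarise the key points from this statement of work:\n{sow_text}")
plan = ask(f"Draft a project plan based on these key points:\n{summary}")
print(plan)
```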

Feel free to tailor this list according to your own findings and project experience as the capability of publicly available Gen AI models is always evolving!

If you manage project teams and are looking for where to start with increasing Gen AI adoption, or how to take your project teams to the next level with our Gen AI solutions, get in touch with our team here at Advancing Analytics.

 

Disclaimer - This blog contains some AI-generated images


Author

Max Fifield

Max has 5+ years’ experience in specialist Data & AI skills and technical leadership of delivery teams. As a technical implementation consultant, Max has worked with several organisations to understand business challenges, design solutions, and lead data implementations into production.