Getting Enterprise AI Strategy Right

by Charles Humble, Associate Editor

AI offers considerable benefits across almost every commercial enterprise, from efficiency gains to the potential for brand new products. But it also represents a threat, opening up possibilities for new entrants to disrupt established markets.

In business environments, company executives look to produce an ‘AI Strategy’ but can lose sight of their objectives, rather like a politician thinking, “Something needs to be done and this is something, so let’s do this.”

The situation is made more complex by the fact that the term artificial intelligence (AI) covers a broad and continuously evolving range of technologies, and by the fact that management strategy is a sector bursting with terrible advice. If you’ve read other AI strategy articles you’ll be familiar with this sort of thing—vision, value, risks, etc.—all generic and unhelpful.

One exception, widely regarded as a business classic, is Richard Rumelt’s book, Good Strategy Bad Strategy. Though somewhat US-centric it contains a great deal of practical advice. Rumelt states that, “A good strategy has an essential logical structure that I call the kernel. The kernel of a strategy contains three elements: a diagnosis, a guiding policy, and coherent action. The guiding policy specifies the approach to dealing with obstacles called out in the diagnosis. It is like a signpost, marking the direction forward but not defining the details of the trip. Coherent actions are feasible coordinated policies, resource commitments, and actions designed to carry out the guiding policy.”

Rumelt makes the point that strategies focus on some objectives rather than others. In his model, a great deal of strategy work involves trying to figure out exactly what is going on and requires leaders to make often difficult choices. Given this, I’ve found that it’s important for an organisation to have a well-defined purpose and values, since these act as filters when trying to decide between competing priorities.

I’ve also found Wardley Mapping to be an invaluable sense-making tool when trying to understand a competitive landscape. It depersonalises a discussion—essentially allowing you to say it’s the map that is wrong rather than an individual—which can help with reaching a consensus and moving forward. An eighteenth-century result known as Condorcet’s paradox shows that collective preferences can be cyclic: even when every individual ranks the options consistently, there may be no option that a majority prefers to all the others. This goes some way to explaining why building consensus can be so difficult.

David Anderson’s book, The Value Flywheel Effect, describes how he applied Wardley Mapping and other sense-making techniques at Liberty Mutual to transform a 100-year-old insurance company into a modern software start-up; I’ve discussed his experiences as part of a leadership podcast series I’m hosting for GOTO.

Rumelt’s approach is effective; filling in a template is not—I have never seen a good strategy arise from one. Given this, it is simply not possible for us to recommend an AI strategy for your business via a blog post—only you can make the necessary diagnosis. What we can do is give you a sense of where AI might either solve a problem in your organisation or allow you to gain a competitive advantage.

Much of the content on our website is deeply technical. This piece is intended to provide a higher-level overview, informed by our decades of experience in the many different facets of applied AI. If you want to get deeper into a topic, we’ll provide links to further information.

Of course, if what you want is a generic template, then Google “vision mission strategy”, pick one of the thousands of examples and fill it in. It won’t help, but you will be doing something.

Foundations

Data science

At its core, AI is a data-driven set of practices. As with management information, the effectiveness of the recommendations and decisions made by your AI depends on the quality of its data.

Our CEO and founder, Phil Winder, has said that he considers AI to be “a child of data science, which is an overarching scientific field that investigates data generated by phenomena”. This means that your organisation’s data is fundamental to the success of any initiative you might pursue using AI.

Practical applications of AI—such as fraud detection in financial services, diagnosis in healthcare, dynamic pricing models in retail, or energy management in a data centre—require the use of an organisation’s unique data to gain competitive advantage. Finding suitable data and, for supervised learning, labelling it is time-consuming but necessary. Any successful AI project requires some ground truth that you can reference to quantify performance.
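
To make the idea of ground truth concrete, here is a minimal sketch, using scikit-learn, of how labelled data lets you quantify a model’s performance. The labels and predictions are invented, for a hypothetical fraud detector.

```python
# Minimal sketch: with ground-truth labels, quantifying performance is a
# straightforward comparison. Labels and predictions here are invented.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1 = fraudulent transaction, 0 = legitimate (hypothetical fraud detector).
y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0]

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")
print(f"recall:    {recall_score(y_true, y_pred):.2f}")
```

For imbalanced problems like fraud, where most transactions are legitimate, precision and recall tell you far more than raw accuracy.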

Security

The use of AI introduces new attack vectors, so the scope of data governance needs to encompass not only privacy but also risks arising from data poisoning during training and model hijacking once a system is in production. Organisations need robust practices in place to be able to trace data lineage and maintain quality. The still somewhat nascent field of explainable AI will, we think, be critical here.

MLOps

MLOps is the practice of scaling and maintaining ML applications within enterprises, and of fulfilling the regulatory requirements that apply to them. It is becoming more widely adopted as more companies use AI for a broader spectrum of applications.

MLOps has much in common with DevOps practices, but a combination of factors makes it distinct. These include:

  1. Data scientists are not software engineers and may not be experts in writing applications. Many practices that are taken entirely for granted in the realm of software development—such as version control and CI/CD—are not necessarily adhered to in the context of an AI project.
  2. While many enterprise systems comprise code and data, the way data is intertwined with an AI model (which may be continuously learning and adapting to new inputs, or frozen at training time) makes verification and monitoring difficult.
  3. Testing AI technologies is different from testing other forms of software because you are operating in a much more stochastic, probabilistic space. That lack of predictability makes output testing especially challenging; it isn’t obvious how to do red teaming, for example. For this reason we recommend a phased deployment—from experimentation to internal use, to a closed public beta—before any large-scale public deployment.

MLOps focuses on the standardisation and streamlining of machine learning life cycle management in an enterprise setting. It covers auditing, logging, monitoring and governance for regulatory compliance, and helps organisations to manage risk throughout the ML development lifecycle. A good MLOps practice also includes experiment tracking and packaging to make deployment easier and more scalable, and model management to govern your AI inventory.
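
As an illustration of experiment tracking, here is a minimal sketch using MLflow, one of several open source tools in this space. The experiment name, parameters and metric are invented, and the training step is elided.

```python
# Minimal experiment-tracking sketch using MLflow; names and values are
# illustrative. Each run records its configuration and results so that
# experiments are reproducible and comparable later.
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    # Record the configuration used for this training run...
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    # ...train and evaluate the model here (elided), then record the result.
    mlflow.log_metric("validation_auc", 0.87)
```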

Key Enterprise AI techniques

With all this in mind, we’ll look at the two main AI techniques that are being applied in the enterprise: generative AI, and deep and reinforcement learning.

Generative AI

Generative AI, which creates new content such as text and images from unstructured data, has seen a huge surge of interest since late 2022, and correspondingly huge investment. McKinsey identifies generative AI as one of its key trends for 2024, alongside electrification and renewables, and we are starting to see real-world uses.

As a result, the pace of innovation has been remarkable. The size of prompt that large language models (LLMs) can process, known as the ‘context window’, has grown dramatically. And generative AI has developed to encompass text summarisation, image generation, and capabilities around video and audio.

LLM foundation models are being integrated into enterprise software tools for a range of purposes including automatic transcription, customer-facing chatbots and generating advertising campaigns. McKinsey’s latest Global Survey on the state of AI saw 65% of respondents say their organisations are regularly using generative AI in at least one business function.

Within generative AI we’re seeing three core trends:

  1. Powerful open source models that are comparable to, and in some cases better than, those offered by commercial vendors.
  2. Expanding context windows that allow models to hold larger numbers of tokens (which you can think of, more or less, as words). This means you can give a model a large novel like War and Peace and ask it to find something within it; the sketch after this list shows how to check whether a document fits a given window.
  3. Multimodal generative models that can process any modality, including audio, video, images, code and language.
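
To make tokens and context windows concrete, the following sketch uses OpenAI’s tiktoken library to count the tokens in a document and check whether it fits a given window. The file path and the 128,000-token limit are illustrative assumptions; check the limit for the specific model you are using.

```python
# Count tokens in a document to see whether it fits a model's context
# window. The file and the window size are illustrative assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("war_and_peace.txt") as f:  # hypothetical local copy of the novel
    text = f.read()

n_tokens = len(enc.encode(text))
CONTEXT_WINDOW = 128_000  # varies by model; check your model's documentation

print(f"{n_tokens} tokens")
print("fits in one prompt" if n_tokens <= CONTEXT_WINDOW else "needs chunking or RAG")
```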

Generative AI can be combined with traditional data science and Natural Language Processing (NLP) techniques for document clustering, topic identification, classification and summarisation. This may be a lower-cost and more effective way of solving part of your use case.
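
As a sketch of what those traditional techniques look like in practice, here is a minimal document-clustering example using TF-IDF vectors and k-means from scikit-learn; the documents and cluster count are invented.

```python
# Minimal classic-NLP document clustering: TF-IDF features plus k-means.
# The documents and the number of clusters are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Quarterly revenue grew 12% on strong retail sales.",
    "The new checkout flow reduced cart abandonment.",
    "Server outage traced to a failed database migration.",
    "Postmortem: the incident was caused by a bad deploy.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(vectors)

for label, doc in sorted(zip(labels, docs)):
    print(label, doc)
```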

In the real world

  1. Walmart says, “We’ve used multiple large language models to accurately create or improve over 850 million pieces of data in the catalogue. Without the use of generative AI, this work would have required nearly 100 times the current head count to complete in the same amount of time.”
  2. Reckitt is using generative AI to get advertising ideas, product insights and media analysis.
  3. ING has leveraged generative AI to improve customer service in the Netherlands.
  4. Nubank is also piloting a gen AI virtual assistant to boost customer service.
  5. The software development news site InfoQ was able to offer transcripts for video content as a result of improvements to machine-generated transcription.

One area where generative AI tools are showing particular promise is in software development. According to the 2024 Docker State Of Application Development Report, 64% of respondents use AI for tasks such as writing code, documentation and research, and 46% work on ML in some capacity.

AI code generation tools, such as aider, Amazon Q, CodiumAI, Continue, GitHub Copilot and JetBrains AI, go beyond recommendations by enabling developers to generate entire functions and boilerplate code. They are particularly useful when adapting to new languages and frameworks, or when migrating to a newer version of a language or runtime. For example, Amazon’s CEO, Andy Jassy, has said that by using Amazon Q, the “average time to upgrade an application to Java 17 plummeted from what’s typically 50 developer days to just a few hours”.

AI tools also show promise in areas such as automated code review and testing.

Whilst the developer tools tend to be more mature, we suspect that all aspects of software development will gradually benefit from pragmatic use of AI and derived tools, and we’re actively following innovations across the development landscape. We continue to be interested in AI-assisted terminals, tools that turn screenshots and designs into code, and the use of LLMs as part of ChatOps processes, such as QueryPal and Kubiya.

Generative AI tools also show some promise for a new type of search—if you have, for example, complex equipment where field engineers regularly have to find and diagnose problems, a small language model trained on your manuals may prove invaluable.

Challenges

A lack of local-language support hampers global adoption. Some countries, such as China, India and Japan, and many countries in the Middle East, are developing their own foundation models, but given the costs and compute involved this is harder for countries that are less well-resourced.

Most of the highly capable libraries and frameworks, such as LangChain and LlamaIndex, are built around Python, which many enterprises do not run in production. However, this is starting to change with tools such as Spring AI, which offers a generative AI framework for Java developers.

There are a number of other uncertainties:

  1. Inaccuracies in results, aka ‘hallucinations’, are the most widely recognised risk.
  2. Cybersecurity and privacy concerns around data leakage, including customer details and other protected data.
  3. Ethical issues around the responsible use of generative AI remain unresolved.
  4. Copyright ownership of content generated by models remains an unanswered question.
  5. The environmental impact of training models is high and likely to rise. After years of relatively modest growth in carbon emissions from data centres, Microsoft reported in May 2024 that its total carbon emissions had risen nearly 30% since 2020, primarily due to the construction of data centres to meet its push into AI. Google’s emissions have surged nearly 50% since 2019, rising 13% year on year in 2023, according to its own annual report, which attributed the spike to increased data centre energy consumption and supply chain emissions driven by artificial intelligence; the company’s total data centre electricity consumption grew 17% in 2023.
  6. A growing body of legislation, including the EU AI Act, is worth noting for researchers and organisations looking to use the technology. This, combined with a desire to maintain sovereignty over their data, is forcing many Winder clients to build privately hosted generative text- and image-based applications.

We are starting to see some patterns emerging for common uses and contexts. NeMo Guardrails is an open source toolkit for easily adding programmable guardrails to LLM-based conversational applications. Langfuse allows greater observability into the steps leading to an LLM’s output. Helix is an open source competitor to ChatGPT that can be run on your own infrastructure; Winder is a major contributor to it.
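
To give a flavour of programmable guardrails, here is a minimal NeMo Guardrails sketch of a rail that stops a chatbot discussing competitors. It assumes an OpenAI model and API key are available; the model choice and the Colang definitions are illustrative, not a recommendation.

```python
# Minimal NeMo Guardrails sketch: a rail that deflects questions about
# competitors. Assumes OPENAI_API_KEY is set; model and rails are examples.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_content(
    yaml_content="""
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
""",
    colang_content="""
define user ask about competitors
  "What do you think of your competitors?"

define bot refuse to discuss competitors
  "I'm sorry, I can't comment on other companies."

define flow
  user ask about competitors
  bot refuse to discuss competitors
""",
)

rails = LLMRails(config)
response = rails.generate(messages=[
    {"role": "user", "content": "Is your competitor's product better?"}
])
print(response["content"])
```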

Retrieval-augmented generation (RAG) is our teams’ preferred pattern for improving the quality of responses generated by an LLM. RAG integrates an external repository of information to enhance the model’s ability to generate content that is accurate, relevant and contextually rich.
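
The following is a minimal sketch of the RAG pattern: embed a document collection, retrieve the passages most similar to the user’s question, and include them in the prompt. It uses the sentence-transformers library for embeddings; the passages are invented, and call_llm is a hypothetical stand-in for whichever LLM API you use.

```python
# Minimal RAG sketch: embed passages, retrieve the most similar ones for a
# question, and build a grounded prompt. Passages are invented examples.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "To reset the pump, hold the red button for five seconds.",
    "Error E42 indicates a blocked intake filter.",
    "Routine maintenance is required every 500 operating hours.",
]
passage_vecs = model.encode(passages, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec  # cosine similarity: vectors are unit length
    return [passages[i] for i in np.argsort(scores)[::-1][:k]]

question = "What does error E42 mean?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = call_llm(prompt)  # hypothetical LLM call of your choice
print(prompt)
```

Production systems typically add a vector database, document chunking and re-ranking, but the retrieve-then-generate shape stays the same.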

Much of the hype around generative AI tools concerns content creation, but the results tend to be very generic and there is something of a pushback against ‘AI slop’. There may be places where its use is acceptable, but human copywriters and illustrators are often cheaper and more effective.

Finally, we should note that it isn’t clear how far along the S-curve we actually are, but some AI researchers believe we may be close to the peak of what LLMs can accomplish technically, at least on their own. However, that doesn’t mean we’ve exhausted the potential use cases—far from it.

Deep and Reinforcement Learning

Deep learning, reinforcement learning and computer vision techniques are being applied in a huge variety of ways, across numerous industries.

Reinforcement learning is particularly effective in situations where you are looking to optimise for long-term, multistep rewards, and it is also useful where you want to incorporate business metrics.

For example, internet advertising is over-optimised around click-through rates (CTR), but CTR is a poor target since the real goal is usually a more substantive interaction, such as a sale or a sign-up. Reinforcement learning can be used to optimise a combination of factors—such as which advertisements are shown, in what order and with what content—in order to generate the desired outcome, as the toy sketch below illustrates.
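
To illustrate optimising for a delayed, multistep reward, here is a toy tabular Q-learning sketch: the agent chooses which of two ads to show at each step of a funnel and is only rewarded when the user finally signs up. The environment, dynamics and rewards are entirely invented.

```python
# Toy tabular Q-learning: reward only arrives at the end of a multi-step
# episode (a "sign-up"), so the agent must learn to value earlier steps.
import random

N_STATES, ACTIONS = 5, [0, 1]  # states 0-4; 0 = show ad A, 1 = show ad B
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(5000):
    state = 0
    while state < N_STATES - 1:  # state 4 is the "signed up" terminal state
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        # Toy dynamics: ad A always advances the funnel; ad B stalls half the time.
        nxt = state + 1 if (action == 0 or random.random() < 0.5) else state
        reward = 1.0 if nxt == N_STATES - 1 else 0.0  # reward only at sign-up
        # Standard Q-learning update.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy should prefer ad A (action 0) at every step of the funnel.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```

Because the reward only arrives at the end, the learned values propagate credit back through the earlier steps, which is exactly what an immediate metric like CTR cannot capture.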

In the real world

  1. Healthcare - RapidAI is using applied AI with advanced imaging technology to expand the treatment window for ischemic stroke, the most common type of stroke.
  2. Oil and Gas - Aramco has built an AI hub to analyse more than five billion data points per day from wellheads in the oil and gas fields. This has enhanced the understanding of petrophysical properties and expedited decision-making in exploration and drilling. One example is reduced flaring; combined with other techniques, their AI Model has contributed to reducing total flaring by more than 50% since 2010, helping them achieve one of the industry’s lowest flaring rates.
  3. IT - DeepMind used its AI models to reduce Google’s data centre cooling bill by 40%.

Many forms of algorithmic trading in finance apply a combination of deep learning, neural networks and reinforcement learning in an attempt to discover, and then exploit, statistical inefficiencies in financial markets.

Other areas include environmental research, medical research and materials science. DeepMind has released AlphaFold 3, which can predict the structures and interactions of a broad range of biological molecules, not just proteins. In 2021 it spun off Isomorphic Labs to look for new drugs to treat diseases, and it continues to use reinforcement learning to build new AI agents.

As the growing use of renewable energy makes balancing the grid more complex, there are research projects using machine learning to help make calculations and predictions. Rhizome, a start-up based in Washington, DC, launched an AI system that takes utility companies’ historical data on the performance of energy equipment, and combines it with global climate models to predict the probability of grid failures caused by extreme weather events, such as snowstorms or wildfires.

Several utility companies are already integrating AI into critical operations, particularly inspecting and managing physical infrastructure such as transmission lines and transformers. For example, overgrown trees are a leading cause of blackouts, due to branches falling on electric wires or sparking fires. Traditionally, manual inspection has been the norm, but given the extensive span of transmission lines, this can take several months. PG&E, covering northern and central California, has been using machine learning to accelerate those inspections. By analysing photographs captured by drones and helicopters, machine learning models identify areas requiring tree-trimming or pinpoint faulty equipment that needs repairing.

Bringing it all together

Rumelt says that, “A good strategy grows out of an independent and careful assessment of the situation, harnessing individual insight to carefully crafted purpose. Bad strategy follows the crowd, substituting popular slogans for insights.”

The explosion of hype around the current AI tools means that everyone is paying attention to the field. However, it remains poorly understood. Learning more about AI will help you to identify places where the current set of tools may be most applicable to your organisation.

It is often surprising how effective even imperfect AI is. Many companies are already using data to derive insights to automate processes, make better decisions and transform business outcomes.

Since the field is still new, the more you can democratise its use within your organisation, the more likely it is you will make a demonstrable difference to your business. This requires your organisation to allow experimentation at scale, with learning and psychological safety at the core.

While AI continues to progress through a massive hype cycle, you also need to ensure you are doing your technical due diligence. If someone is claiming they can help you, consider how long they’ve been involved in the AI field and where they’ve come from. If they were doing crypto last year, be sceptical. With over a decade of real-world experience, Winder is able to provide holistic AI strategy consulting and implementation services. Whether you are looking for help to formulate your AI strategy or to bolster your company-wide AI initiatives, please get in touch.
