Making a business case for AI governance

Published on Dec 19, 2024

As the landscape for artificial intelligence continues to evolve, organisations around the world are seeking to harness the benefits while managing the risks.

The potential for artificial intelligence (AI) to transform the way we work and handle data promises to raise productivity and increase efficiency, resulting in better value for money in public services and a competitive advantage for businesses. However, as AI adoption increases, so does the urgent need for AI governance to ensure its application is responsible, ethical and transparent, and complies with relevant legislation.

As you read this article, opaque algorithms are making consequential decisions about people, and in some cases discriminating against them. Although reported far less often than personal data breaches, such situations occur more frequently than you might think, which is particularly worrying given their potential for harm. In one case, a patient identified serious bias, transparency and oversight failings, with no appeals process, in a model used by the NHS to make life-and-death decisions about who should receive a liver transplant. More recently, an AI system used to detect welfare fraud was found to show bias related to age, disability, marital status and nationality, despite earlier assurances that the system posed no concerns.

Given the pace at which AI technologies are being integrated into our daily lives, the need for AI governance is more pressing than ever. While we've known about the ethical issues for years, recent advancements have escalated these concerns into substantial threats to society. 

Both the public and commercial sectors have been quick to embrace AI technologies but slow to implement the governance practices needed to ensure those technologies are safe to use and deliver on their intended purpose. Without robust structure or oversight, the path organisations must navigate to integrate AI systems successfully into business operations remains fraught with risk, uncertainty and failure.

How often do AI projects fail to deliver?

Understanding how to translate AI's potential into tangible results remains a fundamental challenge. According to the Data Trust Report 2025, while 74% of organisations have adopted some AI-based solutions, only 33% have successfully integrated them throughout the business. An even more sobering report on the root causes of AI project failure puts the failure rate as high as 80%, twice the rate reported for other IT projects.

Despite such bleak statistics, there are proven approaches organisations can adopt to enhance their chances of success. By addressing four key areas, organisations can significantly mitigate the risks associated with their AI implementations and improve overall outcomes.

Define the business challenge

Organisations frequently approach AI with enthusiasm yet fail to define the specific business challenge they want to address. At the same time, data scientists and other technical experts often lack a comprehensive understanding of their organisation’s overarching strategy and goals. This lack of clarity and understanding leads to misaligned expectations and inefficient use of resources.

While the temptation is to move forward as quickly as possible, it is important to allow time for detailed engagement with business functions to identify applications that are realistically achievable and that will deliver transformative results and significant value.

In addition, AI project teams are often encouraged to experiment with the latest technologies. Various technical options exist for developing AI solutions, each with its own advantages and disadvantages. While the choice of technology matters, successful AI implementations focus on the problem being solved and its context; the technology itself is a secondary consideration.

Rather than chasing the latest tools, organisations should carefully evaluate which technology best aligns with their needs, resources and AI development goals.

Focus on AI risk management

As organisations embark on developing AI for specific use cases, questions arise about how to implement AI models in a responsible and ethical manner. For AI projects to be deployed as planned and without unintended consequences, organisations must address a range of complex issues through an overarching AI governance programme.

AI risks can be grouped into a number of categories (a simple risk-register sketch follows this list):

  • Reliability risks relate to factual inaccuracies, hallucinations, outdated information and biased training data. Unless these risks are addressed, the implementation will not perform as intended.
  • Data protection risks pose various issues for model development and ongoing maintenance. AI systems require access to large volumes of personal data for training. However, identifying an appropriate lawful basis for processing for AI-related purposes remains a significant challenge. Other data protection issues include the sharing of user information with third parties without prior notice, and sensitive or confidential information entered as prompts becoming part of the knowledge base used in outputs for other users.
  • Copyright and intellectual property risks relate to how AI models are trained using protected materials.
  • Cybersecurity risks include whether AI systems are vulnerable to hacking and malicious attacks that could lead to unauthorised access, personal data breaches and other security incidents.
  • Explainability risks pose challenges due to the opaque nature of AI models. Where such risks are left unmitigated, even experts struggle to understand the internal workings of AI models, leading to outputs that are unpredictable, unverifiable, and unaccountable.
  • Emerging regulatory risks relate to specific rules from governments and other bodies in various jurisdictions regarding the development and use of AI technologies.
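
One practical way to operationalise these categories is a risk register that records each identified risk, its category, its owner and its mitigation status, and gates deployment on open items. The sketch below is a minimal illustration in Python: the category names mirror the list above, but the field layout, the `open_risks` helper and the example entries are hypothetical rather than drawn from any particular framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    # Categories mirror the list above.
    RELIABILITY = "reliability"
    DATA_PROTECTION = "data_protection"
    COPYRIGHT_IP = "copyright_ip"
    CYBERSECURITY = "cybersecurity"
    EXPLAINABILITY = "explainability"
    REGULATORY = "regulatory"

@dataclass
class AIRisk:
    description: str
    category: RiskCategory
    owner: str                # person or function accountable for mitigation
    mitigated: bool = False

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def open_risks(self) -> list[AIRisk]:
        """Risks still awaiting mitigation; deployment could be gated
        on this list being empty."""
        return [r for r in self.risks if not r.mitigated]

# Hypothetical entries, for illustration only.
register = RiskRegister([
    AIRisk("Model hallucinates citations in generated reports",
           RiskCategory.RELIABILITY, owner="Data Science"),
    AIRisk("No documented lawful basis for training on customer records",
           RiskCategory.DATA_PROTECTION, owner="DPO"),
])
for risk in register.open_risks():
    print(f"[OPEN] {risk.category.value}: {risk.description} "
          f"(owner: {risk.owner})")
```

In practice such a register would live in a governance tool rather than in code, but the structure is the point: every risk categorised, owned and tracked to closure before deployment.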

Creating a clear and comprehensive plan to address all potential AI risks during the development phase is vital. Failing to do so can lead to substantial costs, not only in budget terms but also in reputational damage, regulatory enforcement and the time needed to rework the AI model to address these issues.

Overall, unless organisations are able to manage all of these risks, their AI projects will run into severe cost overruns and may stall or fail altogether.

Manage AI-related costs

One of the fundamental challenges organisations face when assessing whether to move forward with an AI project is how to manage costs effectively. The problem stems from the emerging nature of AI technologies, which makes it difficult to estimate the costs of AI initiatives accurately.

Numerous factors contribute to financial challenges in AI projects, including the issues raised above. These uncertainties can lead to budget overruns and unexpected costs throughout the AI project lifecycle. Without a structured approach, organisations may find themselves vulnerable to spiralling costs that far exceed initial projections. Research by Gartner indicates that AI cost estimates can be out by as much as 500-1000%; in other words, actual costs can run to many multiples of the original plan.
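
To get a feel for what an estimate error of that scale means, the short sketch below applies an error band to a hypothetical base budget. Reading "out by N%" as an overrun of N% above the original estimate is our assumption, and the 200,000 figure and `estimate_range` helper are purely illustrative:

```python
def estimate_range(base_estimate: float,
                   error_pct_low: float,
                   error_pct_high: float) -> tuple[float, float]:
    """Cost band implied by an estimate that may be out by between
    error_pct_low and error_pct_high percent (overrun above plan)."""
    low = base_estimate * (1 + error_pct_low / 100)
    high = base_estimate * (1 + error_pct_high / 100)
    return low, high

# Hypothetical project budgeted at 200,000, with the 500-1000% error
# band attributed to Gartner: the true cost could reach 6x to 11x plan.
low, high = estimate_range(200_000, 500, 1000)
print(f"Budgeted: 200,000 -> plausible actual cost: {low:,.0f} to {high:,.0f}")
# Budgeted: 200,000 -> plausible actual cost: 1,200,000 to 2,200,000
```

Even under a more generous reading of the figures, the message is the same: build contingency and governance checkpoints into the budget from the outset.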

Financial inaccuracies of this order of magnitude significantly increase the chances that AI projects will be abandoned: not because the projects lack viability, but because of financial constraints that could have been avoided with better planning and oversight. In such cases, a lack of AI governance leads to missed opportunities to innovate or introduce efficiencies as organisations grapple with the repercussions of unmanaged costs in their AI experiments.

Invest in robust AI governance

To address AI risks in a responsible, ethical, and transparent manner, and to ensure compliance with legal and regulatory requirements, organisations are increasingly adopting comprehensive AI governance programmes. Such programmes are essential not only to guide the development process but also to maintain control over costs.

At its core, AI governance comprises a framework and set of best practices that guide the AI development lifecycle. In addition to assessing and mitigating risks, AI governance also sets the policies and procedures for responsible AI use within the organisation.

The most successful AI governance programmes involve teams composed of members with varying levels of seniority, experience and backgrounds from across the business. Alongside representation from every business function likely to make use of AI technologies, AI governance teams should include individuals with specialist knowledge of the law, data protection, data science and data ethics, as well as technical IT expertise.

Given such diverse backgrounds, comprehensive training initiatives are essential to establish a baseline skill set: an understanding of how AI technologies work, their various use cases and the risks they present. Training should also include an in-depth exploration of the principles of responsible AI and the mechanisms for implementing an AI governance programme effectively.

At Freevacy, we offer industry-leading certificated AI governance training to help teams and their organisations unlock the transformative potential of AI, maximise their return on investment, and ensure long-term success in an increasingly AI-driven world.

Click your chosen course below to see our next available course dates.
