Published on Oct 25, 2024
With less than a week to go, all eyes are fixed on Chancellor of the Exchequer Rachel Reeves as she prepares to deliver the Autumn Budget on Wednesday, 30 October 2024. It's the first fiscal event of the new Labour government and an opportunity to set out the country's tax and spending plans for the coming year, together with an assessment of the current state of the economy. Autumn fiscal statements have usually been of secondary importance to the Spring Budget, but this year's event, a full Budget, feels significantly more consequential.
This is partly because of the weight of expectation on the new government to reignite a sluggish economy after a decade and a half of anaemic growth, along with its promise to bring much-needed reform to ailing public services struggling to cope after years of underfunding. With UK consumer confidence having fallen to its lowest level this year, the country needs to know how the fiscal event will affect personal finances so that we can finally begin to move forward after what feels like an eternity since July's general election.
One of the central themes of the Autumn Budget will likely be how the government can raise labour productivity, the average amount of goods and services produced for each hour worked. Productivity growth has effectively stalled since the 2008 financial crisis, a problem compounded by the subsequent economic shocks of Brexit, the pandemic and the Russia-Ukraine war.
So, what does all this have to do with people working in privacy and security teams?
Quite a lot, as it happens.
The government will claim, with some justification, that the fate of the public sector and British industry will be determined by their ability to reform traditional, labour-intensive processes.
Can the adoption of digital tools and artificial intelligence (AI) address the UK's productivity problem?
It's a complex question with many variables, but according to a press release from the Department for Science, Innovation and Technology, the International Monetary Fund estimates that the UK could eventually see annual productivity gains of up to 1.5%.
In the private sector, a recent study published by Workday suggests that AI could unlock £119 billion worth of productive work each year for large UK companies. Meanwhile, a separate study by MIT Sloan predicts that generative AI could raise employee output by almost 40% compared with workers who don't use it.
It's worth noting that while AI holds the potential for large-scale productivity gains, a Bank of England report published this month revealed that most organisations already investing in AI don't expect to see productivity gains for another two to three years. One reason for the delay is the work required to prepare data and IT systems for AI use; another is that AI-generated outputs still need to be checked manually.
We also need to consider how to mitigate the risks associated with digital and AI technologies. A Gartner report in February found that 60% of organisations are planning AI deployments, yet fewer than half of those are confident they can manage the risks effectively. A follow-up study in July predicted that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025 due to poor data quality, inadequate risk controls, escalating costs or unclear business value.
That's a lot of wasted time and resources for the programmes that fail to make it past the proof-of-concept stage.
A white paper published by Stanford University in March of this year highlighted that many of the risks associated with AI implementations and their potential unintended consequences are similar to those we've encountered in privacy and data protection over the past 20 years of internet use and unrestricted data collection. The key difference lies in the scale at which AI systems process data.
To avoid delays, spiralling costs and, potentially, the failure of the implementation altogether, it is essential to ensure that a robust AI governance programme is in place to oversee the current and future development of AI systems. When establishing an AI governance programme, best practice is to adopt a multistakeholder approach, comprising individuals with diverse skill sets, experiences and backgrounds, because cross-functional knowledge sharing is vital.
Although the connection between privacy and AI risks is widely understood, many organisations find it challenging to strike the right balance of skill sets, a difficulty reflected in the high failure rates predicted above. While every AI governance programme should be evaluated on its own merits, at least part of the solution is to ensure that teams include a strong core of individuals with backgrounds in privacy and security.
A February report by The Productivity Institute revealed that underinvestment in skills training is another factor behind the UK's low productivity. Despite an increase in the number of higher-educated people in the workforce, significant investment in workplace training, from basic skills through to high-level technical and managerial skills, is required to raise productivity.
To put this in context, a report from Skills England in September found that employer investment in training has been in decline for over a decade, with training expenditure per employee down by 19% in real terms.
It's completely understandable that budget requests for professional development will be harder to justify during an economic downturn. That said, the skills required by the teams responsible for data protection compliance and information security are not solely about complying with the UK's data protection laws and reducing risks to data assets; they are also essential to delivering digital and AI transformation.
As highlighted above, a significant percentage of AI projects will fail unless the teams involved in delivering them have the skills to address data quality issues, implement risk controls, manage costs, and define business value.
As a final point, the workload placed on privacy and security teams is already unsustainable, and that is before accounting for the additional responsibilities these teams will be expected to take on in support of digital and AI transformation projects.
Once AI governance programmes operate at scale across all industry sectors, something will need to give. Either privacy management and information security performance will decline, resulting in lower levels of data protection compliance and a rise in the number and severity of security incidents, or digital and AI transformation projects will stall or potentially fail.
Establishing an organisation-wide learning programme is the only way to address the chronic skills shortages that severely impact cybersecurity incident outcomes and cause burnout among practitioners. Such programmes should focus on developing skills at all levels and, in particular, identifying privacy and security champions within the business who may eventually transition into full-time specialist data roles.
With only days to go before the Autumn Budget, it is almost certain that any public investment will come with an expectation of reforms that lead to service improvements. To ensure these investments pay dividends and deliver the much-heralded productivity gains, it is essential that privacy and security professionals have the skills they need.
Dates for our new accredited Data Protection and AI Governance public training courses at the start of 2025 are now online. We hope to see you and your teams back in the classroom soon.
Freevacy has been shortlisted in the Best Educator category at the PICCASO Privacy Awards, which recognise the people making an outstanding contribution to this dynamic and fast-growing sector.