by Alex Joyner

Debate Finds Balancing Risk, Governance, and Tech Skills Is Essential for AI Success – and Outside Support Will Be Vital

6 min read

Successful adoption of artificial intelligence (AI) depends as much on risk management, governance, and culture as it does on technological capability. This was the consensus among attendees at a dinner hosted by SoftServe and Google Cloud. They stressed that without deliberate attention to these areas and the right external support, even the most promising AI projects risk falling short of their potential.

As AI starts to move from hype and proofs of concept to real use cases, businesses are discovering that the shift has ramifications right across the enterprise. The diners were told that if AI is to solve real-world problems and deliver the deeper, more transformative changes that businesses expect, it needs to be accompanied by solid foundations.

At a Business Reporter dinner at the House of Lords, senior IT and business executives acknowledged the new challenges of embedding AI into decision-making, data governance, and organisational culture. They also voiced the more nuanced frustrations firms face when working with AI and the imperfect datasets they own.


Good decisions

Paul Fryer, Enterprise Solutions Principal at SoftServe, said that although AI will never make every decision correctly, that should not be a reason for businesses to hesitate. Instead, he said:

“You should ask: if everyone wants to be a data-informed business, how do you make sure you make good decisions?”

Fryer compared AI outputs to legal opinions: expert but fallible. Just as organisations accept that a lawyer’s advice may later be challenged, so might they learn to live with AI systems that offer high-quality guidance without guarantees.

Some suggested that it will come down to weighing potential risks or harm. Firms should work to understand the potential impact of those errors, they added, and build some level of risk acceptance, or tolerance, into AI governance frameworks.

Messy realities

The room agreed that no AI system can succeed without reliable data, but that reaching a shared understanding of “reliable” is not straightforward. Several said data governance often falters when teams cannot agree on what the correct source of truth is. Most accepted, however, that where alignment exists, or can be found, the technical challenges become more manageable.

It was pointed out that generative AI can work around some of these data gaps. But, as another diner noted, while Gen AI can infer incomplete addresses or detect missing fields, a single wrongly tagged file can just as easily cascade errors through an entire system. To mitigate those risks, robust governance around input data will remain essential to the successful delivery of critical services.

One of the toughest cultural challenges identified was asking users to abandon the idea of a single, perfect version of the truth. Instead, many felt that organisations must develop more nuanced, task-specific data strategies that reflect the messy realities of the data they hold.

Automation at scale

While the potential for automation is considerable, most businesses have yet to realise it at scale. Mark Westhenry, Analytics and AI Lead for Telco at Google Cloud, emphasised that using the cloud is not simply a cost play; rather, he said, it represents a leap in automation and visibility. “New technology provides a better picture of the data you’re working with, which can unlock automation at scale,” he added.

Yet challenges persist. Attendees highlighted the difficulty in distinguishing whether a failure is rooted in the data or the model itself. Sometimes, large language models (LLMs) complicate this further by producing non-deterministic results — answers that change with each interaction.

Regulatory environments add further layers of complexity, particularly in sectors such as financial services, where explainability is vital. AI outputs must be defensible, one said, as any lapse, such as bias in hiring processes, can escalate into a crisis.

Participants stressed that all of this means AI adoption cannot happen in isolation. Most agreed that outside support is needed and that vendors and partners must step up, not only by providing robust systems but also by helping clients understand and manage AI risks at scale.

Exponential change

As the discussion turned to the future, a note of urgency emerged. Businesses accustomed to gradual, incremental improvement must pivot towards delivering exponential change if they are to compete. “The board and C-suite don’t want marginal gains; they want 5x improvements,” Westhenry said.

Workforce dynamics were another issue raised. For example, employees who once preserved institutional knowledge are increasingly scarce, leading companies to seek ways to embed this knowledge into processes. Many felt that AI and automation, if governed wisely, could help mitigate this erosion.

However, the room was told that organisations must also deal honestly with the human cost of AI-driven change. Jobs will be displaced. Businesses must decide who will prepare workers for this transition — employers, governments and regulators, or workers themselves. Open communication about the realities of AI will be important if it is to succeed, one added.

Be bold

As the evening concluded, Fryer noted that many financial services companies, despite heavy regulation, are pushing ahead with AI deployments, proving that progress is possible even under tight constraints. If they can succeed, so can others, added Westhenry, urging businesses to think bigger. With an estimated 60 percent of AI projects likely to fizzle out, he said, the ones that succeed must deliver significant, transformative impact.

The attendees were therefore left to consider whether the path to effective AI lies not in chasing marginal gains, but in embracing bold change — albeit underpinned by sound governance and a clear-eyed view of risk. Either way, it seems that a great deal of trial and error still lies ahead on the enterprise AI journey, and plenty of support from expert partners will be required to keep businesses moving in the right AI direction.

Contact us if you would like SoftServe to assess and help you build your data and AI strategy, or join us at our next Data Strategy Workshop with Google Cloud on June 4.