
by  SoftServe Team

Collaboration, Persuasion, Trust & Governance Seen as Key Pillars to Successful AI Deployment

9 min read
SoftServe UK Executive AI Summit hears that “the future of AI will belong to those who can trust their AI enough to depend on it.”

Dealing with human elements such as habits and cultures will be as important to the success of AI in organizations as implementing the technologies themselves. This became a recurring theme from speakers at our second annual SoftServe UK Executive Summit, once again held in the stimulating surroundings of London’s Science Museum.

The title of this year’s event was “Guiding Breakthrough Technologies Back to Earth.” So, we asked our guest presenters, drawn from clients, partners, and industry thought leaders, to offer perspectives on the best practices required to translate AI initiatives from Proofs of Concept (PoCs) into applications that deliver real business value. Their responses did not disappoint.

Leadership

It quickly became apparent that it wasn’t just about the technology, as culture and leadership were seen as key requirements. “AI will only work if you bring your people along with you,” said Ben Garside, the Head of AI Products and Platforms at UK retailer Currys.

He told the audience that “Individuals are always going to ask, ‘What will this mean for my job?’ as they are naturally nervous and cautious. We must therefore show we are being ethical and responsible to reassure teams. We don’t want them to take a stance against AI as we want them to work with it.”

There was widespread agreement that some form of AI adoption watershed had been crossed, with businesses becoming more comfortable integrating AI capabilities at multiple touchpoints across their organizations, not just in ring-fenced PoCs. One speaker was also reassured by the consistency of the presentations, noting: “It’s good to see we are not barking up the wrong tree with AI, as in so many areas, like ROI objectives, safety, and governance, everyone seems to be getting on the same page.” But he cautioned that “everyone is also struggling with the same things.”

Assess failures

Before defining success, SoftServe CTO Alex Chubay said, businesses have to become better at assessing failure and understanding “how multiple small errors can accumulate and compound into larger disasters.” To back this up, he cited Warren Buffett’s right-hand man Charlie Munger, who famously said: “Avoiding stupidity is much easier than seeking brilliance.”

Chubay said this was easier than it sounded as “success is fragile, while failure is common because, while all factors have to perform well for a project to succeed, it only takes one factor to go wrong for it to fail.” This becomes even more important when contemplating the multi-dimensional complexities of AI and data ecosystems.

In these, he said, many corporate strategies are ill-fated from the outset because they “dwell on those methods borrowed from simpler domains, assuming there is a single simple way forward or blueprint.” To overcome this risk, he recommended organizations adopt what he called “Via Negativa,” a sort of inverse thinking. Rather than asking what to do, businesses should instead ask: “What should I avoid to prevent failure and reveal success, with the goal of approaching perfection by eliminating imperfections?”

Bad habits

This message was reinforced by our keynote presenter Dr Stefan Groner from Fresenius University of Applied Sciences. He said that “failures mostly don’t come from management or engineering errors; they are a result of ingrained bad habits.”

To counter this, he said businesses need to change those habits so they are in a position to adapt continuously. That will enable them to be more agile and, in effect, become internal self-disruptors that can meet changing market needs, with a much lower risk of being displaced by an external, third-party disruptor.

The concept was taken a step further by Smitha Dunwell, the Industry Partner Lead for Data & AI/ML at AWS EMEA. She said when firms are entering periods of change they should focus first on what’s not going to change before they consider innovation or disruption.

AI factory

For example, she said: “Businesses will always need faster and better decisions, while data will always need to be trusted, and scale will always need repeatable processes.” Therefore, to harness those prerequisites and be successful with AI, she believed businesses need the bedrock of an AI Factory, which she defined as the “Industrialization of Intelligence.”

This standardization of processes, with reusable components and cross-functional squads, will enable businesses to build and scale repeatable AI solutions. This, she added, will minimize the risk of PoCs failing when introduced to real-life environments.

Dunwell said that the underlying factory discipline this creates is relevant across both different industries and different architectures. She cited examples of AI deployment from financial services, where smarter real-time credit and fraud decisioning is evident; manufacturing, where predictive quality control on production lines is emerging; and healthcare, where better clinical decision support is occurring under clearer data residency requirements.

With the right foundations (not just platforms, she stressed), enterprises can build an AI Factory in which self-sufficiency becomes the only sustainable choice, accelerating transformational business outcomes. Once that is in place, organizations can work out how to upskill their teams and change mindsets.

Trust dependency

Ajay Chakravarthy, the Chief AI Officer at Thales UK, focused on “trust” as being one of the essential components of successful AI outcomes, noting that “the future will belong to those who can trust their AI enough to depend on it.”

This is particularly essential in many of the defense and mission-critical applications where his company deploys AI in military environments. In those situations, AI has to be powerful enough to bring massive performance gains, intuitive enough to make technology easier to use, and trustworthy enough to provide world-class cybersecurity and other protections.

To achieve that level of trust and ensure valued outcomes, he said businesses must start with real use cases, not models; integrate AI into workflows, not next to them; and focus on human-AI learning to drive adoption. Once in place, this means systems are designed for trust from day one.

Governance design

Another critical element when working with AI is governance, although Ben Garside warned that implementing governance upfront “can mean death by a thousand cuts.” That approach, ingrained in many traditional businesses, is inappropriate for AI environments, he said, where governance needs to be built in by design and to evolve as businesses build and scale, “not being designed to protect us from the start.”

He said: “Governance needs to shift from control to observability, which will then show the efficacy of the products you are building.” He also said that when firms start out on their AI journey, they need to remember “they are becoming the disruption.” But this doesn’t mean they have to define the whole process at the outset, just the first step.

He believes it is critical for businesses to “build it and prove it” step by step, as the improvements “can create pressure on everything that hasn’t changed.” In that way, “you’re not transforming the process; you’re making the process want to transform itself.”

He likened the AI environment to the human anatomy. The brain (decision intelligence, strategy, and vision) is where most of the initial investment has gone, rather than the nervous system (data and integration), where many programs, if neglected, can quietly die.

However, equal investment needs to go into the body’s muscles, representing AI execution and automation, the part that compounds, where every capability makes the next one cheaper. And finally, into the skeleton, representing governance, which, Garside said, when rebuilt as AI-native infrastructure, delivers the observability and real-time control fit for an Agentic AI world.

Conclusion

The closing panel debate concluded that one of the key takeaways from the day was that “AI has to be a team sport.” It is not something that can exist independently of the business, and to bring people with you (and secure sufficient investment backing for AI) you need to be persuasive if you “want to win the argument and win the money.”

Businesses also need the flexibility to “play and learn with AI” in a culture that promotes and enables learning and experimentation. “Make a hero of your internal clients. Make them feel special as a result of what AI can do for the business.” As one panellist said, “Be prepared to give the credit away.”

One panellist felt the ROI story can be difficult at the start, but as the landscape changes, assumptions will change too, and the pressure will ease the longer the journey goes on. The benefits of AI will then become more evident through new revenues, not just cost savings.

It will be equally important, one noted, to remind business heads about COI, or the Cost of Inaction, as the dangers mount for a business that does not embrace AI while its competitors do. Another concluded: “We have no idea yet how this is all going to unfold. We are only at the start of the first minute of the first episode of what is going to be an epic series ahead of us. These are indeed exciting times, as AI is not being done for the sake of AI; it is an enabler to do other things better.”
