
Trust dies in darkness, so let’s take AI out of its ‘black box’

AI is an immensely powerful tool with the potential to upgrade how decisions get made

  • Joe Dunleavy
  • Updated: Tue 8 Oct 2024, 5:30 PM

In business, as in life, it is trust that holds everything together: investment, deals, recruitment. Customers must trust cloud service providers; if they did not, they would second-guess every technology decision made. A manager must trust an employee; otherwise, they would have to micromanage every task and would have no time for strategic thinking. In these two examples, we see the outcomes of trust clearly. In the former, the cloud customer dares to innovate; in the latter, the manager does the same. The fundamentals are covered, so there is no need to worry.

But behind every trust relationship are processes. It is precisely because the cloud provider and the employee follow these processes faithfully that they inspire trust. As you read this, hundreds, if not thousands, of AI systems are embedded in enterprises’ day-to-day operations across the GCC. According to McKinsey research from 2023, in the regional retail sector alone, 75% of companies have adopted AI to augment at least one business function. One can only assume these companies trust AI.


The ‘black box’

But what is the origin of this trust? AI algorithms are famously non-deterministic. In most cases, it is difficult if not impossible to trace an output back to a branching decision within the system. This gives AI a “black box” feel that could give rise to doubt. But of course, if the results send trend lines in the right direction, decision makers are unlikely to be interested in delving into the source of the insights that preceded the success. Hence, trust can come before due diligence, when it should be the other way around.

AI is an immensely powerful tool with the potential to upgrade how decisions get made. From cabinet rooms to boardrooms to living rooms, it has a role to play in improving lives. Large language models (LLMs), for example, are not only adept at understanding intent and nuance in human queries; they can also organise responses in meaningful ways that aid the absorption of knowledge and lead to better insights. It is perhaps unsurprising that generative AI (the family to which LLMs belong) has rapidly taken its place in economic projections. According to one estimate from PwC’s Strategy&, the GCC could generate almost US$10 for every dollar invested in GenAI.


Again, if we look behind the figures, we see potential in LLMs’ ability to connect humans to large data sets that would otherwise be out of reach for non-technical people. But we also see an eagerness to trust because of the powerful results. If we take a moment to consider a highly regulated industry like banking, financial services and insurance (BFSI), however, we find age-old business processes and workflows defined around consistency and privacy. When audited, a bank must explain exactly what it does with data during each of its workflows. LLMs offer a marvellous interface between users and raw data, and the rapid insights they provide are tempting to BFSI firms. But when it comes to answering what happened to the data during the process, things start to become vague. And trust can take a hit.

Law and order

And when trust takes a hit, there is a knock-on effect of decreased innovation and perhaps even slower economic growth. As such, there is increasing pressure to strike the right balance between total trust and the placement of guardrails. Far from being an impediment to highly regulated industries, AI regulation is an attempt to remove AI from its black box and make it accountable.

Joe Dunleavy, Global SVP & Head of AI Pod, Endava

One place this is happening right now is Europe. The EU Artificial Intelligence Act became law in August 2024. The framework is described by the European Commission as being based around “human rights and fundamental values” and intended as groundwork for the development of “an AI ecosystem that benefits everyone”. The law goes so far as to break risk down into four distinct categories, covering every AI use case from “minimal risk” spam filters and “specific transparency risk” chatbots, through “high risk” systems such as AI used in recruitment or critical infrastructure, to the “unacceptable risk” of controversial social-scoring models, which are now banned.

The stakes could not be higher. Without addressing the black-box problem, not only might AI not be as beneficial as we had hoped; it might end up inflicting net harm on society. GCC governments are already addressing AI through a risk lens, but individual organisations can do their part by sticking to three basic principles.

1. Data transparency: Data integrity practices should account for the fact that initial inputs may be imperfect, and that data-cleaning can lead to better outcomes. Governance based on transparency will be critical in tightly regulated sectors, with each transaction logged, tagged, and made visible.

2. Division of AI labour: Governance and transparency are better achieved if AI agents are made responsible for smaller tasks that are then combined through orchestration to deliver insights or automate processes. Supplemented by human oversight, such an approach will help eliminate the black box (a simple sketch of the idea follows this list).

3. Dynamic scaling: Processes can be accelerated by adding parallel channels that move complex, resource-heavy tasks through the workflow more efficiently.
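
To make the first two principles concrete, here is a minimal, hypothetical sketch in Python. The step names (redact_pii, classify_intent, draft_summary) and the trivial logic inside them are invented for illustration, not drawn from any particular product; the point is the shape: small, single-purpose steps chained by an orchestrator that tags and logs every hand-off, so an auditor can trace any output back to its inputs.

# Hypothetical illustration: an orchestrator that chains small, single-purpose
# AI steps and records an audit trail for every hand-off. The step names and
# their logic are invented for this sketch; real systems would call actual models.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data) -> str:
    """Stable hash of a payload, so auditors can match inputs to outputs."""
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()[:12]

def redact_pii(payload):
    # Placeholder for a narrow model or rule set that strips personal data.
    payload = dict(payload)
    payload.pop("customer_name", None)
    return payload

def classify_intent(payload):
    # Placeholder for a small classifier; here a trivial keyword rule.
    payload = dict(payload)
    text = payload.get("query", "").lower()
    payload["intent"] = "complaint" if "refund" in text else "enquiry"
    return payload

def draft_summary(payload):
    # Placeholder for a constrained LLM call that drafts a response summary.
    payload = dict(payload)
    payload["summary"] = f"Handle as {payload['intent']}: {payload.get('query', '')[:60]}"
    return payload

AUDIT_LOG = []  # In production this would be an append-only, access-controlled store.

def run_pipeline(payload, steps):
    """Run each small step in turn, tagging and logging every hand-off."""
    for step in steps:
        before = fingerprint(payload)
        payload = step(payload)
        AUDIT_LOG.append({
            "step": step.__name__,
            "when": datetime.now(timezone.utc).isoformat(),
            "input_hash": before,
            "output_hash": fingerprint(payload),
        })
    return payload

if __name__ == "__main__":
    result = run_pipeline(
        {"customer_name": "A. Client", "query": "I would like a refund for last month"},
        [redact_pii, classify_intent, draft_summary],
    )
    print(result["summary"])
    print(json.dumps(AUDIT_LOG, indent=2))  # The trail an auditor would inspect.

Run as-is, the script prints the drafted summary followed by a step-by-step audit trail; swapping any placeholder for a real model keeps the same traceability, because the orchestrator, not the model, owns the log.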

E-trust

Trust may be fundamental to our economies, but that does not mean we should automatically trust something just because it is the only way to get our pet project off the ground. It is not just acceptable but mission-critical to demand that AI earn our trust, and that we leave the training wheels and report cards in place even after it has done so. Our healthcare, safety, bank accounts and national security are at stake. So, let’s start opening AI black boxes wherever we find them.

The writer is Global SVP & Head of AI Pod, Endava.

