
As Generative AI grips the region, steps needed to ensure its responsible use

One of the factors that established the UAE as a leader in AI was its broadminded approach to risk management

Published: Wed 6 Mar 2024, 5:00 PM

By Sid Bhatia


Generative AI may still receive lukewarm treatment in some parts of the world as risk-averse organizations poke and prod to find its limitations, but in the Arabian Gulf, where early adoption is a tradition, we are further along the road. For example, in April 2023, the UAE’s Ministry of Artificial Intelligence, Digital Economy, and Remote Work Applications (the world’s first such ministry) partnered with Dubai Media Council on a report titled “100 Practical Applications and Use Cases of Generative AI”. And in September, as momentum built further, PwC predicted that GCC economies could benefit from Generative AI to the tune of $23.5 billion by 2030.

The attraction is clear. In the space of a few months, large language models (LLMs) publicly proved themselves to consumers, governments, and businesses. Given their wide applicability, decision-makers in every industry recognized their capacity (and that of the broader field of Generative AI) to be a game-changer. Given the hype, the urgency to adopt, and the low barriers to entry for tools like OpenAI’s ChatGPT, it is tempting to take the plunge before examining the risks.

One of the factors that established the UAE as a leader in AI was less its willingness to adopt and more its broadminded approach to risk management. The federal government has published whitepapers on AI ethics and openly discusses the concept in public forums. When it comes to Generative AI, many black-box problems persist, so as investments and adoption continue, we should revisit the underpinnings of responsible AI — to understand the risks and how governance can step in to mitigate them.

Responsible AI is a path to trust: trust from the communities served by organizations that use the technology, whether those communities are composed of citizens, consumers, employees, business partners, or a mixture. The goals of responsible AI are reliability and transparency. If a system consistently produces accurate, actionable results and can show clearly where those results came from, then the organization has taken a significant step toward earning the trust of end users. In pursuit of these goals, four criteria should be observed.

1. User feedback

End users should be encouraged to tell the story of their experiences. The feedback process should be simple, so that users are more likely to participate and report on the quality of the results they see from the model. Both free text and quality scoring should be available, and both must be reviewed frequently by model creators so they can adjust parameters appropriately and ensure that outputs remain accurate and usable.
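As a rough illustration of how small such a channel can be, the Python sketch below pairs a quality score with optional free text in a single record. The names (FeedbackRecord, submit_feedback) and the five-point scale are hypothetical choices for illustration, not a prescribed design.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class FeedbackRecord:
        """One end-user report on a single model response."""
        response_id: str   # which model output is being rated
        score: int         # quality rating, e.g. 1 (poor) to 5 (excellent)
        comment: str = ""  # optional free-text account of the experience
        created_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def submit_feedback(record: FeedbackRecord, review_queue: list) -> None:
        """Validate the score and queue the record for the model team."""
        if not 1 <= record.score <= 5:
            raise ValueError("score must be between 1 and 5")
        review_queue.append(record)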

2. System clarity

A user should always know when they are interacting with AI. Not only is this critical to the provision of feedback, but it allows human agents to make more effective real-time judgments. A user may experience output through onscreen prompts, automated processes, or messages sent to other devices; at all times, messages should be clear about their origin, whether from a model or from a human. If employees are working with virtual assistants, guardrails must be implemented to prevent them from acting on AI output in ways that would hurt the brand.
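One way to make that origin unmissable is to record it at the data level rather than in the interface alone. A minimal Python sketch, with hypothetical names, tags each message with its source so the rendering layer can never omit the disclosure:

    from dataclasses import dataclass
    from enum import Enum

    class Origin(Enum):
        MODEL = "model"  # generated by an AI system
        HUMAN = "human"  # written by a person

    @dataclass
    class Message:
        text: str
        origin: Origin

    def render(message: Message) -> str:
        """Prefix every message with an explicit origin label."""
        label = "AI assistant" if message.origin is Origin.MODEL else "Human agent"
        return f"[{label}] {message.text}"

    print(render(Message("Your request has been received.", Origin.MODEL)))
    # prints: [AI assistant] Your request has been received.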


3. Explanatory results

While it may not always be practical to provide explanations for model outputs, doing so should be a priority in Generative AI. End users will more easily trust systems that show their work. Non-generative AI applications can draw on established techniques such as individual conditional expectation (ICE) plots or SHAP (Shapley additive explanations) values, but with Generative AI, organizations may have to conduct separate analyses of text outputs or other processes to approach the required level of transparency.
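For conventional models, that kind of attribution is well supported by open-source tooling. As a minimal sketch, using the shap library with a scikit-learn classifier on a bundled dataset purely for illustration:

    # Requires: pip install shap scikit-learn
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(data.data, data.target)

    # TreeExplainer attributes each prediction to individual input
    # features via Shapley values.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:5])

    # shap_values now holds per-feature contributions for the first five
    # rows; helpers such as shap.summary_plot can visualize them.

Generative text has no equally settled equivalent, which is why the separate analyses described above become necessary.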

4. Caveats

Nothing dents trust more than a sizeable gap between expectations and results. Organizations that want to incorporate generative models into their AI programs should avoid promising the earth and instead educate end users on how models work. If users have even a basic appreciation of how outputs are produced and of the limitations of the model, they may be more realistic in their expectations. For example, they may realize that when the data a model was trained on becomes outdated, the model’s outputs do too.

A word about pre-trained models

Many regional businesses may pursue a Generative AI adoption strategy built on pretrained third-party models. It should be understood that risks arise here from a lack of transparency, a lack of control over the output, and a lack of control over one’s own data, which may be shared with, and legally retained by, the model provider. Enterprises that take this approach will find it a challenge to maintain governance standards that deliver on their responsible AI goals. Care must be taken, through rigorous end-user training, to ensure that neither private nor proprietary data is used as input to third-party models, and to test outputs thoroughly for biases; one practical safeguard is sketched below.
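The following Python sketch illustrates the idea of redacting likely personal data before a prompt leaves the organization. The regular expressions are deliberately crude and for illustration only; a production system would rely on a dedicated PII-detection service rather than patterns like these.

    import re

    # Crude illustrative patterns; real systems need far more coverage.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def redact(prompt: str) -> str:
        """Replace likely PII with placeholders before the prompt is
        sent to a third-party model."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Contact Aisha at aisha@example.com or +971 4 123 4567."))
    # prints: Contact Aisha at [EMAIL REDACTED] or [PHONE REDACTED].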

Other non-trivial considerations are the financial and environmental costs of LLMs. Responsible AI is largely about the mitigation and prevention of harm, especially to society at large. But LLMs must be trained and deployed, which consumes valuable resources. Third-party models are therefore attractive because they do not require as much training; corporate data can be used to fine-tune them instead, which costs less both financially and environmentally. It is worth remembering, however, that third-party generative models used in production environments still depend on computationally and ecologically expensive graphics processing units (GPUs) to be useful.

Everyday, responsible AI

If you are building a culture of Everyday AI, it is imperative that it also be a culture of responsible AI. Whatever your individual goals, robust governance must be in place to control the use of AI by each employee, from the boardroom on down. Models must be accurate and safe, transparent and governed, just and compassionate, and accountable and explainable. If all these needs are met, your organization will flourish in the Generative AI era.

The writer is Regional VP & General Manager, Middle East, Turkey & Africa, Dataiku


