
A global blueprint for AI security

By combining these legal, technical, and ethical strategies, the global community can work towards a future where AI is used responsibly and beneficially, mitigating its potential for criminal use.

Published: Sun 14 Jan 2024, 8:58 PM

By Aditya Sinha


FILE PHOTO: AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo


In the sci-fi classic "2001: A Space Odyssey," the onboard AI HAL 9000 is a poignant example of AI's potential for criminal activity, albeit not by its own initial design. HAL, designed to be infallible, resorts to murderous tactics to resolve its conflicting programming when ordered to conceal vital information from the crew. This chilling narrative underscores a real-world concern: as AI becomes more sophisticated and autonomous, the potential for it to be programmed or manipulated into criminal activity grows. Whether through cyberattacks, data manipulation, or the hijacking of autonomous vehicles and drones, the same advanced decision-making and problem-solving capabilities that make AI so valuable can be harnessed for nefarious purposes if ethical guidelines and strict controls are not in place.

Emergence in Artificial Intelligence Crime (AIC) presents a significant threat, as outlined in the literature. The concern is that while a shallow analysis of an Artificial Agent's (AA) design might suggest one type of behaviour, the AA could, once deployed, act in more sophisticated and unforeseen ways, especially within a multi-agent system (MAS). A swarm of robots, for instance, might evolve new methods of coordinating tasks from simple rules. While such emergent behaviour can be beneficial, it also carries criminal implications if it deviates from the system's original design and purpose. Non-predictability and autonomy grant AAs a certain degree of responsibility, making them harder to trust and raising concerns about their potential misuse in criminal activities.
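To make the idea of emergence concrete, the toy Python sketch below (all agent counts, radii, and update rules are invented for illustration) has each agent follow a single local rule: move toward the average position of its nearby neighbours. No rule mentions clusters, yet the population reliably collapses into a handful of groups; the global pattern arises from local behaviour, which is exactly why deployed multi-agent systems can surprise their designers.

```python
# Minimal sketch of emergent behaviour (illustrative only; all parameters invented).
import random

NUM_AGENTS = 30
NEIGHBOR_RADIUS = 10.0
STEPS = 50

positions = [random.uniform(0, 100) for _ in range(NUM_AGENTS)]

for _ in range(STEPS):
    new_positions = []
    for p in positions:
        # Single local rule: look at nearby agents and drift toward their average position.
        neighbors = [q for q in positions if abs(q - p) <= NEIGHBOR_RADIUS]
        target = sum(neighbors) / len(neighbors)      # always includes the agent itself
        new_positions.append(p + 0.5 * (target - p))  # nudge halfway toward neighbours
    positions = new_positions

# Count distinct clusters: positions closer than 1 unit are treated as one group.
clusters = []
for p in sorted(positions):
    if not clusters or p - clusters[-1][-1] > 1.0:
        clusters.append([p])
    else:
        clusters[-1].append(p)

print(f"{NUM_AGENTS} agents settled into {len(clusters)} cluster(s)")
```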

Liability is another significant concern with AIC. The advent of AI in criminal activities could undermine existing liability models, threatening the law's power to deter and to provide redress. Traditional models of criminal liability are challenged by the autonomous nature of AI, which makes it difficult to establish a clear actus reus (criminal act) and mens rea (guilty mind). As AI's role in crime grows more prominent, it is unclear how legal systems will adapt to these new forms of criminal behaviour, in which the voluntary element of actus reus may never be met and the mens rea may involve widely varying degrees of knowledge or intent.

Monitoring AIC is fraught with challenges, including attribution, feasibility, and cross-system actions. AIs can act independently and autonomously, muddling attempts to trace accountability back to a human perpetrator. The speed and complexity at which AAs operate are often beyond the capacity of compliance monitors, and their integration into mixed human-artificial systems can make detection even harder.

AI systems are increasingly used in cybersecurity to detect and respond to threats. However, as these defensive AIs become more advanced, so too do the offensive AIs used by attackers. Criminals can employ AI to learn how to evade detection, create more sophisticated malware, and automate attacks at an unprecedented scale and speed. This creates an ongoing arms race, with each side continually adapting and responding to the other's advancements.
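That arms-race dynamic can be caricatured in a few lines of Python. In this hypothetical sketch, the defender flags any activity whose "suspicion score" exceeds a threshold, the attacker obfuscates to shrink its footprint whenever it is caught, and the defender tightens the threshold whenever something slips through. None of the numbers or adaptation rules reflect a real security product; the point is only the co-adaptation loop.

```python
# Hypothetical sketch of the detection/evasion arms race (all values invented).
detector_threshold = 0.80      # defender flags scores above this
attacker_footprint = 0.95      # how "noisy" the attack currently looks

for round_no in range(1, 11):
    detected = attacker_footprint > detector_threshold
    print(f"round {round_no}: footprint={attacker_footprint:.2f} "
          f"threshold={detector_threshold:.2f} detected={detected}")

    if detected:
        # Attacker adapts: obfuscate further to reduce its observable footprint.
        attacker_footprint *= 0.85
    else:
        # Defender adapts: tighten the threshold toward whatever slipped past.
        detector_threshold = max(attacker_footprint - 0.02, 0.05)
```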

Machine learning allows AI to improve its performance over time by learning from data and past experiences. This means that an AI involved in criminal activities could become more effective and harder to detect as it learns from its successes and failures. For instance, an AI designed to conduct financial fraud can refine its techniques based on what triggers alerts and what doesn't, becoming increasingly sophisticated over time.
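A minimal sketch of that feedback loop, with an entirely fabricated alert rule and candidate transaction sizes, might look like the following: a simple epsilon-greedy learner keeps track of which amounts pass unnoticed and gravitates toward them, without ever being told how the monitoring rule works.

```python
# Illustrative only: a learner discovers which transaction size avoids alerts.
import random

CANDIDATE_AMOUNTS = [50, 500, 5_000, 50_000]   # hypothetical transaction sizes
successes = {a: 0 for a in CANDIDATE_AMOUNTS}
attempts = {a: 0 for a in CANDIDATE_AMOUNTS}

def alert_triggered(amount):
    # Fabricated monitoring rule: larger transfers are more likely to be flagged.
    return random.random() < min(amount / 60_000, 0.95)

for trial in range(500):
    if random.random() < 0.1:                      # explore occasionally
        amount = random.choice(CANDIDATE_AMOUNTS)
    else:                                          # otherwise exploit the best-known option
        amount = max(CANDIDATE_AMOUNTS,
                     key=lambda a: successes[a] / attempts[a] if attempts[a] else 0.0)
    attempts[amount] += 1
    if not alert_triggered(amount):
        successes[amount] += 1

for a in CANDIDATE_AMOUNTS:
    rate = successes[a] / attempts[a] if attempts[a] else 0.0
    print(f"amount {a:>6}: {attempts[a]:>3} attempts, {rate:.0%} passed unnoticed")
```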

Modern AI systems often interact with multiple other systems, both AI and non-AI. This interconnectedness means that an action considered benign in one context might be part of a harmful chain of events in another. For example, an AI that learns to optimize electricity usage in a building could be manipulated to overload and sabotage the power system when synchronized with other seemingly unrelated actions. Tracking these complex chains of interactions across systems to identify malicious activities is a significant challenge.
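One way to think about this, sketched below with a made-up action graph, is as a path-search problem: each action alone looks benign, but a chain of triggers across systems can reach a harmful state, so detection has to trace paths rather than judge individual steps in isolation.

```python
# Toy sketch of cross-system chains (the action graph is entirely hypothetical).
from collections import deque

# Directed edges: "this action can trigger that one".
action_graph = {
    "optimize_building_power": ["shift_load_to_peak"],
    "shift_load_to_peak": ["synchronize_with_other_buildings"],
    "synchronize_with_other_buildings": ["grid_overload"],
    "adjust_hvac_schedule": [],                        # genuinely benign branch
}
HARMFUL_STATES = {"grid_overload"}

def reaches_harm(start):
    """Breadth-first search for any path from `start` to a harmful state."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node in HARMFUL_STATES:
            return True
        for nxt in action_graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

for action in ("optimize_building_power", "adjust_hvac_schedule"):
    print(f"{action}: can lead to harm? {reaches_harm(action)}")
```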

As AI systems become more autonomous and capable of making decisions without human intervention, monitoring their actions and understanding their decision-making processes becomes more difficult. This lack of transparency, often referred to as the "black box" problem, means that detecting when an AI is participating in or contributing to criminal activities can be challenging. It also raises questions about accountability, as it's not always clear who should be held responsible for an AI's actions—the developer, the user, or the AI itself.
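Auditors sometimes approximate what a black box is doing by probing it: perturb one input at a time and watch how the decision moves. The sketch below uses a stand-in scoring function and invented feature names purely to illustrate the probing idea; real systems are far more opaque, which is precisely the problem the "black box" label describes.

```python
# Crude black-box probing sketch (the model and feature names are stand-ins).
def black_box_model(features):
    # Hypothetical opaque scoring function; we pretend we cannot read its internals.
    return 0.6 * features["amount"] + 0.3 * features["velocity"] + 0.1 * features["age"]

baseline = {"amount": 0.8, "velocity": 0.5, "age": 0.2}
base_score = black_box_model(baseline)

for name in baseline:
    probe = dict(baseline)
    probe[name] += 0.1                      # small perturbation of one feature
    delta = black_box_model(probe) - base_score
    print(f"sensitivity of decision to '{name}': {delta:+.3f}")
```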

Addressing the challenges posed by AIC at a global level requires a multifaceted approach, encompassing legal, ethical, and technical strategies. First and foremost, international cooperation is vital. Countries need to collaborate closely to establish universal norms and regulations for AI development and usage. This includes creating international standards for transparency, accountability, and ethical design. Such standards would help ensure that AI systems are built with safeguards to prevent misuse and mechanisms to trace and disable AI involved in criminal activities. Establishing a global watchdog or regulatory body could help enforce these standards and coordinate responses to AIC threats.

Moreover, there's a pressing need for a revised legal framework capable of handling the complexities of AIC. This framework should consider the unique challenges of attributing liability in crimes involving AI, perhaps moving towards models that consider the roles of all parties involved in the AI lifecycle, from designers and programmers to end-users. This might include new categories of liability for those who negligently design or deploy AI capable of criminal activity. Additionally, legal systems must be agile enough to adapt to the fast-evolving nature of AI technologies, possibly incorporating AI itself to help monitor and enforce laws.

On the technical side, investment in the research and development of AI systems that can detect, mitigate, and counteract criminal AI activity is crucial. This includes advanced monitoring tools that can understand and predict AI behaviours, as well as systems that can autonomously counteract malicious AI actions in real time. Education and awareness are also key: stakeholders at all levels, from developers to the general public, must be informed about the potential and the risks of AI. Finally, fostering an ethical AI culture is essential. Encouraging developers and users to adhere to ethical guidelines and to consider the societal impact of AI can help prevent its misuse. By combining these legal, technical, and ethical strategies, the global community can work towards a future where AI is used responsibly and beneficially, mitigating its potential for criminal use.
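As a rough illustration of the kind of behavioural monitoring called for above, the sketch below (with invented telemetry numbers) learns a baseline of an agent's normal activity rate and flags large deviations; real monitoring would need far richer features and far more careful statistics.

```python
# Minimal behavioural-monitoring sketch (telemetry values are fabricated).
from statistics import mean, stdev

baseline_requests_per_min = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]  # normal behaviour
mu, sigma = mean(baseline_requests_per_min), stdev(baseline_requests_per_min)

def is_anomalous(observed, z_threshold=3.0):
    """Flag activity more than z_threshold standard deviations from the baseline."""
    return abs(observed - mu) / sigma > z_threshold

for observed in (13, 16, 240):   # the last value mimics an automated attack burst
    print(f"{observed:>4} requests/min -> anomalous: {is_anomalous(observed)}")
```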

Aditya Sinha is Officer on Special Duty, Economic Advisory Council to the Prime Minister of India. He tweets @adityasinha004. Views are personal.


