Establishing Liability in an Increasingly AI Prevalent World

Advait Kandiyoor *



This paper examines how liability may be assigned for harms arising from Artificial Intelligence. It does so by outlining certain existing theoretical models developed to establish liability for Artificial Intelligence entities, and applies them to legal issues that have already occurred, such as the Uber AV accident, as well as those that persist today, such as the misuse of deepfakes, while briefly contextualising them within the Indian legal system. The paper then puts forward the idea of how and why an AI entity itself may be held liable, and concludes with another potential direction that could be considered for placing liability on AI entities.


Every day, we move a step closer to ‘technological singularity,’ a term suggesting a point in time where the intersection of humans, technology, and artificial intelligence results in an unprecedented transformation. Super-intelligent Artificial Intelligence (AI) is predicted to be the cause of this singularity. AI has been an area of research since the 1950s; however, its most rapid growth has occurred only in the 21st century, with some describing this decade as a renaissance for AI research. Although we are not under any immediate threat from superintelligent AI, it has begun to permeate every sphere of our lives, through self-driving cars, autonomous drones, and telerobots capable of performing surgery. With this rise in the use and development of AI, these systems have begun to pose legal and philosophical dilemmas concerning criminal liability, which must be addressed before we integrate them completely into our daily lives.

Analysing the Criminal Liability of AI in Recent Cases

Offences arising out of AI are an inevitable threat that may require existing laws to be revisited in order to effectively address these novel issues. Although there are no precedents relating to AI-related crimes, one can still analyse the problems using existing principles of law. Professor Gabriel Hallevy, in his article[1], suggests three models for determining liability in cases involving AI. The first is the ‘Perpetration-via-Another’ model. It treats the AI as an ‘innocent agent,’ akin to the innocent agent in criminal law: a mere instrument through which an individual commits the offence, and hence not itself liable. The second is the ‘Natural-Probable-Consequence Liability’ model, which suggests that a person may be held liable when the offence is a natural and probable consequence of that person’s action or inaction. The third is the ‘Direct Liability’ model, which aims at holding the AI entity itself accountable.

These models provide a basic framework to tackle emerging AI-related crimes. For this paper, these crimes have been divided into two categories, namely AI-Enabled Crime and AI-Committed Crime.

i) AI-Enabled Crime: Increasing use of Deepfakes

AI-Enabled crimes are currently the most pervasive AI-based crimes. While some of these can be tackled using existing legislation, others demand a revision. AI is enabling criminals to further their activities in trading illicit items, blackmail, theft, and similar crimes. A study published in the Crime Science journal ranked AI-enabled crimes by the severity of their threat, and concluded that the most imminent threat was that posed by deepfakes.

‘Deepfake’ is a portmanteau of ‘deep learning’ and ‘fake’. Deepfakes are doctored videos or photos made using deep learning algorithms (a subset of AI techniques) that allow for eerily life-like recreations of an individual. Initially, the technique was used on pornographic forums, where celebrities’ faces were morphed into pornographic videos. Over time, with rapid developments in the technology, it is now being used for a multitude of other malicious acts, such as blackmail, defamation, and doctoring evidence. A significant issue that India is facing revolves around Telegram bots that generate, on request, more than 100,000 lewd deepfake images of individuals, a substantial proportion of whom are women.

A report by the security company Sensity found that there exists a “deepfake ecosystem on the messaging platform Telegram,” the focal point of which is “an AI-powered bot that allows users to photo-realistically strip naked clothed images of women.” Taking note of the report, the Bombay High Court directed the Additional Solicitor General, along with the Ministry of Information and Broadcasting, to investigate the “malice” described in it. Telegram constitutes only a small portion of the larger network of malicious deepfakes; these heinous crimes are also prevalent across other social media platforms, with an estimated 1.8 billion such images shared every week. Generally, in cases where illicit media of women are shared, the accused can be booked for defamation[2], criminal intimidation[3], outraging the modesty of a woman[4], sexual harassment[5], or a combination of these offences. In addition, they could be charged under certain provisions of the Information Technology Act, 2000.

The legal problem, however, arises from the multitude of actors who, voluntarily or involuntarily, contribute to the harm. The parties implicated in the crime could include the AI technology itself, the operator of the AI (here, the person using the bot to request and obtain images), or the programmer of the AI. Pinning liability on any single stakeholder would be rather rigid and constricting. Even though the offence was committed through an AI system, the system cannot be held liable, since it is merely an instrument for the commission of the offence. This narrows the possibilities down to the user and the programmer of the technology. Identifying all the users of such deepfake bots would be almost impossible, owing to their sheer number as well as various technological barriers. The programmer therefore remains the only accountable figure, who, if charged, could plead that he has not committed any of the offences and point to a lack of actus reus on his part. That, however, does not negate the existence of mens rea, since the circulation of illicit deepfakes is a foreseeable result of creating an AI program capable of supplying such deepfakes on request. This can therefore be a valid ground for holding the programmer accountable for such crimes.

Unfortunately, conventional legal doctrine makes it difficult to ascertain the liability or charges that may be levied on such an individual. Professor Hallevy’s ‘Perpetration-via-Another’ model could be applied here: it treats the programmer and the user as having committed the offence through the AI, and imposes liability on them accordingly. Although this may allow the programmer to be prosecuted more appropriately, it does not tackle the malicious AI itself.

In the United States, the proposed Deepfakes Accountability Act (2019) would require the watermarking of all deepfakes for identification, alongside various state laws regulating their use. In India, deepfakes have begun to penetrate the political sphere during elections; to combat this, there is an urgent need to make adequate amendments to existing legislation, or to pass new regulations and guidelines that can effectively govern such newly arising issues.

ii) AI-Committed Crime

Another form of AI is the Autonomous Vehicle (AV), or self-driving car, which has recently started to gain traction as multiple countries permit road testing and some even plan to have street-legal AVs soon. AVs pose a challenge in ascertaining criminal liability where a vehicle malfunctions and causes a fatal accident. The collision between an Uber self-driving vehicle and a pedestrian in Arizona is a prime example. Uber’s AV, which operated with a standby driver (indicating Level 3 automation), failed to identify a pedestrian: its software first classified her as an unknown object, then as a car, and then as a bicycle. The driver was allegedly streaming a show moments before the crash, and the emergency brakes were not applied (it was later found that the emergency braking system had been deactivated), resulting in the collision. The County Attorney stated in a public letter that there was “no basis for criminal liability” for Uber, and directed that the standby driver be subject to a further police investigation; the driver was later charged with negligent homicide. Unfortunately, from an analysis standpoint, the County Attorney did not release any information regarding the rationale behind the decision. This, however, gives us an opportunity to apply existing principles to find the rationale, as well as possible solutions to this rather ambiguous decision.

If we attempt to apply Professor Hallevy’s models, the only applicable one would be the Natural-Probable-Consequence (NPC) Liability model, since the first model requires mens rea on the part of the programmers, i.e., Uber, which they did not have. The third model would not be applicable either, since the car housing the AI system cannot itself be made liable, as yet. Under the NPC model, it would seem that the driver must be held responsible, since she failed to remain alert as a standby driver. However, probable cause could also be drawn from the fact that Uber had disabled the emergency braking system while the car was under complete computer control, claiming it caused “erratic vehicle behaviour.” Had this system remained in place, the AI might have triggered it and avoided the crash. This leads us to the question of why there was “no basis for criminal liability” against Uber, and whether there should be clear-cut legislation governing AVs before testing is allowed on public roads, with the attendant risk of losing lives.

Currently, in India, autonomous vehicles are not covered under the ambit of the Motor Vehicles Act, 1988. The Act requires a human driver[6] to be in effective control of the vehicle at all times. Though the 2017 amendment to the Act proposed testing, it is yet to be enforced. Even setting aside the infrastructural hurdles, an appropriate action plan is needed to bring AVs within the Indian legal framework, particularly in order to ascertain liability. While the Uber and Tesla AV cases give some direction regarding the treatment of liability in the case of AI, neither provides conclusive legal backing or an amicable solution to the issue. Identifying a single liable entity becomes even more challenging as the AV’s degree of autonomy increases, and malfunctions can be further classified into hardware and software issues, each pointing to a different liable party in case of an accident. Holding the AI technology itself liable, as prescribed under the Direct Liability model, seems impractical unless AVs are recognised as separate legal entities and the problem of the lack of mens rea is resolved.

Holding the AI Entity Liable

As AI programs develop rapidly, they are becoming increasingly autonomous. Simultaneously, the liability of the programmers (whether individuals or corporate divisions) becomes ever more questionable, since the programmers may never have intended or foreseen the crime committed by the technology, given the degree of autonomy the AI system possesses. There is therefore an urgent need to address the accountability gaps that arise.

One way to do this would be to impose liability on the AI entity itself. It is important to note, however, that this does not absolve the developers of all liability. Rather, liability would be prorated between the user, the programmer, and the AI itself (based on the facts of each case), with the AI held liable in addition to its developer and/or user. Though Professor Hallevy describes a Direct Liability model built around holding the AI liable, he does not elaborate on why or how that liability is placed. He only suggests that AI may be capable of forming intent and possessing knowledge, thereby fulfilling both elements required for a crime, and would therefore be held liable under the same authority imposed on humans.

There are two primary reasons why AI systems must be held liable. The first is ‘causality’: there must be some direct causation between the crime and its cause. In a case where neither the user nor the programmer of the AI was directly complicit in the crime, such as a Level 5 (fully automated, requiring no driver) car deciding to swerve off the road and hitting a pedestrian (possibly to avoid a collision with another vehicle), the AV’s swerve is the cause of the resulting accident. Holding the programmer or corporation liable would make little sense, since programmers cannot foresee every decision an AV might take.

Secondly, as AI capabilities advance, the programmer cannot be held liable for every action or inaction of the AI that results in a crime; it is more likely that the programmer is charged vicariously, not for the crime itself, but for failing to ensure the AI’s reliability. This means the liability for the actual crime is placed on no one, which would run against the conventions of criminal law, specifically the idea of retributive justice. Holding the AI liable allows for the appropriate placement of liability, as well as the requisite direction or “punishment” for the AI, the possibility and potential benefits of which are discussed by Ryan Abbott and Alex Sarch,[7] the implications of which are beyond the scope of this paper.

To establish how AI can be held liable, it would first be necessary to recognise AI systems as legal entities with their own distinct identities, as suggested by the experts cited previously. Although this might seem like a pre-emptive measure, it is necessary given the exponential rate of technological advancement in the field of AI. The idea of giving legal status to an artificial person or thing is not foreign: corporations are legal entities separate from their owners and are subject to provisions of criminal law as well. Additionally, legal non-human personhood has been granted to animals, such as dolphins and primates, in multiple parts of the world, as they are considered intelligent creatures with complex social systems. The same principles should, in theory, extend to AI.

The next issue is establishing the internal element of a crime, the mens rea. The external element, or actus reus, is fairly easy to identify, as is evident in the examples mentioned previously. Under the concept of strict liability, there is no requirement of mens rea for the commission of an offence; in cases of strict liability, therefore, the AI entity fulfils the requirements to be found guilty of an offence. In the Indian legal system, mens rea plays a pivotal role in determining whether a crime has indeed been committed, and only a handful of offences under the Indian Penal Code are exempted from the requirement of mens rea. However, many recent statutes passed in furtherance of the public interest, such as the Motor Vehicles Act, 1988, the Arms Act, 1959, and the Narcotic Drugs and Psychotropic Substances Act, 1985, impose strict liability.

The principle behind imposing strict liability for statutory offences can be inferred from Roscoe Pound’s view that statutory offences express the needs of society, and that “such statutes are not meant to punish vicious will, but to put pressure on the thoughtless and inefficient individuals to do their whole duty in the interest of public health, safety, or morals.”[8] Ying Hu suggests that AI, once sufficiently advanced, be held to a higher moral standard than humans, since “an action or omission might be considered wrongful if carried out by a robot, but excusable if carried out by a human.”[9] He offers the example of a drowning child with a human versus a robot passer-by. The human may not be obliged to risk his life to save the child, and bears no criminal liability if he or she does not. A robot, however, would be expected to save the child, even though it might be damaged in the process. Hu uses a robot as a physical manifestation of AI, yet the principle remains the same: if robots and AI are indeed to be held to a higher moral standard, it makes sense to impose strict liability for crimes committed by AI, in the interest of public safety.


Unlike Professor Hallevy, I believe that the principles of criminal law applicable to humans must not be imposed directly on AI entities. Instead, they should be governed through strict liability, or a model similar to corporate criminal liability. In Iridium India Telecom Ltd. v. Motorola Inc. (2011)[10], the Supreme Court held that corporations cannot escape criminal liability by merely contending that there is a lack of mens rea. Therefore, any legislation or statute passed to govern AI entities should apply strict liability while adopting models drawn from corporate criminal liability.

Holding AI entities accountable in this manner would mean that they (through the people representing them) could use defences to mitigate or absolve liability, as they are separate entities. John Kingston poses an interesting question: would defences be available to AIs charged with an offence, and, more specifically, would a defence akin to insanity be sound where the crime occurred due to a virus or a hacker?[11] He goes on to suggest that this is not purely theoretical, since there have been cases in the UK where individuals charged with offences committed through the use of a computer have successfully argued that the root cause was malware on their systems.[12] Assigning legal status to AI entities will undoubtedly raise a new set of challenges that will need to be tackled separately. Despite this, I believe it is the way forward.

Successful integration of AI requires competent legislation and a solid statutory framework. Even today, the legal questions concerning AI lack definitive answers, while the possibilities of both AI-enabled and AI-committed crimes are only increasing. Since the law cannot keep up with the sheer pace of technological advancement, it must be proactive rather than reactive. This ex-ante approach is essential, considering the vast opportunities for the implementation and application of AI in the country.

[1] Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities - From Science Fiction to Legal Social Control, 4 Akron Intellectual Property Journal (2010).
[2] Indian Penal Code, 1860, § 500.
[3] Indian Penal Code, 1860, §§ 504, 506.
[4] Indian Penal Code, 1860, § 354.
[5] Indian Penal Code, 1860, § 354A.
[6] The Motor Vehicles Act, 1988, § 9.
[7] Ryan Abbott & Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53 UC Davis Law Review 323 (2019).
[8] Roscoe Pound, The Spirit of the Common Law 52 (Cornell University Library 2009) (1921).
[9] Ying Hu, Robot Criminals, 52 University of Michigan Journal of Law Reform 500 (2020).
[10] Iridium India Telecom Ltd. v. Motorola Inc., (2011) SCC 74.
[11] John Kingston, Artificial Intelligence and Legal Liability, Research and Development in Intelligent Systems XXXIII: Incorporating Applications and Innovations in Intelligent Systems XXIV, 269-279 (2016).
[12] Ibid.


[*] Advait Kandiyoor is a third-year undergraduate student from Jindal Global Law School, India. Preferred Citation – Advait Kandiyoor, “Establishing Liability in an Increasingly AI Prevalent World", Syin & Sern Law Review, Published on 18th July 2021.
