Recommendation for the European Commission
The text below is based on the author’s bachelor thesis at Tilburg University.
1. INTRODUCTION
1.1 Background of the report
There is no widely agreed-upon definition of Artificial Intelligence (AI). Some define it in terms of its similarity to human intelligence; others focus on its ability to understand or solve problems.[1] The European Commission (EC) describes it as ‘systems which display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals’.[2] AI technologies encompass self-driving cars, disease-mapping devices, conversational marketing bots, smart assistants and more. There are many different types of AI, yet they all share three characteristics: unaccountability, unpredictability and autonomy.[3] AI can be classified as strong or weak – the former being largely, if not entirely, autonomous, while weak AI remains more controllable by humans.[4] Such a set-up depends on the deployed algorithms. Algorithms are defined as sequences of instructions that allow computer systems to turn input into output.[5] Relevantly, AI can use pre-coded algorithms to create its own sets of input and then calculate a novel output. Establishing liability for a faulty or harmful result in such a situation may be burdensome.[6] Should the programmer answer for the machine’s independent enterprise? Or should it be the person who operates it? In either case, should they be held liable for something that was unforeseeable?
The debate on liability for the use of AI is heated, as actors ever more frequently find themselves harmed by AI, be it by being denied certain services, being discriminated against in selection processes, or having various rights, e.g. to intellectual property (IP) or to dignity, violated. A real-life example is the US case involving a predictive healthcare AI that was revealed to discriminate against dark-skinned patients: because of a long-undetected glitch, the percentage of African Americans awarded a specialised care programme was lower than that of Caucasians.[7] In Europe, no comparably large cases have been reported, yet voices of concern about the technology’s incapability of appropriate judgement at all times have been raised all the same.[8] Importantly, in many legal systems, victims have difficulties obtaining remedies for AI-related incidents.[9]
The Expert Group on Liability and New Technologies (Expert Group) was established by the EC to research the existing liability systems of the Member States (MS). The goal was to assess whether individuals in the Union have access to satisfactory recourse should they experience any type of harm caused by AI. The Expert Group’s report was published in 2019.[10] It claims that it is not necessary to give AI distinct legal personality, that operators are to be strictly liable for the devices they control and derive benefits from, that manufacturers are to be held strictly liable for the defectiveness of their devices, and that the existing, non-technology-specific national legal systems cover technology-related accidents rather satisfactorily.[11] This, however, does not seem to be reflected in reality, with various MS individually looking for ways to create appropriate recourse systems for AI damages.[12] In Poland, for example, the lack of AI personhood prevents the establishment of liability in cases where an algorithm violates copyright.[13] Hence, the legal debate has not concluded. The Union’s environment has not settled on the findings, and many expert contributions are still investigating the matter.[14] The unresolved state of the debate was acknowledged by the EC itself,[15] which ultimately undermined the Expert Group’s findings.
A concern raised in the EU is that all actors should be comprehensively protected. However, these apprehensions are balanced against the worry that an overly stringent approach to regulation can be an impediment to innovation which is held as one of the Union’s priorities.[16]
1.2 Research questions
This report answers the questions posed by the EC: What kind of solutions are needed for the fair and efficient establishment of liability for the actions of Artificial Intelligence within the EU? Is approximation of such laws desirable?
The exploration aims to identify the solution the EU (EC) should most likely propose and legislate for, if at all, at the EU level. This report applies legal analysis and provides a perspective on the responsibility of the Union in this sphere. The findings should become a helpful point of reference in the upcoming discussions on AI laws.[17]
1.3 Methodology
The exploration is conducted as a qualitative, comparative study based on dogmatic research, that is, research into the current state of European and international law, its principles, concepts and doctrines, as well as case law. Expert and academic contributions are reviewed likewise.[18]
The subject matter is explored in light of solutions utilised in selected MS, or proposed by experts from those states. The analysis expands on what has been done, for example, in the Expert Group report by examining findings and attitudes from non-member states whose efforts in the area provide valuable input for the Union.[19] Contributions from different areas of law (criminal, IP, etc.) are given appropriate attention, acknowledging that AI-related problems emerge across the legal landscape, not only in the sphere of tort or contract law.
1.4 Structure
The paper first considers the necessity, character and feasibility of the approximation of AI laws at the EU level. Afterwards, different alternatives for AI-related liability are weighed, starting with inputs from international law and academia and closing with state practice and orientation. Finally, recommendations are provided.
2. THE NEED FOR HARMONISATION OR UNIFICATION IN THE EU[20]
2.1 Harmonisation vs unification
Harmonisation is the process of creating common standards across the EU;[21] it is usually conveyed through directives,[22] which allow for differentiation among the MS.[23] Unification, conversely, may be defined as a complete replacement of the existing MS legal orders with a new EU order, as achieved through regulations.[24] Unification may be described as the next step beyond harmonisation. Regulations may be more contested and more challenging to create, since such laws have to be somewhat adjustable to the MS’ expectations and situations. Directives, by allowing discretion in achieving the set-out goals, can create room for discrepancies, vagueness and lack of certainty, and may consequently fail to amend a large aspect of the problem at hand.[25] What is more, a regulation is defined in terms of granting protection, which is relevant in the AI liability discourse; a directive, conversely, is rather meant to award rights, although this is not excluded for regulations.[26] Which instrument should be opted for in AI regulation, however, depends on the context of the problem and the powers vested in the actors. The EU is bound by the principle of proportionality, which states that its action ‘cannot exceed what is necessary to achieve the objectives of the treaties’.[27] Hence, the selection of the means used must be conducted with caution.
2.2 Contemporary situation
The only instance in which AI is regulated by a harmonised EU law is under the regime for defective products, the Product Liability Directive (PLD).[28] The question of the need for novel harmonisation was not answered approvingly by the Expert Group report and has, at most, only somewhat been scrutinised by other academics.[29] The Expert Group suggested adjusting the existing regimes to accommodate technological developments.[30] For the Union, however, the problem lies in the fact that the PLD framework is not fit for this purpose. As noted, the PLD lacks sufficient mechanisms for supervision and compensation for AI damage, and the adaptational changes proposed by different sources were deemed only to exacerbate the issues of underprotection. This has been admitted by the EC itself.[31] The PLD might not be able to follow the spirit of innovation, with questions being raised about the definitions of certain aspects or objects – such as ‘product’, ‘defect’, ‘service’, etc. To utilise the PLD regime, at the very minimum, it must be defined whether AI technologies are products and/or part of providing services.[32] Finding such definitions has proven problematic, for example, in the USA. Consequently, American courts attempt to distinguish between software (a product) and information produced by the software (not a product) on a case-by-case basis, which creates room for inconsistency.[33] Therefore, US practitioners recommend further exploration of which solutions and arrangements could provide more legal certainty and structure.[34] This impasse ought to be taken into consideration by the Union, to avoid unnecessary adaptation of the PLD as the exclusive EU vessel for AI. Other options could prove more effective and efficient.
In such a vein, the approximation in the form of a new, AI-specific, cross-area law was announced necessary by the European Parliament (EP) in the resolution of 20 October 2020.[35]
2.3 Legal competence and legal ground
Before taking any course, however, the Union’s competence to act needs to be established, since the EU’s capabilities are based on the conferral of powers.[36] Article 6 of the Treaty on the Functioning of the European Union (TFEU)[37] does not permit harmonisation of laws in certain specified areas. Given that the regulation of AI does not directly fall under any of those,[38] there seems to be no direct prohibition of the Union’s action. The EU can, conversely, harmonise laws under Articles 3 and 4 TFEU, with both Articles listing instances where competence is given; under Article 3 exclusively to the EU, under Article 4 jointly with the MS.
The most appropriate legal ground for an approximating proposal of AI laws could be Article 114 TFEU, as it allows the EU to harmonise for the purposes of the establishment and functioning of the internal market,[39] in agreement with Article 4(2)(a) TFEU. Relevantly, the EU may intervene only when the goal behind its action is, in fact, to protect the internal market from harmful diversification.[40] Since some MS have introduced laws on AI while others have not, the legal fragmentation that has emerged within the internal market is likely to create problems with the circulation of products and services, largely due to the fundamental legislative and procedural uncertainty.[41] Individual MS action cannot tackle this trans-border problem; hence, the Union’s action may be due. Other grounds for action listed in Article 4(2)[42] do not seem applicable to AI regulation or are also covered by Article 114.[43]
2.4 Advantages and disadvantages of approximation
Having established the competence to act, it is important to consider the benefits and drawbacks of possible EU action. The benefits of approximation range across fields. The processes of harmonisation or unification facilitate commercial exchanges,[44] boost innovation, and reduce uncertainty related to the costs of operation and the introduction of new products.[45] They also aid in achieving the normative goals of the EU, which include reaching a political, economic and legal acquis built on democracy, human rights and the rule of law.[46] Furthermore, harmonisation is said to improve internationalisation, prevent conflicts and enhance the understanding of standards.[47] The ability to cover so many spheres at once seems to provide a rich incentive for legislative action.
Conversely, even weak harmonisation may be met with the disapproval of the MS, given that directives and regulations often impose a higher (though sometimes lower) degree of protection than MS laws, possibly creating complications in adaptation and acceptance among the Union’s subjects. Problems with the transparency of the legislative procedure at the EU level and the known influence of external stakeholders might further create room for doubt and disagreement. Approximation may also bring considerable costs, in time, public funds and otherwise, to both the Union’s organs and the MS.[48]
3. REGULATION OF LIABILITY
We can differentiate between three major types of liability that could apply to AI. The advantages and disadvantages of producer, operator and AI’s own (algorithm) liability will be explored to determine which option could be the most suitable for the purposes of potential EU approximation or MS recommendation.
3.1 Producer Liability
Producer liability is typically defined as obligations, responsibilities or debts of manufacturers for their products and services.[49]
Producer liability is used in the PLD, where manufacturers are strictly liable for damage caused by a defect in their products.[50] As discussed above,[51] however, this framework, unless expanded upon, may prove insufficient for the purposes of AI regulation.
To those unconcerned with AI, producer liability might be the most obvious solution. Importantly, however, what creates a complication here is the ‘black box effect’ – a phenomenon which prevents humans from learning how AI decisions are made. The technology is programmed to be self-standing and to develop its own results, yet how this is done, and to what extent it is based on primary human input, typically cannot be checked or controlled.[52] Knowing that the produced output may be in no way linked to the actions (or omissions) of the manufacturer puts in question the fairness of establishing the producer’s liability in this context. There are ways to track the activity which resulted in damage,[53] but the accuracy of such methods in all situations may be questioned for self-developing, strong AI.
Despite that, in the somewhat controversial Singaporean case B2C2 v Quoine,[54] it was decided that algorithms can only take decisions which they are programmed to take, pointing to producer liability. This contention has been countered by academics[55] and does not seem to be widely confirmed by case law in other countries. Nonetheless, under the contemporary UK system, somewhat uniquely, manufacturer liability is commonly asserted.[56] Furthermore, in Germany, the prevalent opinion is that the manufacturer should be liable for harm caused by AI.[57] This is rooted in the strictly product-orientated approach towards the technology,[58] which is drawn from the PLD. Such a stance may also be present in other MS, born from the absence of other solutions. Again, this appears problematic if we identify AI and its doings as the provision of services. Hence, some German commentators propose creating a system analogous to the law of the principal responsible for their auxiliaries, as codified under Section 278 of the German Civil Code.[59] This is operator liability.
3.2 Operator Liability
Operator liability is the legal responsibility of the operator of a product – not the person who built the object but the person who uses it. In the context of AI, operation may involve simply requesting output, but also providing data to the technology.[60]
A general legal rule states that the one operating a thoughtless object is its principal and can, hence, exercise their will upon the object. It has been argued that this rule ought to apply where there is no other regulation of AI.[61] This option was presented in the aforementioned EP resolution of 20 October 2020,[62] yet without the important consideration of whether AI, with its vast learning capabilities, ought to be considered ‘thoughtless’.
Operator liability was recommended by the Expert Group for most instances. Nevertheless, the proposal is somewhat lacking. For example, where shared operator responsibility is to be established, strict liability ought to be assigned to the person who exercises a higher degree of risk control over the technology.[63] Strict liability entails the imposition of liability without looking for fault, which might generate injustice for some operators. More critically, however, how this degree of control can be measured, especially when numerous subjects are considered, remains unclear. This could pose additional procedural hurdles, in particular in the format proposed by the Expert Group and the EP, which extends the notion of operators to producers as well.[64]
The intricacy of including many actors could be mitigated by introducing a reversed burden of proof.[65] Further, it must be remembered that the EC itself acknowledged that identifying the liability of the several actors involved, for instance, in a value chain may not be crucial to securing due compensation for AI victims.[66] Despite that, and paramountly, it is important from an overall public policy standpoint to provide legal certainty and to assign blame as justly and transparently as possible. Operator liability, possibly even more than other liability variants given the broadness of the term ‘operation’, relies on individual risk assessments. It might be, however, that despite the utmost care, the algorithm did what it was not supposed to, at least to the knowledge of the operator. This is related to the complications caused by the black box phenomenon, as well as the frequent inexperience of the public with novel technologies. Assigning strict legal responsibility to the, in such a case, guiltless operator might generally deter actors from utilising novel technologies – to the detriment of society at large.[67] This is not an overstatement if we consider that AI has been playing a crucial role in our lives, for example by helping the world fight the COVID-19 virus.[68]
Professor Gabriel Hallevy examined whether AI could satisfy the criteria of both actus reus and mens rea to be held criminally liable. He proposed a three-element model which considers an AI entity to be an innocent agent, for whose wrongful act or omission its programmer or operator should be held liable as the ‘perpetrator-via-another’.[69] Nevertheless, this paradigm is suitable only in cases concerning so-called weak AI; when strong AI is used, it cannot be accurate, since the technology may become a semi-innocent or a non-innocent agent, as pointed out by Lacey and Wells.[70]
Operator liability has been opted for in the United States, under the regime of liability for dangerous products. Yet, as claimed, while this system might be effectively applied in cases of personal injury caused by AI, it seems of little use to IP specialists concerned with economic copyright.[71]
This option has also been explored in Russia, based on the existing rules on a special kind of ownership – of animals, to be precise – under Article 137 of the Russian Civil Code.[72] Some Russian scholars made a remark which also ought to be considered in other jurisdictions: that such an analogy is unacceptable in certain areas of law, for example, criminal law. Applying the same rules to AI, which may reasonably be expected to be less predictable than animals, poses risks of underprotection. Moreover, as widely accepted, animals are incapable of exercising rights and obligations, which is not entirely true of AI.[73]
Interestingly, these arguments have not been raised in one of the MS, France, where the analogy to animal liability has been widely considered an appropriate solution. This might be, however, due to the unique design of the French liability laws, with the extended damage awarding system.[74]
3.3 Algorithm Liability
Another option is to give AI legal personhood and, thus, to create algorithm liability.[75] Marten Kaevats, the National Digital Advisor of the Estonian government, claims that worldwide legal analysis shows this to be the most reasonable long-term solution; and, as argued by some experts, an inevitable one in the upcoming years.[76] Such personhood would carry both obligations and rights.[77]
Nevertheless, this alternative was rejected in the Expert Group report. The publication highlights the issues with this solution but with scarcely discussed arguments, pertaining mostly to ethics and potential procedural abuse.[78] Somewhat similarly, the Robotics Open Letter to the EC, which currently has 285 signatures from legal and technological practitioners, disputes this option, relying primarily on three contentions: that effective legal solutions can be guaranteed without AI legal personhood, that the capabilities of the technology are being overstated, and that, ethically and legally, establishing such personhood is inappropriate because AI is not human.[79] These conclusions ought to be verified.
The Expert Group’s report provides a voice of support for the argument of the existing systems’ alleged sufficiency. Nonetheless, as mentioned before,[80] the current situation does not seem to reflect this. As much as the existing systems may sufficiently cover the AI liability of a single actor,[81] such simplistic situations where only one person is involved are rare.[82] Moreover, it must be considered whether the contemporary arrangements have been hampering innovation, as feared by the EU. The concern is not unfounded, looking at the example of the technological limbo present in Australia, where strong AI is not used because there is no appropriate legislation regulating it.[83] These examples illustrate that the conclusions of the Expert Group report and the Open Letter – that the existing systems of liability are sufficient and appropriate – are incorrect.
Regarding the second point, Professor Shawn Bayern has shown that AI would be able to gain control of a limited liability company under current US company law.[84] Moreover, in Finland, AI has secured a seat on a company board, showing that it is legally capable of control.[85] This proves not only that AI conducts actions akin to those available to people by law, but also that it surpasses our expectations.
Lastly, the point on the unethicality of granting AI legal personhood may also be undermined. As with the Expert Group report, the ethical implications are not explained in detail by the Open Letter. The conclusions are rather reduced to highlighting the unethicality of assigning to non-humans rights similar to those of humans. Nevertheless, one must realise that relying on the mere superiority of humanity over any other being is not a viable argument in the face of such advanced technology, which, additionally, tends to express a degree of complexity of thought and action similar to that of people. As stated by Chesterman, failing to recognise the complexity and, hence, advancement of technology may only illustrate the limits of our perception.[86]
One must note the benefits which seem to be overlooked in the contemporary EU discussion. Firstly, the flexibility offered by introducing a separate personhood would fill the possible ‘AI gaps’ in different areas of law. Legal personality will guarantee that each field of law is allowed the freedom to assess the legal issues posed by AI within its own framework, ensuring the accuracy and effectiveness of the actions undertaken. It will also provide for proximity; that is, it will bring the actual legal actors closer to each other, creating more clarity about who is legally responsible, when, and for what.[87] Most of all, however, the creation of algorithm liability will allow for coverage of, and adaptation to, all forms of AI, ensuring that any novel alteration to the existing technology will not create new legal blind spots. It could also remove the need to assign strict liability, which is said to disincentivise the use of the technology. Certainly, granting legal personhood is not an easy solution and could initially cause much legal havoc, taking into account how many existing arrangements and perceptions would have to be adjusted or altered. This shows that if this option is opted for, approximation at the Union level could be required, so as not to create additional procedural problems among the MS.
Importantly, it is claimed that for this proposal to have methodological value and sense, AI ought to be deemed sufficiently independent from humans; it should also have the ability to exercise rights and obligations, and the whole arrangement needs to be advantageous to all stakeholders.[88] These aspects may be difficult to ascertain, given the lack of transparency caused by the black box effect and the degree of subjectivity of the latter point. Nevertheless, academics agree that AI is capable of exercising rights.[89] Furthermore, Russian scholars compared the practice of granting legal personality to AI with awarding it to corporations: it is ascertained without determining whether a company has free will or whether it can act purposefully. Hence, as claimed, this element is not crucial.[90]
The overall problem with the corporate liability analogy, however, is rooted in the fact that legal entities are only criminally liable if the investigated illegal action is performed on behalf of the company by its employee or manager. The actions of AI might not be conveyed in such a manner. Since this point has also not been extensively explored by the scholars adduced above, it might illustrate that this sort of argument does not hold if what is taken from the example is inspiration for a regulatory set-up, not blind systematic replication.
Last but not least, according to the Expert Group report[91] and academics,[92] there might be little use in giving AI legal personality if the technology is incapable of compensating for the damage caused. An alleviating solution is the imposition of mandatory insurance upon operators and/or producers,[93] or simply a duty to compensate in certain situations, even where the fault lies with the AI. As much as this option could be criticised for its potential to create procedural difficulties in establishing who ought to answer for particular AI behaviour, this does not have to be strenuous if concrete standards of assessment are developed. These are needed in contemporary producer/operator liability cases either way. The implementation of a reversed burden of proof could be one of the proposed standards. Above all, the accurate assignment of fault is expected by society at large.[94]
4. RECOMMENDATIONS
Questions: What kind of solutions are needed for fair and efficient establishment of liability for the actions of Artificial Intelligence within the EU? Is approximation of such laws desirable?
Recommendation: It is suggested that the EC develop a regulation in which AI is given legal personhood.
Except for some products and activities, EU law lacks provisions providing uniform rules on liability for accidents caused by the operation of potentially dangerous things, including AI.[95] A strong argument for regulation, in general, is the rather likely temporality of the current inaction. The contributions of Coase and Shavell remind us that the purpose of liability rules is to strike a balance between protection and the encouragement of development.[96] It is claimed, for example by the Expert Group, that the existing accountability regimes, or even their absence, might suffice, but the fact that this might well not be the case should also be stressed. The inhibition of development is to be considered too great a risk, given the EU’s pro-innovation policy.
Both producer and operator liability systems frequently rely on product liability regimes, applying strict liability or negligence. The imposition of strict liability may negatively interrupt the pursuit of innovation; standards of negligence might introduce problems of proving who is in fact liable, taking into consideration the black box effect. Adapting the existing regimes to the nuances of AI is unlikely to foresee the emerging problems of uncertainty and inconsistency for all the parties involved.
Therefore, in light of what has been explored above, it is recommended that the EC opt for a separate legal personality for AI. This solution could ease the adaptation of different fields of law to technological advancement. It could also prevent the emergence of unexpected legal faults should innovation develop in an unforeseen direction, and it would provide more certainty and clarity about the roles of different actors. Many criminal lawyers stress the need to establish a legal personality for AI, simply because of the severity of the criminal implications its absence creates.[97]
It should be noted that holding an actual wrongdoer liable, without the bonds or shortcuts of strict liability, may already improve the societal outlook on using novel technologies and does not have to simultaneously introduce underprotection. The introduction of AI personhood by no means shifts the legal focus from the detectable faults of human actors. The present solutions, under the PLD or national laws, are not to be neglected in the situations that they regulate.
The clear drawback of this option, however, lies in AI’s incapability to pay damages, at least in most cases. Nevertheless, the proposal to introduce mandatory AI insurance and payment schemes could alleviate this problem.
Ultimately, approximation brings more benefits to the Union as a whole, and it allows for uniform tackling of the situation, which, given the scale of the danger, could be assessed as more relevant than the concerns of costs or mere MS disapproval. As of now, there is no uniformity across the MS, be it in regard to the approach to AI or to the sectoral tackling of the problems related thereto. This legislative unevenness and uncertainty may hamper exchange on the internal market and the drive for innovation across the Union. It must be remembered that AI is a continuously developing technology; this characteristic may also generate challenges for the anticipation and prevention of risks. By introducing a unified regime, the EU could ensure that an appropriate legislative system is provided for the internal market. Crucially, the Union has the power to legislate in this situation.
Hence, taking the definitions provided by the EU, and having in mind the principle of proportionality, it can be deduced that a regulation would be more fit for the purpose of regulating AI than a directive. It can be created on the basis of the powers given to the Union via Article 114 TFEU. Experience shows that regulations generate wide-ranging results and, if their adoption is appropriately incentivised, work effectively.[98] The dangerous flexibility of directives should be considered too grave to pursue, and possibly futile, considering that the proposal is to establish a functioning legal personhood. Moreover, the lack of strict uniformity in directives may hamper the development of the internal market at the pace required to stay globally competitive.[99] Regulations ensure that the law is implemented just as envisioned by the legislator.
This recommendation opens the door to a wider exploration of what exact provisions should be included in the regulation. As adduced above,[100] AI ought to receive both rights and obligations, which could be delineated in the document, together with a clarification of what AI is, when AI liability would be applicable, and how other frameworks would fit into an environment with a new legal actor. An elucidation of the rules on the payment of damages is likewise important to include.
5. CONCLUSION
This report recommends the institution of legal personhood for AI through an EU regulation. Such a solution permits the establishment of comprehensive, cross-area and cross-topic liability where it is due. The experience of selected MS and non-member states, as well as the opinions of scholars from different legal branches, illustrates that this approach, despite some flaws, offers the most reliable solution to the difficulty of securing damages for AI-related incidents and to the fear surrounding innovation.
6. BIBLIOGRAPHY
Case law
B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3.
June Rodgers v. Christopher Christie, No. 19-2616 (3d Cir. 2020).
Legislation
Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C 326.
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (Product Liability Directive) [1985] OJ L 210.
German Civil Code in the version promulgated on 2 January 2002 (Federal Law Gazette [Bundesgesetzblatt] I page 42, 2909; 2003 I page 738), last amended by Article 4 para. 5 of the Act of 1 October 2013 (Federal Law Gazette I page 3719).
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
The Civil Code of the Russian Federation 1995, last amended December 6, 8, 2011.
Publications
Commission Staff Working Document Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products Accompanying the document Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of the Council Directive on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), Brussels, 7.5.2018 SWD(2018) 157 final.
Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee, and the Committee of the Regions, A single market for intellectual property rights boosting creativity and innovation to provide economic growth, high quality jobs and first class products and services in Europe, Brussels, COM/2011/0287 final.
Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe, Brussels, 25.04.2018 COM(2018) 237 final.
Danila Kirpichnikov, Albert Pavlyuk, Yulia Grebneva and Hilary Okagbue, ‘Criminal Liability of the Artificial Intelligence’ (2020) E3S Web of Conferences 159, 04025.
European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) calling for establishing ‘electronic personality’ rules.
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)).
Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21.4.2021 COM(2021) 206 final.
White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final.
Reports
Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’, Policy Department for Citizens’ Rights and Constitutional Affairs Directorate-General for Internal Policies PE 621.926 – July 2020.
Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union.
Niels Thygesen et al., ‘European Commission Annual Report 2018’, European Fiscal Board 28 September 2018.
Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee – Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics Brussels, 19.2.2020 COM(2020) 64 final.
Books
Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh, An Introduction to Ethics in Robotics and AI (Springer 2020).
Fernanda Torre, Liselotte Engstam and Robin Teigland, AI Leadership for Boards: The Future of Corporate Governance (Digoshen by Innovisa 2020).
Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019).
Richard E Neapolitan and Xia Jiang, Artificial Intelligence: With an Introduction to Machine Learning, Second Edition (CRC Press 2018).
Thomas H Cormen, Charles E Leiserson, Ronald L Rivest and Clifford Stein, Introduction To Algorithms (MIT Press, 2001).
Yaniv Benhamou and Justine Ferland, ‘Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages’ in Pina D’Agostino, Carole Piovesan and Aviv Gaon (eds.) Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (Thomson Reuters Canada 2020).
Dictionaries
‘harmonization of laws’ (Oxford Reference) <https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095921694>.
‘producer liability’ (GEMET) <https://www.eionet.europa.eu/gemet/en/concept/6658>.
The Editors of Encyclopaedia Britannica, ‘Manufacturer’s liability’ (Britannica) <https://www.britannica.com/topic/manufacturers-liability>.
Journal Articles
A A Vasilyev, Zh I Ibragimov and E V Gubernatorova, ‘The Russian draft bill of “the Grishin Law” in terms of improving the legal regulation of relations in the field of robotics: critical analysis’ (2019) 1333 Journal of Physics: Conference Series 1.
Gabriel Hallevy ‘The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control’ (2010) 4 Akron Intellectual Property Journal 171.
Hannah R Sullivan and Scott J Schweikart, ‘Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?’ (2019) 21 AMA Journal of Ethics 160.
Helmut Koziol, ‘Harmonising Tort Law in the European Union: Advantages and Difficulties’ (2013) 1 ELTE Law Journal 73.
Herbert Zech, ‘Liability for AI: public policy considerations’ (2021) 22 ERA Forum, 147.
Katarzyna Jagodzinska, ‘The Implications of Harmonization of European Contract Law on International Business Practice’ (2014) 3 International Law Research 16.
Marta Infantino and Weiwei Wang, ‘Algorithmic Torts: A Prospective Comparative Overview’ (2019) 29 Transnational Law & Contemporary Problems 1.
Nora Osmani, ‘The Complexity of Criminal Liability of AI Systems’ (2020) 14 Masaryk University Journal of Law and Technology 53.
Paulius Čerka, Jurgita Grigienė and Gintarė Sirbikytė, ‘Liability for damages caused by artificial intelligence’ (2015) 31(3) Computer Law and Security Review <https://www.sciencedirect.com/science/article/abs/pii/S026736491500062X>.
Piet Jan Slot, ‘Harmonization’ (1996) 21 European Law Review 378.
Priyanka Majumdar, Dr. Bindu Ronald and Dr. Rupal Rautdesai, ‘Artificial Intelligence, Legal Personhood and Determination of Criminal Liability’ (2019) 6 Journal of Critical Reviews 323.
Shawn Bayern, ‘The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems’ (2015) 19 Stanford Technology Law Review 93.
Shuangge Wen, ‘Less is More – A Critical View of Further EU Action Towards a Harmonized Corporate Governance Framework in the Wake of the Crisis’ (2013) 12 Washington University Global Studies Law Review 41.
Simon Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69 International & Comparative Law Quarterly 819.
Stephen Weatherill, ‘The Limits of Legislative Harmonization Ten Years after Tobacco Advertising: How the Court’s Case Law has become a “Drafting Guide”’ (2015) 12 German Law Journal 827.
Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology 890.
Other Articles
‘AI, Machine Learning & Big Data 2020 | Germany’ (Global Legal Insights) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/germany>.
Anthony Borgese, Jonathan Thompson and Alice Scamps Goodman, ‘AI, Machine Learning & Big Data 2020 | Australia’ (Global Legal Insights) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/australia>.
‘Areas of EU action’ (European Commission) <https://ec.europa.eu/info/about-european-commission/what-european-commission-does/law/areas-eu-action_en>.
Bernt Hugenholtz, ‘Is Harmonization a Good Thing? The Case of the Copyright Acquis’ (Ivir, 2013) <https://www.ivir.nl/publicaties/download/Is_harmonization_a_good_thing.pdf>.
Carmen Niethammer, ‘AI Bias Could Put Women’s Lives At Risk – A Challenge For Regulators’ (Forbes, 2 May 2020) <https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=7c3ae018534f>.
Charles H. Moellenberg Jr., Robert Kantner, David C. Kiernan and Jeffrey Jones, ‘United States: Mitigating Product Liability For Artificial Intelligence’ (Mondaq, 22 March 2018) <https://www.mondaq.com/unitedstates/product-liability-safety/685294/mitigating-product-liability-for-artificial-intelligence>.
Christopher Götz, ‘German Data Ethics Commission’s Report on Data and Algorithmic Systems’ (Simmons and Simmons, 2 March 2020) <https://www.simmons-simmons.com/en/publications/ck7c1j0rr11cb0916xk8vm8kl/german-data-ethics-commission-s-report-on-data-and-algorithmic-systems>.
‘Estonia: Government Issues Artificial Intelligence Report’ (Library of Congress, 31 July 2019) <https://www.loc.gov/law/foreign-news/article/estonia-government-issues-artificial-intelligence-report/>.
Jan Vranken, ‘Exciting Times for Legal Scholarship’ (Law and Method, 2012) <https://www.bjutijdschriften.nl/tijdschrift/lawandmethod/2012/2/ReM_2212-2508_2012_002_002_004/fullscreen>.
Jeremy Khan, ‘Why do so few businesses see financial gains from using A.I.?’ (Fortune, 20 October 2020) <https://fortune.com/2020/10/20/why-do-so-few-businesses-see-financial-gains-from-using-a-i/>.
Katarzyna Szczudlik, ‘Poland: Liability For Copyright Infringement By AI’ (Mondaq, 10 April 2018) <https://www.mondaq.com/copyright/690144/liability-for-copyright-infringement-by-ai>.
Lee Gluyas and Stefanie Day, ‘Who is liable when AI fails to perform?’ (CMS, 2018) <https://cms.law/en/pol/publication/artificial-intelligence-who-is-liable-when-ai-fails-to-perform>.
Maksim Karliuk, ‘The Ethical and Legal Issues of Artificial Intelligence’ (Russian International Affairs Council, 23 April 2018) <https://russiancouncil.ru/en/analytics-and-comments/analytics/the-ethical-and-legal-issues-of-artificial-intelligence/>.
Marten Kaevats, ‘Estonia considers a ’kratt law’ to legalise Artificial Intelligence (AI)’ (Medium, 25 September 2017) <https://medium.com/e-residency-blog/estonia-starts-public-discussion-legalising-ai-166cb8e34596>.
Matilda Claussén-Karlsson, ‘Artificial Intelligence and the External Element of the Crime: An Analysis of the Liability Problem’ (Orebro Universitet, 2017) <https://www.diva-portal.org/smash/get/diva2:1115160/FULLTEXT01.pdf>.
Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘EU Liability Rules for the Age of Artificial Intelligence’ (Centre of Regulation in Europe, March 2021) <https://cerre.eu/wp-content/uploads/2021/03/CERRE_EU-liability-rules-for-the-age-of-Artificial-Intelligence_March2021.pdf>.
‘More intergovernmental cooperation is needed using Artificial Intelligence to fight Covid-19 Coronavirus’ (Council of Europe) <https://www.coe.int/en/web/portal/covid-19-artificial-intelligence>.
Moritz Maaßen and Sibylle Schumacher, ‘Product liability and recalls in Germany’ (Pinsent Masons, 10 Jan 2019) <https://www.pinsentmasons.com/out-law/guides/product-liability-and-recalls-in-germany>.
Omri Rachum-Twaig, ‘Product Liability in the Age of Connected Devices and Artificial Intelligence’ (The Federmann Cyber Security Research Center – Cyber Law Program, 2020) <https://csrcl.huji.ac.il/book/product-liability-age-connected-devices-and-artificial-intelligence>.
‘Open Letter to the European Commission Artificial Intelligence and Robotics’ (Robotics Open Letter) <http://www.robotics-openletter.eu>.
Pedro Miguel Freitas, Francisco Andrade and Paulo Novais, ‘Criminal Liability of Autonomous Agents: from the unthinkable to the plausible’ (Universidade do Minho) <https://core.ac.uk/download/pdf/55634657.pdf>.
‘Regulations, Directives and other acts’ (European Union) <https://europa.eu/european-union/law/legal-acts_en>.
‘Response to the EU consultation on artificial intelligence liability and insurance for personal injury and death damages caused by AI artefacts/systems’ (The Pan-European Organisation of Personal Injury Lawyers, September 2020) <https://www.peopil.com/document/3692/download>.
Robert Pearl, ‘New Study Blames Algorithm For Racial Discrimination, Ignores Physician Bias’ (Forbes, 11 November 2019) <https://www.forbes.com/sites/robertpearl/2019/11/11/algorithm/?sh=6f46fd087800>.
Sandra Tubert and Laura Ziegler, ‘France: Artificial Intelligence Comparative Guide’ (Mondaq, 21 April 2021) <https://www.mondaq.com/france/technology/1059760/artificial-intelligence-comparative-guide>.
‘Strong AI’ (IBM, 31 August 2020) <https://www.ibm.com/cloud/learn/strong-ai> accessed 27 April 2021.
‘Two years of the GDPR: Questions and answers’ (European Commission) <https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_1166>.
‘Using artificial intelligence to help combat COVID-19’ (OECD, 23 April 2020) <https://www.oecd.org/coronavirus/policy-responses/using-artificial-intelligence-to-help-combat-covid-19-ae4c5c21/>.
Vagelis Papakonstantinou and Paul De Hert, ‘Refusing to award legal personality to AI: Why the European Parliament got it wrong’ (European Law Blog, 25 November 2020) <https://europeanlawblog.eu/2020/11/25/refusing-to-award-legal-personality-to-ai-why-the-european-parliament-got-it-wrong/>.
[1] Richard E Neapolitan and Xia Jiang, Artificial Intelligence: With an Introduction to Machine Learning, Second Edition (CRC Press 2018) 3.
[2] Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe, Brussels, 25.04.2018 COM(2018) 237 final.
[3] Matilda Claussén-Karlsson, ‘Artificial Intelligence and the External Element of the Crime: An Analysis of the Liability Problem’ (Orebro Universitet, 2017) <https://www.diva-portal.org/smash/get/diva2:1115160/FULLTEXT01.pdf> accessed 24 April 2021.
[4] ‘Strong AI’ (IBM, 31 August 2020) <https://www.ibm.com/cloud/learn/strong-ai> accessed 27 April 2021.
[5] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest and Clifford Stein, Introduction To Algorithms (MIT Press, 2001) 5.
[6] Paulius Čerka, Jurgita Grigienė and Gintarė Sirbikytė, ‘Liability for damages caused by artificial intelligence’ (2015) 31(3) Computer Law and Security Review <https://www.sciencedirect.com/science/article/abs/pii/S026736491500062X> accessed 18 April 2021.
[7] Robert Pearl, ‘New Study Blames Algorithm For Racial Discrimination, Ignores Physician Bias’ (Forbes, 11 November 2019) <https://www.forbes.com/sites/robertpearl/2019/11/11/algorithm/?sh=6f46fd087800> accessed 5 May 2021.
[8] With a good summary provided in, for example: Carmen Niethammer, ‘AI Bias Could Put Women’s Lives At Risk – A Challenge For Regulators’ (Forbes, 2 May 2020) <https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=7c3ae018534f> accessed 5 May 2021.
[9] Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee – Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics Brussels, 19.2.2020 COM(2020) 64 final, 13; White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final, 11-12, 14.
[10] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union.
[11] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 3-9.
[12] White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final, 15.
[13] Katarzyna Szczudlik, ‘Poland: Liability For Copyright Infringement By AI’ (Mondaq, 10 April 2018) <https://www.mondaq.com/copyright/690144/liability-for-copyright-infringement-by-ai> accessed 21 April 2021.
[14] Omri Rachum-Twaig, ‘Product Liability in the Age of Connected Devices and Artificial Intelligence’ (The Federmann Cyber Security Research Center – Cyber Law Program, 2020) <https://csrcl.huji.ac.il/book/product-liability-age-connected-devices-and-artificial-intelligence> accessed 18 April 2021.
[15] White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final, 15; Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, Brussels, 19.2.2020 COM(2020) 64 final, point 4.
[16] For example, in White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final, 15; or Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, Brussels, 19.2.2020 COM(2020) 64 final, point 4.
[17] Which have been said to soon be concreticised, with the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21.4.2021 COM(2021) 206 final issued in late April of 2021 being the first material step towards creating a regulatory EU framework for AI.
[18] As allowed under the definition included in Jan Vranken, ‘Exciting Times for Legal Scholarship’ (Law and Method, 2012) <https://www.bjutijdschriften.nl/tijdschrift/lawandmethod/2012/2/ReM_2212-2508_2012_002_002_004/fullscreen> accessed 23 March 2021.
[19] Here it is relevant to mention that despite China’s grand development in the direction of AI, the state’s legislative efforts on the front of AI liability are rather scarce: Marta Infantino and Weiwei Wang, ‘Algorithmic Torts: A Prospective Comparative Overview’ (2019) 29 Transnational Law & Contemporary Problems 1, 32.
[20] Even though these terms are frequently used interchangeably, even by the Union, it is paramount to highlight that harmonisation does not equal unification, closely related though the two concepts are. This is highlighted further in the text.
[21] ‘harmonization of laws’ (Oxford Reference) <https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095921694> accessed 10 May 2021.
[22] Article 288 of the Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C 326 states that a directive is binding on those to whom it is addressed and the parties have a duty to achieve the goals set out by it. The MS authorities, however, are given a free hand in relation to how to reach the goal.
[23] Shuangge Wen, ‘Less is More – A Critical View of Further EU Action Towards a Harmonized Corporate Governance Framework in the Wake of the Crisis’ (2013) 12 Washington University Global Studies Law Review 41.
[24] Piet Jan Slot, ‘Harmonization’ (1996) 21 European Law Review 378, 379; Article 288 of the Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C 326 states that regulations have general application; this means that they are binding in their entirety and directly applicable across the Union.
[25] Bernt Hugenholtz, ‘Is Harmonization a Good Thing? The Case of the Copyright Acquis’ (Ivir, 2013) <https://www.ivir.nl/publicaties/download/Is_harmonization_a_good_thing.pdf> accessed 26 April 2021; an example discussed in Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 16.
[26] ‘Regulations, Directives and other acts’ (European Union) <https://europa.eu/european-union/law/legal-acts_en> accessed 26 April 2021.
[27] ‘Areas of EU action’ (European Commission) <https://ec.europa.eu/info/about-european-commission/what-european-commission-does/law/areas-eu-action_en> accessed 10 May 2021.
[28] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (Product Liability Directive) [1985] OJ L 210.
[29] A rare example being Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘EU Liability Rules for the Age of Artificial Intelligence’ (Centre of Regulation in Europe, March 2021) <https://cerre.eu/wp-content/uploads/2021/03/CERRE_EU-liability-rules-for-the-age-of-Artificial-Intelligence_March2021.pdf> accessed 10 April 2021.
[30] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 5.
[31] Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’, Policy Department for Citizens’ Rights and Constitutional Affairs Directorate-General for Internal Policies PE 621.926 – July 2020.
[32] Commission Staff Working Document Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products Accompanying the document Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of the Council Directive on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), Brussels, 7.5.2018 SWD(2018) 157 final.
[33] With the question first being addressed in 2020, in June Rodgers v. Christopher Christie, No. 19-2616 (3d Cir. 2020).
[34] Charles H. Moellenberg Jr., Robert Kantner, David C. Kiernan and Jeffrey Jones, ‘United States: Mitigating Product Liability For Artificial Intelligence’ (Mondaq, 22 March 2018) <https://www.mondaq.com/unitedstates/product-liability-safety/685294/mitigating-product-liability-for-artificial-intelligence> accessed 22 April 2021.
[35] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)), ‘Introduction’.
[36] The EU has authority only where conferred upon it by the Treaties, as supervised by the MS. This is one of the three fundamental principles of the EU.
[37] Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C 326.
[38] These being: protection and improvement of human health, industry, culture, tourism, education, youth, sport and vocational training, civil protection and administrative cooperation.
[39] Article 114 TFEU is a commonly used basis for action for the Union, with examples of the Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (Product Liability Directive) [1985] OJ L 210 or Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
[40] Stephen Weatherill, ‘The Limits of Legislative Harmonization Ten Years after Tobacco Advertising: How the Court’s Case Law has become a “Drafting Guide”’ (2015) 12 German Law Journal 827, 830.
[41] Which has been emphasised in European Parliament resolution of 20 October 2020 with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) ‘General principles concerning the development of robotics and artificial intelligence for civil use’ and ‘Introduction’; and White Paper On Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020 COM(2020) 65 final, 15.
[42] (b) social policy, for the aspects defined in this Treaty; (c) economic, social and territorial cohesion; (d) agriculture and fisheries, excluding the conservation of marine biological resources; (e) environment; (f) consumer protection; (g) transport; (h) trans-European networks; (i) energy; (j) area of freedom, security and justice; (k) common safety concerns in public health matters, for the aspects defined in this Treaty.
[43] Such as consumer protection, for example, as additionally highlighted in Article 169(2)(a) TFEU.
[44] Helmut Koziol, ‘Harmonising Tort Law in the European Union: Advantages and Difficulties’ (2013) 1 ELTE Law Journal 73, 73.
[45] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee, and the Committee of the Regions, A single market for intellectual property rights boosting creativity and innovation to provide economic growth, high quality jobs and first class products and services in Europe, Brussels, COM/2011/0287 final, ‘The answer is in the Single Market’.
[46] Bernt Hugenholtz, ‘Is Harmonization a Good Thing? The Case of the Copyright Acquis’ (Ivir, 2013) <https://www.ivir.nl/publicaties/download/Is_harmonization_a_good_thing.pdf> accessed 10 April 2021.
[47] Katarzyna Jagodzinska, ‘The Implications of Harmonization of European Contract Law on International Business Practice’ (2014) 3 International Law Research 16, 16.
[48] Bernt Hugenholtz, ‘Is Harmonization a Good Thing? The Case of the Copyright Acquis’ (Ivir, 2013) <https://www.ivir.nl/publicaties/download/Is_harmonization_a_good_thing.pdf> accessed 26 April 2021.
[49] For example, in ‘producer liability’ (GEMET) <https://www.eionet.europa.eu/gemet/en/concept/6658> accessed 5 May 2021 or The Editors of Encyclopaedia Britannica, ‘Manufacturer’s liability’ (Britannica) <https://www.britannica.com/topic/manufacturers-liability> accessed 5 May 2021.
[50] Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (Product Liability Directive) [1985] OJ L 210, Preamble and Article 1.
[51] Pages 6-7 of this report.
[52] Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology 890, 938; Hannah R Sullivan and Scott J Schweikart, ‘Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?’ (2019) 21 AMA Journal of Ethics 160, 160-161.
[53] Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh, An Introduction to Ethics in Robotics and AI (Springer 2020) 44.
[54] B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(l) 3.
[55] Jeremy Khan, ‘Why do so few businesses see financial gains from using A.I.?’ (Fortune, 20 October 2020) <https://fortune.com/2020/10/20/why-do-so-few-businesses-see-financial-gains-from-using-a-i/> accessed 18 February 2021.
[56] Lee Gluyas and Stefanie Day, ‘Who is liable when AI fails to perform?’ (CMS, 2018) <https://cms.law/en/pol/publication/artificial-intelligence-who-is-liable-when-ai-fails-to-perform> accessed 21 April 2021.
[57] ‘AI, Machine Learning & Big Data 2020 | Germany’ (Global Legal Insights) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/germany> accessed 24 April 2021.
[58] Moritz Maaßen and Sibylle Schumacher, ‘Product liability and recalls in Germany’ (Pinsent Masons, 10 Jan 2019) <https://www.pinsentmasons.com/out-law/guides/product-liability-and-recalls-in-germany> accessed 6 May 2021.
[59] Christopher Götz, ‘German Data Ethics Commission’s Report on Data and Algorithmic Systems’ (Simmons and Simmons, 2 March 2020) <https://www.simmons-simmons.com/en/publications/ck7c1j0rr11cb0916xk8vm8kl/german-data-ethics-commission-s-report-on-data-and-algorithmic-systems> accessed 24 April 2021; German Civil Code in the version promulgated on 2 January 2002 (Federal Law Gazette [Bundesgesetzblatt] I page 42, 2909; 2003 I page 738), last amended by Article 4 para. 5 of the Act of 1 October 2013 (Federal Law Gazette I page 3719), Section 278.
[60] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 43.
[61] Paulius Čerka, Jurgita Grigienė and Gintarė Sirbikytė, ‘Liability for damages caused by artificial intelligence’ (2015) 31(3) Computer Law and Security Review <https://www.sciencedirect.com/science/article/abs/pii/S026736491500062X> accessed 18 April 2021.
[62] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) ‘Liability and Artificial Intelligence’.
[63] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 39-4.
[64] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 39-4; and European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence, points 11-13.
[65] Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019) 241.
[66] Niels Thygesen et al., ‘European Commission Annual Report 2018’, European Fiscal Board 28 September 2018, 20-21.
[67] Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019) 241.
[68] As discussed in ‘More intergovernmental cooperation is needed using Artificial Intelligence to fight Covid-19 Coronavirus’ (Council of Europe) <https://www.coe.int/en/web/portal/covid-19-artificial-intelligence> accessed 9 May 2021 or ‘Using artificial intelligence to help combat COVID-19’ (OECD, 23 April 2020) <https://www.oecd.org/coronavirus/policy-responses/using-artificial-intelligence-to-help-combat-covid-19-ae4c5c21/> accessed 9 May 2021.
[69] Gabriel Hallevy ‘The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control’ (2010) 4 Akron Intellectual Property Journal 171, 177-194.
[70] Priyanka Majumdar, Dr. Bindu Ronald and Dr. Rupal Rautdesai, ‘Artificial Intelligence, Legal Personhood and Determination of Criminal Liability’ (2019) 6 Journal of Critical Reviews 323, 324.
[71] Katarzyna Szczudlik, ‘Poland: Liability For Copyright Infringement By AI’ (Mondaq, 10 April 2018) <https://www.mondaq.com/copyright/690144/liability-for-copyright-infringement-by-ai> accessed 21 April 2021.
[72] The Civil Code of the Russian Federation 1995, last amended December 6, 8, 2011, Article 137.
[73] Maksim Karliuk, ‘The Ethical and Legal Issues of Artificial Intelligence’ (Russian International Affairs Council, 23 April 2018) <https://russiancouncil.ru/en/analytics-and-comments/analytics/the-ethical-and-legal-issues-of-artificial-intelligence/> accessed 23 April 2021.
[74] Sandra Tubert and Laura Ziegler, ‘France: Artificial Intelligence Comparative Guide’ (Mondaq, 21 April 2021) <https://www.mondaq.com/france/technology/1059760/artificial-intelligence-comparative-guide> accessed 24 April 2021.
[75] This term is typically used to denote AI’s own liability, presumably because the algorithm is the acting essence of the technology.
[76] Marten Kaevats, ‘Estonia considers a “kratt law” to legalise Artificial Intelligence (AI)’ (Medium, 25 September 2017) <https://medium.com/e-residency-blog/estonia-starts-public-discussion-legalising-ai-166cb8e34596> accessed 26 April 2021.
[77] ‘Estonia: Government Issues Artificial Intelligence Report’ (Library of Congress, 31 July 2019) <https://www.loc.gov/law/foreign-news/article/estonia-government-issues-artificial-intelligence-report/> accessed 11 April 2021.
[78] In response to the European Parliament’s call for establishing ‘electronic personality’ rules in European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)): Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 37-38.
[79] ‘Open Letter to the European Commission Artificial Intelligence and Robotics’ (Robotics Open Letter) <http://www.robotics-openletter.eu> accessed 27 April 2021.
[80] Page 4 of this report.
[81] Which the Expert Group Report succeeds in illustrating, for example in Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 35.
[82] Yaniv Benhamou and Justine Ferland, ‘Artificial Intelligence & Damages: Assessing Liability and Calculating the Damages’ in Pina D’Agostino, Carole Piovesan and Aviv Gaon (eds.) Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (Thomson Reuters Canada 2020) 2.
[83] Anthony Borgese, Jonathan Thompson and Alice Scamps Goodman, ‘AI, Machine Learning & Big Data 2020 | Australia’ (Global Legal Insights) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/australia> accessed 24 April 2021.
[84] Shawn Bayern, ‘The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems’ (2015) 19 Stanford Technology Law Review 93, 94.
[85] Fernanda Torre, Liselotte Engstam and Robin Teigland, AI Leadership for Boards: The Future of Corporate Governance (Digoshen by Innovisa 2020) 61.
[86] Simon Chesterman, ‘Artificial Intelligence and the Limits of Legal Personality’ (2020) 69 International & Comparative Law Quarterly 819, 843.
[87] Vagelis Papakonstantinou and Paul De Hert, ‘Refusing to award legal personality to AI: Why the European Parliament got it wrong’ (European Law Blog, 25 November 2020) <https://europeanlawblog.eu/2020/11/25/refusing-to-award-legal-personality-to-ai-why-the-european-parliament-got-it-wrong/> accessed 11 April 2021.
[88] A A Vasilyev, Zh I Ibragimov and E V Gubernatorova, ‘The Russian draft bill of “the Grishin Law” in terms of improving the legal regulation of relations in the field of robotics: critical analysis’ (2019) 1333 Journal of Physics: Conference Series 1, 4.
[89] Maksim Karliuk, ‘The Ethical and Legal Issues of Artificial Intelligence’ (Russian International Affairs Council, 23 April 2018) <https://russiancouncil.ru/en/analytics-and-comments/analytics/the-ethical-and-legal-issues-of-artificial-intelligence/> accessed 23 April 2021; and Shawn Bayern, ‘The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems’ (2015) 19 Stanford Technology Law Review 93, 94.
[90] Maksim Karliuk, ‘The Ethical and Legal Issues of Artificial Intelligence’ (Russian International Affairs Council, 23 April 2018) <https://russiancouncil.ru/en/analytics-and-comments/analytics/the-ethical-and-legal-issues-of-artificial-intelligence/> accessed 23 April 2021.
[91] Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 38-39.
[92] As highlighted in, for example: A A Vasilyev, Zh I Ibragimov and E V Gubernatorova, ‘The Russian draft bill of “the Grishin Law” in terms of improving the legal regulation of relations in the field of robotics: critical analysis’ (2019) 1333 Journal of Physics: Conference Series 1, 3; or Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019) 242.
[93] For example, in Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019) 244; Herbert Zech, ‘Liability for AI: public policy considerations’ (2021) 22 ERA Forum, 147; or in the Expert Group report itself: Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for Artificial Intelligence and other emerging digital technologies’ (2019) Publications Office of the European Union, 62.
[94] A A Vasilyev, Zh I Ibragimov and E V Gubernatorova, ‘The Russian draft bill of “the Grishin Law” in terms of improving the legal regulation of relations in the field of robotics: critical analysis’ (2019) 1333 Journal of Physics: Conference Series 1, 3.
[95] ‘Response to the EU consultation on artificial intelligence liability and insurance for personal injury and death damages caused by AI artefacts/systems’ (The Pan-European Organisation of Personal Injury Lawyers, September 2020) <https://www.peopil.com/document/3692/download> accessed 18 April 2021.
[96] Karen Yeung and Martin Lodge, Algorithmic Regulation (Oxford University Press 2019) 241.
[97] For example, in Gabriel Hallevy, ‘The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control’ (2010) 4 Akron Intellectual Property Journal 171, 199-201; Pedro Miguel Freitas, Francisco Andrade and Paulo Novais, ‘Criminal Liability of Autonomous Agents: from the unthinkable to the plausible’ (Universidade do Minho) <https://core.ac.uk/download/pdf/55634657.pdf> accessed 5 May 2021; Nora Osmani, ‘The Complexity of Criminal Liability of AI Systems’ (2020) 14 Masaryk University Journal of Law and Technology 53, 75-76; Danila Kirpichnikov, Albert Pavlyuk, Yulia Grebneva and Hilary Okagbue, ‘Criminal Liability of the Artificial Intelligence’ (2020) E3S Web of Conferences 159, 04025, 8.
[98] ‘Two years of the GDPR: Questions and answers’ (European Commission) <https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_1166> accessed 27 April 2021.
[99] That is, if too many uncertainties and hurdles were to remain among the Member States and within the Union as a whole.
[100] Page 12 of this report.