
by Lex Keukens

Legal Landscape & Relevant Guidelines

Artificial Intelligence (AI) is revolutionising the way we build products. AI refers to a system that can make predictions, recommendations or decisions based on human-defined objectives.[1] By design, vast amounts of hand-written code can be replaced by (shorter) mathematical computations. On top of that, AI is able to learn from its previous decisions. Via deep learning,[2] a form of machine learning,[3] the same input does not necessarily lead to the same output, as different factors are weighed according to context. This is similar to how the human mind assesses situations. Such an approach allows for more nuanced computations and therefore more complex operations that were previously reserved for humans. Examples are manifold and include AI creating pieces of art, inventing products designed for specific needs and facilitating complex medical procedures. As Artificial Intelligence becomes more prevalent in modern products, it is increasingly important to understand the legal and regulatory challenges pertaining to it.

Currently, no AI-specific legislation exists; instead there is a cluster of soft-law instruments, such as reports and guidelines. At the international level, the OECD has published principles to promote the innovative and trustworthy development of AI.[4] Within the EU, the Commission highlights, in its communication of 25 April 2018, the strategic importance for the EU as a whole of assuming a leading role in the development and deployment of AI.[5] It voices its desire to boost the economy through wider use of AI technologies, aims to prepare for the socio-economic changes that follow from shifts in labour and education settings, and finally seeks to ensure appropriate ethical and legal frameworks for AI to develop in. To this end, the AI Alliance was set up to provide a platform for dialogue, and the High-Level Expert Group on AI (AI HLEG)[6] was set up to draft guidelines. So far the AI HLEG has published a set of ethics guidelines[7] and policy and investment recommendations.[8] Additionally, the Expert Group on Liability and New Technologies has published a report on the challenges AI poses to current liability regulation.[9]

Ethics and AI

Ethics plays a central role in discussions about AI. This is because it can give direction to the decisions that an AI must make. As increasingly many processes are automated and human intervention is reduced, it is important to ensure that the code executing certain tasks is intrinsically ethical. If this is not the case, the use of automation may lead to greater discrimination.[10]

The core ethical issues when dealing with AI relate to the design of the AI, its training data and any bias it develops itself. Already at the design phase it is important to define the goals properly, so as to steer the AI in a direction that respects fairness and reduces bias as much as possible. Next, the training data used to teach the AI is crucial, as it may contain hidden biases. This may be the case when certain groups are under- or overrepresented, when data is outdated, or when social norms and law have changed in such a way that the data is no longer applicable. For example, historical European voting data contains predominantly white men, as other groups were only granted voting rights later on. Finally, an AI system may also develop its own bias in an attempt to reach its goal. Algorithms are built by analysing data from selected parameters or attributes; in healthcare, for example, they may look at weight, age and medical history. Bias can easily infiltrate the selection process if companies put too much emphasis on certain attributes and the way they interact with other data fields.
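How a skewed dataset propagates into biased outcomes can be illustrated with a minimal sketch. The numbers and the hiring scenario below are hypothetical, and a naive frequency-based scorer stands in for a real model; the point is only that a model trained on imbalanced historical data reproduces the imbalance:

```python
# Hypothetical historical hiring records: (group, hired).
# Group "A" is overrepresented among past hires, mirroring the kind
# of skewed historical data described above.
records = [("A", True)] * 90 + [("A", False)] * 60 + \
          [("B", True)] * 10 + [("B", False)] * 40

def hire_rate(group):
    """Fraction of past applicants from `group` that were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that scores new applicants by their group's historical
# hire rate gives equally qualified candidates different scores:
print(round(hire_rate("A"), 2))  # 0.6
print(round(hire_rate("B"), 2))  # 0.2
```

The bias here comes entirely from the training data, not from any explicit rule about group membership, which is why it can be hard to spot in a real system.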

To counter the ethical risks that may arise and to encourage the development of innovative and trustworthy AI the OECD has published its Recommendation of the Council on Artificial Intelligence.[11] This Recommendation contains five principles:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;
  • It should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society;
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them;
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed;
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. 

In Section Two of its Recommendation the OECD additionally considers actions to be undertaken by states to foster an environment conducive to the development of trustworthy AI. 

At EU level the HLEG has published its Ethics Guidelines for Trustworthy AI,[12] which provides a framework to foster the deployment of lawful, ethical and robust AI. The guidelines stipulate seven requirements that AI systems should meet, namely: 

  • Human agency and oversight;
  • Technical robustness and safety;
  • Privacy and data governance; 
  • Transparency;
  • Diversity, non-discrimination and fairness;
  • Environmental and societal well-being;
  • Accountability.

Additionally, the HLEG is currently piloting an assessment list (questionnaire) with various stakeholders. The goal of this assessment list is to help developers and deployers of AI systems attain the seven requirements listed above. Results and an updated assessment list are expected in 2020.

Privacy & AI

A basic definition of privacy is the power to separate oneself, or information about oneself, in order to limit the influence others can have on our behaviour. Privacy is a fundamental human right recognised in the UN Declaration of Human Rights, the International Covenant on Civil and Political Rights and many other treaties. Privacy has traditionally been recognised as a prerequisite for the exercise of other human rights, such as freedom of expression, freedom of association and freedom of the press.

In our digital society, privacy hinges on our ability to control how our data is stored, modified and exchanged between different parties. The advent of advanced data-mining techniques in the last decade holds enormous potential, but comes with critical privacy challenges. Actors that regularly use these techniques, such as government agencies and (big) corporations, are now in a position to identify, profile and directly affect the lives of people without their consent. These privacy concerns have only been enlarged by the emergence of increasingly sophisticated AI systems. Seemingly anonymised personal data can easily be de-anonymised by AI. AI has the ability to gather, analyse and combine vast quantities of data from different sources. It also allows for tracking, monitoring and profiling people, as well as predicting their behaviour. Combined with facial recognition technology, such AI systems can be used to cast a wide net of surveillance.
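The de-anonymisation risk mentioned above often takes the form of a linkage attack: joining a supposedly anonymised dataset with a public one on shared quasi-identifiers. The records and names below are invented for illustration; a few dictionary lookups are enough to re-identify every record:

```python
# Hypothetical "anonymised" health records: names removed, but
# quasi-identifiers (postcode, birth year, sex) retained.
health = [
    {"postcode": "1011", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2500", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

# A separate public dataset (e.g. a voter roll) containing names
# alongside the same quasi-identifiers.
public = [
    {"name": "J. Jansen", "postcode": "1011", "birth_year": 1980, "sex": "F"},
    {"name": "P. de Vries", "postcode": "2500", "birth_year": 1975, "sex": "M"},
]

def link(health, public):
    """Re-identify health records by joining on the quasi-identifiers."""
    key = lambda r: (r["postcode"], r["birth_year"], r["sex"])
    names = {key(p): p["name"] for p in public}
    return {names[key(h)]: h["diagnosis"] for h in health if key(h) in names}

print(link(health, public))
# {'J. Jansen': 'asthma', 'P. de Vries': 'diabetes'}
```

No AI is even needed for this toy case; AI only makes such linkage feasible at scale and across noisier, less exact matches.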

From a legislative standpoint, these trends have not gone unnoticed. The General Data Protection Regulation[13] (‘GDPR’) not only gives EU citizens more control over how their personal data is used, it also gives organisations that process personal data more responsibilities. The GDPR has six data protection principles at its core. The essence of these principles is that personal information shall be used in a way that protects the privacy of the data subject as well as possible, and that each individual has the right to decide how his or her personal data is used.

In summary, these principles require that personal data is:

  • processed in a lawful, fair and transparent manner (principle of legality, fairness and transparency) 
  • collected for specific, expressly stated and justified purposes and not treated in a new way that is incompatible with these purposes (principle of purpose limitation) 
  • adequate, relevant and limited to what is necessary for fulfilling the purposes for which it is being processed (principle of data minimisation) 
  • correct and, if necessary, updated (accuracy principle) 
  • not stored in identifiable form for longer periods than is necessary for the purposes (principle relating to data retention periods) 
  • processed in a way that ensures adequate personal data protection (principle of integrity and confidentiality)

The provisions of the GDPR regulate the data controller’s duties and the rights of the data subject when personal information is processed. The GDPR therefore applies both when AI is developed with the help of personal data and when it is used to analyse or reach decisions about individuals. AI development faces several problems in fulfilling the principles of fairness, purpose limitation, data minimisation and transparency.

  • Algorithm bias & the fairness principle: The GDPR fairness principle requires all processing of personal information to be conducted with respect for the data subject’s interests. The principle also requires the data controller to implement measures to prevent the arbitrary, discriminatory treatment of individual persons. This means that AI systems must be trained using relevant and correct data, and they must also learn which data to emphasise. However, many AI systems are trained using biased data, or their algorithmic models contain certain biases. That is why AI systems often demonstrate racial, gender, health, religious or ideological discrimination. To comply with the GDPR, companies that use AI systems have to learn how to mitigate those biases.[14]

  • AI & purpose limitation: The purpose limitation principle states that a data subject has to be informed about the purpose of data collection and processing. Only in this way is it possible for a person to choose whether to consent to the processing. AI systems, however, often use information that is a by-product of the original data collection. For instance, an AI application may use social media data to calculate a user’s health insurance rate. The GDPR states that data can be processed further if the further purpose is compatible with the original purpose. If this is not the case, new consent is required or the basis for processing must be changed.
  • AI & data minimisation: This principle limits the degree of intrusion into a data subject’s privacy. It ensures that the data collected fits the purpose of the project. Personal data that is collected should be adequate, relevant and limited to what is necessary. Engineers have to determine what data, and what quantity of data, is necessary for the project and the corresponding AI system. This can be a challenge because it is not always possible to predict how and what a model will learn from data. Developers should therefore continuously reassess the type and minimum quantity of training data required to fulfil the data minimisation principle.
  • AI & transparency and the right to information: The GDPR is largely about safeguarding the right of individuals to decide how information about themselves is used. This means that data controllers have to be open about the use of personal data and transparent about their actions. Data subjects must be informed about the purposes of the processing, the categories of personal data concerned and the recipients of the personal data (if any). However, this is very challenging for AI systems. AI systems are essentially black boxes: it is not always clear to the users, or even to the developers, how an AI model makes certain decisions.
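In practice, the data minimisation principle discussed above often reduces to a simple engineering step: strip every field that the stated purpose does not require before the data reaches the model. The field names and the stated purpose below are assumptions made for illustration only:

```python
# Assumed purpose: a health-risk model that (hypothetically) only
# needs age and weight. Every other field is dropped at intake.
NEEDED_FIELDS = {"age", "weight"}

def minimise(record, needed=NEEDED_FIELDS):
    """Keep only the fields the stated purpose actually requires."""
    return {k: v for k, v in record.items() if k in needed}

raw = {"name": "J. Jansen", "age": 41, "weight": 82, "postcode": "1011"}
print(minimise(raw))  # {'age': 41, 'weight': 82}
```

The hard part, as the bullet notes, is deciding what belongs in the needed set when it is not yet clear what the model will actually learn from; the filter itself is trivial, the justification for its contents is not.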


Intellectual Property & AI

Intellectual Property rights aim to protect creations of the mind, such as innovative inventions (patent law), creative artistic works (copyright law) and symbols and names (trademark law). In our society, we believe that these creations of the mind deserve to be adequately protected by a temporary exclusive right, because this constitutes an incentive to create new creative works and innovative inventions. An intellectual property right therefore benefits both the creator and society as a whole.[15]

Nowadays AI is already used to create artworks,[16] books[17] and innovative inventions,[18] but given the various possibilities of AI in the future, this only scratches the surface. When trying to fit AI into applicable IP regulations, many legal challenges arise, for example about the ownership of these IP rights. This article focuses on some of the legal challenges that arise in copyright law.

AI & copyright law

A copyright does not need to be registered in order to obtain protection. A copyright arises automatically upon the creation of a creative work when it meets the following three conditions:

  1. The work needs to have its own creative character, which means that the work cannot be copied or derived from another work;
  2. The work needs to bear the personal imprint of the author, which means that the work needs to be the result of the author’s creative choices;
  3. The work must be capable of being perceived by humans.

These conditions immediately raise the question whether an AI system is able to make creative choices at all. Given the current stage of AI technology, this question should most likely be answered with a ‘no’, which means that a work created solely by an AI system cannot (yet) be protected by copyright. However, this could be different when someone creates a work together with an AI system, or when an AI system becomes able to make creative choices in the future.

Assuming that an AI system is able to create a copyrighted work, some legal questions arise about the ownership of the copyright, the liability for copyright infringements and the term of the copyright protection.

Numerous actors are potential candidates to become the designated copyright owner: for example, the programmer of the AI system, the owner of the AI system, the AI system itself, or some form of joint ownership between these actors. Unfortunately for the AI system, it cannot be a copyright owner (yet), just as animals cannot be copyright owners either.[19] According to current copyright regulations, only natural persons or legal entities are able to own a copyright. As the owner of the AI system has a certain control over the AI system, it seems reasonable to designate him as the copyright owner. However, precisely because of that control, the owner will most likely also be the person sued for copyright infringements committed by the AI.

Normally, the term of copyright protection runs for 70 years after the death of the author.[20] An AI system will not die, at least not in the same way as a natural person does. A solution to this problem could be to let the term of copyright protection run for 70 years after the work is lawfully made available to the public.[21] Of course, the legislator could also make special provisions for AI systems.


Conclusion

The future of AI is bright. However, there are multiple legal challenges that need to be clarified before AI can truly show its potential. It is up to the European legislator to tackle these challenges and draft a legal framework that provides legal certainty.

Photo by Tingey Injury Law Firm 

[1] Definition taken from the OECD Principles on AI (OECD/LEGAL/0449), available at:

[2] Deep learning is a type of machine learning using artificial neural networks. ‘Deep’ refers to the amount of layers involved, where every layer is another level of abstraction that focuses on different features. In this way the AI is able to learn the input-output relationship with less need for human guidance. Also see the AI HLEG Revised Document on a Definition of AI, available at:

[3] Machine learning refers to the way that an AI learns about the input-output relationship. This may be via decision trees, neural networks, deep learning or another technique. For more information on this see the AI HLEG Revised Document on a Definition of AI, available at:

[4] OECD Recommendation of the Council on Artificial Intelligence (‘Principles on AI’) (OECD/LEGAL/0449), available at:

[5] EU Commission Communication COM(2018) 237 final, available at:

[6] Learn more about the activities of the AI HLEG on their website, available at:

[7] AI HLEG Ethics Guidelines for Trustworthy AI, available at:

[8] AI HLEG Policy and Investment Recommendations for Trustworthy AI, available at:

[9] Expert Group on Liability and New Technologies, ‘Liability for Artificial Intelligence and other emerging digital technologies’, available at:

[10] For instance, the issues around the Dutch SyRI software.

[11] OECD Recommendation of the Council on Artificial Intelligence (‘Principles on AI’) (OECD/LEGAL/0449), available at:

[12] AI HLEG Ethics Guidelines for Trustworthy AI, available at:

[13] Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.

[14] ‘Discrimination, artificial intelligence and algorithmic decision-making’, published by the Directorate General of Democracy, Council of Europe, 2018.

[15] P.G.F.A. Geerts & A.M.E. Verschuur (red.), Kort begrip van het intellectuele eigendomsrecht, Deventer: Wolters Kluwer 2018, par. 2-4.

[16] A. Elgammal, ‘AI Is Blurring the Definition of Artist’, American Scientist, Issue January/February 2019, Volume 107, number 1, available at: See more examples of AI art at:

[17] J. Schmale, ‘Ronald Giphart: Robot verzon de titel van mijn verhaal’, AD, 1 november 2017, available at:

[18] A. Chen, ‘Can an AI be an inventor? Not yet.’, MIT Technology Review, 8 January 2020, available at:

[19] The monkey, known from the famous ‘monkey selfie’, could not be the copyright owner of that picture:

[20] Article 1(1) Directive 2006/116/EC on the term of protection of copyright and certain related rights and article 37(1) Dutch Copyright Act.

[21] Like in the case of anonymous or pseudonymous works: article 1(3) Directive 2006/116/EC and article 38(1) Dutch Copyright Act.
