True artificial intelligence (AI) that rivals the human brain is gradually approaching. Tasks that we thought could only be done by people are increasingly being taken over by smart devices. And what about new creations with or without the help of a person? Recently, we have seen more and more programmes that compose music, create poems or write books themselves.
On 9 October 2021, Beethoven’s Tenth Symphony was performed in Berlin. Beethoven himself never finished the symphony; an AI system completed it almost 200 years after his death. Opinions differ on the result, and Beethoven may well be turning over in his grave, but the fact remains that AI did the majority of the creative work.
Another example of a creation by an AI system is ‘The Next Rembrandt’, in which an entirely new work was created based on the painting technique of the great master.
Who can claim the rights? The person who wrote the code or the algorithms? The person who supplied the underlying information that taught the system the relevant knowledge? Or the person who uses the system? More fundamentally, is there a rights holder at all? Should we regulate this? Or will an AI system or a robot one day become a legal entity in its own right?
What is AI?
AI can be defined in different ways, broadly and narrowly. Among other definitions, it can be described as the ability of a system to correctly interpret external data, to learn from those data, and to use those lessons to achieve specific goals and tasks through flexible adaptation. According to Wikipedia, there are three types of AI systems: (1) analytical, (2) human-inspired and (3) humanised artificial intelligence.
Analytical AI has only the properties of cognitive intelligence: it generates a cognitive representation of the world and uses learning from previous experience to inform future decisions.
Human-inspired AI combines elements of cognitive and emotional intelligence: in addition to cognitive elements, it understands human emotions and takes them into account in its decision-making.
Humanised AI exhibits characteristics of all types of competences (cognitive, emotional and social intelligence), and is capable of being self-aware in interactions with others.1
In this sense, not all software deserves the label AI, as sometimes seems to be suggested. AI is much more than just smart software.
But the intelligence of software, and the logic embedded in it, is increasing, and with it the ability to create original works. Whereas a work created with a computer programme usually involves human intervention, with AI-driven programmes this is increasingly unclear. The creation of a work may be far removed from the initial intentions of the developer of a self-learning AI programme. The idea of the programme as a mere tool for creation is fading. Because intellectual property rights are granted only to individuals, the allocation of IP rights becomes more difficult as the AI involved grows more complex.
Copyright in the EU protects a ‘work’ as the ‘intellectual creation of its author’. The key question is not whether a human being is involved in the process of artificial creation, but whether and to what extent the involvement of a natural person in the output generated by AI is sufficient to qualify it as a copyrighted work.2 A distinction can be made between different categories, and on the basis of that distinction it can be determined whether they would be eligible for copyright protection in the EU on the basis of the originality requirement. For a work created with AI as a tool, protection should not be a problem. For a work created by AI but selected by a person, it could be argued that the rights would belong to the person making the choice and selection. But for a work made and selected by AI, the current intellectual property system seems to offer no room.
The next question then is who should be regarded as the creator and rightful claimant. Granting rights to an AI machine itself does not fit into our current system. The extent to which intelligent systems help create new works varies greatly. Whereas in many cases it is (still) only a tool, in other cases it can be almost exclusively the work of a system. The degree of influence of the creator of such a system also varies. In some cases, the developer has a great deal of influence through the way in which he organises the intelligent system. In other cases, the end product is an outcome of processes developed by the system itself. But for the time being, human intervention will still be needed somewhere in the creation process and the AI application will be little more than an – intelligent – tool used by a human, so you will usually be able to designate a human or an organisation as the rightful claimant.
Perhaps the creator of the AI application qualifies as the creator of the work, but the creator’s input may be minor and inconsequential to the outcome. In some cases, it may be defensible that any rights would accrue to the user of the machine. After all, he or she will direct the outcome and therefore certainly has an influence on the composition and originality of the final work. But whether the user actually adds so much is doubtful.
In the US, there was a curious case that bears some resemblance to AI applications operating without human intervention.3 A monkey had got hold of a photographer’s camera and taken a selfie with it. The monkey was thus the photographer. According to the court, the copyright in the photograph was held neither by the monkey nor by the owner of the camera. We can extend this reasoning to an AI creation: in the absence of human intervention, in principle neither the programme nor the creator or licensee of the programme (or algorithm) can claim copyright. As Hugenholtz and Quintais argue, this is only different if the product created with the help of AI: (1) is a production in the field of literature, science or art; (2) is the product of human intellectual effort; and (3) is the result of creative choices that (4) are expressed in the output.4 For the other cases, the legislator may have to devise a different solution: for example, a sui generis right, as with the protection of databases, where the producer of the database is designated as the rights holder.
Patent on an AI system invention?
In 2020, the European Patent Office (EPO) ruled that inventions ‘conceived’ by artificial intelligence are not eligible for a patent. Stephen Thaler, managing director of the company Imagination Engines, had submitted two patent applications to the EPO and the USPTO: one for a food container and one for a lamp to attract attention in emergency situations. He described how he had used the AI system DABUS to arrive at these innovations without human interference. However, the rules for filing a patent require that every application name a natural person as the inventor, and neither of Thaler’s applications met that requirement. The applications were a test case in cooperation with the Artificial Inventor project team of Professor Ryan Abbott of the University of Surrey in the UK. In December 2021, the EPO Board of Appeal rejected the appeal on the same formal grounds.
If it is true that more and more new ideas will emerge from the deployment of artificial intelligence, this may become an increasingly difficult hurdle to overcome in order to achieve adequate protection. It could be solved by designating as inventor the person who switched on the AI system or who instructed it to perform certain actions, but that will usually be somewhat contrived. After all, the idea behind naming a natural person as the inventor is the “moral” right: the non-commercial right of inventors to have the invention attributed to their name. This is important, for example, in the academic world.
Incidentally, the rule that a natural person must always be named as the inventor may have become somewhat outdated even apart from AI: in many cases entire departments work together on innovations, and it is not always possible to identify a single inventor.
And what about infringement “by” an AI system?
Is it conceivable that an independently acting AI system infringes the rights of another? And who would be liable? It seems obvious that the person who operates the system will in principle be liable for any infringement. But that person could perhaps pass the liability on to the person who wrote the algorithms and the code, since the user may not have had any influence at all on the choices made by the AI system. This is more likely with self-learning systems than with ‘dumber’, non-self-learning systems. Under the current system, it is not possible to sue AI systems themselves for IP infringement.
AI systems, and especially self-learning (‘deep learning’) systems, depend on training data. The algorithms generate output based on the input that developers ‘feed’ to the system. If that input contains protected works, the obvious course of action is to place the responsibility and possible liability with the (legal) person who entered the training data. It is also conceivable that the responsibility and possible liability lie with the person who manages the AI system: after all, that person or legal entity can prevent the infringement by disabling the system.
This article raises questions for which there are no ready-made answers. In practice, it will become ever more important to consider robotisation and smart applications that increasingly approach the human brain and may one day even blend with it to some extent. If the output of an AI system is created without relevant human creative input, it will not be a work in the copyright sense.5 Besides intellectual property, there are many other interesting aspects to the application of artificial intelligence, such as liability, human rights, privacy and possible supervision of the use of AI. The European Commission has proposed a far-reaching regulation in precisely these areas, with a subdivision based on the risks associated with specific applications and forms of AI and smart algorithms.6 That proposal, however, falls outside the scope of this article.
AI and intellectual property law will remain an interesting topic for the time being, and we will have to wait and see how legislators and the courts deal with it. Because our current law does not know how to handle some of these new developments, specific legislation may become unavoidable in the future, for example to recognise intellectual property in the output of AI and to determine who is the rightful claimant. And who knows: one day a robot or an application with artificial intelligence may be so smart that we cannot avoid recognising it as a legal person or entity that can also hold rights.
2. Article 5(1) Copyright Directive.
3. Naruto v. Slater, 15-cv-04324-WHO, 2016, 5; http://cases.justia.com/federal/districtcourts/california/candce/3:2015cv04324/291324/45/0.pdf?ts=1454149106.
4. See B. Hugenholtz & J.P. Quintais, Copyright and artificial creation, Copyright 2021, pp. 47-52.
5. See previous note.
6. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.