Author – Charlotte van Oirsouw, TNO
For our second blog, we had the pleasure of talking with Estelle Derclaye (LL.M., PhD). She is a professor of intellectual property law at the University of Nottingham and a member of the European Copyright Society, a group of European academics that aims to influence policymaking around copyright challenges, including in the context of Big Data and AI. She is an expert in intellectual property law, in particular the Database Directive, copyright, and designs law. She has authored many books and articles, and she recently worked on an evaluation of the Database Directive for the European Commission, with the aim of investigating whether the Directive is still fit for purpose in the fast-growing data economy.
Since the entry into force of the Database Directive in the 1990s, a lot has changed. Nowadays, we generate far more data and have far more computing power. Moreover, it is not only humans that create and read databases: many machines and devices generate and read data as well. So, there is more data by definition. As more data becomes available, we are also able to process it better. All this data can help us do many great things, but we have to be able to analyze and use it. If we do, we can foster economic development, fight climate change, improve health and education, and much more. However, there are challenges around Big Data and data for AI that prevent us from achieving many of these things.
After one year of the GDPR, there is still a lot of work to do when it comes to regulating both personal and non-personal data. The previous Commission did a lot of great work conducting consultations and reviews. Many studies were carried out, and we now have to look at how to tackle these issues in the most efficient manner. So, it is now up to the new Commission to tackle the Digital Economy package as a whole.
Within the context of Big Data, one of the main challenges is measuring the effects of the regulatory landscape. Take, for instance, the Database Directive. The purpose of this piece of legislation was to increase investment in databases so that the EU could compete with the US. Granting protection to databases would stimulate their production and innovation. Whether the EU actually competes with the US in this respect, however, is still not clear, even after the European Commission's intensive evaluation in 2018. This is because it is almost impossible to collect data about databases, even though they are pervasive. The only data about databases available during the evaluation study stemmed from publishers, and the amount they could gather was extremely small. So, whether the protection of the Database Directive actually increased database production and innovation remains uncertain. The best way to work around this is to look at the cases that ended up at the Court of Justice, because these are the cases where big money is involved. Even though its effect on database production and innovation is uncertain, the Database Directive did have a harmonizing effect throughout the EU. As a result, everybody speaks the same language, which saves a lot of time and effort when it comes to contracts. Thanks to this common law, cross-border data exchanges are easier. For this reason, the Database Directive is probably here to stay.
The evaluation also addressed the Database Directive in light of technological developments such as the IoT, AI, algorithm- and sensor-generated data, and, of course, Big Data. These rapid technological changes raise a number of legal questions, in particular concerning machine- and sensor-generated databases. It is unclear how these are regulated, whether they could fall within the definition of a database, who the owner of such a database would be, and whether they can benefit from the protection of the sui generis right, which protects a database on the basis of the substantial investment made in it. Views on this were very polarized at the time of the evaluation, and the Database Directive has not yet been amended or clarified to address these developments.
Currently, the debate about data ownership and data access is still taking place, and it should definitely continue. Nowadays, almost any physical object involves software and (non-)personal data. So, for instance, if you as an owner would like to repair your object effectively, this will not be possible without full access to the data. However, full access to the necessary data is often not provided, and you will be at the mercy of the manufacturer. So, you need to find workarounds, for instance by paying an extra fee.
The fact that data forming part of a physical object is not enshrined in legislative frameworks is an issue that is being overlooked, and unfortunately, current legislation does not allow the concepts of data and objects to be merged.
In terms of research, there is a lot going on about whether there should be copyright in AI-generated works. When investigating such a problem, you need to work in an interdisciplinary manner and talk to the people who actually work with AI. The best approach is to start by asking AI engineers what they think the issues are, and only then turn to a lawyer to see how this translates into a legal question. Data issues are also best addressed from the viewpoint of industry. Sometimes companies do not even know that a certain piece of legislation exists (which sometimes happens with the Database Directive). It is important to start by addressing the topics that they perceive as issues. In addition, the overview of how different policy fields and different pieces of legislation interact should be clarified for them as well.
One of the issues that can currently be overlooked is the set of competition problems resulting from data issues. If a monopolist encrypts its data and can thereby keep it to itself, it can also maintain its market position. A related issue is the reinforcement of a monopoly position through a thick layer of legal protection, consisting of thinner layers of protection derived from different fields. This results from the fact that it is often unclear how different legal fields and protections interact with each other. Take, for instance, the Database Directive and the Trade Secrets Directive. The fact that the latter applies on top of the protection that the Database Directive provides both contributes to and helps maintain a monopoly position. So, in terms of competition, this should definitely be looked into. The way forward here is to first clarify the interaction between all these pieces of law and then proceed to address the competition issues.
A patient, holistic approach
Big Data involves so many different interests and rights that the best approach is to take a holistic view of the issues that come to the surface. All the different aspects of data should be taken into account. In doing so, the DGs should retain their current responsibilities but cooperate and communicate even more closely where their fields intersect. Before anything substantial is implemented, it would be good if they always examined the interactions between fields and addressed them. This would also make the process more efficient.
Another frustrating and surprising aspect is the lack of participation of stakeholders (except, of course, for the bigger ones) in public consultations, even though the Commission releases them well in advance. As a result, there is not enough data available on their views. So, more action is needed to reach out to stakeholders and encourage them to take part in the consultations. Admittedly, this can be hard to do, but directly (e)mailing not only the associations in different sectors but all the main stakeholders could be part of the solution. In the end, increased stakeholder involvement will also strengthen the position of the Commission, because it prevents backlash when regulatory changes are implemented. During the process, everybody who deals with Big Data should be involved. This means that stakeholders from different disciplines need to take an active role: think of computer scientists, engineers, environmental scientists, and so forth.
When drafting and implementing legislation, it is important not to rush the process. Once legislation is implemented, it is very hard to undo. Take, for instance, the database sui generis right. Another example is the data producer right, a right that would have directly protected machine-generated data without substantive prerequisites. If it had been adopted, machine-generated data would have been protected much more strongly, potentially preventing access to far more data than today. It is a good thing that this right was not brought into existence, because the effect could have been disastrous. So, the challenge of regulating Big Data has to be tackled in a patient, efficient, and holistic manner.
References
– Derclaye, E. et al., 2018. Study in Support of the Evaluation of the Database Directive: Study for the European Commission, DG Connect
– European Commission, Evaluation of the Database Directive, April 2018
– Impact assessment of the PSI Directive
Websites and blogs
– TILTing Conference
– Conference of the European Copyright Society
– Third Annual Junior Faculty Forum for Law and STEM, organized by Stanford Law School, September 27-28, 2019