Author – Charlotte van Oirsouw, TNO

In this series of blogs, we reflect on interviews that we conduct with legal scholars or practitioners from different fields to explore their views on the main challenges and recommendations concerning Big Data. For our first blog, we had the pleasure of talking to Michal S. Gal (LL.B., LL.M., S.J.D.). She is Professor and Director of the Forum for Law and Markets at the University of Haifa and president of the Academic Society for Competition Law Scholars (ASCOLA). Her expertise ranges from competition law, intellectual property law, law and technology, and regulation and governance. She has authored many books and scholarly papers, many of them holding particular importance and authority within the context of Big Data.


View on Big Data

Big Data brings about many challenges as well as opportunities, and it can affect many aspects of our lives. Beyond its effects on market dynamics, it can also affect politics, social interactions and other spheres, and these effects are often intertwined. Accordingly, we have to make sure that welfare is not harmed by the use of Big Data. This is where academics and regulators come in. When exploring when and how to regulate data markets, it may be useful to keep several issues in mind. First, competition to gain data-based advantages is no longer reserved for private firms alone. Rather, some governments are increasingly involved in creating environments that enable their domestic firms to gain data-based advantages, which can be translated into comparative advantages in Artificial Intelligence (AI). Consequently, regulators need to think beyond the borders of their own jurisdictions, because their industries have to be able to compete in the global market. Second, it may also be beneficial for regulators to recognize that different dynamics apply to different data-based markets. Market dynamics are affected, inter alia, by the sort of Big Data needed in each specific industry. This implies that market players (collectors, aggregators and analyzers of Big Data) do not necessarily compete if they collect or analyze different types of data. Furthermore, the importance of the four V's contributing to the value of Big Data (volume, velocity, variety, veracity) might differ among the myriad markets in which Big Data serves as an input. Market dynamics are also affected by the fact that the same dataset can serve a variety of users and uses, and by the fact that the value of the data might increase when combined with other, related datasets.
This implies that Big Data need not be collected from the same source or by the same entity, and that data portability and interoperability play an important role in the competitiveness of data markets, as well as in the ability to derive better insights from the data. Finally, it is important to recognize that entry barriers exist for some types of data. The fact that vast amounts of data are collected does not mean that all types of data can be collected easily or costlessly.

Current challenges: the role of, and need for,
data standards in some Big Data markets

It is often the case that databases of complementary data belong to several different firms. Interconnecting them might make it possible to create better knowledge. Take for instance separate databases that contain information about individuals who have a particular rare disease. Combining these separate databases will enable us to extract more valuable information. This example illustrates that interconnectivity might be key to the creation of synergetic knowledge. One of the current main challenges, however, is that “data do not talk to each other” due to the different standards used by data collectors. The metadata, that is, the information about the data included in the dataset (e.g., the metrics in which it was collected), might be different or unclear to those attempting to combine the datasets; the data might be organized in different ways that make it difficult to combine the datasets; or some data points might be missing, making it harder to create a coherent dataset. Michal refers to this as the “Tower of Babel of Databases”. As a result, some of the value might be “lost in translation”. This situation also inhibits competition. Without joining forces to create better datasets, small and medium data collectors will find it difficult to challenge the comparative advantages of the large data collectors that currently enjoy significant market power.
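To make the “Tower of Babel” problem concrete, here is a minimal, purely illustrative Python sketch (all field names and records are hypothetical): two rare-disease registries record the same facts under different field names, units and missing-value conventions, and cannot be pooled until both are mapped onto a shared standard.

```python
# Two hypothetical patient registries for the same rare disease,
# collected under different conventions: field names, units, and
# missing values all differ, so the raw records "do not talk to each other".
registry_a = [
    {"patient_id": "A1", "weight_kg": 70.0, "age": 34},
    {"patient_id": "A2", "weight_kg": 55.5, "age": 41},
]
registry_b = [
    {"id": "B1", "weight_lb": 154.0, "age_years": 29},
    {"id": "B2", "weight_lb": None, "age_years": 52},  # missing data point
]

# An agreed standard: canonical field names and metric units.
def to_standard_a(rec):
    return {"id": rec["patient_id"], "weight_kg": rec["weight_kg"], "age": rec["age"]}

def to_standard_b(rec):
    lb = rec["weight_lb"]
    return {
        "id": rec["id"],
        # Convert pounds to kilograms; propagate missing values explicitly.
        "weight_kg": round(lb * 0.45359237, 1) if lb is not None else None,
        "age": rec["age_years"],
    }

# Only once both sources speak the same "language" can they be pooled.
combined = [to_standard_a(r) for r in registry_a] + [to_standard_b(r) for r in registry_b]
usable = [r for r in combined if r["weight_kg"] is not None]
print(len(combined), len(usable))  # 4 records combined, 3 with complete weight data
```

The conversion functions are the sketch's stand-in for a data standard: without them, naively concatenating the registries would silently mix pounds with kilograms, and the missing data point would corrupt any analysis of the pooled set.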

Data standardization plays an important role here because it can technically ensure data interoperability and portability and, as a result, can positively affect private and public welfare. Indeed, in her own work with Dan Rubinfeld (Berkeley), Michal advocates the creation of data standards, at least in some market settings. In many cases, the market itself is able to create and implement an efficient standard. In some situations, however, the creation of such standards might be inhibited by collective action problems, even though a common standard would benefit all market players. The inability to reach an agreement on data standards might result in a patchwork of inconsistent standards that slows down data flows. Furthermore, we cannot always rely on market participants to set standards that are also socially beneficial. Even if a standard serves the interest of all market players, it might still not reflect the social optimum, for instance when it disregards spillover effects on data subjects. Moreover, those setting the standards might design them to further their own comparative advantages, or in such a way that they raise the costs of their rivals, typically smaller and medium-sized firms. This is not to say that data standards should always be facilitated. Yet government has a role to play in overseeing, and sometimes even actively facilitating, the creation of data standards together with market players in those instances in which it is clear that the lack of a standard leads to market failures.

During standardization processes, certain things need to be taken into account. First, when it comes to setting the standard itself, a standard set in one particular field or market might not work in another. If this is anticipated, a lot of money and time can be saved. Second, due to market failures, the market cannot be relied upon by itself to create and implement a standard. Government therefore has an important role to play in acknowledging and evaluating the standard, as well as in facilitating the process. Different kinds of players have to be involved in this process: not only small, medium and big companies, but experts as well. The role of experts is very important because they can explain how a technology works. This gives regulators a better understanding of the implications of their decisions for market players, and of how to evaluate whether proposed industry standards are efficient. Once the standard has been agreed upon, the regulator also has to decide how to facilitate its adoption, for instance by including best practices or by creating incentives for adoption. Currently there is no governmental body exploring the need for a general data standardization agenda. Yet we cannot continue to rely on the market to create and implement social-welfare-enhancing standards. This is one of the current challenges that needs to be tackled.

Recommendations – breaking regulatory silos;
creating in-house expertise

Up until now, everybody has been thinking in silos of law, mainly because this is the manner in which our regulators are structured. However, due to its characteristics, Big Data cannot be separated from the market, politics and societal interactions. As a result, ensuring that its use increases social welfare is a challenge of multidisciplinary scope, in which many different interests and issues have to be taken into account. To do so, regulators from different fields need to sit together and work in teams to create ‘regulatory packages’. Think for instance of the regulation of an algorithmic assistant, such as Amazon’s Alexa. This technology requires regulators to think not only about consumer and contract law, but also about privacy, data protection and competition law. In addressing this, it is important to analyze and understand the new market dynamics that such technology brings about. Michal’s current work attempts to do so.

Furthermore, industry cannot be the only party that understands how algorithms, AI and Big Data work and interact. Regulators should employ in-house experts with such knowledge (data scientists and computer scientists). Moreover, industry, regulators, and technical and legal experts from different fields need to sit down together and think about how to shape a regulatory framework that addresses these technological developments. In doing so, one thing to consider is the potential chilling effects of a regulatory system that enables government to make use of data collected by private firms for governance purposes. As Michal has shown in her work, this might affect the willingness of data subjects to allow the collection of their data. Chilling effects might also arise when the government shares its own data with private firms. A shift in the willingness to share can affect the dynamics of data-driven markets, affecting the quantity and quality of the data collected, the use of technologies that build upon the ongoing analysis of data (for instance AI), and thereby also data-driven innovation. The total welfare effect of gathering data will thus depend on both the positive and negative effects of the use of the collected data.

In sum, the main task is to ensure that any regulatory framework adopted is beneficial to both the market and society. In this process, we cannot rely solely on the market; government has to play a role. The challenge is multidisciplinary, which means that regulators should no longer think or work in silos, but should form regulatory teams to mitigate the challenges and risks that we are facing. If we take this into account, we can work towards achieving the greatest total welfare effect of Big Data for both the market and society.

The content of this blog resulted from the interview with Michal S. Gal and from her five articles listed below, which we highly recommend reading.

Recommended articles

o Michal S. Gal and Daniel L. Rubinfeld, Access Barriers to Big Data, 59 Arizona Law Review 340-381 (2017).

o Michal S. Gal and Niva Elkin-Koren, Algorithmic Consumers, 30 Harvard Journal of Law and Technology (2017).

o Michal S. Gal, Algorithmic Challenges to Autonomous Choice, 25 Michigan Technology Law Review 59 (2018).

o Niva Elkin-Koren and Michal S. Gal, The Chilling Effect of Governance-by-Data on Data Markets, 86 University of Chicago Law Review 403-432 (2018).

o Michal S. Gal and Daniel L. Rubinfeld, Data Standardization, 94 New York University Law Review (forthcoming, 2019).

You can find more information about Michal and download the papers through this link.