Author – Charlotte van Oirsouw, TNO

So far, we have been discussing the policy framework for Big Data. Data is the fuel for AI, and we need a solid data framework, because otherwise the uptake of AI will not be successful. AI has received a lot of attention, mainly focused on the risks it brings about. In this blog we go beyond those risks by also looking at the benefits of AI and at how the risks can be mitigated.

Virginia’s view on Big Data

Virginia Dignum is a professor at the Department of Computing Science at Umeå University in Sweden. She is also a member of, among others, the European Commission High Level Expert Group on AI, the World Economic Forum Council on AI, the IEEE Global Initiative on Ethically Aligned Design of Autonomous and Intelligent Systems, and the European Global Forum on AI (AI4People). She has written a series of blogs for the website Medium.com, on which the content of this blog post is based. You can find the hyperlinks to her blogs at the bottom of this page.

Facing the challenge of bringing together many views from different disciplines on what exactly AI entails, Virginia’s definition of AI offers an overarching perspective:

“(…) AI is the discipline of developing computer systems that are able of perceiving their environment, with the ability to deliberate how to best act in order to achieve its own goals, while taking into account that the environment contains other actors similar to itself.”  Read more at Medium.com

She continues by explaining that AI is about autonomy in deciding how to act: adapting to changes in the environment, including the actions and aims of other agents in that environment, and deciding how to act accordingly. AI is not just the algorithm, nor just machine learning. AI systems use algorithms to reach conclusions, but then so do we, each time we follow a recipe to bake an apple pie. As with an apple pie, the actual result of an AI system is determined not only by the algorithm but also by the choice of ingredients (in the case of AI, the data). As a scientific field, AI refers to many different methods, theories and techniques, including machine learning, knowledge representation, planning, dealing with uncertainty, theorem proving, cognitive robotics and human-agent or human-robot interaction. In the end, an AI system is an artefact that is decided on, designed and implemented by us. This means that we are responsible for it, which raises questions concerning the ethical, legal, societal and economic effects of AI.

As to the question of responsibility for AI, she asserts that everyone is responsible: it is a multi-disciplinary challenge. Engineers are the ones developing and implementing ethical standards in AI, but policymakers and regulators are the ones who set and enforce the purpose of AI. Users and society in general are also responsible, both for demanding and expecting quality in the products and services they use and for demanding that policymakers and regulators take their own responsibility. The principles formulated are ultimately codes of behavior for us, not for the AI itself. So it is not merely about making sure that your AI system checks all of the compliance boxes. We need to ensure that the purpose of AI remains in place as algorithms and their contexts evolve. It is really about why we design AI, the way in which we design AI, and who is involved in the process of designing AI. This entire process is one of trial and error. We need to make sure that even when errors and mistakes are made, they can be used to improve AI systems and to inform current policy.

Drawing on earlier, similar frameworks, from privacy-by-design to value-sensitive design, putting human values and ethical principles at the core of a system’s design really requires a mind-shift among researchers and developers. Ethical principles should be embedded in the design of systems by default. In this light, one starting point is the Ethics Guidelines for Trustworthy AI, released in April this year by the High Level Expert Group on AI, of which Virginia is a member. This document describes the ethical principles that must be respected in the development, deployment and use of AI systems: respect for human autonomy, prevention of harm, fairness and explicability. The document then proceeds to give seven requirements that AI systems should meet to ensure trustworthiness.

These requirements are:
     1. Human agency and oversight.
     2. Technical robustness and safety.
     3. Privacy and data governance.
     4. Transparency.
     5. Diversity, non-discrimination and fairness.
     6. Environmental and societal well-being.
     7. Accountability.

Meeting these requirements will steer development towards lawful, ethical and robust AI. The document also provides an assessment list for AI systems, but this list is never exhaustive and always needs to be tailored to the specific use of an AI system. In this respect, topics such as transparency and accountability of AI can never be mere box-ticking exercises performed once at the start of a project; they are continuous and dynamic challenges.

Transparency needs to be promoted in AI models, because it helps ensure that AI brings benefits to people’s lives. Machine learning algorithms are trained by optimizing functions, yet they provide no insight into how these functions are approximated. This black-box effect of AI is one of the main impediments to transparency. However, transparency is more than opening up the black box. It is also about openness regarding the decisions and choices made in the design and development process, and about ensuring that those potentially affected by a system’s behavior can also participate in the design phase. Moreover, it is about understanding the data used to train the system. Machine learning algorithms are trained by people, and people have their shortcomings and make mistakes. The heuristics that people use are culturally influenced and reinforced by practice, and they can turn into biases or stereotypes when a misstep or misconception caused by these heuristics is reinforced. Biases are unavoidable, however, because they occur naturally in human thinking, so data collected by humans will always include biases. We do not want AI systems to act upon these biases, but opening up the black box alone will not solve the issue, as the system will still recognize biased patterns in the data. It is therefore important to ensure transparency, so that the learning process itself is sound. Transparency requires that the data and the processes are open, as well as openness about the stakes and stakeholders involved.
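To make this point concrete, here is a minimal sketch in Python, using entirely synthetic, hypothetical hiring data (not taken from Virginia’s work or the Guidelines). It shows why opening the black box alone is not enough: even a fully interpretable model, whose inner workings are completely inspectable, faithfully reproduces the bias present in its training data.

```python
# Minimal sketch with synthetic data: a fully transparent "glass box" model
# still learns the bias baked into its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features: skill (relevant) and group membership (should be irrelevant).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)  # 0 or 1

# Historically biased labels: at equal skill, group 1 was hired more often.
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

# Logistic regression is fully interpretable: its learned weights are open
# for inspection -- there is no black box here.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Yet the model has learned to reward group membership, mirroring the bias.
print("weight for skill:", round(float(model.coef_[0][0]), 2))
print("weight for group:", round(float(model.coef_[0][1]), 2))  # clearly nonzero
```

Note that simply deleting the group column would not fully remove the problem either, since other features often correlate with group membership. This is exactly why openness about the data and the process, and not just about the algorithm, matters.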

Responsibility in AI begins with demystifying its possibilities and processes. To achieve this, Virginia states that we need training of people, good regulation and awareness creation. AI must be understood as part of socio-technical relations and as having a position in a societal context. We also need to be responsible about our dependency on big data, because more data is not always better or necessary. We can, for instance, rethink the correlation techniques we use, the ways we address causality, or the abstraction theories we rely on; this allows for more sustainable solutions in data use. For all of the above, education is necessary. Researchers and developers, but also governments and citizens, need to understand AI and its impact. According to Virginia, education plays a very important role because it spreads knowledge of AI’s potential, so that people know they can participate in shaping its development. It is necessary to understand how people work with AI, so that we can develop frameworks that ensure responsible AI. This means improving the education of developers, the public and regulators, so that they understand why it is important to comply with the ethical principles of AI. Ethical and technical education therefore has to be improved across the entire spectrum, from primary school all the way through university. This builds the skills needed to scrutinize AI. Answering the big unsolved questions concerning AI also requires the participation of both society and stakeholders, and for that, education is key.

We need new forms of governance to meet the special nature of AI and to ensure that it advances in a way that serves the societal good. Virginia stresses that regulation of AI is vital because of its potential power to disrupt current social, economic and political structures. Because of this, we no longer have the luxury of making mistakes; the bar for AI development has to be raised. Current software regulation may not be robust enough to do the job. We also need to look at current regulation on product and service liability, and at specific regulations concerning data and privacy. Ensuring trustworthy AI is therefore also key to ensuring good practices. Trustworthiness can be approached either by means of regulation or by incentivization. By incentivizing companies, for instance through proofs and certificates stating that an AI application is considered “safe”, trust in systems can be increased. With incentivization, however, we should remain alert that it does not become a mere act of box-ticking.

Virginia points out that, in the end, true intelligence is about social skills, collaboration, feeling and contributing to a greater good. This is a multi-disciplinary and multi-stakeholder challenge. It is also a learning process in which everybody has to participate. The ethical principles are for us; we have AI in our hands. Therefore, we are responsible.
