Author – Joe Hardwell, TNO
The discussion around the concept of emotional AI is relatively fresh compared with the more established forms of AI we are traditionally exposed to in the field of regulation and policymaking. To shed light on the notion of emotional AI, I spoke with Andrew McStay, a professor in digital media at Bangor University in North Wales. By the end of this blog, I hope to have given you a clearer picture of what emotional AI actually consists of, Professor McStay’s views on the uses of these technologies in our daily lives, and what the main regulatory challenges and issues currently look like within the European landscape.
For some context about this month’s contributor, Prof. McStay’s primary interests lie in the social and cultural impact of emerging technologies, with particular attention to those that interact with the qualitative dimensions of human life. He currently has three ongoing projects that address emotional AI through different avenues. First, the empathic media project aims to better understand the interests of stakeholders, chief executives, senior policymakers and defence organisations regarding technologies that make use of emotion. The ultimate aim of the project is to understand what these technologies actually are, how realistic they are for deployment, and how they are being used.
Secondly, a project is underway concerning children and how they engage with and use toys such as robots. Specific attention is paid to the toys’ natural language processing abilities, and potentially even biometrics, to understand how they respond to children’s facial expressions and voice commands. The key questions emerging around these technologies are: what rights do children have in this regard? What rights should they have? How do parents feel about these technologies? What are the potential benefits? Prof McStay acknowledges that yes, there is scope for harm with these technologies, but there is also clear scope for benefit in engaging with them in a richer and more interactive way, which raises the question: how can we have more of the positives and fewer of the negatives when it comes to children?
Regarding results for this project, the highlights show that, through national surveys conducted with parents on these technologies, there is deep concern about the potential for data collection, where that data might go, and who has access to it. The focus groups provided interesting insights into parental fears around emerging technologies, as well as hopes. Technologies such as home robots, and how children will reprogram these devices, were points that piqued considerable interest. Overall, Prof. McStay conveys the message that children today are growing up with technologies in both the virtual and the physical world, and emotional AI is all about that. Emotional AI is not just a virtual entity, it is also a physical one; after all, it tracks our physical bodies.
The third and latest project aims to compare and contrast the UK and Japanese perspectives on these technologies, as the Asian point of view is currently under-researched. A key point within this context is to understand the issue of training data, for example gaining a clearer insight into how training data for these technologies is generated and the ecology of training data around emotions. Key questions that remain unanswered at this moment include: how is this training data created, acquired, bought and sold? Are these national datasets, or more European or even international ones?
How would you describe the use of emotional AI in a few sentences?
In describing what actually constitutes emotional AI, Prof McStay breaks the concept into two parts. The first is affective computing, which involves technologies that gauge affective states and infer emotions from a person’s expressions. The other half relates more to the AI side of things, with algorithms, patterns, learning and human interaction. Together, these two parts converge to learn, sense and interact with human emotion and life. However, there is a caveat in this explanation: emotional AI is actually a weak form of AI. By weak, it is meant that these technologies do not actually understand emotions, nor do they feel. Instead, it is about simulating an understanding of emotion, which is achieved through text, voice, monitoring facial expressions, biometrics (such as heart rate), as well as the words and images we post online on platforms such as Instagram.
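To illustrate what “simulating” rather than understanding emotion can mean in practice, here is a deliberately minimal, hypothetical sketch that infers an emotion label from text alone using a tiny hand-made lexicon. The lexicon, function name and labels are illustrative assumptions, not any real product; actual emotional AI systems combine far richer signals (voice, facial expressions, biometrics), but the basic idea of mapping observable cues to a label is the same.

```python
# Hypothetical, minimal sketch of "simulated" emotion inference from text.
# The lexicon and scoring rule are illustrative only.

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "miss": "sadness",
    "angry": "anger", "hate": "anger",
    "scared": "fear", "worried": "fear",
}

def infer_emotion(text):
    """Count lexicon hits per emotion and return the most frequent label."""
    counts = {}
    for word in text.lower().split():
        label = EMOTION_LEXICON.get(word.strip(".,!?"))
        if label:
            counts[label] = counts.get(label, 0) + 1
    # No hits means the system has nothing to "read", so it defaults to neutral.
    return max(counts, key=counts.get) if counts else "neutral"

print(infer_emotion("I love this, it makes me so happy!"))  # -> joy
```

The point of the sketch is that nothing in it “feels” anything: the system only matches observable cues against labels it has been given.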
In discussing the current laws and regulations around AI, and emotional AI in particular, he declares that “there will always be future-proofing issues, but it’s very easy to point to legislation to say we need certain changes. When you consider the topic of emotional AI, many of the issues highlighted by my projects haven’t actually gone before a court yet, which is the real issue. Furthermore, the GDPR makes no specific reference to emotions; the revised e-privacy directive rarely mentions it either. Let’s remember that these technologies are going to interact with people in new, interesting and novel ways, and emotions are going to be a key part of that. We also need to recognise that these technologies are not new, they’ve been around for a while, so I would have expected to see more of a mention of emotion in GDPR”.
The relationship between exploitation and AI has been growing ever closer, especially when we consider how varied the use cases are, with out-of-home advertising, classroom technologies, transport and the workplace at the forefront. On the fear of exploitation, Prof McStay comments that it is indeed possible, and that when we consider whether AI as a whole is good or bad, the answer is of course both: it will do good, but also socially corrosive things as well.
“I think the same applies to emotional AI, because these technologies will help interact with devices and even ourselves in interesting, enriching and important ways. But yes, there will be exploitation. We only have to look at the history of our technologies, such as social media. It’s delivered some really beneficial stuff such as connectedness, but from a data point of view, there have been huge exploitations along the way”.
In expanding upon this notion of exploitation, the line between exploitation and appropriate levels of informing users is heavily blurred. An example referred to is that of Amazon Alexa and a duty to inform users about poor mental health.
“As they (Alexa and other smart devices) develop and are emotionally enabled to pick up the high points and the low points in our voices, and are therefore able to use these measurements to infer emotions, do these technologies have a duty to inform us about our mental wellbeing? This in itself raises huge ethical questions”.
The paper that sparked my interest in conducting this interview, ‘Emotional AI: a societal challenge’, refers to the term ‘intimate data’: data that is sensitive without being personal.
To provide some examples of how intimate data is encountered in our lives, Prof McStay refers to the field of advertising. In the traditional forms of online advertising we are familiar with, cookies and various other technologies track and profile us around the Web, but what we are increasingly seeing is our behaviour being tracked in physical spaces, thanks to emotional AI. Take Piccadilly Circus in London as a case study: it has huge advertising screens with cameras behind them that track people’s emotions when they look back at the billboard. The systems behind the cameras immediately remove any personal data and retain only the aggregated geometric relationships, for example between the eyes, cheekbones, nose and mouth, that make up the emotion expression, so the personal data is stripped out and only aggregate data remains. Therefore, strictly speaking, the data is not personal as it cannot be linked back to any individual, so for all intents and purposes it is anonymous. He continues by stating that “now that’s interesting for me because it’s not personal, but the data is about emotion, which legally speaking is sensitive. For me, that type of data should be called intimate, because it’s not personal yet it’s about emotions”.
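To make that stripping-and-aggregation step a little more concrete, here is a minimal hypothetical sketch of the general pattern: per-face detections are discarded along with anything identifying, and only emotion counts per time slot are kept. The field names, labels and structure are assumptions for illustration, not a description of the actual system behind the Piccadilly screens.

```python
# Hypothetical sketch: keep only aggregate emotion counts, drop identifying data.

from collections import Counter
from dataclasses import dataclass

@dataclass
class FaceDetection:
    timestamp_minute: int   # minute of the day the face was seen
    face_embedding: bytes   # potentially identifying, so it must be discarded
    emotion_label: str      # e.g. "happy", "neutral", "surprised"

def aggregate(detections):
    """Return {minute: Counter of emotion labels}; no identifiers are retained."""
    counts = {}
    for d in detections:
        # The embedding (and any other identifier) is never stored or forwarded.
        counts.setdefault(d.timestamp_minute, Counter())[d.emotion_label] += 1
    return counts

sample = [
    FaceDetection(600, b"...", "happy"),
    FaceDetection(600, b"...", "neutral"),
    FaceDetection(601, b"...", "happy"),
]
print(aggregate(sample))
```

The resulting counts cannot be linked back to any individual, which is exactly why Prof McStay argues this kind of non-personal but emotion-related output deserves its own label of “intimate data”.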
This example of Piccadilly Circus was very eye-opening for me personally. You would expect some form of security cameras surveilling you in such a hotspot for safety purposes, but having private companies install this emotional AI in such a public space, without the average citizen having much knowledge that this was occurring, was surprising. I asked Prof McStay how best to educate or inform citizens that these technologies exist, and that these types of intimate data are being used and/or sold.
“The first question we should be asking is: should this be happening in the first place? From my perspective, it’s not for me to decide yes or no. But what I’d like to see is a bit more of a public conversation about it, and I’d like to see policymakers in this area listen to the evidence, for example, in terms of what citizens actually think about these technologies, and how they feel about ads that function on human emotion. At present, we’re not even making policies in these areas, the technologies are just being used without any kind of real debate or scrutiny”.
If some level of notification was preferable, what would this look like?
“It’s not easy because with the Piccadilly Circus example, it’s a big area, so would notification arrive in the form of a giant poster? To be fair to the Piccadilly example, the data strictly speaking isn’t personal, and while some legal specialists will disagree with me there, other legal experts are reaching the same conclusions that I am. If it’s not strictly personal, if it’s not strictly identifiable, then it’s not personal data, and if it’s not personal data then the sensitive qualifiers don’t come into play”.

A prominent paper published recently was the White Paper on Artificial Intelligence: a European approach to excellence and trust, released by the European Commission in mid-February this year. I asked for Prof McStay’s thoughts on the Commission’s piece, specifically the aspect of trust which was so prominent.
“It’s interesting because it’s about excellence and trust in AI, but what does this actually mean? Are we trusting any technology? Are we trusting the hardware, the algorithms, or are we trusting in the organisations? It seems to me that it’s more about the trust in the technologies, and that to me seems a little bit strange because these technologies are deployed by organisations. Historically most companies are trying to do the right thing within this field, but if we look at the tussles Europe has had, with Facebook being the most prominent example, most of these organisations have played fast and loose with personal data, so there’s every reason to assume this will happen with AI”.
“Regarding other reactions, the facial recognition issue was interesting in that the Commission dropped the idea of a temporary ban on the use of facial recognition technologies. I think that a moratorium on facial recognition would have been a good idea, so that we can have a better conversation about its uses and merits. We in the UK have felt this personally, with police trials and deployments in and around London. Focusing more on the issue of trust, with the work I’m doing on the emotional side, I’m finding a lot of ingrained scepticism around the use of these technologies, so I can understand the Commission’s desire to build an appropriate governance mechanism to create trust in these technologies”.
In addition to this narrative of trust, Prof McStay referred heavily to the training data used in these technologies:
“I think that greater transparency around training data is forward thinking and will become very important, because how our AI devices function is really only as good as the data the algorithms are informed by. Therefore, this transparency around data, how it’s collected and whether it’s properly representative is a real positive step in the right direction, particularly in my field with emotions. The way people express emotions within Europe varies pretty widely. Italians are typically framed as more emotionally expressive than people from the UK, whose behaviour is not quite as display oriented, so even within Europe you have clear variations in emotional behaviour. But if we move this idea to Japan, emotion display behaviour is very different. So this need to examine training data, where it’s collected, and how that data is informing machine decisions is really important”.
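As a small, hypothetical illustration of the kind of training-data scrutiny described above, the sketch below tabulates how emotion labels are distributed per country of collection, so that regional skews become visible before a model is trained. The column names and sample rows are assumptions for illustration, not taken from any real dataset.

```python
# Hypothetical audit: how are emotion labels distributed per country of collection?

from collections import defaultdict

samples = [
    {"country": "IT", "label": "happy"},
    {"country": "IT", "label": "happy"},
    {"country": "UK", "label": "neutral"},
    {"country": "JP", "label": "neutral"},
]

def label_distribution_by_country(rows):
    """Return {country: {label: share of that country's samples}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        counts[row["country"]][row["label"]] += 1
    return {
        country: {label: n / sum(labels.values()) for label, n in labels.items()}
        for country, labels in counts.items()
    }

print(label_distribution_by_country(samples))
```

A table like this does not fix representativeness by itself, but it makes the question “whose emotional behaviour is this model actually trained on?” answerable.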
Our attention then turned towards the main regulatory challenges within Europe, with a specific focus on emotional AI:
“I don’t think we have to reinvent the wheel. For one, the wheel is not going to get reinvented, and I don’t think we have to either. I do believe that many of the emerging issues here can be answered by applying existing rules on biometrics, biometrics being the sensing of the body. That is typically for purposes of identification, but there’s scope within GDPR and e-privacy to cover many of the issues. I would like to see more explicit acknowledgement of emotion in GDPR, even if it’s just to signal to the courts that this is an issue for lawmakers within Europe. And I think this matters because within Europe over the last 10 years, emotion-sensing technologies have been a fringe issue. The big companies are now developing these technologies; Intel are creating emotional AI products for education; Amazon has a big cloud service; and Microsoft has a centre for empathy and emotion. So, as the larger companies come into view, there needs to be greater signalling from the Commission of greater interest in emotion”.
The second main regulatory challenge discussed surrounds the idea of group protection, specifically regarding advertising.
“In essence the legal situation is that groups of people can have inferences made about their emotions and expressions, and they have no choice about that. So despite the fact that cities are meant to be public spaces that belong to citizens, we have companies that can essentially commodify and commercialise data about emotion and turn it into a financial product. Now, as to whether that’s right or wrong, again, that’s for society to decide. But I think we need more of a public debate around this issue of non-identifying data about emotions, and what I referred to earlier in the conversation as intimate data. We do need more policy interest, and it needs to be evidence-based and informed by data. Personally, from the evidence I’ve collected over the last 5 years, certainly from a UK perspective, UK citizens are not keen, and with no huge variations in thinking across Europe, there is definitely a need to open these debates”.
To conclude, I asked what his main recommendations would be to policymakers concerning the issues and examples we’ve discussed today.
“They need to be aware that AI will reach intimate and sensitive dimensions of human life. Reaching into domestic, lived and experienced, intimate and sensitive dimensions of human life, I would like to see more acknowledgement of that. It’s not just about data protection, it’s not just about security, it’s not even just about privacy, it’s about the encroachment of highly powerful sensing technologies into more and richer qualitative dimensions of human life”.
I’d like to thank Prof McStay for his time and for providing such a rich insight into this field of emotional AI. Links to his publications and research projects can be found below, as well as other articles that add more depth to this discussion.
If you’d like to contribute to this blog series for the BDVe, you can contact me at joseph.hardwell#arroba#tno.nl
Bibliography and Useful Links
Van Dongen, Lisa and Timan, Tjerk (2017) “Your Smart Coffee Machine Knows What You Did Last Summer: A Legal Analysis of the Limitations of Traditional Privacy of the Home Under Dutch Law in the Era of Smart Technology”. SCRIPT-ed, Vol. 14, No. 2. Available at SSRN.
McStay, Andrew (2018) Emotional AI: The Rise of Empathic Media. London: Sage.
McStay, Andrew (2020) Emotional AI Publications
More information about the projects discussed during the interview
White Paper on Artificial Intelligence – A European Approach to Excellence and Trust