“I expect to see a constant arms race in the years to come,” Guillermo Suárez-Tangil predicted. The Research Assistant Professor at the Spanish research institute IMDEA Networks was talking to 6GWorld about his work on methods to reduce the impact of hate speech online.
While this may seem at best tangential to 6G for purists in disciplines such as radio engineering, “Digital Trust” is an element that governments worldwide recognise will be necessary for future network and service adoption. Unless people are convinced that 6G is a positive or, at worst, neutral force, we will see continued pushback of the kind that has faced 5G: cell tower burnings, false panics and more. This trust includes data management and security, but also helping people feel safe online – which is as much an exercise in psychology as in engineering.
“The minute I understand that maybe a detection system like Twitter’s has flagged my tweets because I’ve used certain language, I transition to something else,” Suárez-Tangil explained. “I start using slang, or I start using a term that may not be picked up by the detection system, or may not be understood by a broader community and that is only understood by a smaller set of users.”
We have seen many examples of this behaviour online – for instance, referring to cannabis as “420” in places where its use is banned, or ‘dog whistle’ terms designed to be understood by a particular audience while going unnoticed by everyone else. These are particularly problematic, according to Suárez-Tangil.
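To see why this arms race defeats static approaches, consider a toy sketch – ours, not Suárez-Tangil’s system, with a hypothetical blocklist – of a keyword filter confronted with coded slang:

```python
# Minimal illustration: once a term is filtered, communities switch to
# coded substitutes the filter has never seen.
BLOCKLIST = {"cannabis", "marijuana"}  # hypothetical blocked terms

def naive_filter(post: str) -> bool:
    """Return True if the post contains a blocklisted term."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

print(naive_filter("selling cannabis tonight"))     # True  - caught
print(naive_filter("got the 420 special tonight"))  # False - coded term slips through
```

This is exactly the gap that data-driven monitoring aims to close.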
“We’ve had cases where you see images that look really funny and in reality they have a hateful meaning behind them. Politicians have sometimes accidentally shared images that have a different meaning,” he explained ruefully.
“I aim at doing systematic detection, trying to use data-driven approaches to understand better what is happening, using automated methods to detect hate speech.”
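As a rough illustration of what “data-driven” means here – a bare-bones sketch using scikit-learn on toy data, not the models his group actually deploys – a classifier learns hateful patterns from labelled examples rather than from a fixed word list:

```python
# Illustrative only: TF-IDF features + logistic regression on a toy corpus.
# Real research systems train on thousands of annotated posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["you people are vermin", "great match last night",
         "that group should disappear", "lovely weather today"]
labels = [1, 0, 1, 0]  # 1 = hateful, 0 = benign (toy annotations)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["those people are vermin"]))  # likely [1]
```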
Hate Speech and Censorship
This might sound disturbingly similar to reports of censorship, and pressure to self-censor online, in China. Many people hear this and worry that it will impact their freedom of speech. However, there is also a legitimate need for people to feel safe expressing themselves – and, as Twitter has experienced lately, current methods of content moderation are proving problematic.
One of the problems lies in the complexity of language and intent – and even in the concept of what constitutes hate speech. Criticising a person for their policies or actions should be part of free discourse, but the Encyclopaedia Britannica defines hate speech as denigration of a person based on their participation, or perceived participation, in a particular group.
“We have tried to… understand if you’re trying to use hateful words against an individual or against the broader community,” Suárez-Tangil explained. “That is itself quite challenging, because much of the [machine learning] technology being used nowadays is still in its infancy and makes a lot of mistakes. So it might take some articles and believe that the hatred is directed towards a general population, when in fact it is talking about a specific politician.”
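One crude way to picture the individual-versus-group distinction – purely a hedged sketch on our part, far simpler than the models described here – is to lean on off-the-shelf named-entity recognition, where spaCy’s PERSON label hints at an individual target and NORP (nationalities, religious or political groups) hints at a group:

```python
# Hypothetical heuristic using spaCy NER.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def guess_target(text: str) -> str:
    """Very rough guess at whether criticism targets a person or a group."""
    labels = {ent.label_ for ent in nlp(text).ents}
    if "NORP" in labels:
        return "group"
    if "PERSON" in labels:
        return "individual"
    return "unknown"

print(guess_target("Senator Smith is a disgrace"))   # likely 'individual'
print(guess_target("All Canadians are a disgrace"))  # likely 'group'
```

Conflating those two cases is precisely the kind of mistake Suárez-Tangil describes.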
Ironically, many of the machine learning techniques used to follow and moderate the occurrence or impact of hate speech rely on a balancing act. Whereas outright censorship would aim to eliminate references to, for example, Winnie the Pooh or a particular date, moderating hate speech in the ‘arms race’ of changing language actually depends upon the continued existence of groups exercising hate speech.
“We try to understand what is happening by looking at certain communities that we know for a fact are hateful, and that gives us an idea of their usage of language,” said Suárez-Tangil.
“We can learn the most when we apply some qualitative analysis on top of the data. When we look at something only quantitatively instead of qualitatively, we really struggle to understand how their language is changing. And at the end of the day, the best thing you can do is to keep monitoring these platforms.”
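The monitoring itself can start simply. As a hedged sketch – invented data, not IMDEA’s pipeline – comparing term frequencies in a tracked community across two time windows surfaces newly adopted slang for an analyst to review qualitatively:

```python
# Toy drift detector: flag terms common now but rare before.
from collections import Counter

def emerging_terms(old_posts, new_posts, min_count=3, ratio=3.0):
    """Return terms frequent in the new window but rare in the old one."""
    old = Counter(w for p in old_posts for w in p.lower().split())
    new = Counter(w for p in new_posts for w in p.lower().split())
    return sorted(w for w, c in new.items()
                  if c >= min_count and c / (old[w] + 1) >= ratio)

old_window = ["the usual talk here", "more of the usual talk"]
new_window = ["glorp rally tonight", "join the glorp crowd",
              "glorp glorp everywhere"]  # 'glorp' is an invented stand-in term

print(emerging_terms(old_window, new_window))  # ['glorp']
```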
A criticism from some quarters is that “snowflakes” will want to “cancel” anything they consider offensive, whether it is fair criticism, hate speech, or just a rude personal comment. Understanding how communities use hate speech among themselves can help algorithms identify where complaints of hate speech in other settings are more or less justified.
“It is a bit tricky looking at the kind of complaints that you get from users when it comes to hate speech. Trusting inputs from users that you don’t know is problematic because that can give you a lot of mixed signals and a lot of noise,” said Suárez-Tangil.
Handing Control to AI?
The use of automation and systematic detection sounds like essentially handing responsibility over to AI – for example, predicting what is hate speech and suppressing it automatically. 6GWorld asked Suárez-Tangil about this, and he was keen to emphasise that this is not the case.
“Trying to create early detection mechanisms that would prevent the user from posting something into a social network – or even malware from infecting others – is a very interesting approach,” he mused, “which reminds me of a paper we did a couple of years ago. That was precisely trying to detect in advance – before you even posted a video on YouTube – whether a video could contain dog-whistles, or could be about something controversial that we have seen in the past has been a subject of hate attacks.”
The process needs a human somewhere in the loop to exercise judgment. Any pointers from machine learning-powered decision support can, however, be valuable in getting ahead of issues.
“If, at upload time, you already know which videos could eventually receive hate speech, you can invest more effort in moderation, and it gives you a way to prioritise what content you need to vet a bit more carefully.”
Given all the challenges with context, language, intent, and personal insults versus hate speech, it sounds like an almost insuperable task. Yet, according to Suárez-Tangil: “We were able to model the types of videos that typically would be attacked by trolls posting hateful comments. It appears that some of these communities really focus on certain types of victims – that they are triggered by the same set of things, and that those things can be predicted.”
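In practice, that kind of prediction feeds a triage queue. A hypothetical sketch – invented features and scoring; the paper’s actual model is not reproduced here – of how upload-time risk scores could order a moderation backlog:

```python
# Stand-in risk model: rank uploads so human moderators vet the riskiest first.
def risk_score(video: dict) -> float:
    """Hypothetical score for how likely a video is to attract hateful raids."""
    score = 0.5 if video.get("topic") in {"politics", "religion"} else 0.0
    score += 0.1 * len(video.get("controversial_tags", []))
    return min(score, 1.0)

uploads = [
    {"id": "v1", "topic": "cooking", "controversial_tags": []},
    {"id": "v2", "topic": "politics", "controversial_tags": ["election", "protest"]},
]

queue = sorted(uploads, key=risk_score, reverse=True)
print([v["id"] for v in queue])  # ['v2', 'v1'] - v2 gets vetted first
```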
Can Learning About Hate Speech Prevent Malware?
Hate speech is a relatively recent focus for Suárez-Tangil. Before this, he worked on preventing people from accidentally downloading malware, particularly given the spread of new kinds of devices thanks to the IoT.
“Sometimes the countermeasures are radically different, but they have something in common, which is that there is this arms race I was mentioning before,” Suárez-Tangil observed. “Humans are the weakest link in this whole process. And criminal communities, when it comes to malware, realised this a long time ago. Social engineering is a big, big problem.”
Happily, the developers of malware and hacks are people too, with their own communities and forums where they sell products, advertise and teach.
“By analysing these underground communities, I can try to put myself ahead of the game and understand what the new ransomware will be, what the next WannaCry will be,” Suárez-Tangil enthused. There is a common underlying challenge when it comes to hate speech, hackers’ forums and malware itself.
“The ecosystem is constantly evolving. Hate speech changes as people start using slang to evade systems, but malware also continues to change and mutate to evade detection systems.
“I’m hoping I can understand how to systematically learn and adapt to those changes, so that we can react on time and can even anticipate what is going to happen in the future.”