
The offensive blanket of free speech online


By Dr Loretta Trickett

The internet has become the 'tail that wags the dog'; it's time for Google, Microsoft, and the social media giants to clean up the mess, writes Dr Loretta Trickett

Several MPs recently condemned Google for its refusal to ban an anti-Semitic video posted on YouTube. During the parliamentary inquiry into hate crime on social media it was argued that the video did not 'breach Google's guidelines'. While acknowledging the video was 'anti-Semitic, deeply offensive, and shocking', Google's vice president, Peter Barron, insisted: 'It doesn't meet the test for removing under our guidelines.'

Similarly, Nick Pickles, Twitter's senior public policy manager, recently commented on a Twitter user who wanted to 'deport all Muslims': 'We reviewed that particular tweet and that particular image and found it wasn't in breach of our hateful content policy.' He added that the 'context of tweets was crucial': whether an image had been targeted at a particular person or 'simply posted'.

Arguably, if something about a social group is posted online, it can be said to be targeted at that group. In any event, material does not need to target a specific person to constitute hate speech.

Facebook also came under fire after the BBC used the site's 'report button' to flag 100 indecent images, only for the social media giant to respond with an automated message saying the photos did not breach its 'community standards'. The images, of which 82 were not removed, included under-16s in sexual poses, pages aimed at paedophiles, and an image apparently taken from a child abuse video.

The government has also challenged Google over government and media adverts being placed next to extremist material.

It is time the so-called 'self-regulation' of social media content by internet providers was put to legal challenge. For too long, they have hidden beneath the 'blanket of free speech' and 'access to information'. This has created a perception that activity which would be labelled as crime on the street or in another public place is not criminal if it occurs in cyberspace. Indeed, many social media videos and posts which constitute crimes are deemed simply 'offensive' and worthy of free speech protections.

It appears the internet is a place where freedom of speech is the more pressing priority and the idea of regulation is abhorrent. This is despite free speech being a qualified and not an absolute right; one capable of being curtailed if a threat to public safety is identified.

These 'internet myths' have allowed death threats, hate crimes, hate speech, child pornography, and incitement to terrorism to flourish because the prevailing view is that nothing can or should be done. The internet has become the 'tail that wags the dog'.

There is evidence of repeated denials from social media companies about what actually constitutes a hate crime or hate speech. There have been numerous examples, however, of racial, religious, disablist, sexual, and gendered hostility being directed at individuals on social media, many of which have placed them in fear of violence.

Hate speech, defined as an expression of hatred towards another person or group of people conveyed in writing, speech, or any other form of communication, would cover many of the posts and videos that have come under heavy criticism. It carries a maximum prison sentence of seven years, and fines can also be imposed.

The Convention on Cybercrime, an international treaty, came into force in 2004 to unite nations and harmonise laws against internet crime such as child pornography, terrorism, and internet fraud. Yet one recurring problem has been anonymity online. Those producing hate speech can hide behind an online shroud and can easily shut down a website or social media account.

Crimes are being committed on social media platforms daily. Internet providers have a social duty to do something about it. In December 2016, Google, Facebook, Twitter, and Microsoft pledged to work together to develop a shared database of digital fingerprints known as 'hashes' to identify images and videos that promote terrorism. Once one company identifies and removes content, the others would be able to use the hashes to do the same on their platforms. Surely these can be used to identify other examples of hate crimes, hate speech, and child pornography.
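The mechanism is straightforward to sketch. The example below is a minimal, hypothetical illustration in Python, assuming plain SHA-256 hashes of uploaded files and an in-memory set standing in for the shared database; the companies' actual system is understood to use more robust media fingerprints, but the principle of exchanging fingerprints rather than the content itself is the same.

```python
import hashlib

# Minimal, hypothetical sketch of the shared "hash database" idea,
# assuming plain SHA-256 file hashes and an in-memory set standing in
# for the cross-company database. The real system is understood to use
# more robust media fingerprints, but the sharing principle is the same.

shared_hash_database = set()  # hashes of content already removed by any participant


def fingerprint(file_bytes: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()


def flag_and_share(file_bytes: bytes) -> None:
    """Called when one platform removes content: publish its hash for the others."""
    shared_hash_database.add(fingerprint(file_bytes))


def is_known_prohibited(file_bytes: bytes) -> bool:
    """Called on upload by every platform: match against previously flagged content."""
    return fingerprint(file_bytes) in shared_hash_database
```

On this model, extending the database from terrorist propaganda to other categories of unlawful content would be a policy decision rather than a technical obstacle.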

However, if internet providers fail to remove such 'criminal' material, they can be held liable as accessories to a range of crimes. Under the law on accessories, a secondary party does not need to be aware of all the details of the offence; knowledge of the type of crime is sufficient. This means that where an internet provider is made aware of a criminal posting, it could be prosecuted as an accessory for failing to remove the material, even if the perpetrator cannot be found.

The law must be used to address the current vacuum of responsibility in virtual space. It is time to clean up the internet.

Dr Loretta Trickett is senior lecturer in criminal law and criminology at Nottingham Law School, part of Nottingham Trent University @LawNLS