REVEALED: Facebook gets more than 16,000 reports of users posting revenge porn EVERY DAY


Facebook fields nearly half a million reports per month from users flagging revenge porn on its platform, says a new report.

According to NBC, the stream of reports has persisted despite increased efforts from Facebook to quickly identify and remove the images – nude pictures that have been uploaded without a person’s consent.

Among those reports, NBC says, are nude images posted as revenge against another person, as well as what’s known as ‘sextortion,’ in which nude images are posted to the site in an attempt to blackmail someone. 

Facebook removes around 500,000 images classified as revenge porn every month according to a new report  (Stock image)

To deal with the influx of revenge porn, Facebook employs a team of 25 people who, in tandem with an algorithm developed to identify nude images, help vet reports and take pictures down.

According to NBC, Facebook has worked to revamp its process of flagging harmful images since missteps from several years ago in which the company asked users to submit their nude photos to the platform preemptively.

In that pilot, Facebook attempted to use nude photos submitted to the platform as a means of training its artificial intelligence to identify and remove pictures if they should appear on the platform.

That program was met with a mostly negative response, with skeptics raising concerns that their pictures could be viewed by content reviewers working for Facebook.

Facebook has worked to combat the influx of toxic content on its platform using a mixture of artificial intelligence and content moderation (Stock image)

‘It sounds like it would be a helpful tool, but then you start thinking about sharing this photo with other people,’ Nicole Brzyski, a victim of revenge porn, told NBC. 

‘We are already so uncomfortable and traumatized. I never did it because I was so afraid these photos would somehow be leaked again. Who knows who is looking at them?’

Facebook says that it doesn’t catalog images given to the platform in picture form, but rather converts them to a type of digital fingerprint called a hash that it then uses to identify the image across its constellation of apps, including Instagram and WhatsApp.
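The idea of a ‘digital fingerprint’ can be illustrated with a few lines of code. The sketch below is purely illustrative, not Facebook’s actual system: it uses a cryptographic hash from Python’s standard library, whereas matching systems of the kind described would typically use a perceptual hash that still matches after an image is resized or re-encoded.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a short digital fingerprint of an image's raw bytes.

    Illustrative only: a cryptographic hash like SHA-256 changes
    completely if the image is cropped or re-compressed, so real
    photo-matching systems use perceptual hashes instead.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# A hypothetical image, represented here by placeholder bytes.
photo = b"\x89PNG...example image bytes..."
h = fingerprint(photo)

# Only this 64-character hex string would need to be stored and
# compared against newly uploaded images - not the photo itself.
print(h)
```

Because the fingerprint is a one-way function of the image, comparing hashes lets a platform recognize a known image without keeping a copy of the picture itself.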

Facebook uses that same system to weed out pictures and video of sexual abuse.

WHAT CONSTITUTES REVENGE PORN ON FACEBOOK?

Much of what constitutes revenge porn is covered under Facebook’s rules on nudity.

In March 2015, however, the social network brought in specific community guidelines to address the growing problem of revenge porn.

The section, entitled ‘Sexual Violence and Exploitation’, deals specifically with the subject.

The guidelines say: ‘We remove content that threatens or promotes sexual violence or exploitation. 

‘This includes the sexual exploitation of minors and sexual assault.

‘To protect victims and survivors, we also remove photographs or videos depicting incidents of sexual violence and images shared in revenge or without permission from the people in the images.

‘Our definition of sexual exploitation includes solicitation of sexual material, any sexual content involving minors, threats to share intimate images and offers of sexual services. 

‘Where appropriate, we refer this content to law enforcement.’