Social media have changed our lives… Today all of us can be “stalkers”. That’s right! It may seem an exaggeration, but in one click we can find out a person’s age, school, and hobbies, anything we want to know; we can contact someone we haven’t heard from in a long time in less than a second, or openly express what we think without anyone forbidding it… In short, we hold in our hands a powerful weapon for spreading the flow of information!

According to a scientific study, teenagers cannot live without a phone connected to the internet: for them, it is like living without water or food, or living in the dark! It may seem ridiculous, but it is true… with a small box, we build our identity!

The site https://today.yougov.com/topics/lifestyle/articles-reports/2019/10/25/teens-social-media-use-online-survey-poll-youth reports estimates of the social media platforms most used by teenage girls and boys:

Fortunately, there are now Community Standards which describe what is allowed and what is prohibited on Facebook.

These regulations are based on community feedback and expert advice in areas such as technology, public safety and human rights.

The Community Standards were written to ensure that everyone can express themselves, and Facebook pays close attention to creating regulations that include different views and opinions, especially those of people and communities who might otherwise be overlooked or marginalized.

At this point, a question arises… why are they so important? That is why I decided to ask Facebook. The company replied:

“It is of primary importance to us to make sure that groups continue to be a safe place to make meaningful connections. Groups created to encourage hate speech or intimidation have no place on Facebook.

It is important that groups continue to be safe places for people to connect. Groups are proactively monitored for hate speech and incitement to violence through a combination of cutting-edge technology and human analysis. If we find such behavior in a group, the group is removed and, if necessary, we notify the police.

Posts that violate Community Standards regarding hate speech, for example, are removed, as are groups that continually violate those standards. This enforcement policy ensures that when we analyze a group to decide whether or not to remove it, we review the content of administrators and moderators for any violations, including posts by members they have approved.

For members who repeatedly post infringing content, we may require admins to approve all of their future posts before showing them in the group.”

There is therefore a need to protect individuals while still guaranteeing a degree of freedom of speech and expression, underlining the fine line between freedom of expression and hate speech.

But how safe are we really on social media? And in the case of harassment, how can we defend ourselves?

Social media are undoubtedly a space open to everyone, in which anyone can do and say what they want, but the lack of specific regulation long made them pure anarchy! So, on social media, even though very precise rules have now been established, prudence is never too much!

What anti-abuse measures and systems to report abuse have they introduced recently?

Currently, social media firms including Facebook, YouTube and Twitter operate as hosts, rather than publishers, in the UK. As such, they are exempt from being held legally liable for user-generated content if they act to remove content when notified.

And what about ways to remove offensive content?

Facebook and others are obligated to remove illegal content when notified of its presence on their platforms: a user reports a piece of content, a human reviews it, and it is removed if it violates community standards.

How to block/report abusive users?

Most social media platforms rely on the ability to block or mute individuals, filter out certain phrases or keywords, and report the content and account for harassment. Twitter has anti-abuse filters that block notifications from certain types of accounts that are not verified with a phone number or email address and temporarily freezes accounts where its machine learning systems detect signals of abuse. Public figures often get stronger tools than the average individual, with more advanced filtration systems. Twitter’s “quality filter”, available only for public-figure “verified” accounts, is a more aggressive version of the anti-abuse filters, for instance.

How to flag any abusive content?

Social media companies can report incidents directly to the police, but most harassment is left to the victim to report. Some companies make that easier than others: Twitter will provide a summary email of links to messages that can be forwarded to police, while Facebook has no such system in place. Prosecution for harassment can result in up to six months’ imprisonment and a fine, and threats to kill carry a possible sentence of 10 years’ imprisonment, but attribution is difficult. Social media platforms can be used anonymously or with fake profiles, with little in the way of verification. At the same time, harassment from other jurisdictions makes prosecution of offenders difficult.

As you have read, a system that limits harassment as much as possible is still far away… For this reason I make an appeal: always remember that our freedom ends where that of others begins! Hearing on the news, or even worse reading, racist, sexist, homophobic comments, or any other kind of comment that offends people simply for who they are, is repugnant!

Alberta Elia,

4^G