Facebook Artificial Intelligence for Suicide Prevention

The social media giant announced it will put artificial intelligence to use to detect possible suicide attempts.

Zuckerberg, however, notes that the tool could also be used to detect harassment and hate speech.

The Facebook wall aims to become the world's psychologist. The team responsible for the Facebook social network (1.86 billion users at the end of 2016) announced on Monday that it will begin applying artificial intelligence research to detect depressed moods that could lead to suicide.

HOW DOES THE FACEBOOK AI WORK?

Facebook will scan comments, photographs, and videos, searching for patterns, phrases, and attitudes that suggest someone is about to take their own life. Its algorithms will then send an alert to the user offering help, while also warning company employees and, if someone appears to be in danger, notifying the police. From there, the case returns to analog life: checks and medical assistance if necessary.
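Facebook has not published the details of its system, but the scanning step described above, matching posts against phrases associated with risk and escalating matches for review, can be sketched in rough form. The phrase list and the flagging logic below are purely illustrative assumptions, not Facebook's actual criteria or code:

```python
import re

# Purely illustrative phrase patterns; Facebook's real model is not public
# and is far more sophisticated than simple keyword matching.
RISK_PATTERNS = [
    re.compile(r"can'?t take it anymore", re.IGNORECASE),
    re.compile(r"want to disappear", re.IGNORECASE),
    re.compile(r"goodbye forever", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if the post matches any risk pattern and should be
    escalated to human reviewers (the alert step described above)."""
    return any(p.search(text) for p in RISK_PATTERNS)

# A matching post is flagged; a neutral one is not.
print(flag_post("I can't take it anymore"))   # True
print(flag_post("Great concert last night"))  # False
```

A real system would, of course, involve trained classifiers over text, images, and video rather than a fixed phrase list, which is exactly why the false-positive and false-negative questions raised later in this article matter.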

An important piece of the puzzle is that these are machines, not people. Until now, Facebook has allowed users to report that one of their friends might be contemplating suicide; a team of reviewers hired by the social network then evaluates the report and raises an alert if applicable.

There have been cases of users who committed suicide while broadcasting on Facebook Live, the live video feature, while some so-called "friends" in their circle cheered or joked about it. This is precisely why the company had to rethink its policy. Now it will implement automatic filters for text, photos, and videos.

On Tuesday, Facebook founder and CEO Mark Zuckerberg said this is an important step forward in helping to prevent suicides.

TRAINING THE EMPLOYEES FOR SIMILAR ARTIFICIAL INTELLIGENCE PROJECTS

According to Guy Rosen, the company's vice president of product management, this anti-suicide campaign also means "dedicating more reviewers" to analyze reports and to improve how the company identifies first responders. In the US, Facebook has agreements with NGOs dedicated specifically to this issue.

In the official statement published through Facebook, Rosen also affirms that human reports submitted through the application will continue to take priority, with artificial intelligence acting as a complementary mechanism. He adds that the company's technicians have received specific training in detecting suicidal tendencies and self-harm.

The automatic tool will be rolled out in several countries, though not in the European Union. Although the company has not officially explained why, in a later interview with Reuters the omission was attributed to "different sensitivities" around privacy protection.

For mental health professionals, however, there is a detectable pattern. Beyond obvious messages like "I can't take it anymore," a person intending to commit suicide may show signs such as posting sad songs, displaying impulsive behavior including a certain promiscuity, or reacting to a recent failure.

OTHER USES OF THE FACEBOOK ARTIFICIAL INTELLIGENCE

After announcing the new tool, Zuckerberg went further, openly stating that in the future Facebook's artificial intelligence will be able to better interpret the subtle nuances of language and to identify issues beyond suicide, such as different types of harassment, racism, and hate speech.

So far, Facebook allows users to report content they believe violates its rules: harassing someone, publishing nudity or sexual acts, spamming, making unauthorized sales, or proclaiming hatred toward any group. Terrorism and pedophilia are also among those violations.

Facebook has been hiring people to censor content, not without controversy. Now, with the help of Facebook's artificial intelligence, that censorship will be applied by a machine. This raises ethical dilemmas:

  • What right does someone who is not a professional have to diagnose a depressive state?
  • Who decides whom to notify?
  • What actions should be taken?
  • How will that information be used later?
  • What if the person is a family member or a friend?
  • If the system generates false positives or false negatives, is Facebook to blame?

Where is the limit on using personal information from social networks, at a time when companies are hiring based on algorithms like Facebook's artificial intelligence? It is a dangerous tool, because today it will be used against suicide or terrorism, and tomorrow to know how I wake up and to manipulate my intentions.
