Instagram to warn users over ‘bullying’ language in captions

Instagram is to warn its users when they use language in their captions that may be perceived as offensive or bullying.

The social media company said it will use artificial intelligence to spot language in captions that could be deemed potentially harmful.

A similar feature, which alerts users when the comments they're leaving on other people's posts contain possibly harmful language, was launched earlier this year.

When an Instagram user posts a caption that could be seen as bullying, a message will appear on their screen informing them that their caption looks similar to others that have previously been reported on the platform.

They are then given the option to edit the caption, learn more about why it has been flagged by the feature or to post it as it is.

Earlier this year, the head of Instagram, Adam Mosseri, published a statement outlining the Facebook-owned firm's commitment to combating cyberbullying.

In the statement, Mosseri said the social media platform is “rethinking the whole experience of Instagram” to address the issue.

“We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves,” he said.

“It's our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”

Instagram has been criticized in the past for failing to take adequate measures to protect its users from online abuse.


In February, the social media company stated it was committed to removing all images related to self-harm on the platform.


Eight months later, Instagram announced plans to extend its ban on self-harm and suicide-related images to drawings, cartoons, and memes.
