Facebook’s recent technical glitch reveals how it uses AI to tag images

On 3 July, the core Facebook app, along with WhatsApp and Instagram, suffered a major outage that impacted millions of users across the globe. Earlier today, the company announced that the problem has been fixed. However, during the outage, some users got a glimpse into how exactly Facebook uses AI to tag our images.

We already know that Facebook uses machine learning to recognise and tag images on the platform. But what we don’t know is how AI categorises or defines our photos.


Due to the glitch, when users looked at uploaded images they saw, instead of, say, a picture from a family vacation, a label such as "Image may contain: people smiling, people dancing, wedding and indoor". That label is machine learning at work.
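The exposed labels suggest a multi-label classifier whose confident predictions are joined into an "Image may contain" string. Facebook has not published how this step works, so the function below is only an illustrative sketch: the label names, the confidence threshold, and the phrasing are assumptions, not Facebook's actual values.

```python
def generate_alt_text(label_scores, threshold=0.8):
    """Format high-confidence classifier labels into alt text.

    label_scores: dict mapping label -> model confidence in [0, 1].
    The threshold and output phrasing are illustrative assumptions,
    not Facebook's real pipeline.
    """
    # Keep only labels the (hypothetical) model is confident about,
    # ordered from most to least confident.
    tags = [label for label, score in
            sorted(label_scores.items(), key=lambda kv: -kv[1])
            if score >= threshold]
    if not tags:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(tags)

# Hypothetical scores for one uploaded photo.
scores = {"people smiling": 0.97, "people dancing": 0.91,
          "wedding": 0.88, "indoor": 0.85, "ocean": 0.12}
print(generate_alt_text(scores))
# Image may contain: people smiling, people dancing, wedding, indoor
```

The low-confidence "ocean" label is dropped, which matches the behaviour users saw: only a handful of high-confidence tags per image.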

But what’s the big deal about it?

True, we have known for years that Facebook uses AI to tag our images, but did you know it has gotten this good? And now that you have seen its accuracy, imagine the potential of using this data to target ads. How would that work? Say you often post pictures with dogs. Facebook's AI could infer that you either have a dog or want one. You are now a great target for pet adoption ads, pet food ads, pet grooming product ads, and so on.
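The inference described above amounts to counting how often a tag recurs across a user's uploads. There is no confirmation Facebook does this, so the following is a minimal sketch of the idea under that assumption; the tag names and the frequency threshold are made up for illustration.

```python
from collections import Counter

def infer_interests(photo_tags, min_count=3):
    """Flag tags that recur across a user's photos as probable interests.

    photo_tags: list of tag lists, one per uploaded photo.
    min_count: how many occurrences count as a pattern -- an
    arbitrary illustrative threshold, not a known Facebook value.
    """
    counts = Counter(tag for tags in photo_tags for tag in tags)
    return [tag for tag, n in counts.most_common() if n >= min_count]

# Hypothetical tags produced for four uploads by one user.
uploads = [["dog", "outdoor"],
           ["dog", "park"],
           ["dog", "people smiling"],
           ["cat", "indoor"]]
print(infer_interests(uploads))  # ['dog']
```

Three dog photos cross the threshold, so "dog" becomes a targetable signal, exactly the kind of profile the article speculates advertisers could use.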


This is just one example; the information contained in photos can be used in many other ways. There is no confirmation that Facebook actually does this, but if it does, users have no idea how much information they are sharing on social media every single day through the images they post.
