Facebook’s recent technical glitch reveals how it uses AI to tag images
On 3 July, the core Facebook app, along with WhatsApp and Instagram, suffered a major technical glitch that affected millions of users across the globe. Earlier today, the company announced that the problem had been fixed. However, during the outage, some users got a little peek into how exactly Facebook uses AI to tag our images.
We already know that Facebook uses machine learning to recognise and tag images on the platform. But what we didn't know, until now, was how the AI categorises or describes our photos.
— Danielle Abril (@DanielleDigest) July 3, 2019
Due to the glitch, when users looked at uploaded images, instead of seeing, for instance, a picture from a family vacation, they would see text such as "Image may contain: people smiling, people dancing, wedding and indoor". That text is Facebook's machine learning at work.
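To make the idea concrete, here is a minimal sketch of how a caption like that could be assembled from a classifier's output. Everything here is an assumption for illustration: the label names, the confidence scores and the threshold are invented, and Facebook's actual system is a deep neural network, not this toy function.

```python
# Simplified sketch: turn hypothetical classifier confidences into an
# "Image may contain" caption. All scores and labels below are invented.

CONFIDENCE_THRESHOLD = 0.8  # keep only labels the model is fairly sure about

def caption_from_scores(label_scores):
    """Build an 'Image may contain' string from {label: confidence} pairs."""
    kept = [label for label, score in label_scores.items()
            if score >= CONFIDENCE_THRESHOLD]
    if not kept:
        return "Image may contain: no description available"
    return "Image may contain: " + ", ".join(kept)

# Hypothetical classifier output for one vacation photo:
scores = {
    "people smiling": 0.97,
    "people dancing": 0.91,
    "wedding": 0.88,
    "indoor": 0.95,
    "beach": 0.12,  # below threshold, so it is dropped
}

print(caption_from_scores(scores))
# → Image may contain: people smiling, people dancing, wedding, indoor
```

The thresholding step is why captions list only a handful of confident labels rather than everything the model considered.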
Oh yeah! I forgot Facebook uses machine learning to tag our photos with what it sees in the picture.
To be fair, “one person, beard” is pretty much a spot-on description of me. pic.twitter.com/fCpydUxtpz
— Zack Whittaker (@zackwhittaker) July 3, 2019
— Josh Fruhlinger (@jfruh) July 3, 2019
But what’s the big deal about it?
True, we have known for years that Facebook uses AI to tag our images, but did you realise how accurate it has become? And now that you can see that accuracy, imagine the potential of using this data to target ads. How would that work? Say you post pictures with dogs very often; Facebook's AI could infer that you either have a dog or want one. You are now a great target for animal adoption ads, pet food ads, pet grooming product ads and so on.
This is just one example; the information in uploaded photos could be used in several ways. There is no confirmation that Facebook actually does this, but if it does, users have no idea how much information they are sharing on social media every single day via the images they post.
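The dog-photo scenario above can be sketched in a few lines: count how often a tag appears across a user's photos and flag frequent tags as likely interests. This is purely illustrative; the photo data, the tags and the threshold are all invented, and nothing here is a claim about how Facebook's ad systems actually work.

```python
# Hypothetical sketch of interest inference from photo tags.
# All data and the threshold are invented for illustration.
from collections import Counter

INTEREST_THRESHOLD = 3  # a tag must appear in at least this many photos

def infer_interests(photo_tags):
    """photo_tags: one list of tags per uploaded photo."""
    counts = Counter(tag for tags in photo_tags for tag in tags)
    return {tag for tag, n in counts.items() if n >= INTEREST_THRESHOLD}

photos = [
    ["dog", "outdoor", "people smiling"],
    ["dog", "grass"],
    ["dog", "indoor"],
    ["food", "indoor"],
]

print(infer_interests(photos))
# → {'dog'}  — this user looks like a good target for pet-related ads
```

Even this crude frequency count surfaces "dog" as an interest, which hints at how much a real system, aggregating thousands of photos per user, could infer.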
— Troop4Christ (@Troop4Christ) July 3, 2019