Weaponized Information: What It Is, Where It Comes From, and How to Defend Against It
Did Tom Hanks really murder five people? Well, not really. That story is from the satirical newspaper The Onion, and Tom Hanks has notably not killed anyone. Most people likely recognize that this article isn’t true, and if they’re not sure, a quick search is all it takes to confirm his innocence.
Imagine, though, that instead of a single light-hearted humor article, someone who really didn’t like Tom Hanks was writing and spreading dozens of these articles, targeting groups of people who don’t know much about the actor. This would now qualify as a cognitive hack executed via “weaponized information.” In short, it’s an attempt to change the information in your brain in order to accomplish certain goals.
These tactics can influence, and have influenced, events in the real world, sometimes with deadly results. People have been taken in by hoaxes and propaganda for about as long as communication has existed, but precise audience targeting and near-instantaneous distribution are pretty new.
Cases like the QAnon conspiracy theory show how easily false information can be used to control people, and the fake news epidemics that came into the spotlight around the 2016 American presidential election are still very much a concern. It’s hardly a cause for panic since the vast majority of people aren’t typically fooled, but as it becomes easier to effectively target niche groups, it may be worth considering how we can use technology to get better “cognitive security.”
What is “weaponized information?”
Weaponized information is anything that checks one or more of the following boxes:
- Intentionally falsified or misleading
- Meant to influence opinions, behavior, or perceptions of truth
- Targeted toward and designed to mislead specific populations
- Could be classified as propaganda, fake news, satire, a conspiracy theory, etc.
- Spread primarily via social media and niche websites
- May involve automated sharing by botnets and fabricated comments/sources to increase apparent popularity and legitimacy
- May be created either to push a certain agenda or to play on existing divisions for monetary gain
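The botnet amplification mentioned above leaves a statistical fingerprint: genuinely viral stories tend to spread in a ragged pattern, while automated sharing often shows tight clusters of near-simultaneous posts. As a rough illustration (not a real detection system; the function name, timing window, and threshold are all assumptions of mine), a toy detector could flag URLs shared many times within a short window:

```python
from collections import defaultdict

def flag_coordinated_shares(shares, window=5.0, threshold=3):
    """Flag URLs that receive many shares within a short time window.

    `shares` is a list of (timestamp_seconds, account, url) tuples.
    This toy version counts raw shares; a real detector would also
    deduplicate accounts and model normal sharing rates per URL.
    """
    by_url = defaultdict(list)
    for ts, _account, url in shares:
        by_url[url].append(ts)

    flagged = []
    for url, times in by_url.items():
        times.sort()
        # Slide a window across the sorted timestamps.
        for i in range(len(times)):
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                flagged.append(url)
                break
    return flagged
```

A burst of three or more shares of the same link within five seconds gets flagged, while a slow trickle of shares does not.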
In a nutshell, think of weaponized information as advertising with no limits. Ads are targeted at certain groups and intended to change their opinions about products, but they generally stop short of outright lies. An ad like “cigarettes are good for you!” won’t fly, given that the opposite has been proven true. Conversely, weaponized information is free to use whatever data it wants about you and has no minimum truth requirement.
Whose weapon is this?
Sometimes there may actually be a single entity orchestrating campaigns of weaponized information. Evidence continues to emerge that Russia is one of the leading powers in the misinformation wars, and there are likely other entities that have weaponized information for political goals. Due to the pseudonymous nature of the industry, though, it’s hard to tell which campaigns are run primarily for political purposes. The fact that it can be so lucrative confuses the issue even more.
Fake news is a profitable business mostly because it acts as a magnifier for existing divisions in society. Things that people argue about get clicks, and because these things are usually politically charged, it becomes almost impossible to distinguish between information that is weaponized for political ends and information that is written somewhere in Macedonia for the advertising revenue. It’s like throwing dynamite into a lake to catch fish: to you, the dynamite is a money-making tool, but as far as the lake’s ecosystem is concerned, someone has attacked it.
How does information become “weaponized?”
Rand Waltzman of the RAND Corporation describes the process of weaponizing information this way:
- Break the target population down into communities or groups based on any criteria: political affiliation, hobbies, interests, etc.
- Identify the people in each community who would be most likely to believe false information
- Analyze the communities and figure out how they communicate
- Look for the narratives and stories that commonly pop up in the community’s conversation
- Design your own narrative that pushes your viewpoint, then insert it into the community through whatever media they tend to gather on
- Monitor the community and adjust your message depending on how people react
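The feedback loop in those steps can be sketched as a toy simulation. Everything here is an illustrative assumption of mine (the class, attribute names, and numbers model no real system): a population is segmented by interest, ranked by skepticism, and a message’s “intensity” is tuned round by round based on how the community reacts.

```python
from dataclasses import dataclass

@dataclass
class Member:
    interests: set    # hypothetical attributes, for illustration only
    skepticism: float # 0.0 (credulous) to 1.0 (skeptical)

def segment(population, topic):
    """Step 1: group the population around a shared interest."""
    return [m for m in population if topic in m.interests]

def rank_susceptible(community):
    """Step 2: sort members so the least skeptical come first."""
    return sorted(community, key=lambda m: m.skepticism)

def run_campaign(population, topic, rounds=3):
    """Steps 5-6: push a message, then tune it based on feedback."""
    community = rank_susceptible(segment(population, topic))
    intensity = 0.5
    for _ in range(rounds):
        # A member "engages" if the message's intensity beats their skepticism.
        engaged = sum(1 for m in community if intensity > m.skepticism)
        rate = engaged / len(community) if community else 0.0
        # Step 6: adjust the message depending on how the community reacted.
        intensity += 0.1 if rate < 0.5 else -0.05
    return round(intensity, 2)
```

The point of the sketch is the shape of the loop, not the numbers: segment, rank, publish, measure, adjust. The more behavioral data the operator has, the better each adjustment gets.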
This is generally what separates satire and clickbait from weaponized information. Regardless of intention, every producer of fabricated information needs a process to help them find and target susceptible communities. The more information they have on the behavior of their target groups, the more effective they can be, which is part of what makes data-harvesting scandals like Cambridge Analytica’s fairly concerning.
There are two main ways to combat the spread of weaponized information: either stop it from being widely distributed or educate individuals to recognize it. Companies like Facebook, Google, and Twitter have made efforts to remove falsified stories from their platforms, and other startups are also doing their part, but it’s a constantly evolving struggle, and there’s a fine line between quality control and censorship. Until it stops being profitable, either financially or politically, for information to be weaponized online, it’s unlikely to stop being produced.
Combined with efforts to sharpen readers’ discernment, though, these tools could cut down a lot on the spread and impact of weaponized information. There are already efforts at many different levels to increase people’s awareness of fake news, but this is also an arms race. One of the biggest advantages of online weaponized information is that it can adjust to its audience almost instantly. If people start checking sources, a quick redesign could easily make fabricated outlets appear more reliable.
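One small illustration of what “checking sources” can mean in practice is asking how many genuinely independent sites carry a story, since a cluster of look-alike domains echoing each other is weaker corroboration than unrelated outlets. This naive sketch is my own (the function name and the two-label domain heuristic are simplifications, and real tooling would use a proper public-suffix list):

```python
from urllib.parse import urlparse

def independent_sources(urls):
    """Count distinct registered domains behind a list of story URLs.

    Naive heuristic: treat the last two labels of the hostname
    ("example.com") as the registered domain, so subdomains of the
    same site don't count as separate corroboration.
    """
    domains = set()
    for url in urls:
        host = urlparse(url).netloc.lower()
        domains.add(".".join(host.split(".")[-2:]))
    return len(domains)
```

Here `news.example.com` and `example.com` collapse to one source, so three URLs might amount to only two independent domains.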
Weaponized information sounds scarier than it is
In the long run, the most powerful cognitive security asset that exists is the average human brain. Weaponized information sounds scary, but it’s not an irreversible mind virus. The most susceptible people tend to be the ones who probably held some less-than-factual beliefs already, and they’ve always existed in one form or another.