All the hubbub around fake news and the presidential election got me thinking about AI; about how we find, review, and share content (a long-term topic for me); and about how human trust and belief hinge on millennia-old social strategies. I've also read plenty of articles on how fake news has set off a round of navel gazing, soul searching, and finger pointing.
A BuzzFeed News analysis has identified the 50 fake news stories that attracted the most engagement on Facebook this year. Together they totaled 21.5 million likes, comments, and shares. Of these stories, 23 were about US politics, two were about women using their vaginas as murder weapons, and one was about a clown doll that actually was a person the whole time.
Human in the machine
I like to consider myself a relatively seasoned netizen, one who has developed a few habits to fend off spam, phishing links, and disreputable content on blogs and social media. With respect to fake news, I'm already a skeptic, questioning even news from legitimate sources, so I think what's between our ears as regular humans is a good start for getting savvy to fake news.
Indeed, Kyle Chayka, in The Verge, has a thorough article on the stylistic tells that make fake news stand out online. Alas, he also points out why those tells persist: partly because Google and Facebook homogenize the look of all news in our preferred mobile interfaces, and partly because fake news providers see no benefit in polishing their style. That would certainly change if better styling ever improved their returns.
When enormous, undiscerning platforms like the two tech giants hoover up content, they disguise it, no matter the source. It doesn’t have to be that way.
In that 'analytics between the ears' sort of spirit, Google and Facebook think they can solve this problem (an urgent one for them, since they are the primary vehicles for fake news and for its stylistic homogenization). Facebook has toyed with editorial boards and human moderators, and has expanded its objectionable-content process to cover fake news. Facebook also seems to be making a concerted effort to bring in existing fact checkers, label suspect content more visibly, and tweak its ad model to reduce click-bait incentives.
Facebook is inherently a human-based business, so it's good to see them including humans in the process of tackling fake news. Google, on the other hand, is the big SkyNet, the AI in the sky. Its take on fake news is better algorithms, not always with a decent outcome.
Ghost in the machine
There is a business model behind fake news and changes to the playing field will lead to changes in the look and feel of fake news, so long as the business model supports those changes. Therefore, we’re in for an arms race.
Fake news fighters are up against determined, smart individuals who will eventually use AI to stay ahead of anti-fake-news systems in a battle worthy of a Turing test. For example, altering images is an old technique, but what happens when AI can make convincing image (and audio and video) manipulation happen at a large and overwhelming scale?
Oh, you say, but AIs won’t be able to write the fake news itself.
Already, many legal notices, sports scores, and other semi-formatted content are algorithmically generated for online publication. Even AI novices can create a content generator. I've mentioned before that there's a whole field of computational literature, and that we are applying AI to creative endeavors we think are uniquely human. What happens when AIs create fake news that looks stylistically real, sounds real, and makes claims that seem real?
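To see how low the bar is, here is a toy sketch of the kind of generator a novice could write: a word-level Markov chain that stitches "new" sentences out of an existing corpus. This is my own illustrative example, not any system named in the articles above; the corpus and function names are invented for the demo.

```python
import random
from collections import defaultdict

def build_model(text, order=1):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Walk the chain from a random starting key, emitting up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    while len(out) < length:
        followers = model.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no word ever followed this key
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A hypothetical mini-corpus standing in for scraped headlines.
corpus = ("the senator said the report was false and the report "
          "said the senator was wrong about the false claims")
model = build_model(corpus, order=1)
print(generate(model, length=12, seed=42))
```

A real bad actor would feed this (or a far stronger neural model) a corpus of genuine news prose, which is exactly why generated text can inherit a legitimate-sounding style for free.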
Evil in the machine
Fake news isn’t going away any time soon. Hackers will take over legitimate channels to spread fake news. There are a ton of very big elections coming up in Europe, and fake news is already rearing its ugly head. And while Facebook and the German government are working hard on the legal, professional, and technical aspects of combating fake news before the elections, how do you counter sources of fake news outside the legal structure, outside your borders? How do we filter the signal from the noise, separating the good from the bad? We struggle with this filtering even when sources are named and content is brief.
Us in the machine
I truly believe that the social skills we use to be skeptical, weigh claims, and understand information extend well into the online world. The challenge is to take the cues and context we're accustomed to using IRL and map them online in a way that keeps them useful (I explored an aspect of this in my ramblings-on-noise posts almost 10 years ago).
The online world has scaled up our capacity to create content and communicate. What hasn’t scaled is our ability to grok it all in the way we’d do face to face. And the social ties that bind us, inform us, and provide context have been frayed or blurred online, making judgment calls even harder (just witness the echo chambers reinforcing themselves on Facebook).
To me, the breakdown we see with fake news is the gap between who we are as social beings and the tools we use online. The challenge is to treat fake news as an opportunity to reassess what we do online, how we continue constructing the layer of humanity that is the online world, and how we use that world as social beings endowed with certain unalienable social abilities.
Image by Christopher Dombres