In my article on big data mining, I described what fuels a lot of big tech’s AI algorithms (and what smaller tech firms generally don’t rely on), and I made the case that heavily crowdsourced AI is subject to exploitation and manipulation.
If you tell a neural network that 2 + 2 = 5 enough times, sure enough it will start regurgitating that nonsensical answer, because it does not check answers against first principles; it relies instead on argumentum ad populum.
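To make the mechanism concrete, here is a minimal sketch (not a real neural network, and all names are my own invention) of a “model” that answers questions purely by majority vote over its crowd-supplied training data, with no first-principles check:

```python
from collections import Counter

class CrowdsourcedModel:
    """Toy model that memorizes crowd answers and repeats the most popular one."""

    def __init__(self):
        # question -> tally of every answer the crowd has supplied
        self.answers = {}

    def train(self, question, answer):
        self.answers.setdefault(question, Counter())[answer] += 1

    def predict(self, question):
        # Argumentum ad populum: return whichever answer appeared most often,
        # with no check against arithmetic first principles.
        return self.answers[question].most_common(1)[0][0]

model = CrowdsourcedModel()
model.train("2 + 2", 4)        # one honest label...
for _ in range(10):
    model.train("2 + 2", 5)    # ...drowned out by ten bad ones

print(model.predict("2 + 2"))  # prints 5, not 4
```

A real neural network averages over its training signal in a far more complicated way, but the failure mode is the same: flood the training data with a wrong answer and the wrong answer wins.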
Case in point: Facebook’s awful AI for scoping out “fake news” (source):
A satirical article from the Babylon Bee was flagged by Facebook’s AI as “fake news”. In other words, the AI is completely incapable of parsing sarcasm and humour. That is understandable, given the subjective and complex nature of parsing context in conversation, a skill many humans frequently fail at themselves when trying to detect another person’s sarcasm. We shouldn’t expect AI algorithms to figure it out from first principles either. AI may be taught to interpret sarcasm with the current “shortcut” methods of crowdsourced input (e.g. laugh when you see X, Y and Z), but what we should then expect from those methods is a magnification of the human incapability of detecting sarcasm.
In other words, instead of the bogus equation 2 + 2 = 5, the AI is being taught the linguistic equation words X + Y + Z = not sarcasm, hence fake news.
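That “linguistic equation” can be sketched as a toy keyword classifier (the headlines, labels, and function names below are invented for illustration; real systems are statistical, but they learn surface correlations in much the same spirit):

```python
def tokenize(text):
    """Reduce a headline to its bag of lowercase words."""
    return set(text.lower().split())

# Words that have co-occurred with a "fake" label during training.
fake_keywords = set()

def train(headline, label):
    if label == "fake":
        fake_keywords.update(tokenize(headline))

def classify(headline):
    # "Words X + Y + Z = fake news": match on surface words, not meaning,
    # with no notion of satire, sarcasm, or intent.
    hits = tokenize(headline) & fake_keywords
    return "fake" if len(hits) >= 2 else "ok"

# A labeler flags a satirical headline as fake news...
train("senator spotted riding dinosaur to capitol", "fake")

# ...and now unrelated headlines sharing those words are guilty by keyword.
print(classify("new dinosaur fossil found near capitol"))  # prints fake
print(classify("local bakery wins award"))                 # prints ok
```

The classifier never models humour at all; it only amplifies whatever its labelers decided, which is exactly the shortcut failure described above.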
Snopes.com taught the AI to flag articles like the one above as “fake news”, a direct consequence of the AI’s failure to detect sarcasm. That same AI is now simultaneously demonetizing and de-platforming what it thinks are purveyors of “fake news”, with collateral damage to anyone who dares oppose the allegedly objective, all-knowing AI overlords.
Humour is not universal: some people find a thing funny, while others find that same thing abhorrent. Sites like Snopes.com hold subjective opinions on certain topics while proclaiming them as truths. These subjective realities are being fed into AI algorithms, and you get what you see today: subjectivity magnified, because computers don’t form their own opinions; they only follow the rules they have been taught, objectively and very rigidly. They are taught to be objectively subjective, as paradoxical as that sounds, amplifying the opinions of their teachers.
Sadly, the teachers at big data tech firms include heavily biased outlets like Snopes.com, the SPLC, and many other groups under the influence of postmodern neo-Marxist doctrine. AI based on big data should scare us not because it may develop a mind of its own and take over the world, but because it enables its teachers and developers to discreetly disseminate subjective opinions under the guise of objective truths, and to hide behind that veneer of objectivity by letting the AI do their censorious dirty work.
* * *
If you enjoyed this article, be sure to share, leave a comment and subscribe to the RSS feed.