Yesterday’s article pointed out that AI is far from having a mind of its own, and that fears of robot domination are irrational and unfounded.
However, that doesn’t mean AI’s role in today’s world is diminished. In fact, a dangerous precedent is set when AI is used to abet nefarious human behaviour.
When not used in a nefarious manner, AI provides a leap forward in human development. Take the chess world, for example. One shouldn’t see Garry Kasparov’s defeat by Deep Blue as a tragic turning point for humankind. Rather, when chess players team up with rapidly calculating AI, the two together can advance the theory and knowledge of a subject once restricted by the human limitation of time. That is, some theories once too time-intensive to prove are now solvable with the assistance of AI.
What you see today in the chess world is current grandmasters such as Magnus Carlsen rapidly advancing the state of chess, using chess engines as assistants to evaluate new and different theories. That mixture of rapid calculation and human ingenuity is a powerful combination that has vaulted players like Carlsen past all previous grandmasters in the history of chess. It’s not always the engine-like moves, but the mix of creative and atypical moves that throw off opponents, moves an AI wouldn’t necessarily come up with on its own, that gives the top grandmasters an advantage. Most recently, Vladimir Kramnik’s game against Levon Aronian at the 2018 Candidates tournament showed how new theories mixed with AI study can be a lethal combination.
On the other hand, when AI is used to advance an agenda that is clearly destructive in nature, we have to fear its misuse. Take, for example, the set of censorship algorithms employed by Google and Facebook. We know that, left on its own, AI is terrible at parsing context and nuance, but excellent at performing a high volume of calculations accurately and at high speed. The alarming number of false positives from YouTube’s demonetization algorithm, and the program’s “shoot first, ask questions later” mentality, show that too much trust is put in AI, perhaps intentionally so by its backers.
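To see why context-blind moderation misfires, here is a minimal sketch, purely hypothetical and nothing like YouTube’s actual system, of a keyword-based filter. It matches words at high speed but has no notion of the context those words appear in, so benign titles get flagged right alongside genuinely violent content.

```python
# Hypothetical keyword filter: fast, consistent, and blind to context.
BLOCKLIST = {"attack", "shooting", "violence"}

def naive_flag(title: str) -> bool:
    """Flag a video title if it contains any blocklisted word."""
    words = set(title.lower().split())
    return bool(words & BLOCKLIST)

# A documentary and a sports tutorial are flagged exactly like violent content:
print(naive_flag("Documentary: surviving a shooting"))    # True (false positive)
print(naive_flag("Basketball shooting drills for kids"))  # True (false positive)
print(naive_flag("Cute cat compilation"))                 # False
```

The filter never asks *why* a word appears, which is precisely the context-and-nuance gap described above.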
The credit given to AI for being “smarter” than humans gives backers with a bad agenda a scapegoat: in pursuit of a perfect algorithm, they can merely state that there will be some casualties along the way, but that eventually the AI will be flawless and incapable of mistakes.
Apart from giving the backers a free pass on their own bad decisions, an AI built on big data is only as smart as its inputs and its developers, as this blog consistently points out.
Could you imagine, in the chess example above, how poorly the AI would play if a database of games between beginners were the input to the engine? Furthermore, what if the rules of the game were not so linear and rigid, such that the end goal wasn’t simply checkmating the opponent’s king, but a more chaotic, silly and subjective function, such as which moves cause the opponent to stand up out of their chair? Without rigid, finite and clearly definable goals and proper inputs, a chess AI would simply be worse than a coffeehouse amateur.
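The point about goals can be made concrete with a toy sketch (not a real engine, and the position string is an invented shorthand): a material-count evaluation is only meaningful because chess has a rigid, finite objective that material serves as a proxy for. Swap in a subjective goal like “make the opponent stand up” and there is nothing coherent left to compute.

```python
# Toy evaluation: works only because chess has a fixed, well-defined goal
# (checkmate) for which material count is a reasonable proxy.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_eval(position: str) -> int:
    """Score a position from White's view: uppercase = White, lowercase = Black."""
    score = 0
    for ch in position:
        if ch.upper() in PIECE_VALUES:
            value = PIECE_VALUES[ch.upper()]
            score += value if ch.isupper() else -value
    return score

# White has king, queen, three pawns; Black has king, rook, three pawns:
print(material_eval("KQPPPkrppp"))  # → 4 (White is up queen for rook)
```

There is no analogous `stand_up_eval`: without a definable objective, there is no function to optimise, which is exactly why subjective goals break the whole approach.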
That is the scenario we face today with AI-assisted censorship on the Internet. Its teachers are ideologues, providing goals far more obscure and subjective than the chess example above. The teachers have been marking videos according to their unstable interpretation of “hate speech” or “fake news”. Their interpretation of “hate speech” largely doesn’t match others’ idea of impermissible content, but because the AI is branded as “brilliant” and “flawless”, the ideologues can use it to pass off their subjective interpretations as objective. They then use AI to their advantage, striking fear into those who face the wrath of its rigid, rapidly calculated decisions.
Aside from improperly trained AI deployed to advance an agenda, properly trained AI can be employed with an equal level of destruction. In the same way chess grandmasters team up with AI to gain an advantage over their opponents, big data companies use AI to gain an advantage over their competition, maintaining the veneer of an information monopoly to persuade users that their platform is the only available choice for “accurate” information. By tracking usage patterns and tailoring results and content to strike a chord with humans’ inherent narcissism, ego and addictive personalities, Facebook and Google return results with a greater likelihood of delivering dopamine hits: results that generally align with a user’s existing interests and opinions.
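A hypothetical sketch of that engagement-driven logic, with every title, topic and scoring rule invented for illustration: rank items by how much they overlap with what the user already likes, and opposing views naturally sink to the bottom of the feed.

```python
# Hypothetical engagement ranking: reward overlap with existing interests.
def engagement_score(item_topics: list, user_interests: list) -> int:
    """More shared topics -> more predicted clicks -> higher rank."""
    return len(set(item_topics) & set(user_interests))

feed = [
    {"title": "Opposing view op-ed", "topics": ["politics-b"]},
    {"title": "Agreeable op-ed",     "topics": ["politics-a"]},
    {"title": "Cat video",           "topics": ["cats"]},
]
user_interests = ["politics-a", "cats"]

ranked = sorted(feed,
                key=lambda item: engagement_score(item["topics"], user_interests),
                reverse=True)
for item in ranked:
    print(item["title"])
# The opposing view ranks last: the feed optimises for agreement, not accuracy.
```

Nothing in the objective penalises narrowing the user’s worldview; the dopamine hit is the only thing being measured.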
In other words, AI is being used to exploit human psychology for profit, social consequences be damned. As I have argued, though, all it takes is the ability to think rationally about the situation to drop these exploitative platforms, rather than asking for government intervention and regulation.
Seeing as the majority of users have no interest in thinking critically, or are now incapacitated from doing so by over-reliance on tech, perhaps it will take an AI to spit out the suggestion that social media and many big data solutions are bad for humankind in order to save humankind from bad AI.
* * *
If you enjoyed this article, be sure to share, leave a comment and subscribe to the RSS feed.