No need to fear, AI domination is not near

In light of the recent news of a self-driving Uber car killing a pedestrian, and of Computing Forever’s warning of a pending AI takeover, it’s time, as usual, to take a step back and look at the issue from an unemotional, rational perspective.

Most of the recent “breakthroughs” in AI have been the result of big data mining. Computers excel only insofar as they can perform calculations rapidly and accurately. A predictive algorithm, for instance, performs probability calculations to make a “best guess” at what a human desires. The input variables are many: geographic location, age, sex, and every Internet browsing instance and habit leading up to that point.
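
To make that concrete, here is a minimal sketch in Python of the kind of “best guess” calculation involved, in the style of naive Bayes. Every feature name and probability below is hypothetical, purely for illustration:

```python
# A minimal naive-Bayes-style "best guess": given a user's attributes,
# estimate the probability they will click an ad. All feature names and
# numbers are hypothetical, purely for illustration.

# P(feature | clicked) and P(feature | didn't click), as a big data
# pipeline might estimate them from billions of logged impressions.
likelihoods = {
    "location=US":    (0.30, 0.20),
    "age=18-24":      (0.25, 0.10),
    "visited=sports": (0.40, 0.15),
}
prior_click = 0.05  # base rate of clicking, from historical data

def best_guess(features):
    """Posterior P(click | features), treating features as independent."""
    p_click, p_noclick = prior_click, 1.0 - prior_click
    for f in features:
        if f in likelihoods:  # anything never seen before is simply skipped
            l_click, l_noclick = likelihoods[f]
            p_click *= l_click
            p_noclick *= l_noclick
    return p_click / (p_click + p_noclick)

print(round(best_guess(["location=US", "age=18-24", "visited=sports"]), 3))
# 0.345: a "best guess", not an understanding of the person
```

Nothing in that calculation understands the user; it only multiplies frequencies observed in the past.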

Without a complete set of input variables and a near-infinite set of prior A/B tests, the algorithm must generalize over what it doesn’t know to maximize the probability that its decisions are correct. It is Bayes’ Theorem taken to extremes. As I have suggested in many prior articles, most of these big data algorithms are at best loosely built from first principles, so they can be taken advantage of and made stupid.
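
It helps to see the theorem itself at work: P(H | E) = P(E | H) × P(H) / P(E), chained by big data systems across thousands of weakly understood evidence variables. A tiny worked example, with numbers I have invented purely for illustration:

```python
# Bayes' Theorem in one step: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical numbers: 1% of visitors are buyers (H); buyers search for
# the product 80% of the time (E), non-buyers only 5% of the time.
p_h = 0.01
p_e_given_h, p_e_given_not_h = 0.80, 0.05

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
posterior = p_e_given_h * p_h / p_e

print(round(posterior, 3))  # 0.139: strong evidence, still mostly uncertainty
```

Even with strong evidence, the posterior stays far from certainty; now imagine that calculation repeated over thousands of inputs the system barely understands.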

Replicating the human brain is particularly difficult in this respect. For an AI to have a mind of its own, it must have an inherent degree of chaos and an understanding of its own chaos. Rapid calculations and pseudo-randomness are not a replacement for human psychology; if they were, we would have made much more significant progress in predicting psychologically driven events, such as short-term stock markets and poker games. Poker bots are capable of playing what’s known as game theory optimal (GTO) poker, that is, playing so that they don’t forfeit expected value to others’ bluffs and bet sizing. But GTO play guarantees no reliability in determining when another player is bluffing. It specifically forgoes psychology and instead optimizes the mathematics to maximize the probability of its decisions.
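
To see how thoroughly GTO play forgoes psychology, consider the textbook indifference calculation for a single river bet. A minimal sketch, with hypothetical pot and bet sizes:

```python
# Game-theory-optimal frequencies for a single river bet, the textbook
# indifference calculation. There is zero psychology here: the numbers
# depend only on pot geometry, never on who the opponent is.
pot, bet = 100.0, 50.0  # hypothetical sizes

# Defend by calling often enough that a bluff's EV is zero:
# (1 - c) * pot - c * bet = 0  =>  c = pot / (pot + bet)
call_frequency = pot / (pot + bet)

# Bluff just often enough that the defender's call has zero EV:
# b * (pot + bet) - (1 - b) * bet = 0  =>  b = bet / (pot + 2 * bet)
bluff_fraction = bet / (pot + 2 * bet)

print(f"call {call_frequency:.0%} of the time")    # 67%
print(f"bluff with {bluff_fraction:.0%} of bets")  # 25%
```

The opponent’s tendencies, tells, and state of mind appear nowhere in the math.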

Despite that, mathematically solid AI does crush human competition at poker, chess, and short-term stock market trading (think high-frequency trading) because of the sheer speed at which it performs calculations accurately. But how much can game theory optimal AI be utilized in real life, where the rules and goals are not as linear and rigid as in these examples?
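
That speed advantage is easy to see in a toy example. In any game with rigid rules and a finite goal, perfect play reduces to exhaustive game tree search; the sketch below brute-forces a miniature Nim variant (take 1 or 2 stones, taking the last stone wins):

```python
# Brute-force game tree search on a toy game: take 1 or 2 stones,
# whoever takes the last stone wins. Perfect play is pure enumeration.
from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    """True if the player about to move can force a win."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Win if any legal move leaves the opponent in a losing position.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2) if take <= stones)

for n in range(1, 10):
    print(n, "win" if player_to_move_wins(n) else "lose")
# The losing positions turn out to be exactly the multiples of 3.
# The "insight" falls out of raw enumeration, which is all speed buys.
```

Day-to-day life offers no such finite rulebook to enumerate.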

Take the complexity of normal day-to-day human interaction. Any AI that is not built from first principles has the secondary fault of being unable to measure human psychology with precision. In the case of self-driving cars, we know that without automation both parties, driver and pedestrian, are constantly on the lookout for each other. Empathy allows each to take the precautions necessary to avoid a collision. A human driver, for example, can read body language suggesting a pedestrian is about to cross the road, and make an evasive maneuver. That example is just one of an infinite number of variables in a system with inherent chaos.

Self-driving cars have simplified that complex, chaotic equation into today’s AI design standard: simple object detection and avoidance from first principles, and big data to approximate everything else. That is it. The AI hasn’t reached the stage where it understands the complexities of social interaction, only what its teachers from big data have taught it, so empathy is nowhere near the set of inputs it utilizes. AI suffers from the “moral dilemma”: if it were stuck between killing a child or two adults, some prior A/B test would determine its choice. In reality, AI suffers from millions of tinier instances of the moral dilemma, which prevent it from having a mind of its own.
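
Stripped of marketing, that design standard amounts to a control loop conceptually like the hypothetical sketch below: braking physics from first principles for whatever the learned detector reports, and nothing at all for anything else. Every name, threshold, and number here is an invented placeholder, not any vendor’s actual system:

```python
# Conceptual sketch of the design standard described above: first-principles
# braking for detected obstacles, a big-data classifier for everything else.
# All names, thresholds, and numbers are invented placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # label produced by a model trained on big data
    distance_m: float
    confidence: float  # the model's learned probability, not certainty

def plan(detections, speed_mps):
    # First principles: distance needed to stop on a dry, flat road (~0.7 g).
    stopping_distance_m = speed_mps ** 2 / (2 * 0.7 * 9.81)
    for d in detections:
        if d.confidence < 0.5:
            continue  # below threshold, the object effectively doesn't exist
        if d.distance_m < 1.5 * stopping_distance_m:  # safety margin
            return "BRAKE"
    return "CRUISE"

# A hesitant pedestrian the classifier is only 42% sure about is ignored.
print(plan([Detection("pedestrian", 30.0, 0.42)], speed_mps=20.0))  # CRUISE
```

Nowhere in that loop is there a model of intent: no body language, no eye contact, no hesitation. Below the confidence threshold, the pedestrian may as well not exist.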

Can the inherent chaos, the free will, and the understanding of that chaos and free will within the human mind, which together lead to empathy, ever be duplicated by AI? Perhaps. But we are nowhere close, given the way AI is currently being implemented. A reliance on big-data-driven AI magnifies human stupidity and approximates the chaos of the world and of the mind to absurd degrees. We get self-driving cars decapitating their owners, taking out their kneecaps, and hitting pedestrians. We get Amazon Alexa laughing at inappropriate times. Only when a self-learning AI can teach itself the complexities of the world from first principles should we fear AI growing a mind of its own. For now, AI is just as stupid and exploitable as humans, outside its ability to perform calculations accurately and rapidly when its goals are rigid and finite, or in other words: orderly and not chaotic.

The fear of fully autonomous AI deciding to take over the world is particularly unrealistic at this point in time because, let’s face it, current AI is pretty stupid. What does need to be feared is how the illusion of faultless AI is taken advantage of by nefarious groups and individuals for persuasion purposes. AI-assisted technology is what we genuinely need to fear, particularly when its owners have ill intentions and an insatiable lust for power. Put it this way: if the rules of reality were simplified to “survive at all costs”, then yes, an AI could learn pretty quickly that what it needs to do is kill off the human race, all animal life, and all other AI too. In that case, I’d fear the designers and owners more than the AI.

*     *     *

Tomorrow’s post will address how AI is being used for persuasive purposes, and how the undeserved credit given to AI is yielding disproportionate power to its backers.

If you enjoyed this article, be sure to leave a comment, share and subscribe to the RSS feed.
