Another naive software engineer bites the dust in Silicon Valley for not heeding the warning signs this blog has pointed out: outside of games with rigid rules and a finite set of moves, current AI is mostly just plain stupid.
Awed by AI like AlphaZero, it’s tempting to believe AI will take over the world on the strength of its dominance in rigidly defined domains such as chess or large-scale pattern recognition. But note that much of the formidable AI we see today is built on neural networks, and neural networks learn from mistakes only through repeated experiments with billions of failures. Unless a problem can be solved precisely and accurately from first principles and mathematics alone, avoiding the harsh approximations that produce stupid AI requires billions of trials of A/B testing before a neural net reaches some semblance of optimized results.
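To see why trial counts matter so much, here is a toy illustration in Python (not AlphaZero’s or anyone’s actual training code; the ten-action setup and payoff probabilities are invented for the example). A learner with no model of the world must find the one action that wins slightly more often, purely by tallying wins and losses:

```python
import random

random.seed(0)

# Toy "learn by failing" setup: ten possible actions, one of which is
# only slightly better than the rest. The learner has no model of why;
# it can only estimate win rates from repeated trial and error.
BEST = 7  # hypothetical "correct" action, unknown to the learner

def payoff(action):
    p = 0.55 if action == BEST else 0.45   # best action wins a bit more often
    return 1 if random.random() < p else 0

def best_guess_after(trials):
    wins = [0] * 10
    plays = [0] * 10
    for _ in range(trials):
        a = random.randrange(10)           # blind exploration
        wins[a] += payoff(a)
        plays[a] += 1
    rates = [wins[a] / plays[a] if plays[a] else 0.0 for a in range(10)]
    return rates.index(max(rates))

# A handful of trials is pure noise; only a large trial count
# reliably separates a 55% action from a 45% one.
few = best_guess_after(50)
many = best_guess_after(200_000)
print(few, many)
```

The 50-trial guess is essentially random, while the 200,000-trial guess almost always lands on action 7. Scale the gap between actions down and the dimensionality up, as in chess or driving, and the required trial count explodes accordingly.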
AlphaZero didn’t learn to play good chess right away, and Google search didn’t return relevant results for a given query until it had gathered enough data and outcomes from its first billion or so trials.
If you judge today’s self-teaching AI by how much learning it has actually accumulated, Tesla Autopilot, if it is self-taught at all, sits at the bottom of the pile. Unless Tesla can run Autopilot through billions of iterations in simulated reality, it simply won’t learn the implicit rules of the road, the psychology of other drivers, and all the other chaotic systems of reality well enough to be considered “smart”. It can only be as smart as what it can precisely and accurately calculate from first principles, and evidently, human vision and psychology are tough to model mathematically. Realistic VR isn’t exactly a solved problem either.
You can’t just point a bunch of cameras at moving objects, white lines, and walls, bolt on an object-avoidance model, and call that an AI. And even if the system is genuinely learning along the way what constitutes a driving hazard, what distinguishes snow from pavement markings, and so on, then unless it can train itself rapidly in a simulation covering reality’s effectively infinite permutations, it is still in the infant stages of learning and thus still very dumb.
It’s safe to say that this software engineer and all the other Tesla Autopilot victims before him gave too much credit to technology and AI when they trusted Autopilot with their lives. The AI may assist in calculating braking distance, reacting to sudden obstacles, and keeping the car within the lines, but that only serves to atrophy the driver’s reflexes and driving skill. AI used in this fashion isn’t benefiting anyone.
On the other hand, these Tesla victims provided perhaps a dozen of the trillion or so trials needed to advance self-driving AI closer to an optimal state.
We just need another trillion or so failing trials. Note that the number of driving trials Tesla completes in any given microsecond is vanishingly small next to the number of games a self-teaching chess AI on a computer can play in the same span. At the current pace, with each self-driving trial taking days to months, self-driving cars will be ready in a trillion years, give or take.
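The closing figure is tongue-in-cheek, but the back-of-envelope arithmetic behind it is easy to check. Assuming the trials run sequentially (fleet parallelism is deliberately ignored, as in the joke):

```python
# Back-of-envelope check on the closing estimate: how long would
# a trillion sequential real-world driving trials take?
TRIALS = 1_000_000_000_000          # "another trillion or so failing trials"
DAYS_PER_YEAR = 365

for days_per_trial in (1, 90):      # the "days to months" range
    years = TRIALS * days_per_trial / DAYS_PER_YEAR
    print(f"{days_per_trial:>2} day(s)/trial -> {years:.1e} years")
```

At a day per trial the total runs to billions of years, and at months per trial it reaches the hundreds of billions: “a trillion years, give or take”, as promised.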
* * *
If you enjoyed this article, be sure to share, leave a comment and subscribe to the RSS feed.