Almost six months ago (May 28th, 2018) I posted the “AI winter is well on its way” post that went viral. The post amassed nearly a quarter million views, got picked up by Bloomberg, Forbes, Politico, VentureBeat, the BBC, the Datascience Podcast and numerous smaller media outlets and blogs [1, 2, 3, 4, …], and triggered heated debate on Hacker News and Reddit. I had not anticipated the post would be so successful, and its reception made me realize I had touched on a very sensitive subject. One can agree with my claims or not, but the sheer popularity of the post almost serves as proof in itself that something is going on behind the scenes, and that people are genuinely curious and doubtful about whether there is anything solid behind the AI hype.
Since the post made a prediction that the AI hype is cracking (particularly in the space of autonomous vehicles) and that as a result we will have another “AI winter” episode, I decided to periodically revisit those claims, see what has changed, and bring some new evidence.
First of all, a bit of clarification: some readers have misinterpreted my claims as predicting that the AI hype is declining. In fact, to the contrary, I started my original post by stating that on the surface everything still looks great, and that is still the case. The NIPS conference sold out in a matter of minutes, surprising even the biggest enthusiasts. In fact, some well-known researchers apparently missed out and will not be going (probably the most interesting part of the conference this year will be the whole drama around its name anyway). This does not contradict anything: hype is a lagging indicator of what is actually going on. I discussed that, as well as other immediate feedback, in “AI winter addendum”. Let’s review what has happened over the last several months.
The Twitter activity of many of the prominent researchers I follow was rather sparse over the period in question. Dr. Fei-Fei Li testified before Congress, with a highlight in the tweet below (and a hilarious reply):
I’m not sure how long people can go on spreading such utter nonsense without any harm to their reputation. For quite a while, I suppose, but hopefully not forever.
Andrew Ng, one of my favorite “enthusiasts”, is busy building his swarm of startup companies, but luckily found the time to give a short interview to Fortune, in which he gloats about how he singlehandedly transformed Google and Baidu into AI companies. Andrew Ng is a rare example of a person who jumped from an academic bubble into an even bigger AI hype bubble. He combines a certain amount of contagious enthusiasm with a fair amount of arrogance, which apparently appeals to many young people today. Anyway, the story I heard from several sources familiar with the situation was that he was actually, to put it delicately, relieved of duty at at least one of these companies, but this is obviously unconfirmed rumor. His recent gig is about industrial inspection. In one video he presented an example in which a deep learning system recognizes a faulty solder pad. I actually worked for a company specializing in visual inspection of electronics, so I know rather well what the state of the art is. The machines routinely scan large PCBs and, within fractions of a second, create a full 3D reconstruction of the inspected part, analyze each solder joint for imperfections and a set of well-defined defects, create interpretable scores which can be used for binning, and so on. In the face of that, the little video from Andrew seemed rather comical and, if anything, indicated the Dunning-Kruger effect. Now, I’m not saying that deep learning could not improve this industry in some way, but the bar for entry is set pretty high. And the same is true for many other industries that rely on very well optimized and often elegant algorithmic solutions.
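To give a flavor of the bar that deep learning has to clear here, classical inspection systems score every joint against explicit, interpretable tolerances, so a part can be binned and the failure reason reported to an operator. The sketch below is purely illustrative: the function name and the tolerance values are made up for this post, not taken from any real inspection system.

```python
# Hypothetical sketch of classical, rule-based solder-joint scoring.
# All tolerance values below are invented for illustration only.

def score_solder_joint(height_mm, volume_mm3, offset_mm):
    """Return (passed, failure_reasons) for one joint from explicit checks."""
    checks = {
        "insufficient_solder": volume_mm3 >= 0.10,   # enough solder deposited
        "excess_solder":       volume_mm3 <= 0.35,   # not bridging neighbors
        "lifted_lead":         height_mm  <= 0.50,   # lead sits on the pad
        "misalignment":        abs(offset_mm) <= 0.15,  # centered on the pad
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, reasons = score_solder_joint(height_mm=0.62, volume_mm3=0.20, offset_mm=0.05)
print(ok, reasons)  # flags a lifted lead under these made-up tolerances
```

The point is not the specific numbers but the interpretability: every rejection comes with a named, physically meaningful reason, which is exactly what a learned black-box classifier has to compete with.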
Now that we are on the subject of colorful characters on the AI scene, another one of my favorites, George Hotz from Comma.ai, announced he is resigning from the role of CEO at Comma. Hotz, known among other things for hacking the PlayStation, made some ripples in the industry in 2015 when he teased Elon Musk that he could build a self-driving-car system for Tesla, and could do it much better and cheaper than Mobileye, the company Tesla relied on at the time. Not long after that, the first fatal Autopilot accident happened (Joshua Brown), and Mobileye dropped its relationship with Tesla, apparently noting that their use of the technology was irresponsible. Anyway, back then Hotz was confident he could build a self-driving car in his garage with some old phone parts and deep learning magic. Since then his narrative has changed by some 180 degrees, and now George is on a “crusade against the ‘scam’ of self-driving cars”… I guess I could leave this one without comment, though Picard’s facepalm meme would be in order here. I’m pretty certain that, following Hotz’s lead, many of today’s AI hype blowers will be screaming about how they’ve been warning about AI winter all along, once the bubble bursts.
Speaking of Tesla, the long-promised autonomous coast-to-coast drive has not yet happened. Earlier this year Elon Musk, CEO of Tesla, stated that the drive would certainly happen by summer 2018, but on the August 1, 2018 conference call he indicated that the coast-to-coast drive is actually no longer a priority and will be delayed (no new timeframe was given). This is quite a change, particularly in light of the conference call on which the so-called full self-driving feature was announced almost exactly two years ago. By August, the first “autonomous features” were supposed to be rolled out to Teslas via an over-the-air update, but no such thing happened (notably, at the beginning of 2018 Musk was rather confident that full self-driving would noticeably depart from advanced Autopilot by summer). Instead, the new Autopilot v9 can sometimes switch lanes, but according to many commentators the new system actually requires even more attention than the previous one and is still buggy. In fact, it is not uncommon to hear opinions that even the current Tesla Autopilot, with respect to stability and reliability, barely approaches the Mobileye solution from several years ago. On October 18 the full-self-driving (FSD) option disappeared from all Tesla model configurations, without much explanation other than that it was “confusing”. At the time of writing it is not clear whether the option will be coming back or whether those who already paid for it will get a refund. It is still not clear to me whether Elon Musk fell victim to the hype himself or deliberately inflated the AI bubble, but he certainly contributed substantially to the AI mania between 2015 and now, especially with that egregious stunt with Stephen Hawking.
I should perhaps add here that I actually think Tesla Autopilot is a pretty impressive piece of engineering, just not anywhere near being safely deployable as anything beyond a fancy cruise control. The August Tesla quarterly conference call was also interesting in that a lot of time was spent on the new hardware Tesla is building, which is supposed to offer an order of magnitude improvement over the current Nvidia Drive PX 2 system, and which will supposedly finally allow for full autonomy. I remain skeptical, since having an order of magnitude more compute than the Drive PX is nothing extraordinary: a rig of two GTX 1080 Tis will accomplish it without any problem, and this is what many companies are putting on their test vehicles along with expensive lidars etc., and yet full autonomy remains elusive. Secondly, it seems that cars which Tesla had sold as “full self-driving compatible” will require a computer swap, which will not be cheap. One can get at least some feel for what stage Tesla is at with respect to autonomy by taking a look at the slides (from May 10, 2018) of Andrej Karpathy, director of AI there. Halfway through the presentation he acknowledges that the driving reality is full of corner cases, which are not only hard for neural nets but are not even obvious for a human to label. The other half of his presentation is spent on what he calls software 2.0, a concept in which computers are no longer programmed but trained. This reminds me a lot of the hype of the ’80s, when fifth-generation computers were supposed to program themselves based on high-level logical specifications written in Prolog or a similar logic language. That hype cycle was closely related to the expert system mania, which collapsed by the end of the ’80s, causing an AI winter episode. I’m pretty certain this software 2.0 nonsense will share the same fate.
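For readers unfamiliar with the term, the core of the software 2.0 idea is that a program’s behavior is induced from labeled examples rather than written down by an engineer. A deliberately minimal, hypothetical sketch: a one-line “software 1.0” rule (`x > 5`) recovered by training a perceptron on examples of its inputs and outputs.

```python
# Toy illustration of "software 2.0": instead of writing the rule "x > 5",
# a tiny perceptron is trained to recover it from labeled examples.
# Everything here is a deliberately minimal sketch, not a real pipeline.

def train_threshold(examples, max_epochs=20000, lr=0.1):
    """Learn (w, b) so that sign(w*x + b) matches the +1/-1 labels."""
    w, b = 0.0, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, label in examples:            # label is +1 or -1
            pred = 1 if w * x + b > 0 else -1
            if pred != label:                # classic perceptron update
                w += lr * label * x
                b += lr * label
                mistakes += 1
        if mistakes == 0:                    # converged on the training set
            break
    return w, b

# The "software 1.0" version of this program is simply: x > 5
data = [(x, 1 if x > 5 else -1) for x in range(11)]
w, b = train_threshold(data)
learned = lambda x: w * x + b > 0
print([learned(x) for x in (3, 7)])  # [False, True] once training has converged
```

The catch, of course, is exactly the one Karpathy runs into: this works when the examples cover the behavior you want, and driving is full of corner cases where nobody can even agree on the label.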
To conclude the Tesla case, the most recent Q3 conference call, which took place on October 24, did not bring much resolution to the above uncertainties. Musk reiterated that he believes in the self-driving Tesla fleet, although full self-driving remains “off menu” as it was too confusing (two years after the introduction of the item?!). Karpathy quickly mentioned that the new hardware will support bigger neural networks which work “very good”, and that the new version of Autopilot will allow the car to navigate the freeway, with the restriction that lane changes will require confirmation from the driver (read: any incidents will be blamed on the driver). No word about the coast-to-coast drive. No timeline on full self-driving.
While on the subject of self-driving cars, one of the main criticisms of my original AI winter post was that I omitted Waymo, the unquestionable leader in autonomy, from my discussion. This criticism was a bit unjustified in that I did include and discuss Waymo extensively in my other posts, but in these circumstances it appears prudent to mention what is going on there. Luckily, a recent and very good piece of investigative journalism shines some light on the matter. Apparently Waymo cars tested in the Phoenix area had trouble with the most basic driving situations, such as merging onto a freeway or making a left turn. A piece from the article worth citing:
There are times when it seems “autonomy is around the corner,” and the vehicle can go for a day without a human driver intervening, said a person familiar with Waymo. Other days reality sets in because “the edge cases are endless.”
Some independent observations appear to confirm this assessment. As much as I agree that Waymo is probably the most advanced player in this game, this does not mean they are anywhere near actually deploying anything seriously, and they are even further away from making such a deployment economically feasible (contrary to what is suggested in occasional puff pieces such as this one). Aside from the periodic PR nonsense, Waymo does not seem to be revealing much, though recently some baffling reports surfaced of past shenanigans at Google Chauffeur (which later became Waymo), involving Anthony Levandowski, who is responsible for the whole Uber-Waymo fiasco. To add a comical aspect to the Waymo-Uber story, an unrelated engineer apparently managed to invalidate the patent Uber got sued over, spending altogether $6,000 in fees. This is probably how much Uber paid their patent attorneys for a minute of their work…
Speaking of Uber, they have substantially slowed down their self-driving program, practically killed their self-driving truck program (the same one that delivered a few crates of beer in Colorado in 2016 with great fanfare, in a demo that later turned out to be completely staged), and recent rumors indicate they might even be looking to sell the unit.
Generally, the other self-driving car projects are facing increasing headwinds, with some projects already being shut down by government agencies and others going more low-key with respect to public announcements. Particularly interesting news came recently out of Cruise, second in the race right after Waymo (at least according to California disengagement data). Some noteworthy bits from the Reuters article:
Those expectations are now hitting speed bumps, according to interviews with eight current and former GM and Cruise employees and executives, along with nine autonomous vehicle technology experts familiar with Cruise. These sources say that some unexpected technical challenges – including the difficulty that Cruise cars have identifying whether objects are in motion – mean putting GM’s driverless cars on the road in a large scale way in 2019 is looking highly unlikely.
“Nothing is on schedule,” said one GM source, referring to certain mileage targets and other milestones the company has already missed.
And a few paragraphs further:
“Everyone in the industry is becoming more and more nervous that they will waste billions of dollars,” said Klaus Froehlich, a board member at BMW and its head of research and development.
Briefly, on other players on the AI scene: DeepMind has been rather quiet (last time I checked, Montezuma’s Revenge remained unsolved by AI in the general case), but OpenAI had a small PR offensive with their Dota 2 playing agent. After the initial tournament, which the system won, it quickly became apparent that the game had been restricted in many ways in favor of the computer. Hence another tournament was organized, in which most restrictions were lifted, and the tournament was… spectacularly lost to humans… A bummer, given the obscene amounts of money OpenAI spent training their agents. Now, I could not care less about results in game domains since, as I have stated multiple times on this blog [1, 2], the only problem really worth solving in AI is Moravec’s paradox, which is exactly the opposite of what DeepMind or OpenAI are doing, but I nevertheless found this media misfire hilarious.
While touching on Moravec’s paradox: one of the handful of companies that actually tried to move robotics forward, Rethink Robotics, shut down its operations. This shows that making robots do anything beyond what they already do very well on controlled factory production lines is not only difficult technically but also poses a challenging business case, even with the experience of Rodney Brooks. Unlike most other startups in this field, whose main asset is the ego of their founder fueled by some cheap VC money, Rethink actually accomplished many impressive technical achievements, and the news saddened me quite a bit. Robotics will need to be rethought again, likely many times over.
Today more people are working on deep learning than ever before — around two orders of magnitude more than in 2014. And the rate of progress as I see it is the slowest in 5 years. Time for something new
At the time of writing, this tweet has been retweeted nearly 350 times and liked more than 1,300 times.
So there you go: the state of AI towards the end of 2018. It is borderline comical (hence Krusty the Clown in the title image). Certain things have changed, however: some of the smoke has dissipated and some of the mirrors have cracked. Since my original post, many more mainstream media articles have shown up in which reporters are at least willing to entertain the possibility that we are on top of a giant AI bubble that is already letting the air out [e.g. this one and many others cited in the text above]. This spike of skepticism is the natural next step in the inevitable disillusionment, but I think it will take a while before this bubble finally deflates. The next six months are likely to be particularly interesting in this AI circus.