Artificial Intelligence Bubble vs Reality

Have you heard what they are saying in the news today, or at least recently? The artificial intelligence train has already left the station. If you are not on board, you have missed the opportunity, or your technology is already outdated. Is it just me, or do these statements echo the pumping phase of the late-nineties World Wide Web bubble? Back then the web was supposed to be the next big thing, which it actually was, except not nearly as big as the speculative investment bubble built around it. That train left, kept speeding up, and fell off a cliff, leaving many investors with losses that were never recovered. Is AI big, a bubble, or something in between?

The concept of artificial intelligence is much older than the computer age, yet no substantial progress was made in those early days. The idea of a non-human creation that thinks is as old as humankind itself, bound up with fantasies of possessing godlike powers. Once the matter moves from fantasy closer to daily practice, it proves to be tricky. This brings us to the first fundamental dilemma: do we want to create an artificial counterpart of the human brain, or rather get a task solved? And in which order?

Depending on how you look at it, the question can be answered in more than one way. While artificial intelligence was established as a separate discipline of science only in 1956, the machines Alan Turing helped design, widely considered forerunners of the modern computer, trace their blueprints back to the decryption machinery that cracked Nazi Germany's Enigma codes in World War II.

Then came the post-WW2 era, with its great belief in technology. Rocket science, the development of nuclear weapons, and other wartime achievements set very high expectations for rapid advance. Since it was already known by then that the brain works using electric impulses, the hope was to recreate the human mind in the form of electronic machinery. As often happens with high hopes, the reality check proved hard. Nuclear energy is a great example of this process: over 60 years after the first controlled nuclear reaction, we are decommissioning more nuclear power plants than we plan to build. Nuclear propulsion never successfully reached beyond military vessels, where cost is never the first factor. Thermonuclear power plants have remained a few years away from implementation for the past 40 years, and a breakthrough does not seem to be in sight. The lunar landing proved to be a great achievement of humanity, but also too cost-prohibitive to continue at that pace once the military and political goals behind it had been reached. The Space Shuttle program turned out to be too ambitious and proved deadly when airliner-style turnaround times were pursued too aggressively. Was the fate of the artificial intelligence project sealed the same way?

In many ways it actually was, but luckily for AI and its enthusiasts, the patient has proven far more resilient and hungry for life than its less fortunate technology counterparts. One of the more important reasons is that no significant progress in rocket or nuclear science can be made in a garage setup, while in computer technology substantial advances were secured in exactly those conditions.

Unsurprisingly, the focus of the AI field in the 1950s and 60s was military oriented. With secured funding and the pressure of the Cold War arms race, no potential was to be overlooked. Encouraged by the Enigma-cracking success, scientists and engineers came to believe that machine-based instant translation from Russian to English was possible. It would have given US intelligence and the military a great advantage over their rivals. While Russian and English differ immensely in grammar and syntax, Noam Chomsky's work of the time seemed to offer a way over this obstacle. What laid the foundation for the project's failure were the common, casual parts of language, the phrasal constructions so obvious in every language that many people forget they exist. These ambiguities, trivial at the human level, proved to be killers for the computers, turning the translation results into fodder for jokes and anecdotes. Large funds committed to failed research ended the only way they could: spending cuts, and many years of promising projects collecting dust on shelves, waiting for funds and talent to pick them up again. In this particular struggle of human against machine, the human was the winner, but certainly not the men and women behind the project.

We are still learning today how difficult this task is. Google Translate, probably the best-known and one of the most advanced translation solutions, is based on the principle of human translation. Instead of translating with a dictionary, it uses a method reminiscent of the Rosetta Stone. The famous slab was found at the end of the 18th century and contains a decree written in Ancient Egyptian hieroglyphs, Demotic script, and Greek. Historians consider it one of the most important pieces in deciphering ancient Egyptian writing. Similarly, Google Translate uses big data, large collections of human-translated documents, to find an accurate counterpart for a phrase in another language, and still not without errors.
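
To make the Rosetta-stone idea concrete, here is a minimal Python sketch of phrase-table lookup, the basic mechanism behind corpus-based translation. The tiny English-to-German phrase table is invented for illustration; real systems derive their tables statistically from millions of aligned human-translated documents, which is also why they still stumble wherever the corpus has no good counterpart.

```python
# Sketch of corpus-based translation: instead of word-by-word
# dictionary lookup, translate by matching the longest known phrase
# drawn from human-translated text. The phrase table below is an
# invented toy; real tables are learned from massive parallel corpora.

phrase_table = {
    ("good", "morning"): "guten Morgen",
    ("thank", "you"): "danke",
    ("good",): "gut",
    ("you",): "du",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out = []
    i = 0
    while i < len(words):
        # Greedily try the longest phrase first, then shorter ones.
        for size in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + size])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i += size
                break
        else:
            out.append(words[i])  # no match in the corpus: pass through
            i += 1
    return " ".join(out)

print(translate("Good morning"))  # -> guten Morgen
print(translate("Thank you"))     # -> danke
```

The pass-through branch is exactly where the jokes and anecdotes come from: whatever the corpus never covered, the system cannot translate.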

Many well-funded projects of the seventies and eighties of the twentieth century failed in a similar way to the instant-translation fiasco. I do not want to sound like an armchair general, but most of these failures follow the same scenario. Expectations were set at stellar levels to attract and secure funding. Once research and development started, it was not long before the king was found naked. This naturally caused spending cuts and shorter or longer winters in the field. Sounds exactly like a speculation bubble, doesn't it? Maybe except for the source of funding: public rather than private.

It may sound by now as if I am only looking for failures and setbacks in the development of artificial intelligence, but instead I want to point out how difficult and complicated the process is. To cite Galileo Galilei: 'All truths are easy to understand once they are discovered; the point is to discover them.' It is part of Western culture to believe that once something has been achieved, it must have been easy to achieve, while the reality is quite the opposite. While we all like to celebrate success, paying attention to the disappointments that paved the way to that success might be a better forward-looking strategy.

Has anything worth mentioning been achieved, then, in the area of machines mimicking the human rational thinking process? We have certainly seen impressive progress in selected areas where progress was not necessarily expected. Scientists and engineers, defeated so many times, decided to fight back where the general audience would be impressed. A computer beating Garry Kasparov, the world chess champion at the time, was headline news. Sure, skeptics pointed out that IBM's Deep Blue code was modified between games, but the computer still managed to defeat a champion. Moreover, this was just the beginning.

Chess has always been considered a geeky math game, where the capacity to predict the larger number of future moves is the winner's main advantage over the loser. Math is the domain of computers; they talk math, so sooner or later the machine was bound to outrun humans in this discipline. The real challenge would be a synthetic mind able to compete with humans in their native tongue, and winning the TV knowledge show Jeopardy! was definitely a desirable trophy.
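
For the curious, here is a rough Python sketch of that look-ahead principle: a bare-bones minimax search applied to an invented toy counting game rather than chess. Real engines layer pruning and positional heuristics on top of this skeleton and search millions of positions per second.

```python
# Toy game for illustration: a counter starts below 4; players
# alternately add 1 or 2, and whoever makes the counter hit exactly 4
# wins. A state is (counter, player_to_move); the maximizing player
# is +1, the opponent is -1.

def moves(state):
    counter, to_move = state
    if counter >= 4:
        return []  # game over, no further moves
    return [(counter + step, -to_move)
            for step in (1, 2) if counter + step <= 4]

def evaluate(state):
    counter, to_move = state
    # At counter 4 the game just ended; the player who moved last
    # (the opposite of the player now to move) is the winner.
    return -to_move if counter == 4 else 0

def minimax(state, depth):
    """Best guaranteed score when looking `depth` moves ahead."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    scores = [minimax(child, depth - 1) for child in children]
    return max(scores) if state[1] == 1 else min(scores)

# With the counter at 2, the maximizing player can add 2 and win.
print(minimax((2, 1), depth=4))  # -> 1
```

Deeper search beats shallower search, which is precisely why raw computing power eventually told in chess.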

It was the computer giant IBM that picked up the gauntlet again. The beginnings were not very promising, however, as the idea sounded too gimmicky and resembled failed past attempts to imitate human reasoning at a higher level of conceptual thinking. Paul Horn, then director of IBM Research, said the project was in its embryonic stage when he left IBM in 2007 (http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/). Yet it took only another four years for Watson to win Jeopardy! against two champions in 2011. That was impressive growth, from embryo to adult in just four years. How was such progress achieved, and how was the idea missed so many times in the past? Well, the Rosetta Stone again…

Language structures human thinking. As if it were not difficult enough that each language, or at least each language family, follows different grammar and syntax, there is also the fact that no language used by humans is as rigid and predictable as mathematical strings. Human language cannot be translated into math accurately, and computers, as we are able to design them, can only understand math. Past attempts to extend computer capacity beyond math all failed miserably. What the Watson team did differently was to recognize the computing engine's strengths and weaknesses. Synthetic minds as we know them cannot think like humans, but they can memorize and sort developed human thoughts without necessarily understanding them, just as the Labrador guide dogs that lead the blind and respond to over 200 voice commands do not need to understand every action they perform. The goal is to execute the action that was called.

Memorizing a phone book is an impossible task for an ordinary human, yet a rather trivial job even for a simple system like a phone from before the smartphone age. All that needs to be done to win Jeopardy! is to memorize commonly available data, index it properly so it is quickly accessible, and have an algorithm that retrieves the right entry and phrases it as an appropriate question. Once the stellar dream of mimicking the human mind was abandoned and a highly selective goal was set, the task became achievable. And it took only four years.
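
In software terms, the recipe is surprisingly compact. Below is a minimal Python sketch of the memorize-index-retrieve idea using an inverted index; the three "facts" and the scoring-by-word-overlap are invented stand-ins for the encyclopedic corpora and far more sophisticated ranking that a system like Watson actually uses.

```python
# Memorize-index-retrieve in miniature: every word points at the
# entries containing it (an inverted index), and a clue is answered
# by the entry that matches the most clue words. The facts below are
# invented toys standing in for encyclopedic data at massive scale.

from collections import defaultdict

facts = {
    "Rosetta Stone": "decree written in hieroglyphs demotic and greek",
    "Deep Blue": "ibm computer that defeated chess champion kasparov",
    "Enigma": "german cipher machine cracked in world war ii",
}

# Memorize: build the inverted index once, up front.
index = defaultdict(set)
for title, text in facts.items():
    for word in text.split():
        index[word].add(title)

def answer(clue: str) -> str:
    # Retrieve: score each entry by how many clue words it matches.
    scores = defaultdict(int)
    for word in clue.lower().split():
        for title in index.get(word, ()):
            scores[title] += 1
    best = max(scores, key=scores.get)
    return f"What is {best}?"  # Jeopardy! answers come as questions

print(answer("This IBM computer defeated a chess champion"))
# -> What is Deep Blue?
```

No understanding happens anywhere in that loop, which is exactly the point: the machine sorts developed human thoughts without needing to think them.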

Still, wouldn't it be a waste of resources to spend all those years just to win a TV quiz? Media attention, sure, but that could be achieved much more cheaply, especially in the tabloid journalism era we live in today. Even a giant like IBM is unlikely to do it just for the fun of it, and sure enough, they did not. Watson's next target is health care, and it is already at work in radiology. The careful selection of this branch of medicine again pits the strengths of computers against human weaknesses. What radiologists do is analyze X-ray images of the human body. From a synthetic brain's point of view, that means looking for specific patterns in an ocean of dots, a tedious and eye-straining activity for the human visual system. Moreover, comparing how many reference images a person's mind can store against a database is not even a fair contest, as the scale is completely different. Where Dr. Smith, even a devoted radiologist, might have seen thousands of images and read about hundreds more in the specialized literature, the image library in Dr. Watson's synthetic mind is counted in millions. Does this mean the human radiologist profession faces extinction? Certainly not, in my opinion. At the end of the day, it is human health and life we are talking about here, not to mention the professional liability that comes with it. The bottom line is that Watson is the radiologist's instantly accessible and always-updated knowledge base, not a doctor itself. I would even be surprised if the radiologist population shrank because of Watson's advantage. The profession is among the best paid in the industry, as the volume of work is immense. With larger throughput, I would rather expect an increase in demand for medical imaging, as it remains one of the best ways to achieve noninvasive diagnostics.
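
To give a flavor of what "looking for patterns in an ocean of dots" means to a machine, here is a simplified Python sketch of nearest-neighbor matching against a reference library. The four-pixel "scans", the feature vectors, and the findings are all invented toys; a real system would learn image features from millions of cases, and the radiologist would still make the call.

```python
# Sketch of reference-library matching: reduce each image to a
# feature vector, then surface the stored case nearest to a new scan.
# Every vector and label below is invented for illustration only.

import math

# Hypothetical reference library: feature vector -> prior finding.
library = [
    ([0.9, 0.1, 0.8, 0.2], "nodule, upper left"),
    ([0.1, 0.1, 0.1, 0.1], "clear"),
    ([0.5, 0.9, 0.4, 0.8], "fracture, rib"),
]

def closest_match(scan):
    """Return the finding whose stored features are nearest the scan."""
    def distance(entry):
        features, _ = entry
        return math.dist(features, scan)  # Euclidean distance
    _, finding = min(library, key=distance)
    return finding

print(closest_match([0.8, 0.2, 0.9, 0.1]))  # -> nodule, upper left
```

The machine's advantage is pure scale: the loop over the library costs it nothing, while no human can hold millions of reference cases in mind.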

Are we, then, at the edge of a new breakthrough in technology, after which nothing will be the same again, with millions of jobs slashed like the phone books and more to come as AI advances? Well, I would not hold my breath. All the mishaps mentioned above, along with Dr. Watson's career path, show how complicated the process of synthesizing a human mind is. Humanoid robots able to pass the Turing test seem galaxies away and will likely continue to occupy the space of fiction rather than walk your hometown streets among people unable to identify them without custom ID badges. What I expect is a continuation of what works: helping humans where human capacity falls short of the computer's, and being human is certainly not one of those areas.