Human intelligence in machines: how AI captured the imagination of the world
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So, we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish,” said Elon Musk (CEO, Tesla/SpaceX) about the impending rise of artificial intelligence.
Artificial intelligence has occupied the public imagination ever since science fiction glorified it as the greatest innovation ever to be developed. Its history is a lot humbler, though.
Small beginnings
In 1950, artificial intelligence (AI) was already being discussed as the “missing link” between human intelligence and machines. These discussions and debates came just a few years after the first general-purpose electronic computer appeared in 1946, with stored-program capability following in 1949. Computer scientists were deeply interested in the idea at the time, and that same forward thinking has continued to inspire generations since.
Norbert Wiener, a mathematician and philosopher, laid early groundwork for AI as one of the first to theorize that all intelligent behaviour is the result of feedback mechanisms. For example, if I teach you something, my feedback on your attempts is what drives your learning. This holds true for almost all human activity, whether needlework or manufacturing phones. Wiener was said to be one of the visionaries who inspired the computer scientists Allen Newell, Herbert Simon and Cliff Shaw to design the first AI programme, “The Logic Theorist” (1955–56).
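To make the feedback idea concrete, here is a minimal sketch in present-day Python (purely illustrative, not drawn from Wiener’s work): a learner whose only mechanism is adjusting its behaviour in proportion to an error signal.

```python
# A toy feedback loop: one adjustable "skill" value is nudged by the
# error signal after every attempt until its output matches a target.

def learn_with_feedback(target, rate=0.1, steps=50):
    skill = 0.0                    # the learner's current behaviour
    for _ in range(steps):
        output = skill             # act
        error = target - output    # feedback: how far off was the attempt?
        skill += rate * error      # adjust behaviour using the feedback
    return skill

print(learn_with_feedback(target=0.8))  # approaches 0.8 (about 0.796 here)
```

Each pass through the loop is the teach-and-correct cycle described above: act, receive feedback, adjust.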
The term “artificial intelligence” itself was coined by John McCarthy, who is often called the father of AI. In 1956, he organized the Dartmouth Summer Research Project on Artificial Intelligence and brought talented researchers together to work on the problem. After the success of the programmes presented at Dartmouth (then the mecca of AI research), other universities started to take notice, with research picking up pace at MIT, Keele University, the University of Michigan, and others. Research centres formed at other leading schools, as everyone wanted to crack the code of artificial intelligence.
The rationale was simple: AI would help create systems that could solve problems more efficiently, as well as systems that could learn by themselves. Computer scientists therefore set out to design software that combined these two capabilities into one coherent system, one that would herald the next AI breakthrough. Alan Turing was also instrumental in this era, theorizing about machines that could “think” like a human being and play chess in his seminal 1950 paper, “Computing Machinery and Intelligence”. The concept of AI was slowly becoming a mainstream phenomenon.
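As a flavour of what “a machine that plays chess” means in practice, the sketch below (a modern Python toy, assumed for illustration rather than taken from Turing’s paper) applies minimax game-tree search, the core idea behind later chess programs, to a tiny take-away game instead of chess:

```python
# Minimax on a toy game: players alternately take 1 or 2 stones, and
# whoever takes the last stone wins. The search assumes both sides play
# perfectly, the same principle early chess programs applied to chess.

def best_move(stones, maximizing=True):
    """Return (value, move): value is +1 if the maximizing player wins."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2):
        if take <= stones:
            value, _ = best_move(stones - take, not maximizing)
            if (best is None
                    or (maximizing and value > best[0])
                    or (not maximizing and value < best[0])):
                best = (value, take)
    return best

print(best_move(4))  # -> (1, 1): take one stone, leaving a losing position
```

Real chess needs the same recursion plus a depth limit and a board-evaluation function; Deep Blue, discussed below, pushed that approach to roughly 200 million positions per second.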
Then, in the 1990s, buzz arose around AI again. The technology had finally caught up with the ambitions of the field, and researchers began developing machine learning algorithms that could teach themselves at a very basic level. AI was in the spotlight again, as researchers, scientists, and major tech giants pushed its boundaries. Artificial intelligence captured the imagination of the world and made its way into pop culture, with mainstream movies toying with the subject; robots taking over planet Earth became a recurring cultural theme.
In 1995, inventor Richard Wallace developed the chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), which processed natural language by matching user input against thousands of sample conversational patterns, finally creating something resembling AI. A cleverly designed prototype, ALICE showed simple signs of life at an early stage. Two years later, IBM’s Deep Blue computer beat reigning world chess champion Garry Kasparov. This was a watershed moment for AI interest and research, and advancement accelerated as developers built better models and more capable machinery.
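The chatbot approach can be illustrated in a few lines of Python (the patterns below are invented for illustration; the real ALICE used AIML with tens of thousands of hand-written categories):

```python
# Toy rule-based chatbot in the ALICE style: match the input against
# hand-written patterns and fill the reply template from the match.
import re

RULES = [
    (r"my name is (.+)", "Nice to meet you, {0}."),
    (r"what is your name\??", "My name is A.L.I.C.E."),
    (r"i feel (.+)", "Why do you feel {0}?"),
]

def reply(message):
    for pattern, template in RULES:
        match = re.fullmatch(pattern, message.lower().strip())
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(reply("My name is Garry"))  # -> Nice to meet you, garry.
```

No understanding is involved; the apparent intelligence comes entirely from the size and craft of the rule set.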
Does anyone remember the Furby? The toy took the US by storm in 1998, with simple AI-like behaviour built in that made it seem to learn language and respond to its owners, and AI was drawing still more attention at this point.
In 2000, Honda unveiled the ASIMO robot, which could perform certain human-like functions and had a basic level of intelligence woven into it. An almost human-like entity, it was one of the first instances of technology approximating human interaction. The very next year, Steven Spielberg released the film A.I. Artificial Intelligence, about a boy programmed to experience emotions. It was one of the most prominent moments of AI entering popular culture, and audiences around the world were suddenly confronted with the idea of such technology entering their homes.
After the tech bubble burst around the turn of the millennium and markets went into a frenzy, DARPA (Defense Advanced Research Projects Agency) issued a challenge in 2004 to build an autonomous vehicle that could drive roughly 150 miles along a desert route. This was another shot in the arm for AI, as interest in developing AI-related technology spread. The technology was still clunky, however, requiring a lot of hardware for little result; no vehicle finished the 2004 course. Despite this, many remained optimistic about the future of AI and its potential.
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization, a billion-fold.” – Ray Kurzweil (Author and Founder, Singularity University)
However, brilliant minds like Stephen Hawking have publicly warned against AI’s entry into the real world. He has said, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
The world of AI today
Today, every tech company is determined to bring AI to the mainstream as quickly as possible. Google’s Larry Page is optimistic about the technology and wants to commercialize it. He has previously said, “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”
Others, like Elon Musk, are warier of the dangers of AI and doubt that autonomous systems can take on so much work without serious problems. “The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most,” said Musk.
Currently, AI is still in the early stages of development, with companies and research departments shipping narrowly scoped products that benefit consumers. Companies like Facebook, Google, and Apple already use AI at some level to give customers a better experience on their devices.
Even governments have weighed in on the issue, with former US President Barack Obama discussing the topic in an interview after reviewing its pros and cons. “We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy… But it also has some downsides that we’re going to have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages,” he said.
The dynamic between humans and work could change forever, and we bear a huge responsibility to figure out how to educate the millions of people who will no longer be needed for tasks that AI can perform more efficiently.
Closer to home, KR Sanjiv, Chief Technology Officer at Wipro, believes companies are investing quickly in the AI space so that they don’t get left behind when the next big wave hits. “So as with all things strange and new, the prevailing wisdom is that the risk of being left behind is far greater and far grimmer than the benefits of playing it safe,” he said.
Emotionally intelligent AI: the future?
2017 alone has seen major advancements as far as AI is concerned, both in the quality of the technology and in its acceptance by society. From Saudi Arabia granting a robot “citizenship” to the prospective AI politician in New Zealand, more and more people, and the powers that be, are making room for AI in everyday life. But where does this path lead?
Nobody truly knows the extent of AI’s future; predictions have come and gone, and no single company yet dominates the field. Many will try to fight off the revolution, and some will profit from its opportunities. While consumer products such as Waymo’s self-driving cars and Tesla’s vehicles already embed AI in their software, pushing the technology further, true AI (a machine as intelligent as a human) may still be a few decades away, according to most scientists and researchers.
While the technology has its naysayers, not everyone has bought into the doom and gloom. Bill Gates has previously said, “The so-called control problem that Elon [Musk] is worried about isn’t something that people should feel is imminent. We shouldn’t panic about it.” Mark Zuckerberg agrees and wants to invest heavily in AI for the future, despite recent setbacks.
The 2045 Initiative, created by Russian billionaire Dmitry Itskov, is working with researchers and companies to create the next AI revolution. “I’m 100 percent confident it will happen. Otherwise, I wouldn’t have started it,” says Itskov. The initiative hopes to have functioning “avatars”, robots that a human can control via their brain, by 2020.
Sounds too good to be true? Well, who knows? The future may be just around the corner.