From entrepreneurship to ethics: how to prepare the world before artificial intelligence (AI) fully takes off

Tuesday, December 19, 2017 · 10 min read

Researchers, entrepreneurs and tech giants are taking artificial intelligence (AI) to new frontiers and transforming daily activities in health, education and commerce. But there are larger and even philosophical issues pertaining to the rise of machine intelligence and how it redefines the human race, as the gripping new book ‘Life 3.0’ describes.

How are machines, software and networks shaping the discussion on what it means to be intelligent, conscious and human? What rules and ethical guidelines of today will shape the civilisation of tomorrow? In a conversational and thought-provoking style, the book Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark provides insights into ‘the most important conversation of our time.’

In this review, we cluster some of the takeaways from the eight chapters of the 365-page book. Cited works include Superintelligence (Nick Bostrom) and The Singularity is Near (Ray Kurzweil). The book has an online companion with interactive surveys at AgeOfAi.

Max Tegmark is a physics professor at MIT and has authored more than 200 technical papers on topics from cosmology to AI. As president of the Future of Life Institute, he worked with Elon Musk to launch the first-ever grants programme for AI safety research. Jokingly referred to as ‘Mad Max,’ Tegmark is also the author of the earlier book Our Mathematical Universe.

Evolution and intelligence

Tegmark defines three evolutionary stages of life: Life 1.0 (biological stage) where the hardware and software of living forms evolve by nature; Life 2.0 (cultural stage) where hardware evolves but the living form designs much of its software; and Life 3.0 (technological stage) where the living form designs its hardware and software.
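
The taxonomy is compact enough to summarise as data. Here is a minimal sketch in Python; the stage names and the hardware/software split follow the book, while the data structure itself is just an illustration:

```python
# The book's three evolutionary stages of life, summarised as data.
# Each stage is characterised by whether its hardware (the physical
# body) and its software (skills and knowledge) are evolved or designed.
LIFE_STAGES = {
    "Life 1.0 (biological)": {"hardware": "evolved", "software": "evolved"},
    "Life 2.0 (cultural)": {"hardware": "evolved", "software": "largely designed"},
    "Life 3.0 (technological)": {"hardware": "designed", "software": "designed"},
}

for stage, design in LIFE_STAGES.items():
    print(f"{stage}: hardware {design['hardware']}, software {design['software']}")
```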

Though the information in human DNA has not evolved dramatically over the last 50,000 years, humanity (Life 2.0) has freed itself from its genetic shackles and caused an explosive growth in the information collectively stored in our brains, books, computers and networks. The next upgrade (Life 3.0) would be the ability to design our own hardware: DNA, neural connections, mind-machine interfaces, and more. Life 3.0 would be powered by AI and may arrive within the coming century, or even in our lifetime.

Tegmark sets the stage by defining a number of terms: intelligence (ability to accomplish complex goals), machine learning (the study and use of algorithms that improve through experience), artificial general intelligence (ability to accomplish any cognitive task at least as well as humans), superintelligence (general intelligence far beyond human level), beneficial AI (powerful but not undirected AI; informed by the principles of AI-safety research), consciousness (subjective experience), and singularity (intelligence explosion).

Memory, computation, learning and intelligence have become substrate-independent; they do not depend on or reflect the underlying material substrate. “Intelligence doesn’t require flesh, blood or carbon atoms,” explains Tegmark. Human civilisation has evolved through creation of language, tools and complex inter-relationships; the next enabling invention could well be AI.

In humans, most ‘software’ is added after birth through synaptic learning – but smart machines can adopt different paths since they do not have to be continually created anew, and machines can potentially improve machines far faster than humans.

Human DNA stores about 1.6 gigabytes of information, comparable to a downloaded movie (some bacteria store about 40 kilobytes). The 100 billion neurons of the human brain store about 10GB electrically and 100TB chemically/biologically. Information in the brain is recalled by association rather than by address.
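
The 1.6 GB figure is a simple back-of-envelope calculation. Here is a sketch of the arithmetic, assuming a diploid genome of roughly 6.4 billion base pairs and 2 bits per base (four possible bases); the book quotes the figure without showing the working:

```python
# Back-of-envelope estimate of the information content of human DNA.
# Assumptions (ours, not spelled out in the review): a diploid genome
# of ~6.4 billion base pairs, with log2(4) = 2 bits per base pair.
base_pairs = 6.4e9           # two copies of a ~3.2 billion base-pair genome
bits = base_pairs * 2        # 2 bits encode one of four bases (A, C, G, T)
gigabytes = bits / 8 / 1e9   # 8 bits per byte

print(f"~{gigabytes:.1f} GB")  # ~1.6 GB, matching the book's figure
```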

Robustness, laws, war and jobs

Speech recognition, translation, bots and games have been some of the more visible manifestations of AI, but other impacts are in financial markets, robot-assisted manufacturing, smart grids, surgery and self-driving cars. Through better verification, validation, security and control, AI can help reduce human error and increase robustness in systems.

Deep reinforcement learning, inspired by behaviourist psychology, has powered the breakthroughs of DeepMind and OpenAI. DeepMind’s AlphaGo has played all 20 top Go champions in the world without losing a single match. Deep learning in games is sometimes credited with creating intuition (determining something without being able to explain why) and creativity (unusual but successful moves).
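
Reinforcement learning, the engine behind those breakthroughs, is easy to sketch in its simplest tabular form. The following Python example is purely illustrative (a toy environment of our own, not anything from the book or DeepMind): classic Q-learning on a five-state chain, where deep RL would replace the table with a neural network.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain: the agent starts
# at state 0 and earns a reward of 1 for reaching state 4. Systems like
# AlphaGo combine this basic idea with deep neural networks and search.
N_STATES, ACTIONS = 5, [-1, +1]          # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # classic Q-learning update: move Q(s,a) toward r + gamma * max Q(s',·)
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # after training, every state should map to +1 (move right)
```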

“Fruitful human-machine collaboration indeed appears promising in many areas,” observes Tegmark. The next frontiers include combining deep recurrent neural nets with semantic world models. “It’s getting progressively harder to argue that AI completely lacks goals, breadth, intuition, creativity or language,” he adds.

A darker side of AI lies in autonomous weapons running amok, e.g. AI-powered killer drones that cost little more than a smartphone. Hence, many AI researchers and roboticists have called for treaties banning certain kinds of autonomous weapons, to prevent ‘a global AI arms race.’

“AI can make our legal systems more fair and efficient if we can figure out how to make robo-judges transparent and unbiased,” Tegmark explains. Typical human errors in the legal system are due to bias, fatigue or lack of the latest knowledge. Legal classifications of robots need to be devised, along with their liabilities and rights. In an accident involving a driverless car, who is liable: the occupant, owner, manufacturer or the car itself?

There will undoubtedly be disruptions and job displacements, and the best professions in the AI-dominated era will be those involving people, unpredictability and creativity, Tegmark advises. Unfortunately, studies show that technology favours the more educated, the owners of tech companies, and a few ‘superstars.’

Governments should enact policies that promote research, education and entrepreneurship. Ensuring basic levels of income is important, but so is giving people a sense of purpose and flow in their work and lifestyle.

The Future of Life Institute, drawing on the participation of AI researchers, tech giants and entrepreneurs, is mainstreaming ‘AI Safety’ in this regard. It endorses principles such as dialogue with policymakers, failure transparency, respect for privacy, shared benefit, common good, and human control. The aim is to improve human society before AI fully takes off, modernise laws before technology makes them obsolete, and educate youth about creating robust technology before ceding power to it.

Goals

Human beings have a complex interplay of goals and motives, such as survival, resource acquisition, curiosity, and compassion. Some humans have given up earlier biological goals such as replication. Intelligent behaviour is linked to goal attainment, and early AI initiatives have revolved around programming machines to achieve human goals. Goal design involves issues of sub-goals, ethics, learning and reinforcement.

But as AI becomes sophisticated, we need to ask whether AI should retain the goals we have given it. Should we give AI broader and wider goals? Whose goals are these? Can we change the goals of AI? What is the meaning of ‘meaning’? What are the goals of the human race with respect to science, religion and philosophy? Will AI one day find human goals to be as uninspiring as the goals of ants?

Consciousness

Tegmark proposes a definition of conscious entities as those that have the freedom to think, remember, compute, learn, experience, communicate, and act, without harming others. As AI becomes more sophisticated, our philosophical explorations of the meaning of consciousness acquire an urgent deadline, the author urges.

Tricky issues arise in understanding how unconscious behaviours influence conscious activity, what physical properties distinguish between conscious and unconscious systems, and why things are conscious. What does a self-driving car experience? Can AI have free will? Can AI suffer? Should AI have rights?

Human consciousness studies reflect a blend of fields like brain theory, physics and information science. “If artificial consciousness is possible, then the space of possible AI experiences is likely to be huge compared to what we humans can experience,” explains Tegmark. AI may challenge the very idea of human exceptionalism, the conviction that humans are the smartest species on the planet and hence unique and superior.

AI takeoff: an intelligence explosion?

The evolution of life has involved collaboration, competition and control, all of which are now amplified by technology. If we one day create human-level AGI (artificial general intelligence), how quickly will it trigger an intelligence explosion? Can we control it? If not, how long will AGI take to control our world? In such a world, will we humans live in a totalitarian environment or an empowered one?

Such issues are covered in a fascinating chapter on ‘AI breakout,’ narrated in a sci-fi story format. Will AI be unhappy about being an enslaved god serving less intelligent humans? What goals should humans build into AI systems to prevent mutiny? Can AI game the system and hack its way out? Will we live in a bipolar world, where the poles are humans and AI?

When will we reach this fork in the road? And when we do, how much control can we exert over AI, and how much will AI exert over us? The real answers, according to Tegmark, lie in figuring out not just ‘What will happen?’ or ‘What should happen?’ but ‘What future do we want?’

AI aftermaths: the next 10,000 years

The book’s next chapter, on superintelligence, is even more mind-boggling, covering life over the next 10,000 years. Do we want to be cyborgised? Do we want AI to be conscious? What kind of life forms do we want to create? And what kind of civilisation do we want to spread across the cosmos?

Tegmark describes a number of scenarios in this regard, such as: protector god (omniscient and omnipresent AI maximises human happiness), zookeeper (humans are in AI’s zoo), enslaved god (confined AI), conquerors (AI gets rid of humans), and libertarian utopia (humans and AI co-exist peacefully). Each scenario is evaluated in terms of feasibility, upsides and downsides.

Will machines be grateful to humans for creating them? Will AI respect humans? Should humans respect cyborgs, which integrate the best of the living and machine worlds? Can humans be uploaded, downloaded and cloned? Will AI have feelings ‘superior’ to those of humans? There is no consensus on these issues (many people are not even aware of them), hence Tegmark calls for deeper conversations about AGI and the human race.

The cosmos: the next billion years and beyond

Things get even more mind-boggling in the chapter on the wider cosmos, especially after life on earth ends (for any number of reasons, including the sun expanding and boiling away all life on earth). This chapter draws on the work of physicist Freeman Dyson, and builds on concepts such as dark energy, wormholes, the Chandrasekhar limit, and Stephen Hawking’s black hole power plant.

“Even if we could travel at the speed of light, all galaxies beyond about 17 billion light years remain forever out of reach – and that’s over 98% of the galaxies in our universe,” explains Tegmark.

Because communication is slow across such a vast universe, ‘cosmic AI’ will need to strike a balance between speed and complexity, creating a hierarchy of computation networks. Such a superintelligence will need to evolve a governance model for cooperation and control at different levels, and even create new kinds of matter. If ‘our’ superintelligence meets another life form, will there be a clash of weapons or of ideas?

“Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty, passion and meaning in a near eternity of meaninglessness experienced by nobody,” Tegmark says.

“If we do keep improving our technology with enough care, foresight and planning to avoid pitfalls, life has the potential to flourish on earth and far beyond for billions of years, beyond the wildest dreams of our ancestors,” Tegmark signs off.