3 Reasons To Believe The Singularity Is Near

June 8, 2016
by Greg Satell

When Ray Kurzweil published The Singularity Is Near in 2005, many scoffed at his outlandish predictions. Two years before Apple launched its iPhone, Kurzweil imagined a world in which humans and computers essentially fuse, unlocking capabilities we normally see only in science fiction movies.

His argument, though, was remarkably simple. Because technology accelerates at an exponential rate, he pointed out, progress would eventually become virtually instantaneous: a singularity. Further, he predicted that as computers advanced, they would merge with other technologies, namely genomics, nanotechnology and robotics.

Today, Kurzweil’s ideas don’t seem quite so outlandish. Google DeepMind’s AlphaGo recently beat legendary Go world champion Lee Sedol. IBM’s Watson is expanding horizons in medicine, financial planning and even cooking. Self-driving cars are expected to be on the road by 2020. Just as Kurzweil predicted, technology seems to be accelerating faster than ever before.

Reason #1: We’re Going Beyond Moore’s Law

For the last 50 years, the technology industry has been driven by Moore’s Law, the famous prediction by Intel co-founder Gordon Moore that the number of transistors on a microchip would double about every two years. That’s what enabled computers the size of refrigerators to shrink down to devices we can hold in the palm of our hand.
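The power of that doubling rule is easy to see with a little arithmetic. Here is a minimal sketch (the starting count and the two-year doubling period are illustrative assumptions, not figures from the article):

```python
def projected_count(start, years, doubling_period=2):
    """Project a quantity forward under fixed-period doubling, Moore's Law style."""
    return start * 2 ** (years / doubling_period)

# Ten doublings over 20 years multiply the starting count by 2**10 = 1,024.
print(projected_count(1, 20))  # 1024.0
```

Run the same rule over 50 years and the factor exceeds 33 million, which is roughly the refrigerator-to-handheld leap described above.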

Now we are approaching the theoretical limit, and the process is slowing down. The problem is that you can only shrink transistors so far before quantum effects between atoms cause them to malfunction. While chip technology is still advancing, at some point you can’t cheat Mother Nature anymore. Moore’s Law will come to a halt sometime around 2020.

Yet Kurzweil has pointed out that microprocessors are in fact the fifth paradigm of information processing, replacing earlier technologies such as electromechanical relays, vacuum tubes and discrete transistors. He also argues that the number of transistors on a chip is a fairly arbitrary way to measure performance and suggests looking at the number of calculations per second per $1,000 instead.

And it turns out that he’s right. While the process of cramming more transistors on silicon wafers is indeed slowing down, we’re finding a variety of ways to speed up overall performance, such as quantum computing, neuromorphic chips and 3D stacking. We can expect progress to continue accelerating, at least for the next few decades.

Reason #2: Robots Are Doing Human Jobs

The first industrial robot, Unimate, arrived on the GM assembly line in 1961, welding auto bodies together. Since then, automation has quietly slipped into our lives. From automatic teller machines in the 1970s to the autonomous Roomba vacuum cleaner in 2002, machines are increasingly doing the work of humans.

Today, we’re beginning to reach a tipping point. Rethink Robotics makes robots like Baxter and Sawyer, which can work safely around humans and can learn new tasks in minutes. Military robots are widely deployed in battle and soldiers are developing emotional bonds with them, even going as far as to hold funerals for their fallen android brethren.

And lest you think that automation applies only to low-skill, mechanical jobs, robots are also invading the creative realm. A short novel co-written by a machine even recently passed the first round of screening for the prestigious Hoshi Shinichi Literary Award in Japan.

The future will be more automated still. The Department of Defense is already experimenting with chips embedded in soldiers’ brains, and Elon Musk says he’s thinking about commercializing similar technology. As the power of technology continues to grow exponentially (computers will be more than a thousand times more powerful in 20 years), robots will take on even more tasks.
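That “thousand times more powerful” figure follows directly from continued doubling. Assuming one doubling roughly every two years (an assumed cadence consistent with the exponential framing above, not a measured one), 20 years means ten doublings:

```python
# Ten doublings in 20 years, assuming one doubling every two years.
doublings = 20 // 2
growth_factor = 2 ** doublings
print(growth_factor)         # 1024
print(growth_factor > 1000)  # True
```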

Reason #3: We’re Editing Genes

In 2003, scientists created a full map of the human genome. For the first time, we knew which genes were which and could begin to track their function. Just two years later, in 2005, the US government started compiling the Cancer Genome Atlas, which allows doctors to target cancers based on their genetic makeup rather than the organ in which they originate.

Now, scientists have a new tool at their disposal, called CRISPR, which allows them to actually edit genes, easily and cheaply. It is already opening up avenues to render viruses inactive, regulate cell activity, create disease resistant crops and even engineer yeast to produce ethanol that can fuel our cars.

The technology is also creating no small amount of controversy. When you start editing the code of life, where do you stop? Are we soon going to create designer babies, with predetermined eye color, intelligence and physical traits? Should we alter the genome of mosquitoes in Africa so that they no longer carry the malaria parasite?

These types of ethical questions used to be mostly confined to science fiction, but as we hurtle toward the singularity, they are becoming all too real.

The Future Of Technology Is All Too Human

The idea of approaching a technological singularity is both exciting and scary. Technologies hundreds of times more powerful than what we have today will open up completely new possibilities, but there are also inherent dangers. How autonomous should we allow robots to become? Which genes are safe to edit and which are not?

Beyond opening up a Pandora’s box of forces that we may not fully understand, there is already evidence that technology is destroying jobs, depressing incomes and increasing inequality. As the process accelerates, we will begin to face problems technology cannot help us with, such as the social strife created by those left behind, as well as by people in developing countries who will feel newly empowered and demand a greater political voice.

We will also have to change how we view work. Much like in the Industrial Revolution, when machines replaced physical labor, new technologies are now replacing cognitive tasks. Humans, therefore, will have to become more adept at the things machines can’t do, namely dealing with other humans, and social skills will trump cognitive skills in the marketplace.

The truth is that the future of technology is all too human. While technologies will continue to become exponentially more powerful, the decisions we make are still our own.

– Greg

3 Responses
  1. JimMM
    June 8, 2016

    Who’s to say that dealing with other humans won’t soon be learned by AI/robots too? Our concept of work will soon have to change. Won’t there be fewer and fewer high-paying jobs? Kuka still uses humans to build robots, but is working toward a day when robots build robots.
    Also, the speed of these changes suggests that all of the monetary benefits will likely accrue to the handful of people who own those companies (like Kuka). Will there be enough “good” jobs for people to afford an average standard of living? Our tax structure and how goods and wealth are taxed will need to change to support those who have more trouble competing for the remaining jobs. How do you coach/counsel a college freshman on what career to prepare for, when it is hard to say for certain what jobs will still be around in 10 years?


  2. June 12, 2016

    “Care is taken that the trees do not grow into the sky.”
    —Johann Wolfgang von Goethe (1749-1832)


    Greg Reply:

    “What is true for trees is not necessarily true for the creations of man”

    – Greg Satell (1969- )

