
The “Next Big Thing” Always Starts Out Looking Like Nothing At All

2017 February 12

In the 1930s, the great industrialist George Eastman engaged in a minor debate with his friend, the education reformer Abraham Flexner, about who had contributed most to science. Eastman pointed to Guglielmo Marconi, whose invention of radio had transformed the world.

Yet Flexner argued that, given the discoveries of scientists like James Clerk Maxwell and Heinrich Hertz, Marconi’s invention was inevitable. While neither man pursued any practical application of his work, it was their boundless curiosity that led them to the principles that created a revolution.

Chris Dixon has written that “the next big thing will start out looking like a toy,” but that is only half of the story. The truth is that the next big thing starts out looking like nothing much at all. Many great discoveries, such as that of penicillin, spent years lying in obscure journals before someone noticed that they could have a practical use. That’s where the future lies.

Einstein’s Childhood Dream

When he was a boy, Albert Einstein fantasized about what it would be like to ride on a beam of light. Later, while he was working as a clerk in a Swiss patent office, he gazed at the clock that stood by a nearby train station and extended his daydream to imagine what the clock would look like to a passenger on a train traveling at the speed of light.

Einstein spent much of his time on thought experiments like these and they led him to some utterly impractical ideas. In Einstein’s world, time and space are not the absolute quantities that we normally experience, but depend on the observer. An hour spent or a mile travelled in one frame of reference may differ widely from an hour or a mile in another.

All of this seems hopelessly abstract until you get in your car to go to a meeting. Switch on your navigation system and you’ll be transported into the world that Einstein imagined, because the GPS satellites you connect to move fast enough, and orbit far enough out of Earth’s gravity, for Einstein’s laws to matter. If their clocks aren’t calibrated according to Einstein’s equations, you’ll find yourself hopelessly lost.
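The size of the effect is easy to sketch. A back-of-the-envelope calculation, using round-number values for Earth’s gravitational parameter and the GPS orbital radius, shows the two competing corrections: special relativity makes the fast-moving satellite clocks run slow, while general relativity makes them run fast because gravity is weaker in orbit. The constants below are rough figures for illustration, not engineering values.

```python
# Rough magnitudes of the relativistic effects on GPS satellite clocks.
G_M = 3.986e14       # Earth's gravitational parameter, m^3/s^2
C = 2.998e8          # speed of light, m/s
R_EARTH = 6.371e6    # Earth's radius, m
R_ORBIT = 2.656e7    # GPS orbital radius (~26,560 km), m
SECONDS_PER_DAY = 86400

# Special relativity: orbital speed slows the satellite clock.
v = (G_M / R_ORBIT) ** 0.5                       # orbital speed, ~3.9 km/s
slow = (v**2 / (2 * C**2)) * SECONDS_PER_DAY     # ~7 microseconds lost per day

# General relativity: weaker gravity in orbit speeds the clock up.
fast = (G_M / C**2) * (1/R_EARTH - 1/R_ORBIT) * SECONDS_PER_DAY  # ~46 us/day

net = fast - slow    # net drift, seconds per day
print(f"net drift: {net * 1e6:.1f} microseconds per day")
```

The net result is that uncorrected satellite clocks would run fast by roughly 38 microseconds a day. Since light travels about 300 meters in a microsecond, position errors would accumulate by kilometers every day without Einstein’s corrections.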

Clearly, Einstein never fathomed any of this. In fact, he never knew for sure whether any of his work would have any practical use. In 1905 there were no satellites or computers and very few cars. Yet while Einstein’s work had very little relevance in his time, it has become supremely important in ours.

Darwin’s Big Idea

Many think that Charles Darwin invented the idea of evolution. He did not. The idea that life on earth evolved gradually had been around for a long time before he took his famous voyage on the HMS Beagle and formulated his theory of natural selection. There were several theories about how it all worked, such as that of Jean-Baptiste Lamarck.

What Darwin did, essentially, was propose an algorithm. He observed that more living things are born than can survive, that they vary in their traits and that those with the attributes best adapted to their environment survive and propagate the traits that made them successful. It is these three elements, superfecundity, variation and selection, that cause living things to adapt.
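The three elements map directly onto code. The sketch below is a minimal genetic algorithm on a toy problem (evolving a bitstring toward all ones); the population sizes, rates and the “OneMax” objective are illustrative choices, not anything from Darwin or any particular software package.

```python
import random

random.seed(42)

GENOME_LEN = 20      # bits per individual
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective ("OneMax"): an individual is fitter the more 1-bits it has.
    return sum(genome)

def select(population):
    # Selection: of a few random contenders, the fittest gets to reproduce.
    return max(random.sample(population, 3), key=fitness)

def crossover(a, b):
    # Superfecundity: parents combine to produce offspring each generation.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Variation: random copying errors occasionally flip a bit.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    elite = max(population, key=fitness)  # keep the best survivor unchanged
    population = [elite] + [
        mutate(crossover(select(population), select(population)))
        for _ in range(POP_SIZE - 1)
    ]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

Nothing in the loop “knows” the answer; fit solutions simply out-reproduce unfit ones, generation after generation, which is exactly Darwin’s mechanism.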

In Darwin’s time — and to a great extent in ours — his ideas dominated public discourse more for their theological than for their scientific significance. Much like those of Maxwell and Hertz, they had little practical value. Yet, combine Darwin’s algorithm with a computer and it becomes something else altogether.

Today, genetic algorithms based on Darwin’s theory of natural selection are used for a variety of complex optimization problems. We benefit from many of these daily without even knowing it, such as the logistics software used by retail and package delivery firms, the positioning of satellite antennas and more efficient configurations for electricity generation systems.

Of course, Darwin had none of this in mind when he published his work in the 1850s, but ideas often take on a life of their own and adapt to their environment as it changes.

A Biological Computer

The late 1960s were a hotbed of radical ideas. Free love, the civil rights movement and the space race all captured the imagination of youthful revolutionaries everywhere. Yet what excited Charlie Bennett most was Watson and Crick’s discovery of the structure of DNA. So he got a degree in biochemistry at Brandeis and then went to Harvard to earn his PhD.

He excelled at Harvard and eventually became James Watson’s teaching assistant, indoctrinating young students in the intricacies of how genetic information regulated the functioning of cells. Yet his life changed when he took an elective course on “mathematical logic and the theory of computing” and was introduced to the idea of a Turing machine.

What struck Bennett was how similar Alan Turing’s theoretical representation of a universal computer was to what he taught in Watson’s class. DNA, it seemed to him, was essentially a biological version of what Turing described. It was that insight — that the world of computation could be more than a sequence of ones and zeros — that set him on his course.

Today, Bennett is recognized as one of the founding fathers of quantum information theory, the science behind quantum computing. Like the ideas of Darwin and Einstein, Bennett’s had few practical consequences until very recently. Yet with Moore’s Law about to come to an end, the machines he envisioned are beginning to gain traction.

Both Google and IBM already have working prototypes of quantum computers and may have commercial products ready in the next decade. D-Wave, a Canadian company, is already selling a scaled-down version. Jeremy Hilton, a Senior Vice President at D-Wave, told me, “the quantum computing revolution may be even more profound than the digital computing revolution a half century ago and it will happen much faster.”

The Usefulness Of Useless Things

Today, just about anybody can access technology so advanced that it would have seemed like science fiction even a decade ago and design a product around it, but that shouldn’t blind us to the even bigger challenges and opportunities that lie in the not-too-distant future.

How will artificial intelligence affect the skills your employees will need in a decade? How will nanotechnology affect property insurance rates? How will genomics and aging affect medical costs? None of these are immediate concerns, but they are all inevitable forces that will shape how we run our businesses in the years to come.

Albert Einstein, Charles Darwin, Charlie Bennett and countless others never had any idea what the practical consequences of their discoveries would be, because none of their ideas had any relevance to work being done at the time. They were, much like Columbus or Magellan, explorers more than anything else.

Yet that is exactly the point. Innovation needs exploration. We can never create the future by targeting the present, but must imagine new possibilities. Those, almost by definition, seem utterly impractical today.

– Greg

 

An earlier version of this article first appeared in Inc.com

2 Responses
  1. Robin Solis
    May 11, 2017

    I helped invent the very first national digital distribution business in the world. In 1991, it was extremely costly and the technology either had to be invented or the existing tech pushed to the limit. My point is, though, that it was a business *addressing the present* and solving a time-constraint “problem” for the advertising industry (a problem that they didn’t even know they had till we solved it, btw).

    For my next company, I did ask a question about the limitations of the present media technology and pose an answer to strive for, based on the future that my company would need to build. We spent the next decade building our technology answer. But then, we spent another whole decade trying to implement the technology in a viable business vertical. FYI, there was a happy ending.

    You never know how the future will come to you but it always starts with solving some sort of problem.

    Greg Satell Reply:

    Very true. Thanks Robin.

    – Greg
