The Productivity Paradox Revisited
Computers have been around for a while. For a long time, though, nobody could say whether they were doing us any good. As economist Robert Solow put it in 1987, “You can see the computer age everywhere but in the productivity statistics.”
MIT researcher Erik Brynjolfsson dubbed this the Productivity Paradox and offered some ideas about why it persisted, such as time lags before the benefits take hold and the difficulty of measurement, but the truth was that no one really knew whether IT investment was profitable or not.
How times change. Now the worry is not whether investment in technology is productive, but whether we are. Brynjolfsson and Andrew McAfee have a new book out spelling out in detail how machines are displacing the work of humans. This is not idle speculation or science fiction; it’s very real, very scary and happening as we speak.
The New Industrial Revolution
In Brynjolfsson’s original paper, one bright spot of the productivity story was manufacturing. While information technology had little effect on service, clerical or managerial staff, manufacturing productivity gains were robust in the ’70s and ’80s. We were consistently able to make more stuff, at higher quality, with fewer worker hours.
That’s now accelerating and the shift is so dramatic that many are proclaiming a new industrial revolution. Whereas before, robots were mostly deployed for very specific, often dangerous tasks, they have since improved dramatically. Today’s robots are smart, easy to reprogram for new tasks and can work around humans without causing a safety hazard.
Generally, the chattering classes have taken the rise of the robots in stride. After all, when poorly educated blue collar workers’ jobs are being replaced, they can be trained to do higher level jobs. The attitude has basically been, “when computers can replace human thinking and intuition, then we’ll worry about it.”
Well, it’s time to worry.
Our New Computer Overlords
The game show Jeopardy! is one of the most successful in the history of TV. It has run for over four decades and won numerous accolades including a record 30 daytime Emmy awards. The game itself is a unique blend of knowledge and intuition. The contestants are given answers and have to guess the questions. For example:
“Hard times,” indeed! A giant quake struck New Madrid, Mo., on Feb. 7, 1812, the day this author struck England.
To get to the correct response, “Who is Charles Dickens?”, you would have to understand that “Hard Times” is a hint (in this case, the title of a novel), not a reference to the earthquake that hit Missouri, and also know that Charles Dickens wrote in the 19th century and was likely born in 1812.
It’s not the kind of thing you would assume that computers could do well. Getting it right involves a complex process of elimination as much as it does knowledge and calculation. Nevertheless, in February 2011, IBM’s Watson not only played competently, but positively wiped the floor with two of Jeopardy’s most accomplished champions.
One of the competitors, Ken Jennings, who holds the longest winning streak and is the biggest money winner in the show’s history, wrote as his final answer, “I, for one, welcome our new computer overlords.”
If computers can win at Jeopardy!, they can do a whole lot more.
The Transformation of Legal Research
A big lawsuit can involve hundreds of thousands of documents in the discovery process. To go through it all, an army of highly paid lawyers sits in a room for days, chugging coffee and reading every scrap, looking for the stray thread that can break the case. The bill often runs into millions of dollars. However, as a recent New York Times article explains, that’s changing fast.
New e-discovery firms such as Blackstone, Clearwell and Autonomy can do the same work in a fraction of the time, at a fraction of the cost and with much greater accuracy. Going beyond mere keyword searches, their algorithms can understand concepts and identify anomalies such as changes in tone and mode of communication.
The upshot is that one lawyer can do the work of hundreds. What’s more, the computers never get tired or bleary eyed, never have personal problems that might affect their work and never ask for a raise. In fact, if history is any guide, we can expect their price to plummet.
The Automation of Content Creation
There is probably no activity more uniquely human than the creation of culture and information. We are, in fact, the only animals on earth who are capable of doing it. The ability to understand concepts in art, music and other humanistic domains has always been something that separates us.
However, as I previously explained in detail, computers are invading the creative domain with as much speed and force as machines conquered the domain of physical work a century ago. As outrageous as that may sound, you’ve already enjoyed their handiwork without even knowing it.
Music labels use Music X-Ray software to judge the potential of new hits, while movie studios deploy a similar service, called Epagogix, to evaluate screenplays before sinking serious money into a project.
You’ve also probably read sports updates and financial profiles from Narrative Science, a company that turns raw data into very humanlike articles, and you might even have picked up, on Amazon, one of the more than 200,000 books written by an algorithm that Philip Parker designed.
Admittedly, most of the computer-generated content is pretty basic, albeit competently done. However, David Cope, a music scholar and composer, has built algorithms that make music of such high quality and emotion that even critics can’t tell the difference. As the power of our technology continues to improve at an exponential pace, we can expect the lines to blur further.
Paging Doctor Watson
Clearly, IBM didn’t go to the time and trouble of building Watson just to show up Alex Trebek on Jeopardy!. The ability to sift through millions of documents and draw actionable conclusions has a variety of applications. The newest frontier is medicine.
IBM is now sending Watson to medical school, and major healthcare companies like WellPoint plan to begin deploying the technology to suggest diagnoses and treatments to doctors. As electronic health records become standard and other data-intensive technologies such as genomics mature, we can expect computers to take a larger role.
After all, we can’t expect human doctors to instantly read every new journal article, keep track of every potential drug interaction and spot obscure anomalies among thousands of patients, but supercomputers like Watson can. If the trend follows what’s happening in legal research, we may see medical automation transform the expected doctor shortage into a doctor surplus.
The Simulation Economy
So how can computers outperform accomplished humans at highly intuitive tasks that normally require years of training and experience? As I’ve argued before, a lot of it has to do with simulation. A computer can try millions of permutations in a matter of seconds and then choose the best course.
For example, the traveling salesman problem is known as one of the toughest in mathematics. Choosing the shortest route through even a few dozen stops is incredibly complex, because the number of possible routes grows factorially with the number of stops and no efficient formula for finding the best one is known. However, computers can run millions of permutations in seconds and choose the most efficient path.
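To make the brute-force idea concrete, here is a minimal sketch in Python that enumerates every tour over a handful of made-up city coordinates (the cities and distances are purely illustrative) and keeps the shortest one:

```python
from itertools import permutations
import math

# Hypothetical city coordinates, for illustration only.
cities = {
    "A": (0, 0),
    "B": (1, 5),
    "C": (5, 2),
    "D": (6, 6),
    "E": (3, 1),
}

def route_length(route):
    """Total distance of a closed tour that visits each city once."""
    total = 0.0
    for i in range(len(route)):
        x1, y1 = cities[route[i]]
        x2, y2 = cities[route[(i + 1) % len(route)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Fix the starting city and try every ordering of the rest --
# (n - 1)! permutations, which is why this only works for small n.
start = "A"
others = [c for c in cities if c != start]
best = min(((start,) + p for p in permutations(others)), key=route_length)
print(best, round(route_length(best), 2))
```

With five stops there are only 24 orderings to check; at a few dozen stops the factorial blowup makes this exhaustive approach hopeless, which is exactly why the problem is considered so hard.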
In much the same way, executives simulate business models in Excel, just as engineers simulate new designs in CAD software. Much like the logistics systems that use powerful computing to discard bad solutions and increase efficiency, these simulations enhance productivity by failing cheaply in cyberspace rather than expensively in the real world.
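The fail-cheaply logic can be sketched in a few lines of Python. This toy Monte Carlo simulation (every number in it is invented for illustration, not real market data) tries thousands of candidate prices against noisy demand before any real money is spent:

```python
import random

random.seed(42)  # reproducible runs

def simulate_profit(price, trials=10_000):
    """Average profit at a given price under a toy demand model:
    demand falls linearly as price rises, with random shocks.
    All parameters are illustrative assumptions."""
    unit_cost = 4.0
    total = 0.0
    for _ in range(trials):
        base_demand = max(0.0, 1000 - 80 * price)
        demand = base_demand * random.uniform(0.7, 1.3)  # demand shock
        total += (price - unit_cost) * demand
    return total / trials

# Try many candidate prices in cyberspace and keep the best one.
candidates = [p / 2 for p in range(10, 25)]  # $5.00 .. $12.00
best_price = max(candidates, key=simulate_profit)
print(best_price, round(simulate_profit(best_price), 2))
```

Each "failed" price costs a few milliseconds of computing instead of a quarter's worth of lost sales, which is the whole point of simulating before committing.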
Racing With The Machines
This ultimate resolution of the productivity paradox is a scary thing, which is why Brynjolfsson and McAfee entitled their book “Race Against the Machine”. It was tough enough when we had to compete with each other; then outsourcing to low-wage countries raised the bar, and now we are in danger of being put out of work by robots!
However, the authors point the way towards a viable solution:
The John Henry legend shows us that, in many contexts, humans will eventually lose the head-to-head race against the machine. But the broader lesson of the first Industrial Revolution is more like the Indy 500 than John Henry: economic progress comes from constant innovation in which people race with machines. Human and machine collaborate together in a race to produce more, to capture markets and to beat other teams of humans and machines.
Unfortunately, the details about just how to do that are a bit blurry. However, there are some solid principles that we can carry forward.
Forming Intent: As I argued in a previous post about creative intelligence, the key role of humans going forward is forming intent. We are not the least bit put out by farming machines plowing fields or bulldozers digging holes, because we recognize that they are tools to more easily achieve our objectives.
There are some things that computers will never do. They will never strike out at a Little League game, have their heart broken or see their child born. It is through seeking fulfillment that we form intent, and the human role will increasingly be enhancing the lives of other humans (the field of marketing being a primary example from the last century).
Barbers Become Stylists: Back when men tended to have the same haircut, you went to barber shops mostly for the conversation and a quick trim. These days it’s hard to find a regular barber; they’re all stylists.
We now demand a suite of services, including financial planners, yoga instructors and other personal service consultants that few of us would have bought a generation ago. Take a look at any list of promising jobs in the future and they generally fall into two categories: Technical work requiring a lot of education and jobs requiring people skills.
The Organizational Imperative: John Hagel, in an excellent review of Brynjolfsson and McAfee’s Race Against the Machine, points out that we will have to revamp our organizations for the digital age. In the industrial era, large-scale organizations were optimized for creating efficiency, rather than value.
He suggests that we need to develop “scalable pull platforms” along the lines he described in his bestselling book, The Power of Pull. He envisions that the enterprise of the future will focus less on predetermined tasks and more on leveraging automated digital assets for individual creative problem solving.
Business Model Innovation: Saul Kaplan provides another perspective in his book, The Business Model Innovation Factory. He points out that since we can’t expect a stable business environment in a time of great technological change, we need to constantly experiment with new business models, just as the old industrial companies did with products.
One thing is clear: we are in uncharted territory and there are no easy answers. Now that we’ve resolved the productivity paradox, we will have to learn to deal with the consequences and find a way to make peace with our machines.