Our Emergent Digital Future
What will the digital world look like in ten years? The trends are already clear.
Capacities in bandwidth and storage will continue on their exponential path. The explosion in the volume of information and number of devices will persist. Our data will be linked and most likely be processed in qubits rather than bits.
However, trends tell us very little. It’s discontinuities that drive history. Everything seems fine and then boom! E-commerce comes along, then search engines, social media, smart phones and on and on. Much like the flood that set Noah on his journey, such events, although driven by trends, take us in completely new directions and create new orders.
We used to have massive mainframes, which were housed in a basement somewhere. Users could log on and, if they had booked some time, could use the computer and the output would show up on their screen or get sent to a printer. Then came the PC revolution and you could do it all by yourself. No more begging for computer time.
In those early days, the driving force was computing power. For small devices to be useful, they had to be able to do lots of calculations. However, as Moore’s law has progressed, most of us have more computing power than we need. It is the exponential increase in storage efficiency (shown below), along with similar trends in bandwidth, that is driving the cloud.
Notice the logarithmic scale. The amount of data we can store, along with the power to transmit it at a reasonable price, is growing at a dizzying pace. That means that our ability to access information has transcended the power of the devices we use. We can tap into huge stores of data, wherever, whenever we want.
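To see why the logarithmic scale matters, a little arithmetic helps. Here’s a minimal Python sketch (the two-year doubling period is an illustrative assumption, not a measured industry figure): on a linear scale the curve explodes, while on a log scale the same growth plots as a straight line.

```python
def capacity(years, start=1.0, doubling_period=2.0):
    """Capacity after `years`, assuming a fixed doubling period.

    The doubling period here is a stand-in value for illustration only.
    """
    return start * 2 ** (years / doubling_period)

# On a linear axis these values race off the chart; on a log axis
# they fall on a straight line, which is why exponential trends are
# usually plotted logarithmically.
for y in (0, 2, 4, 6, 8, 10):
    print(y, capacity(y))
```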
This represents a true paradigm shift. We used to access large mainframes to run programs for us that we couldn’t run on our own devices. Now we tap into them for the convenience of housing data in one place and being able to access it from anywhere, anytime we want.
Meanwhile, On The Client Side…
When Tim Berners-Lee created the Web back in 1989, it was mainly for the purpose of organizing documents. Web pages were static, meaning that they were basically sections of electronic paper linked together. Nevertheless, the fact that the Web provided a universal standard for displaying documents was a big step forward.
Today’s pages, by contrast, are dynamic. Utilizing a technique called client-side scripting, they communicate with the server for you and do some of the work themselves, so you don’t have to wait for the screen to refresh to do simpler tasks. It’s hard to imagine social media working without these technologies. Waiting for your browser to reload just isn’t that chatty.
Recently, Google demonstrated what the future of this area will look like:
That’s not only cool and fun; it also represents a sea change in the way such things are created. Before, we would have had to use a proprietary standard like Flash, which meant it didn’t work well with the rest of the Web. The Google Doodle above, however, was done in HTML5, the basic fabric of the Web.
Much like the jump from static to dynamic web pages, this is really a big deal. While the cloud makes more and more data accessible, radical improvements in client side scripting are transforming the way we interface with it.
The Semantic Web
When Tim Berners-Lee published his autobiographical Weaving the Web back in 1999, most of us were still getting to know the first Web, but he was already articulating a vision for a second, even more ambitious, one that he called the semantic Web.
While the first Web linked documents together through HTML, the second one would link data together through a new standard called RDF. The idea was to free up all the information trapped in unique databases through a system of universal tags and then create online dictionaries called ontologies, much like we use cross-language dictionaries when we travel.
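To make the idea concrete, here’s a toy Python sketch of linked data (the tags and the `schema:` vocabulary names are invented for this example; real systems express this in RDF with published ontologies): two sources describe the same subject with different local tags, and a small ontology maps both onto shared terms so the records merge into a single graph.

```python
# Each fact is a subject-predicate-object triple. Two sources describe
# the same person, but each uses its own local vocabulary.
source_a = [("person:42", "a:fullName", "Tim Berners-Lee")]
source_b = [("person:42", "b:employer", "W3C")]

# The "ontology": a dictionary mapping each source's local predicates
# onto a shared vocabulary, much like a cross-language dictionary.
ontology = {"a:fullName": "schema:name", "b:employer": "schema:worksFor"}

def normalize(triples):
    """Rewrite each triple's predicate into the shared vocabulary."""
    return [(s, ontology.get(p, p), o) for s, p, o in triples]

# Once normalized, data trapped in separate databases links up into
# one graph keyed on shared identifiers and terms.
graph = normalize(source_a) + normalize(source_b)
for triple in graph:
    print(triple)
```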
As I described in an earlier post, this second Web is now getting underway. The Open Graph protocol links online social networks together, while XBRL is revolutionizing financial reporting. Other initiatives, like the BIM standard in construction, are creating amazing efficiencies in a variety of industries.
Here’s Berners-Lee showing the progress made with government data:
The Mobile Web
Another clear trend underway, and sure to continue, is the mobile revolution. My agency, Moxie, has recently issued a report showing the increasing impact of post-PC computing. We are no longer tethered to our desktops, but literally carry around in our pockets more computing power than it took to put astronauts on the moon.
To get an idea of how strong this trend is, just look at this chart:
While the proliferation of cell phones has been growing for more than a decade, smart phone usage is just getting started and will surely accelerate in the years to come.
Much of the future in this area is easy to predict: Economies of scale will bring down prices while the devices will only get better. Greater usage will spur on even more innovation and consumer adoption.
The best is truly yet to come.
So those are the trends. They are very real and fairly simple to follow. However, as I wrote earlier, trends are for suckers. The really exciting stuff is the discontinuities that are, by their very nature, impossible to predict.
As Steven Johnson pointed out in his book Emergence, more is different: while computing entities are multiplying exponentially, the connectivity between them is increasing at an even faster pace, much like the strange loops that tangle hierarchies and bring forth new forms of order.
What will that mean for our emergent digital future? I have no idea. The next big thing will catch us sleeping just like all of the previous big things did. It will surprise us, delight us, scare us, make us question our conception of our world and where we ourselves fit in it.
Seek and you will discover.