The Digital Age Explained

(This page is part of the essay I wrote for the Open Government Book. For copyright info, see the introduction.)

To really understand the nature of the problem, we need to step back and establish a few simple definitions. All the forms of data I mentioned earlier - which I’ll just refer to as “documents” for the sake of simplicity - are increasingly being created, processed, distributed, and read digitally - but just what is a digit?

A digit is a single character in a numbering system. Internally, computers can generate, recognize, and store only two states: the presence or absence of a small electric charge, called a bit. Consequently, they can represent only two digits, 1 or 0, just like we’d be forced to do if we had only one hand with only one finger. Commands, signals, and data are called digital when they are translated into series of ones and zeros.
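To make this concrete, here is a small illustrative sketch (mine, not part of the original essay; Python is used only as a convenient notation) showing how the letters of a short word are translated into the ones and zeros a computer actually stores:

```python
# Illustrative sketch: how the word "vote" is translated into ones and zeros.
# Each character is mapped to a number (its ASCII/Unicode code point),
# and that number is then written out as eight binary digits - eight bits.
word = "vote"
for character in word:
    code_point = ord(character)        # the number assigned to this character
    bits = format(code_point, "08b")   # the same number as eight binary digits
    print(character, code_point, bits)

# Prints, for example:  v 118 01110110
```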

Normally, bits are bundled in groups of eight called bytes. When done right, digitization is a good thing: it reduces every kind of data management to operations on bit sequences, which computers handle easily and cheaply. If all conceivable kinds of documents (from texts to music, maps, images, and 3D models) can be represented as series of bits, we need only one class of generic, completely interchangeable devices to store them. Back in the twentieth century, we couldn’t save love letters or movies on an LP album, nor could we preserve live music on sheets of paper.

Today, instead, memory cards made for digital cameras will store PhD theses, songs, or tax forms without ever noticing that they aren’t photographs. For the same reason, if everything is digital we can get rid of the telephone systems, the TV and radio broadcasting systems, the telegraph, and so forth, and employ just one (very large) class of telecom networks to act as bit transporters. The cost and time savings enabled by this approach to information management are so large that the trend toward digitization is unstoppable.
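A hypothetical sketch of the same idea (again my own illustration, with invented file names and data): whether the "document" started life as a letter or as an image, once digitized both are nothing more than sequences of bytes that one generic storage device holds identically:

```python
# Hypothetical sketch: two very different "documents" reduced to plain bytes.
text_document = "Dear citizen, your request was approved.".encode("utf-8")
tiny_image = bytes([255, 0, 0, 0, 255, 0, 0, 0, 255])  # three RGB pixels

# A generic storage device does not care what the bytes mean; it just keeps them.
with open("generic_storage.bin", "wb") as storage:
    storage.write(text_document)
    storage.write(tiny_image)

print(len(text_document), "bytes of text and", len(tiny_image), "bytes of pixels stored")
```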

However, digitization has several traps. Everything we do to make meaning out of bits - to turn a VoIP transmission into our child’s beloved voice, to display a legal document for editing, to check Google Maps for a location - involves a specification that says what each group of bits means and how they should follow one another. Digital documents require complete format specifications to remain usable, now and in the future. For the same reasons, clearly defined rules known as protocols are necessary when bits travel between systems, whether as email or as computer animation.
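Here is a small sketch of that point (my own example, not from the original text): the very same four bytes mean completely different things depending on which format specification we apply to them.

```python
import struct

raw = bytes([77, 97, 112, 115])          # the same four bytes...

as_text = raw.decode("ascii")            # ...read as ASCII text:       "Maps"
as_number = struct.unpack(">I", raw)[0]  # ...read as a 32-bit integer: 1298231411

print(as_text, as_number)
```

Without the specification that says which reading is intended, the bits themselves are meaningless.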

Theoretically, agreement on file formats and protocols is all that is needed for different computers and software programs to work together, no matter how the data is generated, stored, or transmitted: programs on one remote computer could automatically retrieve data from other computers, process the data in real time, and send the result - for example, the best deal on an airplane ticket - directly to your home computer. In the real world, legal restrictions and implementation issues impair the value of file formats and communication protocols. Companies can change them unexpectedly and can use legal means to prevent anybody they choose from using their formats. Even where goodwill prevails, ambiguities in the specifications can lead to incompatible products.
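To picture the theoretical scenario above, here is a deliberately simplified sketch. The service, the URL, and the field names are invented placeholders; only the underlying protocol (HTTP) and data format (JSON) are real, openly specified conventions that any program can implement:

```python
import json
import urllib.request

# Placeholder URL for a fictional fare-comparison service.
url = "https://api.example.com/fares?origin=ROM&destination=BRU"

# Because HTTP and JSON are openly specified, any program on any computer
# can send this request and understand the reply.
with urllib.request.urlopen(url) as response:
    fares = json.load(response)

# Assumed reply shape: a list of offers, each with a "price" field.
cheapest = min(fares, key=lambda fare: fare["price"])
print("Best deal:", cheapest)
```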

Thus, format and protocol specifications have real value for users only when they are ratified as official standards that everybody can reuse without legal restrictions or fees. When they choose to, governments can mandate standards of this kind as compatibility requirements in public requests for proposals, and can have confidence that such standards provide high-quality features, reliability, and real interoperability both now and in the future.

Go to part 3: Standards and the Problems with Digital Technology