On almost every computer system, external devices communicate relevant asynchronous events, such as the arrival of data from the network card or the keyboard, using a hardware interrupt mechanism. Each device has an assigned hardware interrupt (IRQ) number and reports important developments by changing the voltage on the designated hardware line inside the computer that corresponds to this particular IRQ. The change is then interpreted by a device called a programmable interrupt controller (PIC), which serves as a personal butler for the main processor (or processors).
Once instructed by the CPU, the PIC decides if, when, how, and with what priority to deliver requests from the external devices to the main unit, which makes it easier for the processor to manage events in an efficient and reliable manner. Upon receipt of a signal from the PIC, the processor postpones its current task, unless of course the CPU has chosen to ignore all interrupt requests at the moment (if it’s really busy). Next, it invokes code assigned by your operating system to handle feedback from this device or group of devices. Once the program handles the event, the CPU restores the original process and its context – the information about the state of its environment at the time of the interruption – and continues as if nothing had happened.
Delivering Interrupts: A Practical Example
In practice, many additional steps are involved in detecting an external condition and then generating and receiving an IRQ. For example, Figure 1-1 shows the sequence of events triggered by pressing or releasing a key on the keyboard. Before you even touch a single key, a tiny microcontroller chip inside your keyboard, serving as a keyboard controller, is busy sweeping the keyboard for any changes to its state.
The keyboard is organized as an array of horizontal and vertical wires. Keys (microswitches or pressure-sensitive membrane switches) are installed at the intersection of each row and column. The controller tests every row and column separately, at very high speed.
If, for example, the keyboard controller detects a closed circuit when testing row 3, column 5 (signified by low resistance when voltage is applied to these lines), it concludes that the key at this particular location (J) is pressed. When the keyboard controller senses a change, it converts the row and column coordinates into a scan code, a value that uniquely identifies each key. The scan code is then queued in the internal buffer of the chip, which tells the CPU that there’s new data and goes back to minding its own business.
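The scanning logic described above can be sketched in a few lines. This is a hypothetical Python model: the matrix dimensions, the `read_switch` probe, and the scan-code table are made-up stand-ins for device-specific firmware.

```python
# Hypothetical mapping from (row, column) coordinates to scan codes; real
# tables live in keyboard firmware and are model specific.
SCAN_CODES = {(3, 5): 0x24}  # e.g. the "J" key

def scan_matrix(read_switch, rows=8, cols=16):
    """Sweep every row/column intersection and collect the scan codes of
    pressed keys. `read_switch(row, col)` models applying voltage to a
    row/column pair; it returns True when the circuit is closed."""
    pressed = []
    for row in range(rows):
        for col in range(cols):
            if read_switch(row, col) and (row, col) in SCAN_CODES:
                pressed.append(SCAN_CODES[(row, col)])
    return pressed

# Simulate the "J" key (row 3, column 5) being held down.
print(scan_matrix(lambda r, c: (r, c) == (3, 5)))  # prints [36], i.e. 0x24
```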
An input controller chip is the keyboard controller’s counterpart on the motherboard. The input controller usually handles all basic input devices, such as the mouse and keyboard. It receives a single scan code from the keyboard chip and signals an appropriate interrupt to the CPU’s butler, the PIC. As soon as the PIC determines that it can deliver this particular IRQ, the PIC passes the signal to the processor, which then usually interrupts its current task and invokes the interrupt handler installed by the operating system. The handler is expected to read the data and to tell the chip that it has read the scan code successfully. The input controller then resumes its normal operations and eventually reads another scan code from the keyboard if there is any data in the buffer.
This scheme is important to random number generation, although its significance is indirect. The computer, using the asynchronous event notification scheme (interrupts), receives almost instantaneous and precise feedback about user activity – perhaps most interestingly, accurately measured delays between keystrokes. Although the information is not always unpredictable, it is perhaps the best source of an external, measurable, somewhat nondeterministic signal the machine can get. And so, in order to work around the deterministic nature of the computer and to insert randomness into their calculations, authors of secure PRNG implementations resort to gathering entropy from the generally unpredictable behavior of certain devices, such as the mouse, keyboard, network interfaces, and sometimes disk drives. To do so, they add extra code to the operating system’s interrupt handlers that records certain parameters for every acceptable event.
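A minimal Python sketch of what such handler code might record follows. `EntropyPool` and its fields are illustrative names, not any operating system’s actual API, and real kernels credit entropy far more conservatively and mix the pool cryptographically.

```python
import time

class EntropyPool:
    """Toy pool fed by interrupt-style events (illustrative, not an OS API)."""

    def __init__(self):
        self.pool = bytearray()
        self.last_event_ns = None

    def add_event(self, scan_code, timestamp_ns):
        # The inter-event delay is the hard-to-predict part; the payload
        # (which key was pressed) is often guessable, but gets mixed in too.
        if self.last_event_ns is not None:
            delta = timestamp_ns - self.last_event_ns
            self.pool += delta.to_bytes(8, "little")
        self.pool.append(scan_code & 0xFF)
        self.last_event_ns = timestamp_ns

pool = EntropyPool()
for code in (0x24, 0x18, 0x31):              # three fake keystrokes
    pool.add_event(code, time.monotonic_ns())
print(len(pool.pool))                         # 3 payload bytes + 2 deltas = 19
```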
Although it can be argued that none of those sources provides truly random feedback all the time – for example, it is likely that after the user types aardva, the next two characters are going to be rk – some of the behavior, such as my thinking of aardvarks to begin with, is indeed rather unpredictable from a practical standpoint (not getting into an academic discussion of free will and deterministic universes). This method of adding entropy works reasonably well because it incorporates several factors that cannot reasonably be monitored or predicted by attackers while they still maintain their sanity. By gathering data from all those sources over an extended period of time, the laws of probability tell us, we will collect a certain amount of entropy. By collecting the data in a buffer, we construct an entropy pool that can be full or depleted, depending on the supply of and demand for unpredictable data. Unfortunately, these small bits of randomness within the pool – where our typing was influenced by cosmic events – are still mixed with plenty of easily predictable data and as such cannot be used directly for random number generation.
To ensure that the amount of actual entropy collected in the process of maintaining and replenishing the entropy pool is spread evenly over all PRNG output bits (with all unpredictable data expended), the pool has to be hashed; that is, it has to be stirred and mixed thoroughly so that no section of the data is easier to predict than any other. Every bit of the output must depend equally on all the input bits, in a nontrivial way. Achieving this without knowing which pieces of information are predictable and which are not (information that is not readily available to a computer monitoring keystrokes or mouse movements) can be a difficult task.
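For illustration, here is roughly what such hashing amounts to, using SHA-256 as a stand-in digest; the `extract` helper is a hypothetical name, not any particular system’s interface.

```python
import hashlib

def extract(pool: bytes, n_bytes: int = 16) -> bytes:
    """Hash the entire pool so that every output bit depends, nontrivially,
    on every input bit; predictable and unpredictable bytes become
    indistinguishable in the digest."""
    return hashlib.sha256(pool).digest()[:n_bytes]

# Mostly guessable data plus a few unknown bytes: no region of the output
# is easier to predict than any other.
pool = b"#!/bin/sh\n" + bytes([7, 42, 199])
print(extract(pool).hex())
```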
One-Way Shortcut Functions
Luckily enough, secure one-way hashing (“message digest”) functions, a flagship product of modern cryptography, can assist us with mixing data to get the most entropy into every bit of output, regardless of how nonuniform the input. These are functions that generate a fixed-length shortcut: a unique identifier of an arbitrary block of input data. But that is not all.
All one-way hashing functions have two important properties:
- It is easy to calculate the shortcut, but not possible to deduce the original message or any of its properties from the result. Any specific change to the message is just as likely to affect all properties of the output as any other change.
- The likelihood of two distinct messages having the same shortcut is determined only by the size of the shortcut. With a sufficiently large shortcut (large enough to make exhaustive searches impractical, nowadays set at around 128 to 160 bits, or circa 3.4E+38 to 1.46E+48 combinations), it is not feasible in practice to find two messages that would have the same shortcut.
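The first property – every output bit depending on every input bit – can be observed directly: changing a single input character flips, on average, about half of the digest bits. A quick Python check, using SHA-1 as an example:

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    """Count the bits that differ between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha1(b"aardvark").digest()
d2 = hashlib.sha1(b"aardvarl").digest()   # last character changed
print(bit_difference(d1, d2), "of 160 bits differ")  # typically close to 80
```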
As a result, shortcut functions provide a means for distributing entropy present in the input data in a uniform way over the output data. This solves the problem with generally random but locally predictable entropy sources: we gather an approximate amount of entropy from the environment, mixed with predictable data or not, and can generate a shortcut that is guaranteed to be just as unpredictable as the entropy collected in the first place, regardless of how the input entropy was distributed in the input data.
How do shortcut functions work? Some, again, rely on mathematical problems that are, as far as we know, very difficult to solve. In fact, any safe symmetric or public key cryptography algorithm can be easily turned into a secure hashing function. As long as humanity does not come up with a really clever solution to any of these problems, relying on this approach should be fine.
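As an illustration of one classic way to do this, the Davies–Meyer construction turns a block cipher E into a hash by computing H_i = E(m_i, H_(i-1)) XOR H_(i-1) over successive message blocks. The Python sketch below uses a deliberately simplistic add/rotate/multiply stand-in for the cipher – it shows only the structure and is in no way secure.

```python
def toy_cipher(key: int, block: int) -> int:
    """Stand-in 64-bit block 'cipher' (a few add/rotate/multiply rounds).
    Deliberately weak; it only plays E's role in the construction below."""
    mask = (1 << 64) - 1
    x = block & mask
    for r in range(8):
        x = (x + key + r) & mask
        x ^= ((x << 13) | (x >> 51)) & mask     # 64-bit rotate-and-mix
        x = (x * 0x9E3779B97F4A7C15 + 1) & mask
    return x

def davies_meyer_hash(message: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    """Davies-Meyer: H_i = E(m_i, H_{i-1}) XOR H_{i-1}, over 8-byte blocks."""
    # Simple padding: a 0x80 marker, then zeros up to a block boundary.
    padded = message + b"\x80" + b"\x00" * ((-len(message) - 1) % 8)
    h = iv
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        h ^= toy_cipher(m, h)   # the feed-forward XOR makes the step one-way
    return h

print(hex(davies_meyer_hash(b"hello, world")))
```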
Yet, by rolling out heavy artillery, we end up with slow and overly complicated tools to generate shortcuts, which is often impractical for compact implementations, particularly when integrating such a solution with an operating system. The alternative is to process the data so that the interdependency between all bits of input and output is sufficiently complex so as to fully obfuscate the input message and hope this is “good enough” to stop known cryptanalysis techniques. Because “hopefully good enough” is actually the motto for a good chunk of computer science, we gladly accept this as a reasonable approach.
The advantage of the latter group of algorithms, which includes popular functions such as MD2, MD4, MD5, and SHA-1, is that they are generally much faster and easier to use than their counterparts based on difficult mathematical challenges and, when well designed, are not susceptible to the cryptanalysis tricks of the trade. Their weakness is that they are not provably secure because none of them reduces to a classic, hard-to-solve problem. Indeed, some have been proved to have specific weaknesses.
As suggested earlier, a great service of shortcut functions to pseudorandom number generation is that they can be run on a segment of data that contains n random bits and any number of predictable bits to produce a shortcut that spreads the n bits of entropy evenly across all bits of the shortcut (thanks to the two fundamental one-way shortcut function properties discussed earlier). As a result, the shortcut function becomes a convenient entropy extractor. By running a sufficient amount of data collected from a generally unpredictable interrupt handler through a shortcut function, we can generate random numbers without disclosing any valuable information about the exact shape of the information used to generate the number, and without the risk of imperfect input affecting the output in any meaningful way. All we need to do is ensure that a sufficient amount of entropy is collected within the chunk of interrupt data fed into the shortcut function; otherwise, we risk compromising the entire scheme. If the attacker can predict considerable portions of the data we use for random number generation, and the remainder has only a handful of possible combinations, they can mount a successful brute-force attack against our implementation by simply trying and verifying all possible values. If, for example, we use a shortcut function that produces 128-bit digests, no matter how much data we actually collected, be it 200 bytes or 2 megabytes’ worth of keyboard tapping, we must be sure that at least 128 of these input bits are unpredictable to the attacker before hashing.
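To see why the 128-bit requirement matters, consider a toy Python model in which the attacker knows the whole pool except two bytes; the prefix, the secret bytes, and the use of MD5 here are all made-up for illustration.

```python
import hashlib
from itertools import product

# Hypothetical scenario: the pool behind a 128-bit "random" key was mostly
# known text, with only two truly unknown bytes (16 bits of real entropy).
known_prefix = b"#!/bin/sh\nrm -rf "       # made-up known pool contents
secret = bytes([0x13, 0x9A])               # the part the defender thought random
key = hashlib.md5(known_prefix + secret).digest()

def brute_force(known, target_key):
    """Enumerate all 2**16 candidates; in a real attack, each candidate key
    would be verified by trial decryption of the captured traffic."""
    for a, b in product(range(256), repeat=2):
        guess = bytes([a, b])
        if hashlib.md5(known + guess).digest() == target_key:
            return guess
    return None

print(brute_force(known_prefix, key))      # recovers b'\x13\x9a' in 2**16 tries
```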
The Importance of Being Pedantic
As an example of when things can go wrong, consider a user who decides to write a shell script when a system entropy pool is empty, perhaps due to some random number-hungry operation that was performed a while ago. The attacker notices that the user is writing a script because vi delallusers.sh is being executed; they further assume that the script must have started with something along the lines of #!/bin/sh. Although they cannot be sure what is coming next, they can reasonably expect that the script will open with an invocation of a shell command and that it is somewhat less likely to continue with a tacky poem about aardvarks.
At this point, an encryption utility of some kind suddenly asks the system for a 128-bit random number to be used as a session key to protect communications. However, the system fails to correctly estimate the amount of entropy available in the buffer that recorded the process of writing the first lines of the script, and the attacker now has an easy task. The computer has no way of knowing whether this particular action, performed by the user at this very moment, is predictable to others or not. It can only speculate (aided by the assumptions made by the programmer) that, over the course of a couple of minutes or hours, the user’s actions will sum up to something that could not be precisely predicted and that, on average, this much of the input would indeed depend on factors unpredictable to the attacker.
The attacker, at this point, knows most of the entropy pool contents and is left with barely a few thousand options to choose from when it comes to the unknown part – despite the fact that the operating system is convinced that there is far more entropy in the buffer. These thousands of options are hardly a big challenge for someone assisted by a computer. Consequently, instead of getting a 128-bit random number, which has a 39-digit number of combinations, an unsuspecting cryptography application ends up with a number generated from input that could have been one of only a couple thousand options, easily verifiable by the attacker through trial and error, and the attacker can soon decrypt the information that was supposed to remain secure.
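The arithmetic of this collapse is stark; the figures below are illustrative assumptions, not measurements.

```python
# Illustrative arithmetic only: compare the nominal keyspace with the one
# the attacker actually faces.
full_keyspace = 2 ** 128                  # what a 128-bit key promises
weak_keyspace = 5000                      # assumed count of plausible pool states

print(f"{full_keyspace:.3e} combinations")          # ~3.403e+38
rate = 1_000_000                          # assumed guesses per second
print(weak_keyspace / rate, "seconds to exhaust")   # 0.005
```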