# Randomness Tests in Hostile Environments

- By Admin
- Wednesday, 08 Aug, 2018

**Abstract:**

An accepted way to assess the quality of an RNG or PRNG is to apply a standard battery of statistical randomness tests to a sampled output. Such tests compare observed properties of the sample to the properties of a uniform distribution, with the hope of detecting deviations from the expected behavior. Consider a (P)RNG that outputs *M*-bit values which, due to a failure or an attack, are coerced to a subset of *{0, 1}^M* of only *2^n* elements, for some *n < M*. Such outputs are predictable with probability at least *2^-n > 2^-M*, but the standard randomness tests do not necessarily detect this behavior. We show here deterministic *M*-bit sequences (*M* = 128) that belong to a subset of size *2^n*, but pass the DIEHARD Battery of Tests of Randomness [1] and the NIST Statistical Test Suite, even with a relatively small value of *n* = 29. To address the difficulty, we propose a detection method that is feasible even for large values of *n* (e.g., *n* = 64). As a practical example, we apply our method to rule out the existence of the speculative stealthy hardware Trojan that is discussed.
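To make the scenario concrete, here is a hedged sketch (not the paper's actual construction) of how a deterministic stream can be confined to a *2^n*-element subset of *{0, 1}^M* while each block still looks statistically uniform: an *n*-bit counter is expanded to *M* bits through a hash. The function name and parameters are illustrative.

```python
import hashlib

M = 128          # output width in bits
n = 29           # effective entropy: outputs confined to a set of size 2**n

def constrained_output(counter: int) -> int:
    """Map an n-bit counter to an M-bit value via truncated SHA-256.

    Every output is deterministic and drawn from a set of at most 2**n
    values, yet the bit-level statistics of each block resemble those of
    a uniform sample -- which is why generic test batteries can miss it.
    """
    seed = (counter % (1 << n)).to_bytes(8, "big")
    digest = hashlib.sha256(seed).digest()
    return int.from_bytes(digest[: M // 8], "big")

# The stream below never leaves a 2**29-element subset of {0,1}^128.
stream = [constrained_output(i) for i in range(4)]
```

An attacker who knows the construction can predict any output with probability at least *2^-n*, far above the *2^-M* that the nominal width suggests.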

**Existing System:**

An RNG is an apparatus that uses a non-deterministic entropy source, whose raw output is typically post-processed in order to extract the entropy and make the output uniform. When RNGs are too slow in practice, an application can use software/hardware PRNGs that rely on a relatively slow entropy source as an input seed to a deterministic algorithm, and generate pseudorandom outputs at a higher rate.

To make the outputs of a (P)RNG appropriate for cryptographic applications, they need to be indistinguishable from a random distribution: an adversary who observes some output bits should not be able to predict the next bit with probability significantly higher than *1/2* (nor be able to obtain information on the seed).

In practice, the quality of a (P)RNG is often validated by applying batteries of randomness tests to an output sample. Two examples of accepted randomness test suites are the DIEHARD Battery of Tests of Randomness and the NIST Statistical Test Suite. Both suites include multiple statistical tests that compare the properties of the sample to the properties of a uniform random distribution. The tests are carefully designed to detect even slight deviations from the expected behavior. If the tested (P)RNG fails the randomness tests, its outputs are obviously inappropriate as keying material. On the other hand, passing a "full battery" of randomness tests increases the user's confidence (at least empirically) in the output of the (P)RNG.
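As a small illustration of what one such statistical test looks like, the frequency (monobit) test from the NIST suite checks whether the numbers of ones and zeros in a sample are close to what a uniform source would produce. The sketch below follows the standard formula (sum of ±1 terms, p-value via the complementary error function); it is a minimal rendition, not the full NIST implementation.

```python
import math

def monobit_p_value(bits: str) -> float:
    """Frequency (monobit) test in the style of NIST SP 800-22.

    Maps '0'/'1' to -1/+1, sums, normalizes by sqrt(len), and converts
    the statistic to a p-value with erfc. A p-value below the usual
    significance cutoff (e.g. 0.01) indicates a non-random bias.
    """
    length = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(length)
    return math.erfc(s_obs / math.sqrt(2))
```

Note that this test only sees bit frequencies: a low-entropy stream like the one in the abstract can be perfectly balanced at the bit level and pass with a high p-value.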

**Proposed System:**

From the practical viewpoint, we note that our test can be interrupted at any point in time and subsequently resumed. This property can make the lengthy tests more practical to execute. In addition, the tests can be distributed in the following sense: multiple users of the same type of platform can run the test (in parallel) on different copies of that platform, and draw a conclusion as a group, to collectively rule out (or detect) the presence of a Trojan.
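The interruptible/distributed property described above hinges on the test statistic being a sum of per-sample counts, which can be checkpointed and merged. The sketch below is a hedged illustration of that idea (the function names and the choice of a monobit-style statistic are assumptions for the example, not the paper's actual test):

```python
import math

def count_ones(sample: bytes) -> tuple[int, int]:
    """One user's (or one session's) partial statistic: (#one-bits, #bits).

    The pair is trivially serializable, so a long run can be checkpointed
    and resumed, or carried out by different users on separate machines.
    """
    ones = sum(bin(b).count("1") for b in sample)
    return ones, 8 * len(sample)

def merge_and_test(partials: list[tuple[int, int]]) -> float:
    """Merge partial counts from all users and compute one p-value.

    Because the counts simply add, the combined result is identical to
    running the test on the concatenation of all samples.
    """
    ones = sum(p[0] for p in partials)
    total = sum(p[1] for p in partials)
    s_obs = abs(2 * ones - total) / math.sqrt(total)
    return math.erfc(s_obs / math.sqrt(2))
```

Each participant only ships a pair of integers, so the group can reach a collective verdict without sharing the raw output streams.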

We thank Georg Becker and Christof Paar (authors of [3]) for sharing information on the parameters that were used with the NIST Statistical Test Suite, for confirming that our model matched the model that was used in [3], and for many other valuable discussions.
