
DNS and Randomness

Over the last few days, we have heard a lot about DNS cache poisoning and how we need to make our recursive resolvers use random source ports. We are told that this is a flaw in the protocol, but no details will be available until a presentation at Black Hat in August. DNS cache poisoning has, of course, been around for a long time, most notably when the 16-bit query IDs were not randomized.
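The fix being discussed, random source ports, amounts to not reusing one predictable port for every outgoing query. As a rough sketch of the idea (not any resolver's actual implementation), binding a UDP socket to port 0 asks the OS for a fresh ephemeral port each time; note that some kernels hand these out sequentially, which is exactly the weakness at issue:

```python
import socket

def fresh_source_port():
    """Bind a UDP socket to port 0 so the kernel assigns an
    ephemeral source port, then report which port we got."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(('', 0))  # port 0: let the OS choose
    port = s.getsockname()[1]
    s.close()
    return port

# Five queries, five (ideally unpredictable) source ports.
ports = [fresh_source_port() for _ in range(5)]
print(ports)
```

Whether the resulting sequence is actually hard to predict depends entirely on the kernel's ephemeral-port allocator, which is what the tool below measures.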
OARC, in the meantime, has made a port testing server available. A simple invocation of dig tells you if your recursive resolver is vulnerable:
dig +short porttest.dns-oarc.net TXT

The TXT record rates a resolver's source-port randomness as poor, fair, or good. Unfortunately, on my network I found this record constantly cached from other resolvers, so I wrote a small Python tool that analyzes the randomness of both your source ports as well as your query IDs. The tool can be downloaded from:

Its usage is pretty simple:
$ ./ --output model.resolver
Queries: 256
Port Statistics:
Median: 0.0 Mean: 0 StdDev: 0 (NR)
Runs (Up) (Z-Score): 1=-17.354154 (NR) 2=-10.620342 (NR) 3=-5.481028 (NR) 4=-2.500278
Runs (Down) (Z-Score): 1=-17.354154 (NR) 2=-10.620342 (NR) 3=-5.481028 (NR) 4=-2.500278
Qid Statistics:
Median: 657.0 Mean: 50 StdDev: 25068
Runs (Up) (Z-Score): 1=-0.989196 2=0.396252 3=1.087083 4=2.299189
Runs (Down) (Z-Score): 1=-0.989196 2=-0.262860 3=1.451978 4=2.299189

The tool works by issuing DNS queries that return the source port and query ID as part of the resolved answer. To test a different resolver, specify one via --resolver=ip. The statistics are computed on the sequence of differences between consecutive source ports and query IDs. We report the standard deviation, where a higher standard deviation means more randomness, as well as the number of up and down runs, where a higher Z-score indicates lower randomness.
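The per-length run counts in the output above come from the tool itself. As a rough illustration of the same idea (my approximation, not the tool's exact per-length statistic), the aggregate runs-up-and-down test counts maximal increasing or decreasing streaks and compares the count against what an i.i.d. random sequence would produce:

```python
import random
import statistics

def diff_stats(samples):
    """Standard deviation of the first-difference sequence.
    A fixed port or a simple incrementing counter yields 0."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return statistics.pstdev(diffs)

def runs_test_z(samples):
    """Aggregate runs-up-and-down test. For n observations,
    an i.i.d. random sequence has
        E[R] = (2n - 1) / 3,   Var[R] = (16n - 29) / 90
    total monotone runs R. A strongly negative Z means far too
    few runs, i.e. the values trend instead of jumping around.
    (Ties are dropped, a common simplification.)"""
    n = len(samples)
    signs = [1 if b > a else -1
             for a, b in zip(samples, samples[1:]) if b != a]
    if len(signs) < 2:
        return float('-inf')  # (near-)constant sequence
    runs = 1 + sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    expected = (2 * n - 1) / 3
    variance = (16 * n - 29) / 90
    return (runs - expected) / variance ** 0.5

sequential = list(range(1000, 1256))          # counter-like IDs
random_ids = [random.getrandbits(16) for _ in range(256)]

print(diff_stats(sequential))                 # 0.0: no randomness
print(runs_test_z(sequential))                # strongly negative
print(diff_stats(random_ids), runs_test_z(random_ids))
```

A sequential counter produces one long run, so its Z-score is hugely negative, while a well-randomized source collapses toward Z near zero; this matches the pattern in the sample output above, where the non-random port sequence scores far more negatively than the query IDs.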

In the meantime, I guess we all need to wait for Dan Kaminsky to spill the beans.

