Andrew “bunnie” Huang is just the person to help us out. He’s got a PhD in electrical engineering from MIT, is a technical advisor for Make magazine, created the Chumby hackable digital toy, and cracks computer hardware security for fun on weekends.
He has a pair of blog posts about how you can build your own analog-to-digital converter out of cheap, general-purpose parts. (Note: an FPGA is a “field-programmable gate array”, which is a fancy way of saying that it’s a chip you can redesign on the fly. You can feed it a new design specification, and it will change its behavior from a USB controller to a digital radio receiver to whatever you want. As such, it’s perfect for making an A/D converter. Another cool use of FPGAs is the Universal Software Radio Peripheral, which you can use with GNU Radio, a free software-based radio system.)
One interesting thing from the theory part of the article is that, as bunnie says, “digital technology is on the verge of coming out of the analog closet” (emphasis added):
At very high signal speeds or densities, there is an important energy-time trade-off that digital signal designers must consider. The faster/denser you send/store bits, the less energy and time are available to interpret the information. In fact, the term Bit Error Rate (BER) is starting to appear more and more in product literature. BERs are typically specified in terms of expected failures per bits transmitted. For example, a brand-name hard drive today has a non-recoverable error rate of 1 per 10^14 bits—in other words, once every 12,500 Gbytes transferred. This state of the art hard drive today stores 500 Gbytes of data. Chew on this: if you were to read data off of this drive continuously, you should expect an unrecoverable bit error just once every 25 times through the entire drive’s contents. Another way of looking at this is one in 25 hard drives performing this experiment should expect a bit error after one complete beginning to end read pass.

Feel worried about the integrity of your data yet? Don’t look now. Hard drives encode data so densely that quite often there is insufficient energy stored in a bit to detect the bit on its own, so hard drives use Partial Response, Maximum Likelihood (PRML) techniques to recover your data. In other words, the hard drive controller looks at a sequence of sampled data points and tries to guess at what set of intended bits might have generated a particular response signature. This technique is combined with others, such as error correction techniques (in itself a fascinating subject), to achieve the intended theoretical bit error rates. Have valuable data? Become a believer in back-ups.

Our robust digital complacency is starting to ooze back into the analog days of pops, clicks and hiss. Bit errors are not confined to storage, either. Many high-speed serial links are specified to perform at BERs as low as one error per 10^12 bits.
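The arithmetic in that hard-drive example checks out; here’s a quick back-of-the-envelope script (my own, not bunnie’s) that reproduces the numbers from the quote:

```python
# Sanity-checking the hard-drive BER figures quoted above.
bits_between_errors = 1e14   # one unrecoverable error per 10^14 bits
gbytes_between_errors = bits_between_errors / 8 / 1e9  # bits -> bytes -> GB
drive_gbytes = 500           # capacity of the drive in the example

# How many complete end-to-end reads of the drive per expected bit error?
passes_per_error = gbytes_between_errors / drive_gbytes

print(gbytes_between_errors, passes_per_error)  # 12500.0 25.0
```

So one error per 12,500 GB really does mean one error per 25 full passes over a 500 GB drive, just as the article says.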
While such error rates are low enough for a single user to tolerate, they stack on top of each other as you build more complex systems with more links in them, to the point where managing error rates and component failures is one of the biggest headaches facing server and supercomputer makers today.
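To see how the stacking works, here’s a sketch of the standard independence argument (my own illustration; the article gives no link count, so n = 1000 is a number I picked): if a bit must traverse n independent links, each with bit error probability p, it arrives clean with probability (1 − p)^n, so the effective BER is 1 − (1 − p)^n, which is roughly n·p when p is tiny.

```python
# Compounding per-link bit error rates across a chain of links.
p = 1e-12   # per-link BER, from the quote above
n = 1000    # hypothetical number of links a bit traverses (my assumption)

# Probability the bit is corrupted somewhere along the chain.
effective_ber = 1 - (1 - p) ** n

print(effective_ber)  # roughly 1e-9: a thousand-fold worse than one link
```

A system a thousand links deep turns a comfortable 10^-12 per-link rate into roughly 10^-9 end to end, which is why the big-system builders can’t just ignore it.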
Currently listening to: Leg End by Henry Cow