Daqarta
Data AcQuisition And RealTime Analysis
Scope  Spectrum  Spectrogram  Signal Generator
Software for Windows
Science with your Sound Card!

The following is from the Daqarta Help system:

Features: Oscilloscope, Spectrum Analyzer, 8-Channel Signal Generator

Applications: Frequency response, Distortion measurement, Speech and music, Microphone calibration, Loudspeaker test, Auditory phenomena, Musical instrument tuning, Animal sound, Evoked potentials, Rotating machinery, Automotive, Product test


ADC (Analog-to-Digital Converter)

This device (also abbreviated A/D) converts real-world analog signals into the digital form needed by the computer. (A complementary DAC or D/A converts digital values to analog output signals; a codec (coder-decoder) has both in a single package.)

The analog input signal may take any instantaneous value within its range; for example, if the ADC range is +/-1 volt, then an input value of +0.1234567 volts is perfectly valid. The analog input thus has "infinite" resolution, but the digital output is constrained to only certain values, evenly spaced over the corresponding input range. The job of the ADC is to select the closest value for any given input.

The number of possible digital values is 2^N, where N is the number of bits of the ADC. A 16-bit ADC thus has 65536 possible output values; an 8-bit ADC only 256. If an 8-bit ADC handles a +/-1 V range (2 volts total), then half the range (128 values) encodes positive signals and half negative. The resolution is thus 1/128 volt, or 0.0078125 V at best. So our analog input example of +0.1234567 volts might be encoded as +15/128 = +0.1171875 volts, or more likely as +16/128 = +0.125000 volts, but it will never be "exact". The difference is called "quantization error", and can be reduced by using an ADC with more bits. Most sound cards have 16 bits or more.

Each bit represents a binary value of 1 or 0, which is just a True/False or Yes/No state. A 1-bit ADC could be made from a comparator that simply compared the input voltage to zero volts and set its output high (logical 1) only if the input was zero or above, or else set its output to 0 for negative inputs.

A 2-bit ADC covering a +/-1 volt range could have 4 possible output states:

    00  Below -0.5 volts
    01  Between -0.5 and 0.0 volts
    10  Between 0.0 and +0.5 volts
    11  Above +0.5 volts

This could be made with 3 comparators: The original 1-bit comparator would decide the leftmost or most significant bit (MSB) as before.
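The quantization arithmetic above is easy to check in code. The sketch below is only an illustration (Python, with an idealized round-to-nearest converter of my own naming; real ADCs differ in rounding and clipping details), not anything from Daqarta itself:

```python
def quantize(v, bits=8, vrange=1.0):
    """Idealized ADC: map v (volts, within +/-vrange) to the nearest of
    2**bits evenly spaced codes, and also return that code's voltage."""
    levels = 2 ** (bits - 1)            # codes per polarity: 128 for 8 bits
    step = vrange / levels              # 1/128 V = 0.0078125 V for 8 bits
    code = round(v / step)              # pick the nearest code
    code = max(-levels, min(levels - 1, code))   # clip to the valid range
    return code, code * step

code, approx = quantize(0.1234567)      # the example input from the text
error = 0.1234567 - approx              # the quantization error
```

Running this on the +0.1234567 V example yields code +16, i.e. +0.125 V, with the roughly -1.5 mV quantization error described above.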
For the 2-bit ADC, two added comparators would be set to switch at -0.5 and +0.5 volts, with some digital logic gates to set the least significant bit (LSB) by sorting out the combinations: The LSB is set to 1 if the MSB is set AND the input is above +0.5 volts, OR if the MSB is not set AND the input is above -0.5 volts.

These comparisons can be done very quickly, and a new output state appears essentially as soon as the input voltage changes. But the number of required comparators is 2^N - 1, and each of these requires a different precision reference voltage, plus a lot of logic gates. Such "flash" converters are typically used only for high-speed applications (above 1 MHz), and are limited to 8 bits or less.

Another method often used in laboratory-type data acquisition ADCs is called successive approximation. In this approach a trial digital output is created and converted back to analog form by a Digital-to-Analog Converter (DAC), and compared to the input signal. The trial value is then increased or decreased and the operation repeated until the full output accuracy is obtained.

The successive approximation scheme begins with a trial value (and hence a DAC output voltage) that is half of the full range. This corresponds to setting the most significant bit of the binary output value to a one. If the input voltage is at least half of full-scale, then that bit will remain set as part of the final output; otherwise it is cleared to zero. We have thus "approximated" the input voltage by a single bit value... either it is or is not as big as half-scale. So far, this is the same result as for our simple 1-bit ADC.

Then the next bit is set, adding a quarter-scale voltage to the DAC output. The DAC will now be at 3/4 scale if the first bit was set, or 1/4 scale if not. Again a comparison is made, and the new bit remains set or is cleared depending on the outcome. We now have a two-bit approximation to the input. In like manner, each successive bit is tested.
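The set-and-test loop just described can be sketched as follows. This is a hypothetical idealized model (Python, with a unipolar 0-to-full-scale input range assumed to keep the bit arithmetic simple), not a description of any particular converter:

```python
def sar_adc(vin, bits=8, vfull=1.0):
    """Idealized successive-approximation conversion of vin (0..vfull).
    Each pass tentatively sets one bit, MSB first, and keeps it only if
    the resulting ideal-DAC voltage does not exceed the input."""
    code = 0
    for bit in reversed(range(bits)):            # MSB down to LSB
        trial = code | (1 << bit)                # set this bit on trial
        dac_volts = trial * vfull / (1 << bits)  # ideal DAC output for trial
        if vin >= dac_volts:                     # the comparator decision
            code = trial                         # keep the bit set
    return code

print(sar_adc(0.3))   # 0.3 * 256 = 76.8, so the 8-bit result is 76
```

Note that the loop runs exactly once per bit, which is why an 8-bit conversion needs 8 set-and-test operations, as the text goes on to explain.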
An 8-bit ADC thus requires 8 separate set-and-test operations, each giving a successively better approximation to the true analog input value. When all 8 are done, the final digital value is "latched" on the ADC output lines, and the ADC waits for the next command to begin another conversion.

But all those separate set-and-test operations take time. If the input voltage changed during the conversion, the ADC could give erroneous results. For this reason such ADCs are always preceded by a circuit called a sample-and-hold (S/H) or track-and-hold amplifier: Just before a conversion starts, this circuit takes a very brief "snapshot" or sample of the input voltage and holds that value (as a charge on a capacitor) while the conversion is in progress.

Another approach, called a delta-sigma ADC (sometimes called sigma-delta), is more popular in modern sound cards. In its simplest form this is just a comparator (recall our 1-bit ADC) that compares the input signal to the output of an analog filter that acts to integrate the output of what is essentially a 1-bit DAC:
Assume that we start with both the input and the comparator output at 0. If we then apply a positive voltage to the input, the output of the comparator will go high. Ignoring the "Sync" function for a moment, this high output will be applied to the low-pass filter and start to charge up its capacitor. But as soon as the capacitor charges up past the new input voltage, the comparator output will switch low and begin discharging the capacitor... and when it falls below the input, the comparator will switch high again, and the cycle repeats.

So when the input voltage is constant, the comparator just switches continually high and low at a rapid rate. Whenever there is a change in the input, there will be a brief interval where the comparator stays high or low until the capacitor reaches the new input voltage; then the comparator resumes its constant oscillation. The voltage on the capacitor thus tracks the input voltage, with a slight lag and/or overshoot imposed by the filter and switching operation.

Note that we could apply the comparator output to a second, identical filter, and its capacitor voltage would exactly match the voltage on the first... which tracks the input. So the comparator output is essentially a 1-bit digital version of the analog input, but it can have very high resolution because any instantaneous value is encoded by the whole history of the high-speed data stream.

The digital data stream is not yet in a particularly useful form. The first thing it needs is the Sync function, which only allows output changes at specific times according to pulses from a high-speed digital "clock" input. This gives the output stream an orderly timing structure that allows it to be dealt with by high-speed computer functions, which can convert it down to a standard sample rate and a standard binary encoding scheme.
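The feedback behavior described above can be imitated with a deliberately crude simulation. Everything here is an assumption made for illustration (Python; the RC filter is reduced to a single "charge fraction" alpha, the clock is just the loop iteration, and the function name is my own):

```python
def delta_sigma(vin_samples, alpha=0.02):
    """Crude first-order delta-sigma loop. Each clock (loop pass), the
    comparator outputs +1 or -1 depending on whether the input is above
    the filter capacitor voltage; the 'RC filter' then moves a fraction
    alpha of the way toward that output."""
    cap = 0.0                             # capacitor voltage
    bits = []
    for v in vin_samples:
        out = 1.0 if v >= cap else -1.0   # 1-bit comparator decision
        cap += alpha * (out - cap)        # leaky charge toward the output
        bits.append(out)
    return bits

stream = delta_sigma([0.5] * 5000)        # constant +0.5 input
avg = sum(stream[500:]) / len(stream[500:])   # skip the charge-up transient
```

With a constant +0.5 input (on a +/-1 scale), the long-run average of the +/-1 bitstream settles near 0.5, illustrating how the level is encoded in the stream's history rather than in any single bit.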
This overall system still has a problem: With a large, abrupt step change in the input, the capacitor may take a while to reach the new level. The output stream will respond to the step by going high while the capacitor charges up... the best it can do. If the capacitor and resistor are made smaller to give faster charging, then with a constant input level there might be a noticeable sawtooth on the output due to "hunting" around the comparator threshold.

These problems (and other, more subtle effects) are dealt with by fancier logic in the Sync function and a fancier filter. One result is that the capacitor is charged faster when the comparator output stays constant for a while, indicating it is trying to reach a new level. Then when the comparator output resumes switching off and on again, the charge rate is reduced to give a smoother output.

Unlike flash or successive approximation ADCs, the delta-sigma ADC does not depend upon precision matching of multiple resistors for accuracy or resolution. Designers have a lot of control over the ultimate bit resolution and dynamic range through changes in the digital processing. The ADC can be optimized especially for audio work, where our ears are very sensitive to noise during quiet passages, but very insensitive to soft sounds during loud passages.

Another big advantage of the delta-sigma ADC is that the anti-alias filter can be much better. Because the initial sample rate is so high, in the hundreds of kilohertz to megahertz range, the Nyquist frequency is far removed from the region of the input signals. A simple analog filter is all this type of ADC may need ahead of its input, since the real work of anti-aliasing comes during digital conversion to the lower rate. By then the signal is already in digital form, so a very sharp digital filter can be employed right in the ADC chip as part of the rate conversion process.
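The conversion down to a lower sample rate can be sketched in the same spirit. The plain block average below is only a crude stand-in for the sharp digital decimation filter described above (Python; the modulator repeats the simplified comparator-plus-leaky-filter model, and the block size of 64 is an arbitrary assumption):

```python
def modulate(vin_samples, alpha=0.02):
    """Simplified delta-sigma loop: a comparator against a leaky
    (RC-style) filter of its own +/-1 output."""
    cap, bits = 0.0, []
    for v in vin_samples:
        out = 1.0 if v >= cap else -1.0
        cap += alpha * (out - cap)
        bits.append(out)
    return bits

def decimate(bits, factor=64):
    """Boxcar-average fixed blocks of the 1-bit stream, producing one
    multi-bit sample per block at 1/factor of the original rate."""
    return [sum(bits[i:i + factor]) / factor
            for i in range(0, len(bits) - factor + 1, factor)]

coarse = decimate(modulate([0.25] * 6400))   # 6400 bits -> 100 samples
```

Each 64-bit block collapses into one multi-bit sample at 1/64 the rate, and (after the initial charge-up transient) those samples cluster near the 0.25 input level.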
Since delta-sigma converters look at the history of the input, they are not usually suitable for multiplexing, where one converter acquires data from multiple independent channels by switching them one at a time to the ADC input in rotation; there must be a separate delta-sigma ADC for each input channel. That's why lab-type acquisition boards typically use successive approximation converters, since they usually need to handle 16 or more input channels.

Questions? Comments? Contact us! We respond to ALL inquiries, typically within 24 hrs.

INTERSTELLAR RESEARCH: Over 30 Years of Innovative Instrumentation

© Copyright 2007-2020 by Interstellar Research. All rights reserved.