How does a processor use just 1 bit?


Onceler

Golden Member
Feb 28, 2008
Example: the 1-bit DAC on CD players.
How can you do anything without eight bits?
 

exdeath

Lifer
Jan 29, 2004
Sequential. Serial vs parallel.

In the case of the 1-bit DAC, it's pulse-width modulation, paired with an output stage that integrates the pulses back into an analog level.
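
Something like this toy C sketch of the PWM idea (my own illustration - printing '0'/'1' characters stands in for driving a pin, and a real CD player's 1-bit stage is more sophisticated than this):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy software PWM: for each 8-bit sample, emit 256 one-bit outputs.
   The "pin" is high for 'sample' ticks out of 256, so an RC filter
   integrating the pulses recovers (sample / 256) of full scale. */
#define PWM_PERIOD 256u

static void pwm_emit(uint8_t sample)
{
    for (unsigned tick = 0; tick < PWM_PERIOD; tick++)
        putchar(tick < sample ? '1' : '0');  /* stands in for a pin write */
    putchar('\n');
}

int main(void)
{
    pwm_emit(64);   /* 25% duty -> 25% of full scale after filtering */
    pwm_emit(192);  /* 75% duty */
    return 0;
}
```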

Many things that are "1 bit" on their external interfaces use shift registers on the sending/receiving ends and only use 1 bit over the wire to minimize pin counts and traces, e.g. SD cards.
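
The shift-register trick, sketched in C (illustrative only - a real SD/SPI link adds a clock line, framing and CRCs):

```c
#include <stdint.h>
#include <stdio.h>

/* Parallel-to-serial: clock a byte out over a single data line,
   MSB first, the way a shift register on the sending end would. */
static void shift_out(uint8_t byte, void (*write_pin)(int))
{
    for (int i = 7; i >= 0; i--)
        write_pin((byte >> i) & 1);
}

/* Serial-to-parallel: the receiving shift register reassembles
   the bits back into a byte. */
static uint8_t rx;
static void capture_pin(int bit) { rx = (uint8_t)((rx << 1) | (bit & 1)); }

int main(void)
{
    shift_out(0xA5, capture_pin);     /* 8 bits over a "1-bit wire" */
    printf("received 0x%02X\n", rx);  /* prints 0xA5 */
    return 0;
}
```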
 

Mark R

Diamond Member
Oct 9, 1999
Onceler said:
Example: the 1-bit DAC on CD players. How can you do anything without eight bits?

A 1-bit DAC is just a marketing name for a delta-sigma modulator. Wikipedia gives a good, if somewhat hard-going, description of how this works.

The problem with DACs (and ADCs) is that you need to filter the signal in the analog domain to remove frequencies above half the sample rate. (In an ADC, high frequencies alias: the converter renders them as spurious low frequencies. With a DAC, high frequencies are added to the signal artefactually, because a DAC creates a "stepped" waveform.)
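
To make the aliasing half concrete, here's a toy C example (mine, not specific to any real converter): a 30 kHz tone sampled at 44.1 kHz folds down to 14.1 kHz.

```c
#include <math.h>
#include <stdio.h>

/* A tone at frequency f, sampled at fs, is indistinguishable from a
   tone folded into [0, fs/2]: f_alias = |f - round(f/fs) * fs|. */
static double alias(double f, double fs)
{
    return fabs(f - round(f / fs) * fs);
}

int main(void)
{
    printf("%.1f kHz\n", alias(30.0, 44.1));  /* prints 14.1 */
    return 0;
}
```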

Good-quality analog filters are difficult and expensive to build, need careful calibration, and can drift out of calibration with aging, temperature, etc. Any drift in performance can corrupt your desired signal or let artefacts of the conversion through.

If you can build a digital interpolator that expands a 40 kHz digital signal to, e.g., 10 MHz, then you can use a digital algorithm in your interpolator that produces no spurious signals in the 20 kHz - 5 MHz range. You can then use a very basic analog filter to remove everything above 5 MHz - as your filter only needs to operate miles away from your desired signal, even the crappiest analog filter will offer outstanding performance on this very easy job.
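
Here's a deliberately minimal interpolator sketch in C, using linear interpolation just to show the structure (a real oversampling filter would be a properly designed FIR low-pass, chosen so the image frequencies stay out of the protected band):

```c
#include <stdio.h>

#define R 4  /* upsampling ratio (a real 40 kHz -> 10 MHz design is R = 250) */

/* Upsample by R with linear interpolation; 'out' needs (n-1)*R + 1 slots. */
static void upsample(const double *in, int n, double *out)
{
    int k = 0;
    for (int i = 0; i + 1 < n; i++)
        for (int j = 0; j < R; j++)
            out[k++] = in[i] + (in[i + 1] - in[i]) * j / (double)R;
    out[k] = in[n - 1];
}

int main(void)
{
    double in[] = {0.0, 1.0, 0.0};
    double out[(3 - 1) * R + 1];
    upsample(in, 3, out);
    for (int i = 0; i < (3 - 1) * R + 1; i++)
        printf("%.2f ", out[i]);  /* 0.00 0.25 0.50 0.75 1.00 0.75 ... */
    putchar('\n');
    return 0;
}
```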

Building a 10 MHz DAC is harder than a 40 kHz DAC. But that doesn't matter so much - you can use a much lower resolution DAC and use "dithering" (rather like FRC on LCD monitors) to simulate a higher-resolution DAC, and let your analog filter clean up the signal (which even a crappy filter will do beautifully, because the noise you are intentionally adding to the signal is miles away from your desired signal).
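
A toy C illustration of the dithering idea (the numbers are made up for the example): truncate 16-bit samples to 8 bits with random noise added first. Any single output is coarse, but the average of many outputs recovers the sub-LSB level - and that averaging is exactly what the analog filter does.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Reduce a 16-bit sample to 8 bits, adding one 8-bit LSB of uniform
   noise before truncating, so the rounding error averages out. */
static uint8_t dither_to_8bit(uint16_t x)
{
    int32_t noisy = (int32_t)x + (rand() % 256);
    if (noisy > 0xFFFF) noisy = 0xFFFF;
    return (uint8_t)(noisy >> 8);
}

int main(void)
{
    /* 0x1280 sits exactly halfway between 8-bit codes 18 and 19 */
    double sum = 0;
    for (int i = 0; i < 100000; i++)
        sum += dither_to_8bit(0x1280);
    printf("mean = %.3f\n", sum / 100000);  /* ~18.5: sub-LSB info survives */
    return 0;
}
```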

In short, if your interpolator can expand a 40 kHz signal to a 10 MHz signal, then by dithering you can reduce the resolution from 16 bits to 1 bit (i.e. on/off) and still retain full quality.
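
A first-order delta-sigma loop really is tiny - here's a toy C sketch (illustrative only; production modulators are higher-order and more careful about stability):

```c
#include <stdio.h>

/* First-order delta-sigma modulator: an integrator accumulates the
   difference between the input and the 1-bit feedback, and the sign
   of the accumulator is the output bit. The density of 1s tracks the
   input level, and the quantization error is pushed up in frequency,
   where the analog filter removes it. */
static int delta_sigma(double x /* input in [-1, +1] */)
{
    static double integrator = 0.0;
    static double feedback = 0.0;
    integrator += x - feedback;
    int bit = (integrator >= 0.0);
    feedback = bit ? 1.0 : -1.0;
    return bit;
}

int main(void)
{
    /* A constant input of +0.5 should give ~75% ones
       (the density d solves 2d - 1 = 0.5, so d = 0.75). */
    for (int i = 0; i < 64; i++)
        putchar('0' + delta_sigma(0.5));
    putchar('\n');  /* prints a repeating 1101 pattern */
    return 0;
}
```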

This 1-bit DAC design is cheaper and simpler to construct than a "real" 16-bit DAC. A "real" 16-bit DAC often needed precision calibration (on-die resistors would be laser-trimmed at the factory), leading to massive prices. I remember paying over $100 for a 192 kHz 16-bit DAC chip for a scientific experiment back in the mid-90s. Even with precision calibration, these chips often suffered from defects (e.g. a non-linear response). By contrast, a 1-bit DAC needs no calibration - it's just a switch - and because of the digital signal processing, it is perfectly linear in its response.

In fact, things go further than that. The noise produced by a "real" DAC is spread equally across all frequencies (it's pure white noise). If you tweak your digital interpolator slightly, you can have it bias the noise into the high-frequency range (where it will be filtered out), giving even higher fidelity than a "real" DAC could have produced.
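
To put a formula on that trick: for the first-order loop sketched above, the output works out to Y(z) = X(z) + (1 - z^-1)·E(z), where E is the quantization error (this is the standard first-order delta-sigma noise transfer function). The factor |1 - z^-1| is near zero at low frequencies and rises toward 2 at half the sample rate, so the error ends up concentrated where the analog filter removes it; higher-order loops make the shaping steeper still.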

The benefits of the delta-sigma design of DAC/ADC are so enormous that, with the exception of ultra-fast converters (e.g. RAMDACs, monitor ADCs, LCD panel DACs, and cell phone/base station radio ADCs/DACs), virtually all converters on the market are now delta-sigma.

I'm told that modern top-end audio DACs have moved away from the "1-bit" construction and tend to use 2-bit or 4-bit conversion as the core of their delta-sigma systems (apparently this can "shape" the noise even better, giving essentially no detectable noise within the audio band).
 

DDR4

Junior Member
Feb 2, 2012
Processors can be 1-bit, and so can DACs. A 1 would be one output level and a 0 another, so two levels in total, whereas an 8-bit DAC gives you 256 levels. You can't do much with a single 1-bit value, though, so my guess is that it's used serially.
 

gorobei

Diamond Member
Jan 7, 2007
Mark R said: (full post quoted above)

nice post.
 

Mark R

Diamond Member
Oct 9, 1999
I might as well just add a bit more to it:

There is a rather niche audio format known as SACD (Super Audio CD), which was designed as an audiophile version of the CD. At the time it was designed, the delta-sigma approach was well established and in general use in audio gear.

The industry was looking for an upgrade: a higher bit depth and a higher sample rate. However, achieving both was challenging. A delta-sigma approach was the only practical option, but that raised the problem of the digital signal processors needed in the ADC and the DAC. How could their cost be brought down?

Sony and Philips came up with a clever solution. Rather than record high-resolution samples at a high rate onto the disc, they chose simply to put absolutely minimal signal processing in the ADC and DAC. In particular, the sample-rate reduction (decimation) filter was axed from the ADC, and the interpolation filter was axed from the DAC. Instead, the untouched 1-bit stream from the delta-sigma ADC - running at about 2.8 MHz (64 x 44.1 kHz) - was captured and saved directly to disc.

The 1-bit 2.8 MHz stream is a lot bigger than a sample-rate-reduced stream would be, but the DVD physical format was chosen, and with up to 8.5 GB of storage on a dual-layer disc, even the inefficient direct bit stream posed no space problems.
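
A quick back-of-envelope in C to check the space claim (assuming the standard DSD64 rate of 64 x 44.1 kHz):

```c
#include <stdio.h>

/* Storage math for a raw stereo SACD/DSD bit stream. */
int main(void)
{
    const double bit_rate = 64 * 44100.0;  /* 2.8224e6 bits/s per channel */
    const int channels = 2;                /* stereo */
    const double seconds = 80 * 60.0;      /* an 80-minute album */

    double total_bits = bit_rate * channels * seconds;
    double gigabytes = total_bits / 8.0 / 1e9;

    printf("%.2f Mbit/s, %.2f GB for 80 min\n",
           bit_rate * channels / 1e6, gigabytes);
    /* ~5.64 Mbit/s and ~3.39 GB - comfortably inside one 4.7 GB layer */
    return 0;
}
```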

The performance of the SACD stream is not directly equivalent to a conventional sampled digital stream, but it is approximately equivalent to a 20-bit, 96 kHz stream.

The format struggled due to high cost, a limited catalogue of recordings, restrictive licensing (e.g. digital outputs were not permitted on SACD players) and poor manufacturer support.
 