WARNING long story .... I am always amazed/amused by the (mis)information about different types of DAC chips.
The 'theory' is that the output level of a ladder DAC (Multibit) would be more accurate, in absolute value, compared to Delta Sigma, which is considered just a 'poor' approximation.
Delta Sigma exists in 2 basic forms.
A pure 1 bit version (DSD also is a pure 1 bit) and a 'multibit' version where a few bits are ladder type which are then Delta Sigma modulated.
The theory most people 'follow' is that DS is bad (because it's incredibly cheap to make) and old fashioned 'multibit' is good (costs a LOT to make a good chip).
Driven by sentiment really, a bit like the 'vinyl is better' story.
Most assume or have read/heard somewhere that only a ladder DAC outputs the absolute correct voltage levels. Assuming bit perfect playback of course.
Those using the volume control of the player/PC or DSP can already forget about bit-perfect....
It is important to realise what an analog to 'digital' converter actually is.
An analog wave form is 'sampled' at a specific point in time and sampled again a fixed time period later.
This 'sampling' basically is measuring very quickly at a point in time what the input voltage level exactly is.
The measured voltage is 'converted' into the digital value closest to the nearest 'step' in which a signal can be measured (65536 steps for a 16 bit file).
The more bits, the smaller the steps and the more accurate the representation of the original voltage.
As we are talking about music the analog amplitude is changing constantly and 'sampled' at fixed intervals.
The shorter the interval the higher the max frequency range will be.
For CD each second 44100 'samples' are taken.
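To make that concrete, here is a tiny numpy sketch of the sampling + quantizing idea (the 1 kHz tone and the scaling are just my own illustration choices, not anything from a real ADC):

```python
import numpy as np

fs = 44100              # CD: 44100 samples taken every second
f = 1000.0              # a 1 kHz test tone (assumed, for illustration only)
t = np.arange(fs) / fs  # one second worth of sampling instants

# 'sampling': read the analog voltage at each point in time (ideal sine here)
analog = np.sin(2 * np.pi * f * t)

# 'quantizing': snap each reading to the nearest of the 65536 16-bit steps
quantized = np.round(analog * 32768).clip(-32768, 32767) / 32768

print(f"step size         : {2 / 65536:.2e} of full scale")
print(f"max rounding error: {np.max(np.abs(analog - quantized)):.2e}")
```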
Remember the sample is a single point in time ... not a continuous 'held' signal till the next sample is taken.
A 'sample' thus does not represent a voltage during 22.7us.
It just takes 22.7us before the following sample is 'taken'.
On playback we should thus also have a single point in time with the same value converted followed by 'nothing' till 22.7us later the next point-value comes along.
Of course this doesn't work in practice... it works this way during A-D conversion though but NOT the other way around (during DA conversion).
a bit of history:
The first 'digital' converted signal was a 1 bit signal by the way and not a PCM signal.
BUT that took very fast switching and LOTS of datapoints (DSD) which wasn't commercially feasible at that time at all.
So PCM was born, which was 'easier' to make. Fewer points in time (so no high speeds needed) and the sample value was noted in 16 bits.
On playback the easiest way is to use a sample-and-hold D-A converter.
Such a converter sets its output (very fast) to the 'sample value' and HOLDS that output voltage till 22.7us later the next sample value sets the output voltage to another value.
That value too is 'held' till the next sample arrives 22.7us later. And so on....
So recap.... on recording a single POINT in time (an extremely short period of a few nanoseconds) is taken every 22.7us, and on reproduction that 'value' (which represents that extremely short POINT in time) is 'held' at that sample value for a period of 22.7us.
This thus is quite different from the sampling process.
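A small sketch of that difference, with an assumed 10 kHz tone (chosen because it changes a lot within one 22.7us interval): the held output equals the value at the sampling instant, NOT the average the changing signal had over the interval:

```python
import numpy as np

fs = 44100
Ts = 1 / fs                    # the 22.7 us between samples
f = 10000.0                    # a 10 kHz tone changes a LOT within 22.7 us

t0 = 3 * Ts                    # an arbitrary sample instant

# the point-in-time sample that a sample-and-hold DAC would 'hold'
held = np.sin(2 * np.pi * f * t0)

# the average the analog signal really had over the following 22.7 us
fine = t0 + np.linspace(0.0, Ts, 10001)
true_avg = np.mean(np.sin(2 * np.pi * f * fine))

print(f"held value              : {held:+.4f}")
print(f"true average over 22.7us: {true_avg:+.4f}")
```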
Think: the all familiar 'step graphs' you often see when digital is mentioned.
(the image above is from legacy.earlham.edu)
Well... in MB DACs those steps actually are there directly on the DAC CHIP output pin(s).
The first question is how ACCURATE these steps can actually be.
This is determined by tolerances. The smallest step in a 16 bit signal is 32768x smaller in value than the largest 'step'.
Something like that is very difficult to produce because the largest 'value' must be exactly 32768x bigger than the smallest 'step' and thus must not be 32767x or 32769x bigger.
Tolerances... very hard to achieve in real life.
The biggest 'bit step' thus must be accurate to better than 0.001% in order to achieve true 16 bit precision.
This is why those MultiBit chips are so expensive. They are laser trimmed or other trickery is used such as 'selection'.
Then there is also the fact that temperature changes also influence that accuracy as well as stability over time.
Because of this the actual accuracy of 16 bit ladder DAC's often isn't much better than what could be achieved with 13 to 14 bit step sizes.
Back in the old days it was even less.
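As a rough feel for how tolerances eat resolution, here is a small numpy sketch of a binary-weighted 16 bit ladder. The 0.005% weight tolerance and the 'usable bits' metric are my own illustrative choices, not data from any real chip:

```python
import numpy as np

bits = 16
rng = np.random.default_rng(0)

# ideal binary bit weights 1, 2, 4 ... 32768 (MSB is 32768x the LSB)
ideal = 2.0 ** np.arange(bits)

# give each weight a random 0.005% tolerance (assumed, already excellent)
real = ideal * (1 + rng.uniform(-5e-5, 5e-5, bits))

codes = np.arange(2 ** bits)
onbits = (codes[:, None] >> np.arange(bits)) & 1   # which bits are set
error = np.abs(onbits @ real - onbits @ ideal)     # error per code, in LSBs

worst = error.max()
print(f"worst-case error  : {worst:.2f} LSB")
print(f"roughly usable as : {bits - np.log2(2 * worst):.1f} bits")
```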
The newer '18 bit, 20 bit or 24bit' DAC's also do NOT reach their desired accuracy.
Those chips 'accept' 18, 20 or 24 bit digital 'words', which is why they are called 18, 20 or 24 bit chips, and the actual conversion stage also has that many 'steps' available.
The biggest steps, however, require even HIGHER accuracy and are therefore even more difficult to create.
Some manufacturers use trickery like split DAC-sections or other tricks to still achieve an as good as possible (but far from perfect) conversion.
Back to the conversion itself: the from-digital-to-analog converted signal doesn't LOOK anything like the original 'point' in time at all... (see image above) it is a 'stripe' over a certain time and thus is NOT exactly the (average) value the signal had over those 22.7us.
So... no... MB DAC's do NOT represent the analog signal MORE accurately despite what most 'assume'.
Aside from some filterless NOS DACs that some prefer to use because of their 'purist' beliefs, NO other DACs (as in device, not chip) in existence have 'stairsteps' as an output signal.
This is where the 'MultiBit is much better' theory falls flat on its face.
The people claiming MB is more 'accurate' are not listening to that 'stairstep signal' NOR to single 'point in time' samples at all.
They are actually listening to a 'gliding' analog signal (like the bottom signal in the drawing above) that is an APPROXIMATION of the samples (just like DeltaSigma is an approximation).
This is because of the 'brickwall or reconstruction' filter that MUST follow the DAC chip.
That filter must be 'steep' and thus is very complex. It has to be because we don't want to reproduce the sample frequency (assume 44.1kHz) nor the 'garbage' above it, but we DO want all signals BELOW 20kHz.
The stairstep signal on playback thus is ONLY representing the 'original value' (as close as it can) just after the analog value appeared on the output of the chip.
Then it is 'held' there BUT on recording the signal was still changing in value.
The MB DAC chip does NOT change its value though UNTIL the next 'sample value' comes along.
SO .. NOT as accurate as most believe.
The reconstruction filter 'smoothes over'/connects, in a 'sliding' way, the AVERAGE values of those voltage levels 'held for 22.7us'.
Thereby it creates an analog signal that only briefly has the exact same value the original sample had at that point, and NOT during the full 22.7us.
So much for 'the MB DAC has the exact values'... it's nonsense; the analog reconstruction filter ensures there are no steps.
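A quick numpy illustration of that 'smoothing over'. The 16x oversampled view and the crude windowed-sinc lowpass are stand-ins for the real analog reconstruction filter, purely to show the effect:

```python
import numpy as np

fs, os, f = 44100, 16, 1000.0      # CD rate, 16x oversampled 'analog' view

# the staircase: every sample value held for 22.7 us (what the MB chip outputs)
samples = np.sin(2 * np.pi * f * np.arange(2048) / fs)
staircase = np.repeat(samples, os)

# a crude windowed-sinc lowpass at ~20 kHz, standing in for the
# (much better engineered) analog reconstruction filter
taps = 511
n = np.arange(taps) - taps // 2
fc = 20000 / (fs * os)                       # normalized cutoff
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(taps)

smooth = np.convolve(staircase, h, mode="same")

# within one 'held' interval the staircase is flat, the filtered signal glides
hold = slice(352, 360)                       # 8 points inside one hold period
print("staircase:", np.round(staircase[hold], 4))
print("filtered :", np.round(smooth[hold], 4))
```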
Therefore the MB DAC is NOT an exact representation of the digital value at all which is what 'believers' like to believe.
They believe it is more accurate and thus they HEAR it as more accurate/real whatever as well. (Mindset)
Now Delta Sigma works differently from the much more easily understood ladder DAC.
What the reconstruction filter behind the MultiBit DAC does (create analog values between the sample points), the DeltaSigma does as well: it 'creates' many sample values 'in between' the 'known' sample values. Except not in small steps but in HUGE steps whose AVERAGE voltage is VERY VERY close to the digital sample value.
It is a quite complicated process involving many previous and/or later sample 'values' to create the correct 'average' value.
The DS process thus creates MANY inbetween values.
Those small (time) steps (which have a very large voltage swing) can be 'post filtered' by a very simple and NOT 'steep and complex' filter.
Often consisting of just 1 resistor + capacitor (6dB/octave) filter.
So the average voltages between those original values are mathematically generated values that VERY closely approximate the values the signal (probably) had when the original analog waveform was sampled.
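For those who like to see the principle rather than just read it: below is a deliberately minimal first-order 1-bit modulator (real DS chips are higher order, often multi-bit, and noise-shaped far more cleverly). The point is only that the average of the huge +1/-1 steps lands on the input value, and that a simple RC already recovers it:

```python
import numpy as np

# a first-order 1-bit delta-sigma modulator (a minimal sketch, not a real chip)
os_factor = 64                        # assumed 64x oversampling
x = np.full(os_factor * 1000, 0.3)    # a DC input of 0.3 of full scale

out = np.empty_like(x)
acc, fb = 0.0, 0.0
for i, xi in enumerate(x):
    acc += xi - fb                    # integrate the error vs the feedback
    fb = 1.0 if acc >= 0 else -1.0    # 1-bit quantizer: only +1 or -1 exists
    out[i] = fb

print(f"input value             : {x[0]:.4f}")
print(f"average of the +1/-1 out: {np.mean(out):.4f}")

# the 'simple' post filter: a single-pole RC equivalent (6dB/octave)
y = 0.0
for v in out:
    y += 0.002 * (v - y)              # one resistor + one capacitor, in code
print(f"after the RC filter     : {y:.4f}")
```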
The fun part is that those 'approximations' are actually even MORE accurate than the analog representations the MB ladder DAC can put out because of tolerances and the sample point NOT being a sample 'period of time'.
This is in contrast to what MB aficionados 'claim': that DS is a POOR approximation, where in fact that approximation is closer to reality than what MB can do IN REAL LIFE.
Just search for oscilloscope pictures of -90dB sinewaves from DS DACs and MB DACs and then see which signal is more accurate.
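Why -90dB is such a revealing test is easy to show: at that level a 16 bit signal has only about one step of amplitude left, so the ladder DAC is toggling between just three adjacent codes and every step-size error is in plain sight:

```python
import numpy as np

t = np.arange(44100) / 44100
# a -90 dBFS sine: amplitude is 10^(-90/20), about 0.0000316 of full scale
sine = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)

# in 16 bit that is barely one step: only three codes remain
print("16-bit codes used:", np.unique(np.round(sine * 32768)))
```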
So BOTH values are inaccurate representations and the MB is actually WORSE in accuracy than the DS DAC... because of physics (tolerances) and the effect of the reconstruction filters.
Of course we see 32 bit DS DACs around, which doesn't mean they can make actual 'voltage steps' that small.
They can't, also because of physics.
The frequency at which the output can be switched is limited, thereby restricting the actual step size.
Also noise levels become an issue.
All components have MORE noise than can be described as 32 bits 'steps'.
The 32 bit DAC specification simply means the chip accepts 32 bit 'words'; 16, 18, 20, 24 bit or whatever you throw at it will be interpolated to 32 bits.
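To see why 32 actual bits is physically off the table, a back-of-the-envelope comparison (the 2V full scale and the 100 ohm resistor are assumed example values):

```python
import math

full_scale = 2.0                          # assume a 2 V output range
lsb_32 = full_scale / 2 ** 32
print(f"one '32 bit' step     : {lsb_32 * 1e9:.2f} nV")  # ~0.47 nV

# Johnson (thermal) noise of a single 100 ohm resistor over 20 kHz:
# v_n = sqrt(4 k T R B)
k, T, R, B = 1.380649e-23, 300.0, 100.0, 20000.0
v_n = math.sqrt(4 * k * T * R * B)
print(f"100 ohm resistor noise: {v_n * 1e9:.0f} nV")     # ~180 nV
```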
So there you have it ... the Multibit hype is just what it is.. a hype driven by demand of audiophiles.
MB is NOT more accurate, in fact it is arguably less accurate than DS.
The inaccuracies of both MB and DS are smaller than anyone can detect though.
You need to BLAST music at 130dB SPL (deafening) in order to be able to detect the smallest 'deviations'. Those are so soft that at night in a quiet room you could faintly hear 'something', while the actual music signal would vary between 100dB SPL and 130dB SPL (IF your speakers can reach that and your amp can deliver the power).
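The numbers behind that claim, using the usual ~6.02dB-per-bit rule of thumb:

```python
# rule of thumb: dynamic range of an N-bit sine ~ 6.02*N + 1.76 dB
for bits in (13.5, 16.0, 18.0, 20.0):
    print(f"{bits:>4} bits ~ {6.02 * bits + 1.76:5.1f} dB")

# with 130 dB SPL peaks, even a 16-bit noise floor sits at
# roughly 130 - 98 = 32 dB SPL: about the level of a quiet room at night
```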
So take the hype for what it is... a hype .. driven by demand and websites 'discussing what they hear'.
IF I had to choose between the Bifrost with AK4490 (a VERY accurate few-bit-type DS) which can probably resolve around 18 bits, maybe even 20 bits, and a much more expensive multibit version with a 16 bit ladder DAC that can resolve to about 13.5 bits, then for me the choice would be simple.
The AK4490 please ... MUCH more accurate and closer to the original.
But WAIT everyone keeps saying the MB sounds so much better, more 'organic' more 'vinyl', more realistic...
I am not going to comment on the why of it but it most certainly is not what many 'believe' to be the cause of this.
So... in the end you should choose what you 'think/feel/believe' is THE best and perhaps have to trust other peoples ears.
Audition a DAC if possible and have a listen.
If you are really serious about NOT wasting money then I suggest you test blind (you need a second person to switch the DACs) instead of KNOWING what you are listening to. That is all I would like to say about this.
Then there is the analog (filter) stage, the power supply, the PCB layout, the used chips, the chosen algorithms, the (digital) signal handling, implementation of signal analysis, the actual output voltage level etc. that can actually measurably change the output signal in a significant and audible manner.
I think that those 'differences' are BIGGER than the actual difference in 'output voltage accuracy' between those 2 different conversion methods.
In short.... the DS is more accurate and cheaper.
The MB is more expensive and less accurate BUT you have something most don't have.
To me the choice is obvious based on technical merits.
I would choose a DAC based on functionality and looks rather than on 2 different types of conversion chips and hype trains.
Others may prefer the hype or having something 'different'.
When someone hears substantial differences between those DACS then they should choose what they believe or heard to be the best.
I am DAC deaf (I can hear differences between poorly designed and properly designed ones though), so every competently designed DAC sounds exactly the same to me when level matched AND when I am not aware of WHAT I am listening to.