MPC-BE supports DSD, DSF and DFF files!

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Thought I'd share my recent discovery. Since v1.43, MPC-BE supports these HQ audio file types (usually found on SACDs). Previously only JRiver and Foobar (with plugins) had support for them.

The advantage with MPC is that it can stream 24-bit/96 kHz 5.1 to a receiver, something I was having problems with in JRiver (it only seemed to have 2.0 support, at least with my receiver), and apparently Foobar can only manage 24-bit/48 kHz, although that may be plugin dependent.

I'm not sure exactly how it does it, and I can't find any notes on the net, but I've set it up to use the MPC Audio Renderer in exclusive mode, so it bypasses the DirectSound sample conversion and configures the receiver to the same sample rate, bit depth and channel count as the source. (It also means that if I have a stereo file, I can apply Dolby surround settings to it on the receiver without changing the Windows setting from 5.1 to stereo.)

Obviously there must be some way of translating the 2,822,400 Hz/1-bit audio to 24-bit/96 kHz. JRiver's most compatible way of doing this is to put the audio information into a PCM container and stream that; I've noticed this not only uses a temp file but also a full core of CPU activity. MPC-BE seems to do neither, but my receiver does say it's receiving a PCM signal.
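
For what it's worth, the principle can be sketched in a few lines. This is a toy illustration only, not MPC-BE's actual pipeline: the moving-average filter and the decimation factor of 32 are my own assumptions for the demo.

```python
import numpy as np

# Toy DSD-to-PCM conversion: low-pass filter a 1-bit stream
# (values +/-1 at 2,822,400 Hz), then decimate down to a
# multi-bit PCM rate. Real converters use far better filters
# than this moving average.
DSD_RATE = 2_822_400
DECIM = 32                          # 2,822,400 / 32 = 88,200 Hz
PCM_RATE = DSD_RATE // DECIM

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=DSD_RATE // 100)  # 10 ms of 1-bit "audio"

kernel = np.ones(DECIM) / DECIM                       # crude low-pass
pcm = np.convolve(bits, kernel, mode="same")[::DECIM]

print(PCM_RATE)     # 88200
print(len(pcm))     # 882 multi-bit samples covering the same 10 ms
```

Each output sample is now a fractional value rather than +/-1, i.e. ordinary PCM, which fits the receiver reporting a PCM signal.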

TLDR: I can now effectively stream NiN The Downward Spiral and Pink Floyd Wish You Were Here in full surround (how all Pink Floyd was intended to be heard) and it sounds glorious!
 

lamedude

Golden Member
Jan 14, 2011
1,220
42
91
If you're not bitstreaming DSD over $10K HDMI cables you're doing it wrong. Only filthy casuals listen to PCM.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Ha, it's more a case of finding HQ audio in whatever format I can get it.

My HDMI cables were 2 for a tenner!
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
So my TLDR is: you're converting DSD into PCM and streaming that to your receiver. If that still magically sounds better than the 24/48 or the 24/96 PCM version of the same material, it means the source used for the DSD version is probably different. Which is the case with almost all "hi-res" audio. The content is different. And if you downsample the DSD version to 24/48, it sounds exactly the same.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
I'm not saying it sounds better; like you say, the source often accounts for most of the differences you hear. And DSD does apparently have issues above a certain frequency, which means any DSD decoder uses a low-pass filter at around 50 kHz, so a lot of the very high frequencies are cut out. Apparently this was to avoid damage to equipment caused by the intensified high end.

It's just that I'm unable to find certain things like Aerosmith's Toys in the Attic (the guitar on Walk This Way really does sound noticeably more "in the room" and rounded compared to the CD) in any other format.

I found the Terminator 2 soundtrack in 5.1, and Pink Floyd's Wish You Were Here EP, which sounds glorious in 5.1, sonically better than even Dark Side of the Moon imho. Found some Deep Purple in 5.1 too. I wouldn't have been able to hear them at their fullest without this discovery. I'm not convinced the WASAPI plugin in Foobar properly bypasses DirectSound, as there is no exclusive mode, and JRiver won't let me bitstream 5.1.

I can definitely hear a difference between 16 and 24 bit music, so I try my best to find that kind of quality. The sampling rate doesn't matter as much to me, but it's nice if it is there.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
What are examples of differences you hear?

It sounds consistently fuller and more rounded. Imagine the waveform is plotted on a graph. With 24bit, the X axis has a finer granularity so more of the detail in the waveform is kept. A bit like going from 16bit to 32bit colour but not quite as obvious. It could also be likened to an increase in pixel density or PPI.

It's subtle enough that most people probably wouldn't know whether they were listening to 24-bit or 16-bit, but if you've heard it before you are more likely to be able to notice it.
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
It sounds consistently fuller and more rounded. Imagine the waveform is plotted on a graph. With 24bit, the X axis has a finer granularity so more of the detail in the waveform is kept. A bit like going from 16bit to 32bit colour but not quite as obvious. It could also be likened to an increase in pixel density or PPI.

It's subtle enough that most people probably wouldn't know whether they were listening to 24-bit or 16-bit, but if you've heard it before you are more likely to be able to notice it.

Yea that's what I was worried you'd say.

The bit depth is the Y axis and just adds more quantization points for volume sampling (or amplitude if you want to be technical), not frequency sampling.

So unless you're listening to something that has more than 80 dB of dynamic range, 24 bit is completely unnecessary. And it is highly unlikely that you have anything with that kind of dynamic range because if you did, to hear the quietest parts of that signal, you would have to set your amplification at a level that would send you into a coma when the loudest parts of that signal come out.

In fact, most records today have no more than 20-25 dB of dynamic range, for which even 16 bit is overkill. You could easily capture it with 8 bits, and still maintain an imperceptible noise floor.

24 bit is really only useful for recording. Just in case you have your microphone gains all over the place, it allows you to digitize everything without clipping, and normalize later on.
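
The dB figures above line up with the standard rule of thumb that each PCM bit buys roughly 6.02 dB of dynamic range, i.e. 20·log10(2^bits). A quick check:

```python
import math

# Theoretical dynamic range of linear PCM at a given bit depth:
# 20 * log10(2**bits), which works out to about 6.02 dB per bit.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

for bits in (8, 16, 24):
    print(bits, round(dynamic_range_db(bits), 1))
# 8 -> 48.2 dB, 16 -> 96.3 dB, 24 -> 144.5 dB
```

So 16-bit already covers about 96 dB, comfortably past the 80 dB figure mentioned above, and the extra ~48 dB from 24-bit sits entirely in the noise floor at sane listening levels.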

But anyways enjoy the fuller sound if you think it sounds that way. It's probably just different source material, as usual with hi-res audio.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
This is all mostly true up until the point you are finished mixing and want to master the recording. (I would argue that I definitely have quite a few recordings with wider than 25 dB of dynamic range; it's not all brick-walled. I'd also point out that the gain for microphones is at the preamp stage and not the levels stage, which is where the 24-bit sampling would come in.)

Yes it is used to give you more room to manoeuvre whilst recording and more space to mix in the effects. Once you get to the mastering stage and you put your final mix onto whatever you are using for the master copy (confusingly, this is then sent off to a mastering suite which does additional processing to make all the recordings sound consistent on the same album) it's a different story. The 24 bits can be used in the way I described above to give more granularity to the finished recording.

Not only can I hear this, but if you analyse the waveform (not at home at the moment but will update with the specific software I use) you can actually see where the bits are in use, and see fake recordings (the 24bit version of Pantera's Cowboys from Hell contains no data in the last 8 bits)
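
The "no data in the last 8 bits" check is easy to automate. Here is a sketch using synthetic frames (the helper name `looks_padded_16bit` is mine, and this doesn't verify any particular release; with a real file you would feed it the raw frames read via Python's `wave` module):

```python
# For little-endian 24-bit PCM, the least significant byte of each
# sample is byte 0 of every 3-byte frame. If that byte is always zero,
# the file is effectively 16-bit content padded out to 24 bits.

def looks_padded_16bit(raw: bytes) -> bool:
    """True if the low byte of every 24-bit sample is zero."""
    return all(raw[i] == 0 for i in range(0, len(raw), 3))

genuine = bytes([0x12, 0x34, 0x56] * 4)   # low bytes carry data
padded  = bytes([0x00, 0x34, 0x56] * 4)   # low bytes all empty

print(looks_padded_16bit(genuine))  # False
print(looks_padded_16bit(padded))   # True
```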

You will note that in terms of the final product, a 24bit recording doesn't sound any louder than a 16bit recording. It doesn't have peaks that are 50% louder than a 16bit recording, so the additional resolution is used to keep more of the sound.

You are right that when creating a 24-bit release, many recordings are remixed, sometimes by a different engineer. I believe the Aerosmith remaster is an example of this, but this is not always the case. Some have the forethought to keep the 24-bit masters and the mixes are identical. Same goes if you are digitising from an analogue master.

I suggest having a listen for yourself to a variety of sources. You could even check out the difference between 16 and 24bit samples that get given away free with magazines.

It's a common misconception caused by people reading one definition of how 24bit is used and assuming that is the only use for it.
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
This is all mostly true up until the point you are finished mixing and want to master the recording. (I would argue that I definitely have quite a few recordings with wider than 25 dB of dynamic range; it's not all brick-walled. I'd also point out that the gain for microphones is at the preamp stage and not the levels stage, which is where the 24-bit sampling would come in.)

Yes it is used to give you more room to manoeuvre whilst recording and more space to mix in the effects. Once you get to the mastering stage and you put your final mix onto whatever you are using for the master copy (confusingly, this is then sent off to a mastering suite which does additional processing to make all the recordings sound consistent on the same album) it's a different story. The 24 bits can be used in the way I described above to give more granularity to the finished recording.

Not only can I hear this, but if you analyse the waveform (not at home at the moment but will update with the specific software I use) you can actually see where the bits are in use, and see fake recordings (the 24bit version of Pantera's Cowboys from Hell contains no data in the last 8 bits)

You will note that in terms of the final product, a 24bit recording doesn't sound any louder than a 16bit recording. It doesn't have peaks that are 50% louder than a 16bit recording, so the additional resolution is used to keep more of the sound.

You are right that when creating a 24-bit release, many recordings are remixed, sometimes by a different engineer. I believe the Aerosmith remaster is an example of this, but this is not always the case. Some have the forethought to keep the 24-bit masters and the mixes are identical. Same goes if you are digitising from an analogue master.

I suggest having a listen for yourself to a variety of sources. You could even check out the difference between 16 and 24bit samples that get given away free with magazines.

It's a common misconception caused by people reading one definition of how 24bit is used and assuming that is the only use for it.

I don't need to listen to samples provided to me; I created my own samples from 24-bit tracks and crushed them to 16-bit. I couldn't hear any difference. Nor did I hear any difference when I downsampled from 96K to 44.1.

Even though everyone is different, one thing that is not up for debate is that 24 bit does not increase the resolution of audio; it just increases the possible amplitude levels of each sample. That may sound different (i.e. better) to your ears, depending on what frequencies your ears are most sensitive to, but it does not make the audio better. Louder (or less loud) does not equal better.

Of course most people spectacularly fail A/B tests if one side is even 1dB louder than the other, you probably fall in this camp.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
I don't need to listen to samples provided to me; I created my own samples from 24-bit tracks and crushed them to 16-bit. I couldn't hear any difference. Nor did I hear any difference when I downsampled from 96K to 44.1.

Even though everyone is different, one thing that is not up for debate is that 24 bit does not increase the resolution of audio; it just increases the possible amplitude levels of each sample. That may sound different (i.e. better) to your ears, depending on what frequencies your ears are most sensitive to, but it does not make the audio better. Louder (or less loud) does not equal better.

Of course most people spectacularly fail A/B tests if one side is even 1dB louder than the other, you probably fall in this camp.

No I can tell the difference when one track has been normalised compared to one that hasn't. It doesn't necessarily mean better quality, but most people will think that in a blind test.

I'm not sure if you don't quite understand the full recording process or are just plain refusing to entertain that there might be more to it than you realise.

I have studied music and recording through college and university. I'm not your average audiophile, I wouldn't even call myself an audiophile. I make practical decisions based on my ears and years of experience, and the results are quite astounding imho.

Working in 24bit whilst mixing a track is different from mixing down a track to a 24bit format.

I'm interested to know which tracks you chose. Not all 24 bit tracks are created equally. I'm also interested to know what you were playing back on and what speakers you were using (although I have heard the difference on some slightly above average speakers)

I can hear a difference if I play back 44.1 kHz tracks with Windows set to 48 or 96 kHz instead of leaving it at 44.1, because DirectSound doesn't convert them cleanly; it introduces artefacts because 96 is not a whole-number multiple of 44.1. It sounds worse and I can hear it on my system. This is why I prefer WASAPI over DirectSound.

This implies that something isn't quite right if you didn't hear the artefacts when downsampling from 96 to 44.1. I don't think I noticed it much on my last speaker setup, but my current ones are sensitive enough that it makes a difference.
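
The "whole-number multiple" point can be put in numbers. This sketch (the `ratio` helper is just for illustration) reduces a conversion to its minimal upsample/decimate pair using the greatest common divisor of the two rates:

```python
import math

# Reduce a sample-rate conversion to its minimal L/M pair:
# upsample by L, then decimate by M, where L/M = dst/src.
def ratio(src: int, dst: int) -> tuple[int, int]:
    g = math.gcd(src, dst)
    return dst // g, src // g

print(ratio(44100, 88200))   # (2, 1): a clean integer relationship
print(ratio(44100, 96000))   # (320, 147): no integer relationship
```

Doubling 44.1 kHz is trivial, while 44.1 kHz to 96 kHz needs the full 320/147 ratio, which is exactly where a lazy resampler can audibly struggle.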
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
No I can tell the difference when one track has been normalised compared to one that hasn't. It doesn't necessarily mean better quality, but most people will think that in a blind test.

I'm not sure if you don't quite understand the full recording process or are just plain refusing to entertain that there might be more to it than you realise.

I have studied music and recording through college and university. I'm not your average audiophile, I wouldn't even call myself an audiophile. I make practical decisions based on my ears and years of experience, and the results are quite astounding imho.

Working in 24bit whilst mixing a track is different from mixing down a track to a 24bit format.

I'm interested to know which tracks you chose. Not all 24 bit tracks are created equally. I'm also interested to know what you were playing back on and what speakers you were using (although I have heard the difference on some slightly above average speakers)

I can hear a difference if I play back 44.1 kHz tracks with Windows set to 48 or 96 kHz instead of leaving it at 44.1, because DirectSound doesn't convert them cleanly; it introduces artefacts because 96 is not a whole-number multiple of 44.1. It sounds worse and I can hear it on my system. This is why I prefer WASAPI over DirectSound.

This implies that something isn't quite right if you didn't hear the artefacts when downsampling from 96 to 44.1. I don't think I noticed it much on my last speaker setup, but my current ones are sensitive enough that it makes a difference.

Yes, so can I. It all depends how much you compress. If you compress by 1 dB, I can guarantee that you will fail a blind test. If you compress by 10 dB, yea, no shit I can hear the difference.

I wouldn't quite call myself an audiophile either. I don't believe in half of the things that so called "audiophiles" believe in, that includes hi-res audio that comes from the same master as the 16/44.1 version.

I didn't study music in university, I studied digital signal processing as part of my telecom program. My personal experience combined with the mathematics I was exposed to in that course is why you and I will never be able to agree on this subject.

I don't remember which tracks I chose; it was a long time ago, maybe 7 or 8 years. At the time I was using HD650 headphones.

I did not let any OS resample. I resampled offline using GoldWave at the time. I always output at the same sample rate as the source file, over USB audio to an external DAC.

Anyways, I'm not really in the mood for going back and forth on this, I already said that everyone is different, but one thing that cannot be argued is that bit depth does not improve audio resolution. It just gives you 48 more dB of headroom, and that takes the noise floor well below anything audible. Again, completely pointless since some of the best recordings today use maybe 10 bits of dynamic range (if not less), unless you are going to post process the signal at the playback stage. Which is another can of worms that we're not discussing.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
I'm tired too, have just spent 12 hours travelling across the country for a very worthwhile 2 hour visit, but I will leave you with this.

I opened both the 16/44.1 and the 24/96 version of the same song in GoldWave: Mayonnaise by Smashing Pumpkins. I then clicked the zoom button 36 times on each, which showed me the zoomed waveform for the middle of the song.

The waves are slightly different because each track has a slightly different lead in time, but the overall timbre will be the same because the parts visualised are less than a second apart. You can check this in the timeline at the bottom of the screen.

[Image: fx84z4.png]


[Image: wqv4g5.png]


I'll let you decide which one is which.

And for the record, it doesn't matter whether you let DirectSound or a wave editor change the sample rate; it's the method itself that counts. You have to raise both sample rates to a common whole number that each one divides into evenly, do the conversion at that rate, then divide back down. Most programs won't do this by default. I'm not sure whether this or previous versions of GoldWave do it properly; I'm just being as informative as I can be right now.
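
That method, raising both rates to a common whole-number rate, converting there, then dividing back down, can be sketched as follows. This is a toy windowed-sinc resampler with parameters I chose for the demo, not what GoldWave or DirectSound actually do:

```python
import numpy as np

# Toy rational resampler: zero-stuff by L (raising the signal to the
# common rate), low-pass filter with a windowed sinc, then keep every
# M-th sample. For 44.1 kHz -> 96 kHz the common rate is
# lcm(44100, 96000) = 14,112,000 Hz, so L = 320 and M = 147.
# The short filter here is nowhere near production quality.
def resample_rational(x, L, M, taps=301):
    up = np.zeros(len(x) * L)
    up[::L] = x * L                    # zero-stuff; L compensates the gain
    fc = 0.5 / max(L, M)               # cutoff as a fraction of the common rate
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(taps)
    y = np.convolve(up, h, mode="same")
    return y[::M]

t = np.arange(441) / 44100             # 10 ms of a 1 kHz sine at 44.1 kHz
x = np.sin(2 * np.pi * 1000 * t)
y = resample_rational(x, 320, 147)
print(len(x), len(y))                  # 441 samples become 960 (same 10 ms)
```

The 441 input samples and 960 output samples both span exactly 10 ms, which is the whole point of the exact 320/147 ratio.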
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
I'm tired too, have just spent 12 hours travelling across the country for a very worthwhile 2 hour visit, but I will leave you with this.

I opened both the 16/44.1 and the 24/96 version of the same song in GoldWave: Mayonnaise by Smashing Pumpkins. I then clicked the zoom button 36 times on each, which showed me the zoomed waveform for the middle of the song.

The waves are slightly different because each track has a slightly different lead in time, but the overall timbre will be the same because the parts visualised are less than a second apart. You can check this in the timeline at the bottom of the screen.

[Image: fx84z4.png]


[Image: wqv4g5.png]


I'll let you decide which one is which.

And for the record, it doesn't matter whether you let DirectSound or a wave editor change the sample rate; it's the method itself that counts. You have to raise both sample rates to a common whole number that each one divides into evenly, do the conversion at that rate, then divide back down. Most programs won't do this by default. I'm not sure whether this or previous versions of GoldWave do it properly; I'm just being as informative as I can be right now.

I'm not sure where you're going with this, but what you posted is malarkey at so many levels.

Take the 24/96 track and re-save it as 16/96 PCM. Now open it again, compare the two waveforms, superimpose them and see if you can spot any differences.

If you do, congratulations, your source material had a massive dynamic range, so big that you wouldn't have heard the quietest parts anyway without going deaf first.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
How do you explain the quantisation in the dynamic range in the top pic?

Think about it like this. The quietest point on the X axis is still 0, effectively silence, and the loudest point on the scale is the same loudness in both. This is your dynamic range. Where are the other bits used? The extra 8 bits aren't used to go above the loudest point, otherwise the 24-bit sample would either be louder than the 16-bit one, or there would be a huge blank gap in the waveform.

Instead they are used to make the X axis go from having 65,536 points on the scale to having 16,777,216.
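
Those two step counts are simply powers of two, whichever axis one puts them on:

```python
# Quantization levels available at each PCM bit depth: 2**bits.
for bits in (16, 24):
    print(f"{bits}-bit: {2 ** bits:,} levels")
# 16-bit: 65,536 levels; 24-bit: 16,777,216 levels
```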
 