
Yes. And 16-bit cannot, by itself, represent the full dynamic range of a lot of music without distortion. Most samples, most of the time, do not use a full 16 bits. This is why dithering is used during CD mastering.

Take it from me: when you master 24-bit stereo tracks down to 16 and you don't dither, huge amounts of low-level detail disappear. The detail in the quiet passages is there in 24-bit, and lost when it's truncated to 16 bits. Add the dithering and you get increased noise, but the detail comes back.

One could suggest that with dithering 16 bits can represent it. But that's with a whole bunch of noise added to the signal. You can argue that the noise is inaudible, but it is only _just_ inaudible, and when mastering you can audition the different dither spectra to find which one least impacts the music.

http://www.digido.com/articles-and-demos12/13-bob-katz/16-di...
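
To make the truncate-versus-dither point concrete, here's a toy sketch (not my actual mastering chain; it just assumes float samples in [-1, 1] and plain TPDF dither, no noise shaping) of what happens to a tone sitting below one 16-bit LSB:

    import numpy as np

    def to_16bit(x, dither=True, rng=np.random.default_rng(0)):
        # Reduce float samples in [-1, 1] to 16-bit signed integers.
        # With dither=False the quantiser just rounds to the nearest code,
        # so sub-LSB detail is lost; with dither=True a +/-1 LSB triangular
        # (TPDF) noise is added first, so that detail survives as noise.
        scale = 2 ** 15
        if dither:
            # TPDF dither = sum of two uniform +/-0.5 LSB noise sources.
            x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                     rng.uniform(-0.5, 0.5, x.shape)) / scale
        return np.clip(np.round(x * scale), -scale, scale - 1).astype(np.int16)

    # A 1 kHz tone at -100 dBFS: its amplitude is below one 16-bit LSB.
    t = np.arange(48000) / 48000.0
    quiet = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)
    print(np.unique(to_16bit(quiet, dither=False)))  # [0] -- the tone is gone
    print(np.unique(to_16bit(quiet, dither=True)))   # [-1  0  1] -- it survives in the noise

Without dither the -100 dBFS tone rounds away to silence; with dither it comes back, buried in the added noise, which is exactly the trade-off described above.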



I certainly won't argue that 16-bit is just as good as 24-bit from an objective standpoint; 24-bit is obviously superior, full stop. I'm just saying that for most listeners (everyone except those who listen at high levels in dedicated, treated listening rooms in very quiet environments) the difference will be inaudible almost all the time. Extremely low-level detail doesn't really matter if it's lost in the >20 dB SPL of natural background noise in your room.
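
For rough numbers: each bit buys about 6 dB, so an ideal 16-bit channel gives roughly 96 dB of dynamic range and 24-bit roughly 144 dB. A back-of-the-envelope sketch, ignoring dither and noise shaping:

    import math

    def dynamic_range_db(bits):
        # Theoretical range of an ideal N-bit quantiser: ~6.02 dB per bit
        # (ignores dither, noise shaping and real converter noise floors).
        return 20 * math.log10(2 ** bits)

    print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~96 dB
    print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~144 dB

Stack that against a room whose background noise already sits well above the threshold of hearing and a big chunk of the theoretical range never becomes audible during playback.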

At that point the issue may become moot, as other problems like standing waves, harmonic distortion, inaccurate speaker frequency response and so on creep in and affect music playback to a subjectively larger degree than 16-bit versus 24-bit does, IMO.

All that said, 24-bit is definitely the way to go since we might as well do it right even if only x percent of listeners will notice.

As an aside, thank you for being one of the conscientious 'good guys' in the studio. I collect music and wish I had a nickel for every sloppy recording I've heard.


Yes. I completely agree. From my perspective, even if one person in a hundred can hear a difference, then I'm going to pay attention. I don't want to boil everything down to meet the average. I think it's fine to release 16/44. I think well produced music sounds excellent in that format. It's one of the reasons the CD has done so well. It's just hifi enough to capture everything. And it's amazing to think this technology had its début in 1982!

But for so long, for those of us who want higher quality (hearing it exactly as it would have been heard in the studio during production), there was nothing we could do. Being willing to pay more for it doesn't matter. You just can't get it. It's still that way.

What gripes me is the attitude of many, including this xiph article, that hi-res versions "make no sense", that "there is no point", and thus that everyone should just be happy with what they've got and anyone protesting is an "audiophool" or believes in magic fairies or something. We all get lumped in with the people buying $3000 IEC power cables. For many people it's all black and white; there is no room for grey. You either think that 128kbps MP3s sound identical to the analogue master tape, or you are a fool spending $20,000 on magical stickers to increase the speed of light in your CD player.

All I want is to be able to buy the mix and hear it as the engineer heard it in the studio. That would be nice. I know it's not for everyone, but it doesn't make me crazy.

As food for thought, have a read of what Rupert Neve said about Geoff Emerick's hearing ability (being able to discern a 3dB rise at 54kHz) here: http://poonshead.com/Reading/Articles.aspx

"The danger here is that the more qualified you are, the more you 'know' that something can't be true, so you don't believe it. Or you 'know' a design can't be done, so you don't try it."


What's the argument in favor of using extremely high sampling rates, though? Using 48 kHz instead of 44.1 seems reasonable (as in the Philips Digital Compact Cassette that never really caught on), giving a little headroom for wider frequency response, moving the filters a little higher or whatever, but I've seen D/A converters that run at 384 kHz, and I just can't fathom what the point is... It smacks of the "if some is good, more must be better" mentality.
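
For reference, the usable bandwidth is half the sample rate (the Nyquist limit), before you even account for filter roll-off. A quick sketch of what those rates actually buy:

    # Usable bandwidth is half the sample rate (the Nyquist limit).
    for rate_khz in (44.1, 48, 96, 192, 384):
        print(f"{rate_khz:6.1f} kHz sampling -> {rate_khz / 2:6.2f} kHz of bandwidth")

Against the roughly 20 kHz ceiling of human hearing, 384 kHz leaves an enormous amount of bandwidth with nothing audible in it.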

There's definitely nothing crazy about wanting to hear a recording with as much fidelity to the master as possible. Yeah, I do remember people saying that 128 kbps MP3 was "CD quality" in the early days of the format, and that was a laughable claim indeed. One would have to be pretty tin-eared to think 128 kbps was hi-fi, although I'd say there were valid use cases for it, at least back when portable music players had storage in the megabyte range instead of the gigabytes we have today.

So many of those audiophile tweaks are just outright scams, and a fool and his money are soon parted. I guess education is the only way to combat that.

As for Emerick's ability to hear anything at 54 kHz, much less discern a 3 dB difference there, well, I am really, really skeptical. I'm obviously not in a position to say it's impossible, but it strikes me as an outright superhuman ability that should have been tested scientifically.


I'm not sure there is a compelling reason to distribute final music pieces in 192kHz.

I can only speak from my own experience: I record and mix in 24/96, but for reasons that don't really relate to music distribution. When doing further processing, some plugins sound better with their algorithms running at 96k instead of 44k. Every plugin has been written with compromises. And I find I can push hi-res audio further in the digital domain before unpleasant artefacts arise.

It's very much like image processing. If you take a picture with a cheap, basic 1-megapixel camera and then play with the curves and sharpness, at a certain point smooth graduated colour becomes "posterised". If you take the shot with a DSLR (with 12 bits per primary colour) then you can push the image a lot further before the posterisation occurs.
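
To put some toy numbers on that analogy (purely illustrative; the bit depths and the curve here are made up for the example), quantise a smooth gradient at two source depths, push it through an aggressive curve, and count how many of the 256 output codes survive:

    import numpy as np

    def output_levels_used(source_bits, gamma=3.0):
        # Quantise a smooth 0..1 gradient to the source bit depth, apply an
        # aggressive curve, requantise to an 8-bit output, and count how many
        # of the 256 output codes still get used. Missing codes show up as
        # visible banding ("posterisation").
        levels = 2 ** source_bits
        gradient = np.round(np.linspace(0, 1, 4096) * (levels - 1)) / (levels - 1)
        curved = gradient ** gamma
        return len(np.unique(np.round(curved * 255)))

    print(" 8-bit source:", output_levels_used(8), "of 256 output codes used")
    print("12-bit source:", output_levels_used(12), "of 256 output codes used")

The 8-bit source leaves gaps between output codes, which is what you see as banding; the 12-bit source still fills them all after the same curve.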

I have found the same occurs for audio. I can manipulate the sound with fewer artefacts when it's hi-res. The plugins sound more transparent and smoother. I tend not to go above 96kHz because the effect is already achieved at 96, and 192 (to my ears) sounds no better; I'd just have bigger files and more CPU load from the plugins processing the extra data.

The bandwidth of 96kHz is just short of 50kHz, so if as an added benefit I satisfy the one-in-a-million Geoff Emericks, then all the better.

But once the final mix is rendered and no more processing needs to be done, i.e. for distribution, this hi-res advantage seems moot. Maybe there is still some advantage for people or devices that post-process the sound digitally in some way, like a digital equaliser in your playback device, or something like that. But then again, that device could always upsample before processing.

I tend to use 88.2kHz if the final destination is intended to be CD (so there is less aliasing when sample-rate converting down to 44.1kHz), and 96kHz otherwise.
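
This isn't how the mastering-grade SRCs are built internally, but as a rough sketch of why the 2:1 relationship is friendly, compare the up/down ratios a polyphase resampler needs for 88.2kHz versus 96kHz sources (scipy used here purely for illustration):

    from fractions import Fraction

    import numpy as np
    from scipy.signal import resample_poly

    def downsample(x, src_rate, dst_rate):
        # Reduce to the smallest integer up/down ratio, then let a polyphase
        # filter do the band-limiting and decimation in one pass.
        ratio = Fraction(dst_rate, src_rate)
        return resample_poly(x, ratio.numerator, ratio.denominator)

    print(Fraction(44100, 88200))  # 1/2     -> a simple 2:1 decimation
    print(Fraction(44100, 96000))  # 147/320 -> a far more awkward ratio

    one_second = np.random.default_rng(0).standard_normal(88200)
    print(len(downsample(one_second, 88200, 44100)))  # 44100 samples out

A 2:1 decimation is about the simplest case there is; 147/320 needs a much more elaborate polyphase structure, which is (at least historically) one of the places converters have differed in quality.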

The reason I harp on about bit depth is that, in my experience, that is where we are falling short. If I take my hi-res sources and convert to 44.1 or 48 with a high-quality SRC I hear no difference at all. But when I change to 16-bit the difference is enormous. There is always a degradation. And it's never a good thing. It seems silly to just be throwing away that bit depth because of a 1982 format that people aren't even listening on anymore.

Also on the topic of SRCs, this site has some interesting comparisons. For the record, I do my SRC conversion with iZotope RX's 64-bit SRC. http://src.infinitewave.ca/

So in conclusion, I want 24 bit tracks. If they're given to me as 44, 96, 192... whatever. As long as they're 24 bit. Enough with the 16 bit! :D



