https://people.xiph.org/~xiphmont/demo/neil-young.html

24/192 Music Downloads are Very Silly Indeed

24/96 is more than enough!

Redundant!?


I record artists at 24/88.2 kHz through an RME 802.


Recording at a high sampling rate is recommended so that a steep anti-aliasing filter at the input is not required. After the data is captured, downsample to 44.1 or 48 kHz at 24 (or 32) bits for mixing and processing, and finally down to 16/44.1 or 16/48 for distribution. Everything else would be an unjustified waste of resources.
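To make that workflow concrete, here is a minimal Python sketch (assuming NumPy/SciPy, a mono float capture at 88.2 kHz, and plain TPDF dither; a sketch of the idea, not a description of any particular converter or DAW):

    # A rough sketch of the capture-high / deliver-low workflow described above,
    # assuming a mono float signal captured at 88.2 kHz (NumPy + SciPy only).
    from math import gcd
    import numpy as np
    from scipy.signal import resample_poly

    FS_CAPTURE = 88_200    # tracking rate: the analog anti-alias filter can be gentle
    FS_DELIVERY = 44_100   # distribution rate

    def to_delivery_format(x_hi: np.ndarray) -> np.ndarray:
        """Downsample from the capture rate to 44.1 kHz and dither down to 16 bits."""
        g = gcd(FS_DELIVERY, FS_CAPTURE)
        # resample_poly applies its own anti-alias filter before decimating (here 1:2).
        x_lo = resample_poly(x_hi, up=FS_DELIVERY // g, down=FS_CAPTURE // g)

        # Plain TPDF dither of +/- 1 LSB before truncating the word length to 16 bits.
        lsb = 1.0 / 32768.0
        dither = (np.random.uniform(-0.5, 0.5, x_lo.shape)
                  + np.random.uniform(-0.5, 0.5, x_lo.shape)) * lsb
        return np.clip(np.round((x_lo + dither) * 32767), -32768, 32767).astype(np.int16)

    # Example: a 1 kHz tone captured at the high rate, delivered at 16/44.1.
    t = np.arange(FS_CAPTURE) / FS_CAPTURE
    print(to_delivery_format(0.5 * np.sin(2 * np.pi * 1000 * t)).shape)   # (44100,)

88.2 kHz is a convenient capture rate here because it is an exact multiple of 44.1 kHz; a 96 kHz capture would need a 147:320 polyphase ratio instead.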

So most of the "differences" heard are due to the recording process rather than the playback process. Some claim the frequency extension beyond the brick-wall Redbook standard is what gives some listeners their preference for vinyl. Could this preference rather be due to the better, purer analog path of the vinyl-era "AAD" chain? I'd like your opinion on this. My own opinion is that it's all mass psychology. I have vinyl, but only for recordings I can't find elsewhere, like remix and extended-mix records.


This is one of the very BEST articles I have read on the subject and TOTALLY supports what I have experienced as one of the first three people (verified at an AES meeting in 1983) to go digital, when you either got a SONY player or you got nothing (Philips, the inventors, had not produced their units yet).


Try this and find out what your ears like: http://www.2l.no/hires/

Liking and accuracy don't always go together. I've seen people smother ketchup over a perfect steak... because that's how they "like" it.

DXD and MQA both sound superior in my system. But I have also been impressed by the sound of a 16/44 FLAC file on a Melco streamer. Anyway, there is a choice for us end-users. Some prefer vinyl and analogue tape.

Do whatever makes you happy; that's the purpose of music. However, arguing about physics, math or biology should be done with facts, numbers and repeatable experiments.

Problem with 'audiophile neuroses' is that not everything is measurable, although our hearing is capable of hearing differences between components, cables and also formats...

Yeah, right... the problem is that measuring equipment has surpassed our perceptive capabilities; however, few are able to properly interpret what they measure and how it correlates to what we hear. Never mind...

We can have lengthy discussions regarding this, angel11. For half a year I have been searching for scientific articles on this topic; it has become part of my hobby, so to say. But the most important thing is to be happy and satisfied and to enjoy the music.

By the way, MQA's hidden agenda is to restrict music. Dirk (the admin) sought out the patents, of which one is particularly disturbing.

The MQA debate is something else, and audiophiles in particular are not aware that they are not the major target at all. Even if it were about DRM or restriction, I would have no problem with it. Too many albums out there are (still) illegally being copied and 'shared'. I have achieved better sound quality at home than ever before with MQA since June 2016, and I also feel better knowing the artists are being paid. Some engineers are arguing about these older patents and even claim to have reverse engineered it, which is a very arrogant statement if you just check how MQA is encrypted. They are smart enough NOT to patent everything but to hide it behind NDA contracts. MQA is much more than post-processing using minimum phase and upsampling. If it were that easy, they would not have made such an effort. Anyway, all the major music labels are on board and are also shareholders. The train will not be stopped. Replacing MP3 is the endgame. https://hsm.utimaco.com/wp-content/uploads/2017/09/MQA-Case-Study_vfinal_DIN-A4.pdf

peter8

“Problem with 'audiophile neuroses' is that not everything is measurable,

This is crap. We can measure the sonic signature of black holes half a universe away. We know what's happening in some guru's stereo.

If it can reliably be audibly sensed by a human, it can be measured, and understood as to why.

“although our hearing is capable of hearing differences between components, cables and also formats...”

And if any of these things were to provide any sonic difference, we would know why. Assuming there is one, the engineers of the product understand why too. There’s no mysterious Ancient Aliens secret.

That said, the goal of these things is to be sonically transparent, which is cheap and easy to accomplish.

Peter Veth Your marketing-spin article aside:

MQA, as a model of enhanced sound quality, is a fraud. High Res Audio for playback, above 16/44-48, is also a fraud.

You said you research scientific articles. Do you have any?

And appreciating MQA has nothing to do with marketing spin. I also listen to FLAC and WAV, 16/44 and upwards; all fine with me.

peter8 That link, what’s your point?

The whole point in promoting MQA as native enhancement in sound quality is pure marketing.

Nope, the whole point is that our hearing prefers natural sound. Cheers and have a nice day

Peter Veth You know I agree that MQA is intriguing. But since you've been researching articles on it, you have to agree it is a technically complex issue, and look at the whole debate, not just articles that support MQA and its hypotheses and claims. The math might be a little over your head, but the original article claiming human ears beat the Fourier indeterminacy is BUNK. Nyquist sampling can work fine because the indeterminacy is not what the article claimed; it is much, much more fine-grained. The problem with this, as far as MQA is concerned, is that Bob Stuart also quotes the BUNK article in various papers and articles! Here's the debunking: https://arxiv.org/abs/1501.06890

peter8 By all means, if you don’t want to argue with the big boys...

Nowhere does your link say that. That’s YOUR (very) lay interpretation.

grant6807 Perhaps you're too quick on the trigger with the logical fallacies; Peter repeats a good point that the pre-ringing MQA claims to eliminate (or greatly reduce) is indeed bothersome, because it is never found in sounds we have evolved to know, only in, e.g., digital reproduction.

Tudor Munteanu Please show me where that link says we “prefer natural sound” in the context of this discussion.

Arguing is fatiguing, Grant. I simply trust what I hear when comparing various formats and remasters of albums which I know by heart. Just listen and you might appreciate it as well.

Grant Moon Why do I have to show you anything?! The point is, the debate is more complex, you just like to hit with the Bible (so to speak; not the Bible but the Wikipedia entry on fallacies). The point is, the ear hates pre-ringing (also too much post-ringing...) We know how stuff usually rings, and if it is too much that messes up the sound beyond recognition. That is what the "natural" sound means, not waterfalls. Now you know.

peter8 Do you trust what you see? So when you watch Copperfield you just believe it?

How do you scientifically control for your listening tests? Blind? Level matched?

If you have a banal opinion, fine, but when you make a statement of perceived fact and try to prop it up with science, you can’t just go back to the confines of your comfy opinion!

I suppose you also enjoy your music by using your ears, or not, grant6807?

Tudor Munteanu I agree with most of what you're saying. However, if you are making a claim, it's your responsibility to support it, so if you don't want to, I'm potentially more comfortable dismissing it.

Making a logical argument is key in understanding.

Cherry-picking a singular limited study to prop up one's uncontrolled listening tests is absurd.

peter8 Using my ears, and “trusting” my ears are entirely different things.

Well, at the end of the day, this hobby and passion for music are all about listening and fine-tuning. I do not purchase a new amplifier, cables or loudspeakers by reading scientific articles or reviews, or by waiting until the absolute 'truth' has been verified by a substantial and scientific blind test. With remastered albums and new formats, it is the same thing. For me it is clear that 16/44 recordings, especially from the early days, sound harsh. Not all, but many. The fact that an old DAT recording can be restored to a whole new level of reality with MQA is simply a marvellous achievement. Just talk to the recording engineers who cherished these unique recordings and ask what their reaction was after the MQA white-glove remastering procedure. Those genuine experiences, and the ones I have myself, are all it takes.

peter8 Most amps, cables et al. are sonically transparent, and yes, when I found this out, I stopped fooling with component mix-and-matching: a process driven by exotic-gear marketing.

If you aren’t using a microphone for speaker room calibration then you should. Both are a very random synergy that should be properly evaluated, and it’s easy and fun to do. As you said. It’s a hobby.

Since there is no single recording standard (far from it), and there's nothing in a CD that is inherently harsh, I'm left to conclude that you're biased in some way. It's like someone complaining of a ghost in the basement.

There are many remastered high res recordings which sound better, but that’s because of the remastered part.

Er... can someone explain, or point to a (science-based) article on, what exactly "pre-ringing" is? As someone who's designed precision ADCs and DACs, I'd be very grateful to learn what I've been missing.

angel11 I’ll also hang up while they answer.

angel11 Pre-ringing is real. That's why I never use a linear phase equalizer in my music productions. This figure from my Audio Expert book shows the pre-ringing that accompanies normal "post" ringing after applying EQ boost to an impulse wave. Where a normal "minimum phase" EQ adds ringing after the wave, this adds it both before and after. You design A/D/A converters and never heard of pre-ringing?

Great, thanks! Now, please tell me how this applies to A/D or D/A conversion?

ethan361, please don't confuse the artefacts arising from inappropriate use (or misuse) of digital filters with the actual process of conversion. A properly designed converter neither adds nor takes away anything from the signal.

Converters use filters.

Yes, they sometimes do, and there's plenty to choose from, all with their specific merits and downsides... I'm always amazed when I see a DAC that gives you a bunch of filter settings to choose from, but then again, it's a sign of an immature approach to D/A. "Throw them at the user and let him pick which one he likes!"

A good DAC has no sound, and I agree that a competent designer won't confuse things by claiming that different filters should be selected to taste.

Yes, a good DAC and a good engineer... haha. The "no ringing" argument is kinda loaded, don't you all think?! We see both good and bad mastering in practice, and that is a real issue.


High rates in recording and post production are a must... 16/48 for a user is nirvana.

No, they are not. High sample rates are only relevant if you're going to slow down what you recorded a lot. Otherwise, there's nothing to gain from it. Plugins will oversample when it's relevant. There is no need to use anything other than the final sample rate (which is usually 44.1 kHz, but 48 kHz for film/TV).

24bit is useful for headroom when recording but not necessary outside of that.

Daniel... high rate for me is 24/96... anything higher is just audiophile mental masturbation and added noise.

Since we can't hear anything beyond 20kHz, what's the point of anything more than 44.1kHz?

None... agree 100/100... Anti-aliasing has enough room to work correctly...

Carlo Maria Benedetti And why waste nearly six times the space! Not to mention that the noise you can't hear produces distortion you will hear, so 44-48 is slightly better for playback fidelity.


Why are people posting the same shit that others already have...

As long as high-res audio is being pushed, shit like this can't be posted too frequently.


Posting the same article over and over again is very silly, indeed! Simply search this group's history, and you will find past discussions on this subject. It's not as simple as the science. The recording chain is fairly complex and there are all kinds of issues from provenance to mastering and filtering. That's why I am sympathetic to certain formats like DSD and even MQA that take a minimal, or controlled, approach (respectively) to mastering. Yes, I am aware of the debates; MQA is not perfect and nothing is. DSD has issues, too. Personally I'd prefer 24/96 PCM as a reproduction standard. The files I analyzed that were mastered at 24/96 or 24/88.2 sound and look perfect to me (as far as spectrum, levels, etc.). Yes, you could master well at 16/44.1 with appropriate dithering, but why bother?! No one wants CDs anymore. I like DSD, too - not sure why it sounds so good. MQA sounds great, too, but I am still on the fence about it. I think it came a bit late to the game, but the control it offers over the chain is welcome (the fact that undecoded it still offers CD quality is smart). I draw the line at loaded statements that attack Bob Stuart etc. We can discuss all this on the technical merits without getting personal.

"No one wants CDs anymore" ...speak for yourself, tudor0! ? Although my collection has effectively stalled at its current size, because new non-mainstream releases aren't even pressed to CD here in the USA, and sometimes there isn't an (expensive-import) EU pressing available, either. ☹


Numbers are... just numbers that you can import into Excel... so


A couple of points about MQA: a) there are several patents, but the developers applied for patents on stuff as they discovered it. The patented techniques don't necessarily appear in the final version (though equally, presumably some of them might do); b) MQA is more interested in maintaining timing accuracy than excessive frequency response. The developers apparently use sampling techniques like those described here: http://lcav.epfl.ch/research/topics/sampling_FRI.html which may not respond helpfully to traditional analysis.


MQA is something pushed very hard in the UK, while the rest of the world lives very happily with FLAC...

No problem to be happy with what you like. My experience is different, but I still appreciate 16/44. It is a matter of choice. Almost all HD and MQA remasters sound better in my opinion.


What an idiot. If you guys can't hear a difference just sell every piece of gear you have and go find an old Sony boom box at a garage sale

Son of a bitch... you're the idiot.


Well... What sample frequency is needed to accurately describe a sawtooth wave at 10 kHz? Triangular wave? Sine wave? 2 ms slew rate impulse?

Infinite is needed. But we can't hear past 20 kHz, so the answer is 44.1 kHz.

Anders - try the following experiment: using two high quality analog function generators, one set for a sine wave at 15kHz, the other for a square wave at the same frequency, level adjusted for equal volume (sq. wave amplitude should be 0.707 of sine peak), feeding two inputs of the highest quality amplifier and speakers you can get, set to a comfortable level. Do the comparison and share your observations.

ASAP

You will be surprised.

Anders Mellgren, exactly! 96 kHz is the minimum to be able to discriminate these waveforms for audio, and what do you know, it's the minimum sample rate recommended by the Producers and Engineers Wing of the Grammys. 24-bit, which is about gradations of amplitude rather than ultimate dynamic range, is also recommended and something I prefer, especially after obtaining the Beatles USB stick, which is 44.1 kHz/24-bit. I admit few can hear any difference, but sound quality is not a democratic issue. If one person in a million can detect it reliably, then it is audible, period. I know that person, by the way, and don't do so badly myself in testing, but GC is 100% with blind ABX for amplifiers as far as formal tests go. Of course, the testers explain it away and refuse to accept what their own tests reveal, twice! He's my reference and he can judge in 20 seconds, but I've really been contemplating the waveform issue. Thanks.

And one more thing.

What does the interference of 45 kHz and 48 kHz sound like?

Silence, if through a linear system.

Have you ever heard a guitar chorus?

What's a guitar??

anders9 Seriously, what's with all this quizzing?! If you have something to say, just say it, and please argue and support it scientifically. You know something we don't? This is neither an enthusiasts' audio group nor a "trust your ears" pseudo-audiophile nonsense group.

Anders, seriously - how on earth can you make such an analogy?! What's linear about the vibrations propagating and bouncing about in a guitar's body, bending and twisting it as they reflect from its edges? If you don't know, there was actually a 'novel' speaker idea some years ago which used two ultrasonic frequencies, one fixed, the other modulated with audio, which were targeted at a flat surface. It did produce audible sound, sort of...

If you add two interfering sound waves close in wavelength but not at the same wavelength, you'll get a resulting wave that alternates between constructive and destructive interference; in other words, you'll get an amplitude-modulated waveform whose energy varies at the difference of the two frequencies. 48 kHz - 45 kHz = 3 kHz. Those 3 kHz are audible; in fact, there are speakers using this functionality.

https://www.performanceaudio.com/item/sennheiser-audiobeam/11174/

Sorry for the inconvenience...

Jumping to transducers based on "interference" in a non-linear medium... how is this related to the topic of the optimal sampling rate for reproduction of music? In any case, the correct answer to your question (what does the interference of 45 kHz and 48 kHz sound like?) is not 3 kHz. That would be one of the beat frequencies, if any intermodulation is occurring. You'll also get a bunch of other frequencies too...

angel11 Your experiment with a function generator is flawed. This is from my Audio Expert book:

I've also seen claims proving the audibility of ultrasonic content where a 15 KHz sine wave is played, then switched to a square wave. Proponents believe that the quality change heard proves the audibility of ultrasonic frequencies. But this doesn't take into account that loudspeakers and power amplifiers can be nonlinear at those high frequencies, thereby affecting the audible spectrum. Further, most hardware generators used to create test tones output a fixed peak level. When the peak (not average) levels are the same, a square wave has 2 dB more energy at the fundamental frequency than a sine wave. So, of course, the waves could sound different. It's also worth mentioning that few microphones, and even fewer loudspeakers, can handle frequencies much higher than 20 KHz. Aside from tiny-diaphragm condenser microphones meant for acoustic testing, the response of most microphones and speakers is down several dB by 20 KHz if not lower.

You're also incorrect about mixing two sine waves and getting difference frequencies. That happens only in the presence of non-linearity. It doesn't happen naturally in the air. How could you not know this?
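For what it's worth, the 2 dB figure quoted above is just Fourier-series arithmetic and can be checked in a few lines (an illustration only, nothing specific to any particular generator or amplifier):

    # The fundamental of an ideal square wave with peak A is (4/pi)*A.
    import math

    equal_peak_db = 20 * math.log10(4 / math.pi)                  # square vs sine, equal peak level
    equal_rms_db = 20 * math.log10((4 / math.pi) / math.sqrt(2))  # square vs sine, equal RMS level

    print(f"equal peak: square's fundamental is {equal_peak_db:+.2f} dB vs the sine")  # +2.10 dB
    print(f"equal RMS:  square's fundamental is {equal_rms_db:+.2f} dB vs the sine")   # -0.91 dB

So matching peak levels (as fixed-output hardware generators do) already leaves a roughly 2 dB level difference at the fundamental alone, before any ultrasonic content enters the picture.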

Ethan, you're repeating what I wrote.

When mixing two sine waves in a linear medium what you get is amplitude modulation, not new frequencies.

NO! Mixing two frequencies linearly gives you only those two frequencies. LOL, this is Signal Processing 101. AM gives new frequencies above and below the carrier frequency: https://en.wikipedia.org/wiki/Amplitude_modulation
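A quick NumPy sanity check of that distinction, using arbitrary illustrative frequencies: summing two sines leaves exactly two spectral lines, while multiplying (amplitude modulation, a non-linear operation) creates sidebands.

    # Summing two sines (linear) vs. amplitude-modulating one by the other (multiplicative).
    import numpy as np

    fs = 48_000
    t = np.arange(fs) / fs          # one second of audio -> FFT bin index equals frequency in Hz
    f_c, f_m = 10_000, 300          # illustrative "carrier" and "modulator"

    linear_sum = np.sin(2 * np.pi * f_c * t) + np.sin(2 * np.pi * f_m * t)
    am = (1 + 0.5 * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_c * t)

    def tones(x):
        """Frequencies (Hz) of spectral lines within 40 dB of the strongest one."""
        mag = np.abs(np.fft.rfft(x))
        return np.flatnonzero(mag > mag.max() * 0.01)

    print(tones(linear_sum))   # [  300 10000]        -> only the two original tones
    print(tones(am))           # [ 9700 10000 10300]  -> carrier plus sidebands at f_c +/- f_m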


A lot of good stuff in this article. But it still does not explain why, with 192 kHz, "practical fidelity is slightly worse. The ultrasonics are a liability during playback". I don't see why. Granted, it may not sound better than 44 kHz, but why would it be worse? If you had content above 20 kHz, maybe it could cause IM in a badly designed 192 kHz system, but it would cause even worse distortion in a 44 kHz system. Your anti-aliasing filter would take care of this. Unless you are silly enough to set your anti-aliasing filter at a higher frequency in the 192 kHz case, the 192 kHz system will always sound better; inaudibly better maybe, but better.


I edited my response... sorry, I wrote too fast.

He explains it in the paragraph titled "192kHz considered harmful". You're on the right track. Keep in mind, this is playback.

He concludes...

There are a few ways to avoid the extra distortion:

  1. A dedicated ultrasonic-only speaker, amplifier, and crossover stage to separate and independently reproduce the ultrasonics you can't hear, just so they don't mess up the sounds you can.

  2. Amplifiers and transducers designed for wider frequency reproduction, so ultrasonics don't cause audible intermodulation. Given equal expense and complexity, this additional frequency range must come at the cost of some performance reduction in the audible portion of the spectrum.

  3. Speakers and amplifiers carefully designed not to reproduce ultrasonics anyway.

  4. Not encoding such a wide frequency range to begin with. You can't and won't have ultrasonic intermodulation distortion in the audible band if there's no ultrasonic content.

They all amount to the same thing, but only 4. makes any sense.

correct, 1 to 3 are probably expensive and undetectable in a blind test. 4 makes sense. Which leaves the issue of how to design an anti-aliasing filter with a steep enough slope without messing up other things.

At this point, you can spend more money and engineering on an anti-aliasing filter during recording. If the recording has zero content above 20k, you are ok in your living room...


What’s your point?

Nothing. Got it.

You cannot stop progress, even if you do not believe in it or want it. This is just the beginning, grant6807.

Peter Veth Again, what’s your point? I follow the evidence to where it leads. You cherry pick. Then when your nonsense is called out you special plead out of the debate with “opinion.”

I'm with peter8!

peter8 So again, what’s your point?

Simply that a lot of people and professionals prefer highres audio Grant.


The answer is 16-bit/44.1, my friend, on 50-cent CDs. You're welcome.

Or 24 bit, in certain circumstances. 44.1 is all you need.

Michael Walsh Nope, 24-bit just adds 48 dB of extra dynamic range, which you'd have to play back at 160 dB to hear. Why not read the article posted? It explains it all. Just an extra 8 bits of noise. Most recordings go to a maximum of 90 dB, within the 96 dB provided by CDs.
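The arithmetic behind those numbers is the standard 6.02 dB-per-bit rule for ideal, dithered PCM (a theoretical figure, not a measurement of any real converter):

    # Theoretical dynamic range of ideal, dithered PCM: ~6.02*N + 1.76 dB for N bits.
    def pcm_dynamic_range_db(bits: int) -> float:
        return 6.02 * bits + 1.76

    print(f"16-bit: ~{pcm_dynamic_range_db(16):.0f} dB")   # ~98 dB
    print(f"24-bit: ~{pcm_dynamic_range_db(24):.0f} dB")   # ~146 dB
    # The extra 8 bits buy ~48 dB, all of it far below any realistic playback noise floor.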

A peak is normally measured over 10 ms with a release rate of 40 dB/s. The big problem is mixing/mastering, which means that you can have 20 dB peaks within 10 ms that are never seen or measured and therefore get completely cut during the process. If you want those transients to get through, you'll need a headroom of at least 23 dB. Music today is normally mastered with a headroom of 14 dB (-14 LUFS), which is the headroom most music services use: Spotify, Tidal, Deezer. The exception is Apple, which uses -16 LUFS. The EBU recommends -23 LUFS for broadcast.


The information contained in a 16/44.1 format is perfectly sufficient, however, to be fair, the actual conversion back to analog should be done at a much higher rate, using oversampling, asynchronous preferably. The purpose is to push quantization noise well above the upper audio frequency, then filtering it out is easy and does not introduce problems in the audio band. That's the theory, at least, but the "how to" actually do this is where the difference in sound between different devices lies. It's not the format.

I playback with a panasonic DVD player that's 24/96 but only because it cost me $10 and it has a jog wheel which I love.


And why not 32bit-float 384k?

Maybe 32-bit fixed, Eduardo? https://l.facebook.com/l.php

OK, 32-bit 352.8k at work then, good to know! Thanks!

That would be as useful as paying for your groceries in pennies...


It could be argued that pushing the Nyquist rate well into the ultrasonic might lead to a discernible improvement over standard 44.1 kHz sample rates in digital systems. Well worth reading up on.

In different applications, particularly digital recording, recording at 24-bit/192 kHz and performing all mixing and processing at this rate is considered by (memory serving) Rupert Neve to be the point at which digital summing (mixing) would match the fidelity of analogue consoles.

Would easily fit on a £5 memory stick.

And would require a disk readback rate of 192 MB/s. Well within the capabilities of even a cheapo-cheap SSD, let alone a large-capacity HDD.

And a commonly-available CPU would be able to perform 2,750,000,000 calculations on each of those 128 channels every second.

And at 48k samplerate, we can do 447 simple calculations on every single one of those channels during the time it takes to play back one sample.

You mean 7161, surely? Gah, this might be wrong. 458,000/sample?

I'm making a mess of this. The perils of working entirely in calc.exe!

352 GFLOPS (FP32) / 48,000 samples per second / 128 channels ≈ 57,291 FLOPS/channel/sample

Really interesting things will occur when DAW folk begin to follow the common path of co-opting the GPU for GPGPU calcs. The latest GPUs run at 12 TFLOPS (FP32), albeit with a latency penalty, for an astonishing 1,953,125 FLOPS per channel under the above conditions.
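For anyone who wants to redo the napkin math without calc.exe (the 352 GFLOPS and 12 TFLOPS figures are simply the ones quoted above, not benchmarks of any specific chip):

    # FLOPS available per channel per sample at a 48 kHz project rate.
    def flops_per_channel_per_sample(device_flops: float,
                                     sample_rate: int = 48_000,
                                     channels: int = 128) -> float:
        return device_flops / sample_rate / channels

    print(f"CPU, 352 GFLOPS: {int(flops_per_channel_per_sample(352e9)):,}")   # 57,291
    print(f"GPU, 12 TFLOPS : {int(flops_per_channel_per_sample(12e12)):,}")   # 1,953,125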

Avid Protools expensive outboard? Just going to be full of graphics cards.

A question: what level do you typically work with when processing or mixing? It occurred to me that if you are tempted to push as high as you can during production, that could be a reason why you get harsh sound. If your data is in 24 bits, try working at -12 to -18 dBFS. You'd be sacrificing 2-3 bits that make no difference in the end - the dynamic range would still be around 120 dB (which is more than the difference in SPL between a mosquito and a jackhammer in the same room).

Ah, there's a dichotomy in my production "signature" where that is concerned. I'm foremost a scientist, so I strive for a puritanical approach where tracking, gain management and headroom are concerned. Then, and this is the irony, I like to add "dirt" back into the mix, but at points and in modes of my choosing - not those limited or dictated by the surrounding equipment.

I came to where I am today via BBC World Service, #1 Record Label, Channel 4, hardcore military R&D and subsequently numerous independent endeavours.

Have I learned nothing??

No need to be sarcastic, I only asked a question. Many recording engineers with an analog background make this mistake. Unlike tape, which saturates warmly, you should know what happens when digital is overdriven...

angel11 said "if you are tempted to push as high as you can during production, that could be a reason why you get harsh sound."

No, that's yet another silly myth. Proof here: http://www.prosoundweb.com/channels/recording/the-truth-about-record-levels/

said, "There are severe methodology problems within the article. The null test would be rendered invalid on account of utilising two separate pieces of recording apparatus split at the mic-point."

If you're talking about my Perception article, there are no flaws I'm aware of. Yes, the Null Test can't be used in every situation, but it can be used in many, including comparing HD audio with CD-quality audio. The files in my HD-Audio article linked below can in fact be nulled.

CJPN said, "A far more sensible test would be to capture the entire audio sample at the highest possible sample rate and depth, then down-sample the audio in software in a perfectly repeatable fashion."

Yes, and that's exactly what I did:

http://ethanwiner.com/hd-audio.htm

ethan361, a quote from the link you posted: "As long as the master output volume is set to avoid distortion in the final rendered Wave file, both mixes will sound exactly the same."

So, to give you a very real example: mix any two or three channels (signals) that peak at 0dBFS and see what happens...

+6dBFS, no?

(assuming the peaks occur simultaneously)

Well sure, but that's not what this is about. The notion that digital audio "sounds better" 18-20 dB below maximum is just wrong, and is easily proved by measuring.

Yeah, right...

And besides, what you quoted is correct and compensates for exactly that sort of summing! If the output will clip, then turn down the master volume. Sheesh, people!

What sort of argument is that?

Why don't you listen to my files and email me your choices. While you're at it, do the same for my HD Audio linked above too.

I'm referring to the level of each track during mixing or processing, not while capturing. But whatever... do what makes you happy!

This conversation is off-track yet again. Record as close to 0dB as possible (at, say 24-bit). Mix at -20dB in a 64-bit environment and it's practically imperceptible.

Bingo!

angel11 What you are proposing is just silly, and so the burden of proof is on you. I can tell you I've already done this test, so I urge you to do the same.

This part of my AES Workshop video proves that you can send audio through a DAW at 18 dB over 0 dBFS (!) and still not harm the sound. As nice as I can say this, please watch it and learn:

Ok. I'm currently driving (not texting while driving - enjoying a cup of coffee at a petrol station). When I get to my pc, I promise to do so.

When you get a chance, watch the entire video. I promise you it will be worth the time spent. This video too (it's not mine):

Could that be the DAW's innate "foolproof" headroom? When, as a programmer, you have the choice of either allowing the user to hit a brick wall, or granting them a bit of sympathy, which would you choose?

Out of curiosity, how do you achieve anything above FS? Extra bits padded in front of the MSB?

Bitshift.

Oh yes, and going bach over the entire mix... great!

Bach wouldn't have gone back over the mix. He would have written an angry letter to the local prince.

Nice take on my typo!

Watch the video! Hint: 32-bit FP math, as is used in all modern DAW software. Edit: Yes, CJPN's comment applies to FP math.
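A tiny NumPy illustration of why a 32-bit float mix bus shrugs off inter-stage overs (a sketch of the principle, not a model of any particular DAW's summing engine): the signal sails past "digital full scale" inside the bus and is recovered intact by the master fader.

    import numpy as np

    t = np.arange(48_000) / 48_000
    x = (0.9 * np.sin(2 * np.pi * 1_000 * t)).astype(np.float32)

    hot = x * 8.0        # +18 dB of gain: peaks at ~7.2, far "over" 0 dBFS
    master = hot / 8.0   # master fader pulls it back down before the DAC

    print(np.max(np.abs(hot)))                # ~7.2: no clipping in float
    print(np.allclose(master, x, atol=1e-6))  # True: the audio is unharmed

    # A plain 16- or 24-bit fixed-point path would have clipped `hot` at full scale,
    # and no amount of later attenuation could restore the waveform.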

Question: Does 32-bit floating-point represent a greater set of recorded values than 32-bit integer?

Angel Despotov I record at 24/96. My gear goes to much higher resolution, but when I monitored it, I didn't like it. I had no reason at the time for going lower resolution, but the above discussion is making sense to me now. Also, my favorite plugin to reduce noise, BIAS SoundSoap, only works with lower resolutions, and there aren't many software options for DSD that aren't ridiculous. When I took the NPR double-blind test, a surprising discovery was that I also like 320 kbps MP3 over .wav. In double blind, I pick 320 frequently over .wav. I never picked the .wav as sounding best.

You might need to visit an audiologist, scott5994, or perform the same tests in a genuinely double-blind environment not of your own choosing.

That, or it could be that your mind is so conditioned to no longer hearing the "complex" high frequency stuff removed by MP3 that.. it sounds worse?

Blind testing has shown time and time again that choosing between anything higher than 256 kbps is chance for most listeners.

That said, not all lossy formats are the same.

Also, of those who did perform above chance, some reliably did, but only poorly (55%), and only with deliberate, careful listening on the best equipment, with knowledge of the testing paradigm, and with the acknowledgment that the difference was very subtle. In other words, it's not likely they would pass a true double-blind difference/preference test.

So, when someone claims a night and day difference, I’m suspicious of their motives.

Actually it was the engineer who scored better, but again, 55% (not very well) with the knowledge of what’s being tested; not double blind.

I don’t understand your second sentence.

Lossy has, more or less, reached its ultimate conclusion. The reason I say this is mainly because so many older MP3 lossy codecs are still out there, and anything older than 10-15 years was terrible.

To be clear, I’ve yet to see an extensive double blind test, only blind. This only helps the idea that there is a difference/preference as the listener who is biased to a difference/preference will go to great lengths to sleuth out difference, and perform poorly. Again, they knew what the test was about.

That said, yes, all experiments have limitations: track limitations (you can't test them all), time limits, etc. However, all experiments in science have some sort of limitation, as all experimenters are biased; it's how these limitations are observed, and their bias controlled, that determines the impact.

Christopher James Philip Norman

Streaming can involve other data compression techniques.

However, and again, I've yet to see any blind testing suggesting more than chance, or only slightly better than chance, above 256, and that would include 320 Ogg/Vorbis.

If you disagree, please conduct a test with trained listeners and show me this “night and day” observation. I’d expect 98% pass rate, which I’ve never seen.

Moreover you could also show the difference in material with different characteristics which make it easier.

In sum, you've got some fascinating claims, but your evidence is very meager, if not non-existent.

Christopher James Philip Norman Google machine it, or dismiss it, that was more of an aside. You can make your own Ogg vorbis lossy cuts.

Yup, me too, again, just an aside.

Again, an aside, so take it or leave it. The remaining, and primary premise of my discussion doesn’t need it, and that hasn’t changed.

Christopher James Philip Norman So you’re deliberately derailing the conversation, to something admittedly I said doesn’t really matter to my point? Again, take it or leave it. The remaining, and primary premise of my discussion of your claim doesn’t need it, and that hasn’t changed.

Yes, evidence for your many definitive claims, which is meager or nonexistent; and your non-response, and your desire to chase tertiary commentary just to sound more right, only make me more comfortable in this.

Re-read. I'm done, jackass. I have a life, and I'm not spending it going in circles with you.


The xiph article is very detailed and logical; however, like most logical arguments, it's incomplete, misses the big picture, and therefore reaches the wrong conclusions. It ignores pre-ringing and time smearing caused by steep-slope filtering. It also ignores the fact that people can be positively affected by ultrasonics in music. The following research proves that listening to music recorded, processed and played at 192 kHz can make a difference to the listening experience. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5285336/

But it does not discuss whether they can hear the difference. It discusses whether it changes their arousal state (relaxation); it does not prove it, it shows that it has possible effects. Indeed, nothing can be proven in psychology, but you can say that people react in a certain way.

It shows that it CAN change people's alpha waves and relaxation state, but not that it will always do that.


Yes, I see what you mean; that is why I do not categorically say that 44/16 quantization noise is audible, I state that it MAY be audible (to be tested blind)... However, even without the attenuation of the potentiometer, remaining at 100 dB SPL for full-scale DAC output, it is possible that during a quiet passage the quantization noise is audible. Do not forget that with uniform quantization (no companding), quantization noise does NOT depend on signal level; it is the same for all signal amplitudes! So during the quiet passage, that Q noise is there... This is why in telecom one distorts the signal by compressing it logarithmically on the transmitting side, using mu-law or another scheme, and decompresses it at the RX side. The result can be relatively distortionless and avoid audible Q noise during low-signal conditions. Non-uniform quantization could be interesting for high-end audio if done well. But it may be easier, and the results may be better, simply by increasing resolution, if not bit rate.
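For reference, the mu-law companding mentioned here is simple enough to sketch; this is the textbook mu = 255 curve used in telephony, shown only as an illustration of non-uniform quantization, not as a proposal for a hi-fi format:

    import numpy as np

    MU = 255.0  # standard telephony value

    def mu_law_compress(x: np.ndarray) -> np.ndarray:
        """Compress a signal in [-1, 1] so small amplitudes get finer quantization steps."""
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_expand(y: np.ndarray) -> np.ndarray:
        """Inverse of mu_law_compress."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

    x = np.array([-1.0, -0.01, 0.0, 0.001, 0.01, 0.1, 1.0])
    y = mu_law_compress(x)
    print(np.round(y, 3))                    # small inputs get a big share of the code range
    print(np.allclose(mu_law_expand(y), x))  # True: the companding curve itself is invertible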


Have any audio editing software at hand? Take any two samples of audio and mix them at a 1:10,000 ratio (-80 dBr). Listen to the mix and judge for yourself.
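The same experiment in a few lines of Python, assuming two mono files at the same sample rate; the file names are placeholders and soundfile is just one of several I/O libraries that would work:

    import numpy as np
    import soundfile as sf   # any audio I/O library will do

    a, fs = sf.read("music.wav")   # placeholder input files
    b, _ = sf.read("other.wav")

    n = min(len(a), len(b))
    gain = 10 ** (-80 / 20)        # -80 dB relative, i.e. a 1:10,000 ratio
    mix = a[:n] + gain * b[:n]

    sf.write("mix_minus80dB.wav", mix, fs)
    # Listen and decide whether the buried signal is audible at all.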


20 dB SPL is way below ambient noise, not to mention psychoacoustic masking...


In the mix no, but the quiet passage alone may be audible. Granted, in a living room ambient noise is around 40 dB SPL, an anechoic chamber around 15 dB SPL, so yes my 14 dB SPL quantization noise from 44 kHz is most likely INAUDIBLE. But it would be good to test this...


BTW I am quite happy with 44/16. Call me deaf?


No, you're not deaf. Prejudice is the real problem.


BTW, the S/N ratio of the most admired studio tape recorder, the STUDER A800, barely manages about 80 dB with Dolby A NR at 30 ips. Its kid brother, the A80, manages about 67 dB. You may be surprised at another figure: erasure efficiency, 75 dB. Let that sink in...


BTW when you add SPLs of unrelated content (background noise and quantization noise) what matters is the RMS sum. For dB SPL on two signals a and b to be added, one needs to do 10 log [10^(a/10) + 10^(b/10)]. So adding 40 dB SPL background noise and 14 dB SPL quantization noise gives... 40.01 dB!
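The same power sum in code, for anyone who wants to plug in their own numbers:

    import math

    def spl_sum(*levels_db):
        """Power-sum uncorrelated sources given in dB SPL."""
        return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

    print(round(spl_sum(40, 14), 2))   # 40.01 dB: the quantization noise vanishes into the room
    print(round(spl_sum(40, 40), 2))   # 43.01 dB: two equal sources add 3 dB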


This paper (though the work is somewhat preliminary) may be of interest: it suggests that “there exist audible signals that cannot be encoded transparently by a standard CD”.

http://www.aes.org/e-lib/browse.cfm?elib=17497

See also the forum discussion https://secure.aes.org/forum/pubs/conventions/ID=416

That abstract's conclusion was highly criticized.

grant6807 indeed, though there is a lot more meat in the forum discussion than that, notably about ABX testing paradigms.

The abstract apparently differs from the paper's cited conclusions primarily because the abstract had to be submitted some way ahead of the paper.


Mike Tommasi, a quality potentiometer has an attenuation range of about 80dB. Consider the following thought experiment: two identical amplifiers connected to two identical speakers. One is playing at "normal" volume, the other is playing something else, but its volume is turned fully down (-80dB relative to amp 1). What is your guess - would you or anyone be able to hear amp 2 while amp 1 is playing?


Conjecture (please point out any errors I may have made): a blind test might reveal an audible improvement with 192/24 music w.r.t. 44/16; actually, 44 kHz and 24 bits would be sufficient to make quantization noise inaudible. This may not be audible in a living room, but might be detectable in an anechoic chamber. Luckily, we do not listen to music in anechoic chambers, so we do not need to ditch our inferior 44/16 equipment.

The reason: quantization noise is, in ideal conditions, 6 dB times the number of bits below the full-amplitude sound material. Practically, depending on the type of equipment and type of content, quantization noise is at least 10 dB higher; hence for 44/16 this is about -86 dB w.r.t. full scale, and for 192/24 it is about -134 dB. Indeed, once you are sampling beyond the Nyquist requirement, the sample rate matters little; it is the number of bits that may be detectable in a blind test.

While it is not reasonable to listen to music at a sustained 100 dB SPL (roughly what you get standing in front of the speakers at a discotheque), it is true that at normal listening levels short peaks may reach 100 dB SPL, so the equipment must be capable of reproducing this accurately. A reasonable blind test would thus have the DAC full scale set to a level that produces 100 dB SPL at the frequency we are most sensitive to (3 kHz). Quantization noise may then reach, in theory, 100 - 86 = +14 dB SPL for 16 bits, and -34 dB SPL for 24 bits. So we may indeed be able to hear quantization noise in a hypothetical noiseless room: our ears peak at 2-5 kHz, where the hearing threshold is around -8 dB SPL. In this test, at 100 dB SPL full scale on the DAC, the quantization noise for 44/16 could thus be 22 dB above threshold, while for 192/24 it would be 26 dB below the threshold and thus inaudible.

OTOH, what about ambient noise? For the dB SPL of two unrelated signals a and b to be added, one needs to compute 10 log [10^(a/10) + 10^(b/10)]. Adding a 40 dB SPL typical living-room background noise level and 14 dB SPL of quantization noise gives... 40.01 dB! Not audible. Probably. Possibly? Likely?? So, when are we going to test this?
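Here is the conjecture's arithmetic spelled out in one place, using the same assumptions as the text above (roughly 6 dB per bit, a 10 dB real-world penalty, full scale calibrated to 100 dB SPL, a -8 dB SPL threshold at 2-5 kHz, 40 dB SPL of room noise):

    import math

    def quantization_noise_spl(bits, full_scale_spl=100.0, realworld_penalty_db=10.0):
        """Estimated quantization-noise level in dB SPL under the assumptions above."""
        snr_db = 6.0 * bits - realworld_penalty_db
        return full_scale_spl - snr_db

    def power_sum(*levels_db):
        return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

    THRESHOLD_SPL = -8.0     # approximate hearing threshold at 2-5 kHz
    ROOM_NOISE_SPL = 40.0    # typical living room

    for bits in (16, 24):
        q = quantization_noise_spl(bits)
        print(f"{bits}-bit: Q-noise ~{q:+.0f} dB SPL "
              f"({q - THRESHOLD_SPL:+.0f} dB re. threshold); "
              f"with room noise: {power_sum(ROOM_NOISE_SPL, q):.2f} dB SPL")
    # 16-bit: ~+14 dB SPL (+22 dB above threshold), yet room + Q noise = 40.01 dB SPL
    # 24-bit: ~-34 dB SPL (26 dB below threshold) -> inaudible in any case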

Dan Lavry states in the paper I posted above that this is theoretically true, but you also have to take into account the cutoff filter in the converter and the linearity of PCM sampling:

"Sampling faster enables recording and keeping the energy that we do not hear. At best it will cause no harm. In reality there is a potential that keeping ultrasonic frequencies will cause unwanted audible alterations. One of the more well-known mechanisms for such alterations is imperfect linearity (nonlinearity) in equipment. The type of distortions generated by non-linearity is called intermodulation, and such distortions are rather offensive in nature because the distortion energy is not harmonic. Harmonic distortion tends to alter the timbre, it “colors the sound” by changing the relative harmonic content. Intermodulation is much worse, it is not related to the sound or its harmonics; thus it takes much less intermodulation distortion to become offensive to the ear. As a practical matter, linearity gets worse and worse as frequency increase. The linearity at the audible range (lower frequencies) is better than the linearity at ultrasonic frequencies. While keeping the unnecessary ultrasonic energy may cause harm, making sure not to include the signals that we don’t hear provides protection against such degradation. With none of the un-needed signals to “spill over” the problem is gone! Most microphones are designed to match the human hearing frequency range, so they don’t pick up ultrasonic energy; which is a good thing. It prevents the intermodulation I spoke of earlier from happening. Simply speaking, it is a good idea to keep as much as possible of the part of the energy we need, and to get rid of the energy we don’t need (ultrasonic frequencies) as early as possible in the audio chain. Again, ultrasonic energy will at best cause no harm to the sound you need, but it certainly cannot help; and there is another price to pay. With the current higher sample rates, the file size is doubled or even four times larger, and file transfer rates are proportionally longer. Is this price worth paying when you consider that your audio quality may also be compromised by the “spilling over” of non-musical energy in addition to all of the other inaccuracies due to using a higher than optimal sample rate? Good conversion requires attention to capturing and reproducing the range we hear while filtering and keeping out energy in the frequency range outside of our hearing. At 44.1 KHz sampling the flatness response may be an issue. If each of the elements (microphone, AD, DA and speaker) limit the audio bandwidth to 20 KHz (each causing a 3dB loss at 20 KHz), the combined impact is -12dB at 20 KHz.

At 60 KHz sampling rate, the contribution of AD and DA to any attenuation in the audible range is negligible. Although 60 KHz would be closer to the ideal; given the existing standards, 88.2 KHz and 96 KHz are closest to the optimal sample rate. At 96 KHz sampling rate the theoretical bandwidth is 48 KHz. In designing a real world converter operating at 96 KHz, one ends up with a bandwidth of approximately 40 KHz."

From page 2 of this paper by Dan Lavry, one of the specialists in digital sampling and audio conversion: http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf

Like I said, I am less worried about ultrasonic content, which is easily dealt with at the recording stage, than about quantization noise, which is always there and, in theory, audible for 16-bit systems.


Is the graph really representing time on the x-axis as opposed to frequency? Let's say you have a line of microphones. These microphones represent the "hair cells" that occur as you go farther into the ear's sonic path. Sound occurring on one side of this line of microphones propagates along the line such that, aside from the expected attenuation over distance (which can be considered negligible in the case of close proximity), the first mic has the same signal as the last mic, only delayed. If the signal per microphone isn't band-limited, the sum of the microphone signals, each delayed to compensate for distance, will still have full bandwidth into ultrasonic frequencies, if the distance isn't too long. This is due to the FIR effect of the delays, which is driven by distance. Is that distance small compared to the 0.7" wavelength of 20 kHz at sea level? Probably not. The number of taps (hairs) determines the number of "microphones". So, why not listen to the closer mic only for the highest frequencies? All this could explain the ear's natural roll-off, but NOT a limit to the detection of time between signals from both ears combined. Hmmmm. It's DSP in our brain that makes the ear so good at pinpointing the origin of sounds. Maybe it's also as good at "feeling" what we can't hear? You tell me. Hope that all came out right. I'm not disputing the article, by the way.
