24/192 Music Downloads are Very Silly Indeed

I think this is a good article about the subject. I don't recall us discussing this article before, but if so, my apologies in advance.

24/192 Music Downloads are Very Silly Indeed

The upshot is that 44.1kHz/16-bit is all we really need for listening.

Sorry schreiber9, dirk wright is correct. The difference you hear is a slightly different mix; these hi-res companies are doing it literally to fool you. I bet you can't hear anything above 15kHz at -3dB, or the noise floor of 24 bits. Yeah, good luck.

Well all I can say is that we all have different needs.

doug7489, correct. If you're going to mix this material you need 24 bit, as each mix stage adds noise and effectively reduces bit depth. 16 is no good: after 3-4 mixes you're down to 12 bits, and yes, you can easily hear that.

I don't agree with you at all. I have a good ear, and I do hear the difference between common CD sound and the better formats, for example the music on SACD, or at 24-bit/192kHz. It's very easy to check for anyone who has an SACD player and some hybrid SACDs. The difference is very audible and enormous. Even the remastering functions of CD players can't sufficiently improve the sound of the CD to the level of an SACD. The difference is more audible on classical music CDs, or on acoustic jazz ones, but the sound of rock recordings is also better on SACD.

Of course there is a difference, I hear it as well, but it is not down to the container it is held in, it's the mix itself.

Sebastian Gawlik When, many years ago, I bought my first wonderful Vivaldi SACD, the seller told me the same thing (despite his interest in selling it to me). But thank God, I didn't believe him. So let me tell you, in case you don't know: a musical sound is not only about the frequencies, but about harmonics and many other things I don't even know about. I trust myself, my ears and my musical taste, so I do hear that the sound of a CD is a muddy, weak thing compared to that of the SACD. I have also read about the difference between SACD and 24/192 PCM sound, but I couldn't choose between them. On my earphones I can even hear slight differences between 2.8 and 5.6MHz DSD test samples, or between 24/96 and 24/192. And let me try to convince you, I am not a snob, and also not a hi-fi maniac (who listens only to "vinyl" or "analogue" tapes). I like and appreciate digital music technology, as did Herbert von Karajan and Frank Zappa.

schreiber9, do some double-blind tests, then get back to us about what you can 'hear'. I thought the same as you, and I was a classic high-end tweaker, until I actually sat down and did some proper DBTs, back in the early '90s.

victor4503 awesome


haha, golden rule from the article: "Misinformation and superstition only serve charlatans".


It's nice to have 24 bit when recording/mixing to give that extra headroom, but you're right, 44.1/16 (or 48/16 for movies) is all that is needed.

yeah, he discusses that. For recording, 24 bits allows the recording engineer to make a rough guess as to needed headroom without running into problems later.

I still need to watch the video, I was just commenting on the headline?

It's kind of like shooting photos in raw before editing and converting to another format (like jpg)

I just realized that you are not my friend Dru Wright, who is another audio guy.

mike6 no problemo

Finally read it - good article. I didn't know about that site and will bookmark it for further reading.

A book that I highly recommend for understanding this subject at a technical layman's/recording engineer's level is "Mastering Audio" by Bob Katz, which I think agrees with this article completely.

I got it, used. It seems like a textbook?

Honestly, I only find 24bit to be relevant when recording something into my audio interface. When mixing my DAW uses 32bit float internally, so I don't need to worry about it at that stage. I've never been in a situation where 16bits wasn't enough for a mixdown.


Nice article, Dirk. Thanks! However, the devil is in the details... The theory is beyond question, but it makes all the difference in the world how exactly the stuff is built. It takes just one crappy component in the signal path to compromise the results, and the conclusions...


It is necessary, but apparently mostly skipped, to understand the differences between recording and distribution formats.


There is another weakness of the article. It was written in 2004, and his main worry was the "errors" that occur at higher sampling rates. That's absolutely not a problem now.

Absolutely correct.


Guys, before you superimpose your personal experience over the theory in the article, do you honestly believe that all 16/44 converters are equal? How about opamps? Are they all the same? The list goes on and on.. Try to achieve a noise floor of -140dBu using carbon film resistors... Good luck! The theory and arguments put forward in the article are 100% valid.

yes, and in order to get 16 bit resolution you usually need a converter that has higher bits. For example, from what I understand, most 24 bit converters are only 18 bit resolution. So, you'd need something more than a 16 bit converter to achieve what the article is promoting. And, of course, you're correct, the analog section has to be very transparent in order to maintain the resolution.

Using a NOS 16/44 converter would be silly today, given the wonderful technology available and all the goodies that come with it, I am personally a big fan of asynchronous resampling, and in my experience 24/96 would be the practical limit of distribution media, exceeding by a wide margin what we can hear or what the actual electronics are capable of. If you get true 22 bits without supercooling or a Faraday cage for each and every stage, congratulations! Yes, it can be done, no doubt, but consider marginal returns..


I had a chance to compare many formats in a controlled environment, all from a live source in a studio. For both PCM and DSD the then-top AD/DA converters were used. There is no way in hell I would settle for 44.1/16 bit. I also have a large collection of material that is mastered from analog on different formats (SACD, CD, high-res PCM, LP). You need 24/88. Higher sample rates are not much use, but going from 16 bit to 24 is huge. Going from 44.1 to 88 is big, and DSD, well, that's often hard to distinguish from the source. So while I totally agree that 24/192 is silly, 24/88 or DSD is not. But it all depends on the source (mixed) material and the gear that was used to record with. Beck's Sea Change is a recording I can recommend to anyone who wants to hear it for themselves. It is available from the same master in many formats. There are also remasters from the analog masters for Pink Floyd etc. where the lesser-quality CD release came from the same master...

For what it's worth, REM's Automatic for the People on DVD-A is 192, and I had a lady friend who could tell 88 from 192. Me, never.

So, you're saying that you have a large collection of material mastered from analogue, which has never had more than 85 dB dynamic range, but for some reason you think that more than 16 bits is necessary to digitise it, when that has a dynamic range of about 93dB with proper dithering? Care to give a rationale for this bizarre claim?

Changes in SPL less than 0.5dB are perceived as changes in sound quality, not as changes in loudness. The quotes from test subjects, both trained and untrained listeners are: better clarity in the high register, more prominent and detailed midrange, better, punchier bass and increased soundstage/better imaging.

If everything got better, most likely it was sound level that changed. 99% of audiomyths are caused by this. Level matching matters.

stewart4831 If you read my post, it says: all from a LIVE source in a studio. For both PCM and DSD the then-top AD/DA converters were used. Then I go into analog-sourced, readily available media...

andor4 don't be disingenuous. After that first section, you went on to claim that 'you need 24/88' for anything mastered from analog, and that 'going from 16 bit to 24 is huge'. I call bullshit on that claim.

andor4 So, it was a level-matched, instant-switching blind test? Nothing else will do, I'm afraid.


To test converters: digitize a noisy LP and listen to the noise; the better the converter, the less obvious it is. Just as when we evaluated systems back in the 1970s, the noisy-LP test is still valid for determining whether transient misbehaviour occurs.

interesting idea.


I can hear the difference. I even like a well-recorded CD.


The article is old and the discussion is not as clear or complete as I'd want it. A lot of disorganized stuff in the notes. 24/96 is the optimal ceiling IMO, anything more is unnecessary overkill. As some commented above, hi-rez is best for conversions (DSD to PCM conversions are usually done at 24/88) and you don't want or need further conversion. Also, filters are much smoother at 24/96 (and below); noise floor is definitely lower. Yeah, no one would care much in practice, but still, why not? Then, going down this road, sometimes masters are done at higher rates, and we have the power & space now, so why bother downsampling anyway? We can go with it without being sticklers for 16/44.1. How they market it and why they price it higher beyond 24/96 is another question...

I use an Edirol R44 for portable live location recording, the faster it samples, the better the AD conversion sounds. My impression is that it is about the actual hardware rather than about theoretical hardware.

There's no "theoretical hardware". In any case, anecdotal statements are the worst... (I'm guilty too)

It is a first hand observation, so it is not anecdotal. But here is a quote that is legendary: "Ultimately all hardware is analog", said by Dave Haynie, the guy who designed most of the Amiga hardware.

Anecdotal doesn't necessarily mean second-hand, a story of a story. It can mean a first-hand narrative based on personal observation, direct perception, "intuition" and so on. And that can be just as bad as, or even worse than, a second-hand story. Sometimes second-hand is more reflective, more analytical... etc. What is that supposed to mean, "all hardware is analog"? Not sure where you are going with this. Ultimately, everything is energy exchange, which isn't really continuous -- so what.

peter27309 back in the real physical world, ultimately all distribution media are digital, whether it be the molecule size of vinyl, or the magnetic domains of tape.

stewart4831, maybe even the binary-like firing of neurons in our ears & brain?

Merriam-Webster to the rescue:

Definition of anecdotal. 1:based on or consisting of reports or observations of usually unscientific observers.

First hand accounts are as anecdotal as any personal observation that is not verified in any way.

Blind testing is needed to confirm any changes when changing resolutions. Usually, that is all it takes to debunk all resolution-based myths and beliefs. I can put a dozen converters in the chain without anyone noticing anything, all at different sample rates. The noise from all those conversions gets lost in the noise from the first analog circuit we encounter.

It is a totally different issue if the circuit itself changes behavior according to resolution. It should not, but something like power saving when using low resolutions can be the real culprit, and there might be an option in some menu to fix it.

If it does change behavior, user should be notified at some point.

stewart4831 Yes, but the borders between "either" and "or" information zones are slopes. As for exactly what Dave meant, perhaps he is still around so that you can ask him.


There is no architectural difference between converters of 2004 and today. They are all massively oversampled multi-bit designs with decimation filters.

The clock jitter might have improved a lot. No matter how good the A-D or D-A, inconsistent clocking of samples can affect the next quantizing level captured during recording, and the instantaneous slope of the reproduced wave, which is distortion.

That is more a question of implementation. But, yes, the sensitivity to and rejection of jitter has improved considerably.


In this 2012 article (very good), "Monty" draws parallels with vision. Since then, 4k has been introduced to salivating consumers, who also want hi-rez-for-all. 4k is approximately the modulation transfer function (MTF, ~frequency response) of a 35mm negative for motion pictures, and therefore the design goal for 4k electronic cameras. HD spot scanners can eke out a lot of this detail, but you'd have to sit a yard or less from your 4k TV to see it, compared to 2k or HD at home. As with 24/96+ sampling, producers need it for processing and scaling. But not for distribution.

That has nothing to do with Monty's analogy. His analogy was between ultrasonic content and ultraviolet.

Tudor, the specific 4k vision analogy is all mine. Monty only opened the door to analogies between the aural and the visual.


This is from the conclusion chapter of a study that focused on comparing and testing DSD and 24-bit/176kHz sound. But it says a few words about CD, too. Here it is, translated from the German: "In any case, it should be pointed out that, from the point of view of the mastering engineer, there is certainly a considerable quality gain for the consumer, both musically and technically, if the higher resolution - independent of the distribution format - reaches the consumer, making it possible to get away from the technical limitations of the CD with its 44.1kHz and 16-bit resolution." https://www.eti.hfm-detmold.de/lehraktiv/diplomarbeiten/diplomarbeitenordner/blech-yang-pcm-vs-dsd.pdf


And yes my lab has an iso/iec listening room. And yes my lab is one of the few (the only?) lab in the world that is iso17025 externally audited and accredited for subjective listening tests. And yes I have been doing this for more than 30 years. And no I am not going to hand over our commercially sensitive results.

Yeah, the mic is not too hot, but the signal processor is something else.

jon9131 Do you have any published articles that we can read? Thanks.

dirk42323 sorry our research is commercially sensitive

jon9131 OK, thanks.

So, you could tell us, but then you'd have to kill us? And no one has ever heard of your uniquely accredited lab? Yeah, riiiight.....


Case in point: Musical performances don't have enough dynamic range to ever stress the limits of 16 bits. An empty concert hall I was setting up to record in measured 43dB unweighted. The loudest crescendo was 105dB. That's only 62dB of dynamic range. Not a challenge.

Then in 2010, Zambelli Fireworks commissioned me to make a video and sound recording of one of their shows in a city near me. What I ended up with was a 24/96 recording that had 86dB of dynamic range. First, I found it a challenge to get that converted to 16-bit for DVD without incurring a significant addition of hiss. I tried many dither algorithms until I found one that added the least hiss. The other problem was that my Sony BD player's DAC was granular down at the background sound level. Gritty, like a transistor with the bias near cutoff. I bought an Oppo BD player and the grit went away, and I could now hear the crickets in the woods 200 yards away, before the show started. This was on an airfield at an airport. The next and final problem was buzz and hum. The standard patch cords that come with CD players are only 70% shielded. I replaced ALL of my interconnects with Monoprice cables. I also did some mods to my Carver C4000's power supply to eliminate some rectifier switching noise. The cable and power supply mods did the trick. Now I could hear everything just as I did when I was there, from the softest sounds to the tinnitus-inducing first explosions, and no hum, buzz or grit. But it took a good deal of engineering work. Bottom line: music fits well within 16 bits. Fireworks do not.

70%? Sure of that? Not 65% or 85%?

Also this “music fits well within 16 bits”. Yes if there is a microphone involved. But much music has no microphone.

jon9131, he already stated that the live music performance had a dynamic range of 62 dB, this has nothing to do with microphones. Also, while you'd want a 24/96 recording to capture the fireworks display, the final master had 86dB of dynamic range, so could certainly be distributed at 16 bits.

See my comment to Dave Collins below...

stewart4831 But the noise floor of even a decent microphone/mic preamp is significant. I know Rode claim 15dBA for the NTR, but that's A-weighted. The spec for the IEC listening room is <25dBA. I would expect that a great deal of that "43dB unweighted" is LF and ELF rumble.

Even with an essentially unachievable 25dBA noise floor in the room, good studio mics are quieter. Even with a massive orchestral tutti approaching 110dB, that's still only an 85dB range, well within the capability of CD. Would you record at 24/96? Of course, why not nowadays, gives you massive headroom to play with in the mixdown and mastering, but you certainly won't need more than 16 bits in the distribution medium.

stewart4831 Exactly, that is what tends to get left out of the debate, also in the sampling-rate discussion.

The use of 24 bits is to minimize rounding errors when normalizing or otherwise manipulating the audio samples in production mastering. As for sample rate, that depends on who's listening. If it's dogs, then you may need higher than 44.1kHz. But most folks old enough to afford good audiophile systems are close to deaf in the higher frequencies. Most can't hear past 12kHz, some can't hear above 8kHz. But our ability to interpret what we DO hear is greatly improved over youngsters, due to years of experience in identifying small amounts of various kinds of distortion. For a while, the SACD industry had a scam going. Selling upsampled 44.1K material. They were exposed. Nowadays the trick is to master a different mix to SACD, otherwise nobody would hear a difference at all.

Audio is processed at 32-bit or, in some applications, perhaps even 64-bit wordlength to minimize truncation errors. Next there is the issue of processing strategy, because it matters a lot whether individual stages of processing are calculated and performed stepwise, or all manipulation of the audio is done in one DSP process.

Then there is the issue of the frequency response of hearing. You seem unaware of how hearing works, specifically how recruitment works and how hearing uses multiple bands. An audiogram is NOT a frequency response chart. I really wish they would turn them upside down so that they display what they display in a less ambiguous way; as it is, they are apparently a never-ending source of confusion and logical errors.

An audiogram is a threshold curve: audio above the threshold is perceived, audio below is not. And there is a neat little easter egg in the sense of hearing. We hear in multiple parallel bands. Those are defined by quite wide bandpass filters, so that the bands overlap. When there is signal in the band below the band we want to detect audio in, say a female singer's overtones, then it gets added as a detection bias and helps the detection. The threshold curves of the audiogram are, however, single-tone detection curves.


16 bits is 93.3 dB dynamic range using dither. Should be no problem to fit 86 in. Was the hiss from the recording?
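For anyone who wants to see where that 93.3 dB figure comes from, here is a quick back-of-the-envelope calculation in Python; it is just the standard textbook formula plus the TPDF dither penalty, nothing specific to any converter:

import math

bits = 16
snr_sine = 6.02 * bits + 1.76        # full-scale sine vs. undithered quantization noise: ~98.1 dB
tpdf_penalty = 10 * math.log10(3)    # TPDF dither triples the total noise power: ~4.8 dB
print(round(snr_sine - tpdf_penalty, 1))   # ~93.3 dB, the figure quoted above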

It is not that simple, Dave: the ear hears in narrow parallel bands, and detectability of pure tones is about 10 dB down in the noise. Also, what the long wordlength is about when recording is reverb tails, very obvious if you do live recording.

peter27309 So, you're saying that more than 16 bits is required?

Also, pardon my ignorance, but don't most engineers record at -10dB, basically throwing away 10 dB of the bit depth? I thought that was a rule of thumb for recording so that clipping was completely avoided.

It's also the case that dither decorrelates the noise floor, so that you can recover pure tones more than 10dB below that theoretical -93dB 'limit'. This is easy to prove with a simple signal generator and a decent ADC.
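That "easy to prove" point can even be shown without a signal generator or an ADC, purely in software. A minimal numpy sketch: it quantizes a tone at -110 dBFS (17 dB below the -93 dB figure above) onto a 16-bit grid with TPDF dither, and the tone still shows up clearly in a long FFT:

import numpy as np

rng = np.random.default_rng(0)
fs, n = 48000, 1 << 20
k = 21778                                    # a bin-centred frequency, to avoid FFT leakage
f0 = k * fs / n                              # roughly 997 Hz
tone = 10**(-110/20) * np.sin(2*np.pi*f0*np.arange(n)/fs)    # -110 dBFS, about a tenth of one LSB
lsb = 2.0 / 2**16                            # one 16-bit step for a +/-1.0 full-scale signal
dither = (rng.random(n) - rng.random(n)) * lsb               # TPDF dither, +/-1 LSB peak
q = np.round((tone + dither) / lsb) * lsb    # quantize to the 16-bit grid
spec = np.abs(np.fft.rfft(q)) / (n / 2)      # amplitude spectrum: a 0 dBFS sine reads 1.0
print(20*np.log10(spec[k]))                                  # the tone, still at about -110 dB
noise = np.delete(spec[1:], k - 1)
print(20*np.log10(np.sqrt(np.mean(noise**2))))               # per-bin dither noise, far lower still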

For recording, yes. For distribution, no. Also when recording chamber music 10 dB headroom is not very much, especially not with singing wimmen in the room. Symphony orchestras are a lot easier, they have a much better defined maximum.

The recording system doesn’t hear like that. If you are hearing problems in reverb tails, there is a technical issue somewhere. Even a paltry 16 bit system fades smoothly into a benign noise floor.

dave5 Have you made the test while in a room with a chamber orchestra?

I hear master sources of orchestra all the time, at 24 bit, as well as the 16 bit version I provide for CD and do not hear any issues with reverb tails. Even with the monitor gain at very high levels. Can you post a link to the sound that you’re describing?

No. Determined when comparing a 24 bit recorder, a 16 bit recorder and the ensemble in the room.

So when I push the button to monitor 24 vs 16 bits, what should I be listening for? Because the differences are small. This is with TPDF dither and a D/A with this performance at low levels.


Any article which claims to explain how the ear works is simply spouting crap. Firstly, we have only a sketchy idea of what actually happens and how it does it. Secondly, it is a massively non-linear transducer connected to a supercomputer. It is not a B&K microphone into an AP. If you cannot grasp the difference, then heaven help you.


I want to thank everyone for being civil during this intense discussion. I really appreciate all of the comments and discussion on this topic and it makes me feel proud to have so many bright minds in this group. Thank you so much!

"Science & engineering" is an important detail, I think. The more subjectivism would be a different story... more passionated and irrational and absurd discussions, bad manners, etc...


Speaking of s/n and taking into account the average listening level of 90dB, ok 110dB if you really want to knock yourself out, subtracting a measly 80dB s/n leaves us with a noise floor of 30dB. Come again about those 16 bits..?

My IEC listening room has a noise floor of around 20.2 dBA, if I remember right. Finding a noise meter that will go that low within calibration is actually quite expensive.

jon9131 wow

well it has to be under 25dBA to meet IEC spec.

I think average living room is around 40dBA?

We have the Casella 63X meter which, in Class 1 calibration, is good for 19dBA. 40dBA sounds high, but it depends hugely on obvious things like heating/HVAC, whether the house is detached or not, and non-obvious things like variable wind noise, vibration transmission from roads etc.

jon9131 right, of course.

Even a train at half a mile can have a significant ELF/LF input, depending on ground structure

My listening room has a noise floor a little below 30 dB at night (I live near a main road, so minor traffic issues despite deep-dish triple glazing and 13-inch masonry walls). That is sufficiently quiet that, after a couple of minutes of sitting with no distractions to let your ears adapt, the loudest noise is the blood rush in your ears. To me, that's pretty much a hard floor on a practical lower limit. Hardly any high-fidelity domestic sound system can exceed 120dB peak loudness, the vast majority will struggle to hit 110dB, so can we get real on bit-depth required for the distribution media?

Just checked -- the Casella is £3950 + VAT + UKAS calibration (£457 + VAT for the meter, £115+VAT for the 114dB single tone calibrator)

stewart4831 dB or dBA?

I am currently building a home speaker system that allows you to really knock yourself out. SPLMAX is about 125dBA @ full power /1m. On the topic of reality, play briefly at 110 dBA and try to hear anything below 60 dBA..

stewart4831 I think the dynamic range of most music is far less anyway? I think 20dB of dynamic range is a lot for most recorded music? There's a website about that. I think dave5 once remarked that a DR of 60dB will have you running for the volume control? My apologies if I'm mistaken about this.

Jon Honeyball dB, strictly speaking dBC, hence the need for low traffic volume to keep the LF stuff down. I used to work for a consultancy that did lots of sound level assessments, so I could borrow very expensive sound measuring gear.

dirk42323, sadly, you are correct for most modern popular music; mastering engineers like Dave Collins have a lot to answer for here! OK, strictly not their fault, the market demands maximum loudness, so the wonder of true dynamics, as in a live classical concert, is hard to find in other musical styles. I have a wide range of music, but I doubt I have anything that actually exceeds about 50dB in true dynamic range, maybe a couple of Sheffield Labs extreme recordings, like the Track and Drum records, and some of Gabe Weiner's simplistic PGM records.

angel11 Indeed, it takes a few minutes after a crescendo for the ear to settle, so it's doubtful if our short-term ability to perceive dynamic range exceeds about 40dB. I know that in my own listening room, it takes at least a couple of minutes of sitting still for my ears to bottom out, at which point I can hear the blood rushing in my ears.

The time required is very much dependent on the loudness to which you were exposed. Anyway, we seem to be drifting away from the topic of resolution required for "perfect" reproduction. IMHO, 16 bits is just fine for distribution. The folks who settled for this were definitely not stupid. Too bad that early releases on CD and the available hardware were not up to the job, earning a bad reputation for digital. It's 2017 now, things are slightly different... HD - well, if it makes anyone happy, then let it be! If anyone is in a mood for a flame war, let's open a discussion on MQA?

The people who agreed the Red Book standard were definitely not stupid, however they were banging against the state of the art in 1980-81, indeed Philips wanted it to be 14 bits, it was Sony that held out for 16. 35 years is a long time in digital technology, indeed there were no PCs when CD was locked in stone, never mind the Internet!

Thanks for your lucid discussion of this topic. So, in theory at least, if the room is around 40dB, and the music has a dynamic range of 20dB, then can you play the music at a minimum of 60dB and still hear the quietest parts, or do you need some extra room between the bottom of the 20 dB and the noise floor for the room? If so, how much?

As you cannot readily hear a difference of less than 3dB (1dB if you are really concentrating), then perhaps crank it up to 63dB maximum for safety. That will also protect your hearing, as listening to music with a 20dB dynamic range at a peak level of even 99dB will permanently damage your hearing after a few weeks. Are you listening to this, Dave Collins? Dave? DAVE? ?

You wouldn't hear much at 60 dB, because of Fletcher-Munson. Let's assume a comfortable level of 90dB and a dynamic range of the material of about 40 dB. You will not have a problem hearing the quietest parts, yet if any noise is present at 50dB, you would definitely hear it too. Consider the curve they use for mp3 encoding - it gives you a pretty good idea of what is audible and what is not. An s/n of 80 dB (the attenuation range of a quality potentiometer) is about as good as you need.
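Putting rough numbers on the question above, a back-of-the-envelope sketch; the room level, the 3dB audibility margin and the programme dynamic range are simply the figures quoted in this thread, and masking and Fletcher-Munson are ignored:

# how loud do the peaks need to be so the quietest passages clear the room noise?
room_noise_db = 40    # the "average living room" figure quoted above
margin_db     = 3     # roughly the smallest easily heard level difference
program_dr_db = 20    # dynamic range of the music in question
print(room_noise_db + margin_db + program_dr_db)   # 63 dB SPL peak, matching the "crank it up to 63dB" suggestion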

stewart4831 A word of advice, if you mention someone in the comments, you can not edit that message anymore. No matter what you do, the comment has received an ID and it is never going to mention that person again in that comment.

kennett7443, sure it mentions that person, the edited version just doesn't flag up in their feed, so the name is unbolded.

Even 40dB of dynamic range is quite a bit in practice.


I feel dynamic range (DR) is being conflated with something else, headroom (HR) or crest factor (CF) or signal to noise ratio (SNR). So here’s my understanding of what DR isn’t…

From the beginning of my time as an audio practitioner, HR (instantaneous peaks to reference level) – typically 12dB RMS for tape & vinyl, 16dB to crests for CDs, and 18 (EBU) or 20 (SMPTE) for cinema – has been under attack. In the worst case, I’ve measured squashed pop releases with only 3dB from average RMS to FS (full scale digital) peaks, which sell better, but for me ruin the life-like “punch” of live music. Generally acoustic classical or jazz, for which we have stronger remembered references for tone color and dynamics, is not so egregiously level-compressed, largely retaining the original intention of HR. But HR is not DR.

We can hear midrange sounds well below noise, until low frequencies (<700Hz) are increasingly masked [Fletcher-Munson; ISO 226:2003]. So SNR is not DR. And yes, this applies equally to analog and digital: just like vinyl or mag tape, a CD with up to a theoretical maximum 96dB SNR (16 bits x ~6dB per bit) has an audible range of sounds down to -110dB FS or lower. RMS peak levels over the course of Ravel's Bolero (on vinyl or CD) vary at least 30dB, but harmonic contributions to the tone colors of individual instruments go down to below noise even in the midst of much higher RMS, even contributing their tiny bit(s) to HR peaks and FS crests. Now this is DR - the full range of sounds from the softest audible to the loudest tolerable. Playing an orchestra at an original level of ~95dB SPL in a typical living room with RMS noise of 40dB SPL has an SNR of 55dB (A- or C-weighted depending on how the noise was measured). But the DR is likely to be 70dB or more. [For proof, play a recording with pink noise mixed with music -15dB lower and you will detect mid-range tones.]

thanks
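The pink-noise suggestion at the end of that post is easy to sanity-check numerically. A minimal sketch, treating the critical band around 1 kHz as roughly 160 Hz wide (an assumption, but in the right ballpark), which shows why a tone 15 dB under the total noise is still detectable:

import numpy as np

f = np.linspace(20, 20000, 200000)      # the audio band, finely sampled
pink = 1.0 / f                          # pink noise: power density falls as 1/f
band = (f > 920) & (f < 1080)           # ~one critical band around 1 kHz
print(round(10*np.log10(pink[band].sum() / pink.sum()), 1))   # about -16 dB
# i.e. only ~-16 dB of the pink noise actually sits in the band a 1 kHz tone competes with,
# so a tone mixed 15 dB below the *total* noise is already at or above the noise in its own band.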


Have some of my posts been deleted? I don't see them here.

not me

Looks like someone deleted his top post of a sub-thread, so all the attached comments have gone.

stewart4831 Thank you. Then I stay here?


Please don’t delete any post or comments, it’s all interesting and I’m learning

That's part of my hope for this group: learning and sharing knowledge. Thanks to everyone for that!


...and the ensuing criticism http://www.realhd-audio.com/p=5755


All these studies and observations fall into the same trap: they (wrongly) assume that hardware performance is nominal. It's not. Example: I just finished modding a digital sound processor that sounded subtly different from my reference DAC. The culprit turned out to be a couple of polar electrolytic caps, back to back in the signal path, imitating a non-polar one. Upon careful examination of the circuit, it turns out they are useless. Shunted, caps in the round file, and the device sounds gorgeous! Just sayin'...

Yeah, the analog section, as well as all the associated passives, may be different enough from one DAC to another that comparisons are useless. In other words, any plausible difference between hi-rez and normal CD quality may be completely masked by the differences in circuits and associated hardware.

Also, the behavior of a single DAC with different quality source material may be different as well. I don't know enough about this to say very much.

If the components are not defective, it comes down to listener preference, which is random.

Point is, the two devices are vastly different. I could have shrugged and attributed the subtle differences to the architecture... However, as 1/1 = 101/101 = 16252/16252... and so on, it's what prompted me to look closer into the circuits. The mod continues today, tossing all those lovely little metal cans out the window?

Adding a suitable DC offset to the midpoint between them could have remedied a lot, one of the good cheap tricks in the book; strange that the designer didn't.

Yes, one is left to wonder... However, the best caps are the ones you leave out.

As long as you do not get a DC current through a potentiometer or a DC shift on an amplifier output.

Also, as I understand it, bipolar electrolytic capacitors are just two regular capacitors in series. Is that right?

dirk42323 Yes, but they still lack what an electrolytic needs: a bias voltage, thus the trick above with adding it to their midpoint.


I found this article interesting and entertaining. Anyone up for designing a NASA-grade power supply? haha. http://www.mojo-audio.com/blog/the-24bit-delusion/

Nice, yet some statements are misleading, especially regarding noise levels of power supplies and their impact on overall performance. Paul Horowitz has practically exhausted the subject in his book "The Art of Electronics".

yeah, I know, he didn't consider power supply noise rejection of the circuits, but anyway, thought it was still interesting.

If I make a power supply with 1uV of noise can I sell it to NASA? Pro tip: it’s dB, not db.

A tube amplifier's regulated 450VDC supply (LM334-based, application-note architecture) has 20uV RMS of broadband noise. Not a big deal ;)


Here is a very interesting approach to determining the difference between 16-bit and 24-bit. He does it all in the computer, so there is no DAC involved, no listening test, nothing like that. He takes 24/96 tracks, converts them to 16/96, and then does a null test on them! Cool idea. The difference between them is at -80dB or below. The difference is not audible.
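For anyone who wants to repeat that null test at home, here is a minimal sketch; it assumes the two files are sample-aligned exports of the same material, and the filenames are placeholders, not real files:

import numpy as np
import soundfile as sf                     # pip install soundfile

hi, fs = sf.read("track_24_96.wav", dtype="float64")   # hypothetical 24/96 original
lo, _  = sf.read("track_16_96.wav", dtype="float64")   # hypothetical 16/96 conversion of it
n = min(len(hi), len(lo))
diff = hi[:n] - lo[:n]                     # the null: everything the 16-bit step changed
ref_rms  = np.sqrt(np.mean(hi[:n]**2))
null_rms = np.sqrt(np.mean(diff**2))
print(20*np.log10(null_rms / ref_rms))     # typically around -80 dB or lower: just dither noise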

I'm finally convinced. 16 bits is enough. But, to actually get 16 bits resolution in the analog output you need a converter with more than that, right?

The most pointless freaking YouTube piece ever. Is this 1999 all over again? What's the point of this argument? The fact is that most tools will do internal 32 or 64 bit, and storage is free. On top of that, we do not listen to digits! All that has to be converted to analog, and that's where 99% of the errors are generated. Get over trying to make this comparison fully in the digital domain. You have to consider the system. Why wouldn't I like more bits when I like higher voltage rails in my preamp? Why wouldn't I like to get away from the effects of Nyquist as far as possible? In a system as a whole, AD/DA, you can absolutely measure and hear the difference between 16 and 24, and the same goes for higher sample rates up to a point. Do this with any modern gear in a controlled manner, then compare the numbers with listeners' perceptions. Higher-res PCM and DSD win every time. It is all about the conversion and not the storage. The delivery medium could be 16-bit, fine, but for recording and mixing etc. you've got to have more than that.


Interesting, thank you, I tend to wonder what went wrong... ?... gonna listen on another pc later.


"If you asked an old-timer how much energy they spent worrying about signals below -90, they would think you were crazy."

Never heard of a 15ips master that had a noise floor below -80, and that's assuming hot peaks into saturation.

Never heard of a cd that sounded better than a 15ips master.

Never heard of a well-made CD from a 15ips master (I have lots of JVC XRCDs with exactly that provenance) which did not sound crisp and clean, and of course better than any vinyl made from the same source. Can't sound better than the master, but likely sounds exactly the same, given the well-known limits of tape, and of course will not degrade with multiple playing.

Stewart Pinkerton Better than vinyl assumes an AWESOME set of DACs. Lucky you and me. (RME, Burl, UA and Prism at my disposal.)

No, better than vinyl simply assumes a DAC with better than 70dB dynamic range, less than 3% distortion at full level, and a frequency range better than 50 to 15kHz at full output. Come to think of it, I've never heard of any DAC that was that poor

stewart4831 now that is funny...

I agree. Maybe you should buy a Benchmark or an OPPO.

The thing is, 16 bits may be more than enough for dynamic range. But at very low levels the signal is represented with few bits, and its harmonics will be relatively much higher than for full-scale signals. Analog does not work this way: the smaller the signal, the much lower the harmonics. I find 96kHz/24-bit recordings much closer to analog. (If that is a good thing ;))

This is simply not how digital works, and is one reason why proper dither is crucial. BTW, analogue, which effectively means vinyl these days, has massively higher levels of distortion, even at -60 dB, which is the noise floor for most vinyl. You can't hear anything 20dB below the noise floor, and the noise floor of CD is easily 30dB below the noise floor of vinyl, so...

For a -60dB signal in digital, you are left with 6 bits out of 16. Distortion can be at most 40 dB down (1%); with 24 bits you have 18 bits to silence, 108dB down. On vinyl, distortion is masked by noise.

That's not how it works, Kaan.

Why not, Stewart? I know that you have a strong background in EE, so please comment on this: if you use only the bottom 40 dB on a very low-level part of the recording, isn't it the same as using only the 6 LSBs of the DAC? So for that level, isn't it the same as using a 6-bit DAC over its full range?

Digital with dither works exactly the same as analog. The low-level signals are not inherently distorted. I know I’ve posted this graph before, but this is 1kHz at -120dB from full-scale, and it’s clean as a whistle. Not all converters have this kind of performance, but it’s certainly possible.
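The "only 6 bits left" picture is easy to test in software, with no converter in the loop. A minimal numpy sketch: a -60 dBFS tone quantized to 16 bits, once without and once with TPDF dither; the undithered version grows discrete distortion spurs, the dithered one has only a smooth noise floor, consistent with the clean -120 dB tone described above:

import numpy as np

rng = np.random.default_rng(0)
n = 1 << 18
k = 5449                                             # bin-centred test tone, ~998 Hz at 48 kHz
x = 10**(-60/20) * np.sin(2*np.pi*k*np.arange(n)/n)  # a -60 dBFS sine, ~33 LSB peak in 16 bits
lsb = 2.0 / 2**16
undithered = np.round(x / lsb) * lsb
dithered   = np.round((x + (rng.random(n) - rng.random(n)) * lsb) / lsb) * lsb
for name, y in (("undithered", undithered), ("dithered  ", dithered)):
    spec = np.abs(np.fft.rfft(y)) / (n / 2)          # a 0 dBFS sine reads 1.0
    worst_spur = np.delete(spec[1:], k - 1).max()    # biggest component other than the tone itself
    print(name, round(20*np.log10(worst_spur), 1))
# the undithered spurs sit tens of dB above anything in the dithered spectrum, and they are
# harmonically related to the tone; with dither they simply are not there.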

This must be a 24-bit DAC with a theoretical 150dB dynamic range. Such a large dynamic range is practically unrealizable with home systems anyway. A 40dBA average listening-room noise level + 120dB results in 160dBA peaks! Jet-engine level! But I am still looking for a technical explanation for smoother-sounding hi-res recordings.

It’s just refuting the notion that digital is distorted at low signal levels.


What Dave said.


I've been banned from countless HiFi groups for mentioning that article. What a delight to see an admin commenting on this article without calling it hate speech or threatening to ban people. Yesterday I was banned from a group for suggesting that one cannot expect people to keep their mouths shut when someone links to a company that sells speaker wire for 8500 USD per foot. I was instantly banned without being given an explanation. What in the world is going on?


$8500 only?! Did their calculators run out of digits or what? Unfortunately, many audiophiles are literally drowning in snake oil, generously served up by plain crooks, often with the aid of the specialised media..

The media has to bear some responsibility for the fake news of audio tweaks. But they are also running a business.

Ok but NOT literally...

Then let the Nobel Prizes commence. Because the Hi-End outstrips anything industry has done in the last 50 years.

Yeah, Stereo Review started to go downhill after the Monster Cable conflict in the 1980s. People wanted to read all of that fluff and stuff regarding what audio "sounds like".

There are two camps. One which appreciates domestic audio that sounds 'good' to them and one which appreciates domestic audio which is accurate (or aims at accuracy, anyway). For some reason, some people are willing to pay £££££s for a length of speaker cable...

andy22506 Oh right, I forgot about the fancy packaging. That is also part of the marketing ploy used to enroll someone into the belief that the product they bought is beyond awesome. Do you know there are videos out there where guys record "the unpacking"? I mean, they video the process of unpacking the fancy thing they bought, as if that is part of some religious ritual or something. It could be headphones or a cable, it doesn't matter. The extent to which the high end industry goes to convince people that the product they bought is great is tremendous. Then there are the boxes for electronics. Since I make equipment, I have to deal with this problem. Guys want fancy boxes for their electronics. They want dancing meters and they want their gear to match their other gear. It's like women having to match their shoes with their purse. Men will not buy gear that does not match (at least some of them anyway). Also, the box itself has to be super exotic looking, with lots of machined aluminum. I mean, how does this help the gear to sound better? I've heard high end manufacturers comment that 50% of the cost of their product is in the box. Really? astounding!

dirk42323 I like a nice box (if only for appearance's sake), but I make them myself. I read a thread on another audio site. The OP was talking about a dealer coming to his house to set up his Scottish turntable, who threw his hands up in horror because his (solid state) amplifiers were sitting on glass shelves...

There is also a third kind of people: the ones who just try to listen unbiased and, if they note a difference, try to understand technically why this difference happened. USB cables are a good example; I have heard so many that sound different, where our technical background would tell us they should be the same. And please save your ink about the possibility of psychoacoustic errors, we know about that too.

USB transfer is not error-free by design on the hardware level. Current USB2/3 to I2S drivers and receiver hardware either run out of time or out of juice to properly implement error correction, therefore the data received is not guaranteed to be bit-perfect. If you have a very high speed oscilloscope and can see the actual waveform at the receiver input, then you'll easily find the scientific explanation for the "difference in sound" between various cables.?

I would suggest a USB cable that messes up the data to the extent that it sounds different is not fit for purpose. My argument has always been that cables that are fit for purpose sound the same, not that a badly designed and constructed cable sounds...

I think you'll find construction design and materials used are just as important as measurements in determining how cables perform?

Alec, measurements actually reveal the qualities of any design and material choices. It's not like they are a luxury... measurements are a must.

As Blue Jeans Cables points out, it's the quality of materials plus the precision of construction that makes for a good cable. He singles out Belden as an example of a company that puts an extraordinary amount of time, effort and science into making the perfect cable (for the intended application). For example, in coaxial cable, is the center conductor exactly in the center, and is it in the center consistently all the time? This is the difference between cheap cable and great cable. It's not exotic materials. It's materials fit for the intended purpose.

Probably why my Belden coax / neutrik connector home made cables sound so damn good ??

I've used Neutrik RCA's in some of my own made cables, good quality?

Angel Despotov But I hope you are aware that measurements are still far away from exactly showing a correlation with perceived sound quality. I'm convinced a Yamaha amplifier measures better than Lejonklou's new Boazu, but I can assure you the latter sounds 10 times as good (concerning room information, harmonic presentation of instruments and more).

Here's the problem: measurements are taken for granted, as if all available equipment and test setups are 100% accurate and exhaustive, and done with a thorough understanding of the results. I tend to disagree with you, Reinhard; most Yamaha amps measure AND sound awful.

Audio measurement is a mature science. If you hear a difference yet don’t measure one, you're simply measuring the wrong thing.

Had a Yamaha 2002 amp on the bench last year, measured great sounded great, fwiw.

Why are some people continuing to ignore the fallible and inconsistent auditory mechanisms? Something sounding better at any one time is subjective.

People just want to believe.

Put simply, something that measures well in all parameters relevant to the audio band will accurately reproduce the source signal. Something that doesn't measure as well will not be as accurate. That's basic fact. However, the less accurate one might sound...

Accurate, as in amplifying a complex mix of frequencies at varying amplitude and taking a precise measurement and comparison between input and output, loaded with a reactive load? Show me such a setup..

A mix of frequencies produces a complex waveform to you and me. Why does the amplifier see it as complex? If the complex waveform remains within the amplifier's design parameters, it's no more difficult than a sine. Measuring performance at a reactive load is no harder than measuring at a static resistive load. I may have misunderstood, were you saying it's hard to design for accuracy with a complex load?

andy22506 I'm not ignoring the subjective component. But you mention "relevant audio band" and here we already have a definition problem. As you know, there are still people thinking 10 Hz - 20 kHz is more than enough as auditory bandwidth for...

Intermodulation (multi-tone) tests into reactive loads are standard. And have been for decades. They are better than ever today, of course.

reinhard7184 the ability to distinguish arrival times to microseconds does not require >20kHz bandwidth. In fact the tests are done with CD sources

dave5 So you think all MQA engineers are stupid because they care about frequencies above 20 k?

I was already considering 50kHz as the audio band, as most modern amplifiers are more than capable of reproduction at that frequency. My modified Quad 405 is 3dB down at around 5Hz, and still has useful output at an utterly pointless 100kHz. Most of which is irrelevant with a CD source, but may be generating something (LF noise, probably!) with a record as source.

Ok, one question: do amps (with different topology) that measure equal sound the same? Yes or no?

I'd say no?

That apparently was a very tough question...?

angel11 That depends on what you mean by "measure equal". Do you mean that they have exactly the same distortion fft? Do you mean that they perform exactly the same with the Power Cube? Please explain.

Angel Despotov that apparently meant I don't sit watching for comments on Facebook all day.? But as you ask, if they have different topologies they are unlikely to measure the same, but if they did, in every aspect, then yes they would sound the same. It's now down to you to explain why they don't - without venturing into fairyland and non-science though?

In theory, they can't measure the same, even if it's the exact same amp due to component tolerances

Let me add an analogy. You have two lightbulbs made by different manufacturers. They have exactly the same spectra, colour temperature, light output etc. Do they look the same? If not, why not?

alec645 in which case, they aren't materially the same. It's theoretical. If they measure exactly the same in every aspect, they sound the same. There's no room for foo in science?

I'd say (lightbulb) the only way for a difference to occur is in construction /materials

Plenty of room for tolerances in components though, that's why they might not sound the same

Here's another analogy: a 225x40x15 tyre, loads of manufacturers, all measure the same, but all perform differently, due to construction and materials.

Measurement works in every other discipline but audio.

Only slightly andy22506!?

Yes, but they measure differently! They might be the same size, but the rubber composition is different, tread design different etc etc. All those things are measurable, and will all point to a different performance.

I'm sure all of us know the NE5534. It's manufactured by several manufacturers. Then there's the SE5534, which should be the exact same circuit. Question: why do they all sound different? Not different like night and day, but noticeably different. If you doubt this, order some in DIP8 package and play around. They all measure pretty close, by the way, say up to the 5th significant digit.

Angel Despotov then they aren't materially the same, are they? The question was: two amplifiers that measure exactly the same (not 'pretty close') in every way, do they sound different? No, of course they don't!

Who says? Same schematic, same process, allegedly.. I'm referring to the example with 5534, coming from different manufacturers, which measure essentially like twins, sort of..

Let me give you some background to my point: years ago I was building custom passive studio monitors. What puzzled me was that the same speakers would sound significantly different on different amps in studios, yet the said amps were essentially indistinguishable in their specs.

I'm not trying to venture into fantasyland, no. I'm just saying that the accepted standard tests don't reveal the full picture. If they did, then we'd have a very reliable method of evaluation, which, imho is still not the case.


Higher sample rates are pointless - but for the majority of DACs they do make an objective and subjective improvement in sound.

This is because most delta-sigma and R2R DACs have terrible issues with temporal resolution, meaning time-domain performance is compromised and transients are compressed and blurred together. Increasing the sample rate reduces time smearing in proportion to how much the sample rate is increased.

You can measure all this using an ETC (energy-time curve) in most DACs. The average DAC, using the standard brick-wall linear-phase filter, has a total time smear of around 1400 microseconds. A lot of that you cannot hear, but on transients you can, and the mechanical speaker driver has to work harder (needs more power) with wider transients (and all time-smearing DACs have very wide transients).

If you look up the measurements of, let's say, my Audiolab M-DAC using its linear-phase filter on the website Reference Audio Analyser, you can see this. Then go to its optimal transient filter, which is a typical NOS (non-oversampling) arrangement, which shows perfect time performance (albeit not perfect-looking, because of the ADC's inherent noise) but revolting frequency-response performance (more THD, IMD and infrasonic aliasing folding back onto the signal).

Now, have a look at Chord DAC measurements. Rob Watts, the designer of Chord's Pulse Array DAC architecture, who has designed DAC chips for hundreds of clients throughout his time as an engineer, is the only person to do the DAC process correctly.

That sounds arrogant, but it's true. Nyquist-Shannon theory specifically states that to perfectly recreate the original analogue signal, you need an infinite-tap-length sinc filter. This is impossible, but with mathematical optimisations and noise shaping, one can bring the needed tap length (real FIR taps, not the theoretical ones quoted by HD player etc.) down to realistic levels.

Look at the Chord Hugo's time-domain response: as good as NOS but without the aliasing. The rest of the measurements too, like dynamic range, jitter, THD, IMD, noise and noise-floor modulation etc. Chord DACs are the most accurate on the planet and make standard CD sound leagues ahead of even a high-res track on a delta-sigma or R2R DAC.

Chord's new Blu2 DAVE DAC combination perfectly recreates Red Book CD timing accuracy and everything else, resulting in the purest sound imaginable.

The only person to do it right? Well, that wraps it up for Oxford Digital then....

There is a rumor that Chord just has higher output levels than others and that is all it does...

I borrowed the Chord Mojo from my store a few days ago. It's their cheapest DAC and I compared it to my M-DAC+ (double the price), volume matched via headphones.

Performance wise, both measure exceptionally well. The only major difference was Chords timing capabilities measured.

The results? The Mojo sounds more organic, very natural without it sounding digital at all. It reminded me of a good turntable without all the crackling, hiss and analogue failings.

Now I'm upgrading to a Hugo 2, as I can't go back to how standard DACs sound.

And the testing was done blinded, instant switching?

Blind? You do realise all the tests scientists and universities have done on subjects regarding the capabilities of the human ear and auditory processing system were not "blind", right?

It wasn't a subtle difference that was something you do a double blind test for, like a cable. The difference was huge. I even asked one of my family members to have a listen and see which ones they like more. They said the Mojo because "it sounds more like live music".

I often have difficulty mentally separating the vocals on Fleetwood Mac - The Chain. Even on my M-DAC+ which is a stellar DAC, I still can't hear the individual singers properly on the chorus. As soon as I listened to the Chord I first was not used to it, as it sounded "darker" than usual. Like the sharpness was gone. That sharpness is what people call the "digital glare", the sub-par temporal performance. After I got used to the lack of sharpness, I could hear things way clearer than before and singers were properly separated on the soundstage but not artificially.

Ah, OK, then it was not a test of any kind that you can use to back it all up. Do it right; the rest was impressive writing, but you fell one step short of making it convincing. You cannot just gloss over it and prove anything with "but my grandma heard it too". Such changes should be reflected in the frequency response too if it sounds "darker" or "brighter".

Look up hearing tests on NCBI. The science is there: it sounds better.

They have already done the tests for us -> improving timing performance is audible to the human ear.

Look up how they do it: by slightly changing the phase, in bursts of multiple sine waves, under government laboratory conditions.

Heck, many crazy scientists have even ripped open a cat's brain and, using implants, found out a cat's sensitivity to phase and timing.

Unlike the average hi-fi forum user that will tell you things based on nonsense - I'm informing you of real science mate.

What makes this disappointing is that while you use science in your words, you don't use it in your actions. Do the tests properly or don't do them at all.

At least do a blind ABX test. Otherwise, your results are invalid.

"The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.... /....This (oversampling) means we can use low rate 44.1kHz or 48kHz audio with all the fidelity benefits of 192kHz or higher sampling (smooth frequency response, low aliasing) and none of the drawbacks (ultrasonics that cause intermodulation distortion, wasted space)."

https://people.xiph.org/~xiphmont/demo/neil-young.html
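The reconstruction claim in that quote is easy to check numerically, and it doesn't need the "infinite" sinc filter mentioned earlier in the thread; a truncated, windowed one already does the job. A minimal numpy sketch (the tone frequencies, the 129-tap length and the Kaiser window are arbitrary illustrative choices):

import numpy as np

fs, n = 44100, 1 << 14
t = np.arange(n) / fs
freqs = [997.0, 7919.0, 15013.0]                      # all comfortably inside the audio band
x = sum(np.sin(2*np.pi*fq*t) for fq in freqs) / 3     # the 44.1 kHz samples we actually store
# reconstruct the waveform half-way BETWEEN those samples with a truncated, windowed sinc
half = 64
j = np.arange(-half, half + 1)
kernel = np.sinc(j + 0.5) * np.kaiser(2*half + 1, 12.0)   # 129 taps, nothing infinite about it
est = np.convolve(x, kernel, mode="same")                 # estimate of x((m + 0.5)/fs)
ref = sum(np.sin(2*np.pi*fq*(t + 0.5/fs)) for fq in freqs) / 3
mid = slice(half, n - half)                               # skip the ends, where the filter runs off the data
err = est[mid] - ref[mid]
print(20*np.log10(np.sqrt(np.mean(err**2)) / np.sqrt(np.mean(ref[mid]**2))))
# the residual lands way below -90 dB: the in-band waveform between the samples is recovered,
# which is exactly what the article and the quote above are saying.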

I will when I get my Hugo 2 then, which will be in a few weeks. But my time with the Mojo speaks for itself.

Seriously, I know most of you guys are probably not into things like neuroscience but please do some research.

You underrate our capabilities by a very large margin.

matthias2339 not perfectly, as measured with ETC plus multiple other timing related measurements using an APX555.

No, I mean perfectly as in absolutely freaking perfectly:

matthias2339 nope, it can only be done with a perfect Sinc function - which is not possible as it needs a mathematically infinite amount of time and FIR taps to reconstruct the band limited signal (perfectly). When I say perfectly, I mean timing too.

Buy an expensive audio measuring device or hire one out, I dare you. Then measure your DAC. I guarantee 100% it is not recreating the signal perfectly at all.

That's exactly what he is doing in the video. He is demonstrating in the video, that the signal is recreated perfectly, even when sent through the cheap DAC. If you want to demonstrate that your timing error hypothesis is right, why don't you buy that "expensive audio measuring device" and debunk his video and article?

It's already debunked. Visit Reference Audio Analyser and look up any linear phase brick wall filter using DAC. They all measure the same.


Also, for any of you on the fence, against or for MQA here's the truth about it.

There is a genuine objective benefit to using the format. The way the high-res track is repackaged, with its triangle upsampling algorithm and its folding back compression technique plus its apodizing minimum phase filter - it results in much improved temporal resolution over the original PCM signal into the average DAC (well not average, as only a small fraction of DACs can decode MQA).

As stated above, the average DAC using the industry-standard linear-phase brick-wall filtering has a smear of 1400 microseconds. MQA brings this down to 50 microseconds. That is a big improvement in temporal resolution, resulting in better imaging, subjective smoothness and low-level detail, and a sound that is less fatiguing to the ear. This is all great, but it is not perfect.

Meridian's choice to use a minimum-phase filter is not optimal. While it is now believed that "pre-ringing" damages sound quality, people misunderstand how to test for it. To test a filter's impulse response, you use an "illegal" (not band-limited) Dirac pulse - this gives you a visual representation of the mathematics of the filter. It does not show you the true time-domain performance, and while the average DAC sounds better in some ways using a minimum-phase filter, it is also objectionably bad in others. For example, phase: minimum-phase filters start dropping in phase response after a few kHz, usually around 2-4 kHz at a 44.1 kHz sample rate. This hurts the phase accuracy, and therefore the timing performance, of the treble - so you're sacrificing treble for improved bass and midrange.
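For anyone curious what that trade-off looks like in numbers, here is a small scipy sketch comparing a generic linear-phase lowpass with its minimum-phase version (a textbook example of my own, not MQA's or any vendor's actual filter):

    # Sketch: linear phase delays all frequencies equally (constant group delay,
    # with symmetric pre-ringing); minimum phase removes the pre-ringing but its
    # group delay varies with frequency, so treble is time-shifted relative to bass.
    import numpy as np
    from scipy.signal import firwin, minimum_phase, group_delay

    fs = 44100.0
    lin = firwin(255, 20000, fs=fs)                       # symmetric (linear-phase) lowpass
    minp = minimum_phase(lin, method='homomorphic')       # minimum-phase counterpart

    freqs = np.array([100.0, 1000.0, 10000.0, 18000.0])   # passband frequencies, Hz
    _, gd_lin = group_delay((lin, 1), w=freqs, fs=fs)
    _, gd_min = group_delay((minp, 1), w=freqs, fs=fs)

    for f, gl, gm in zip(freqs, gd_lin, gd_min):
        print(f"{f:7.0f} Hz   linear-phase delay: {gl / fs * 1e6:7.1f} us"
              f"   minimum-phase delay: {gm / fs * 1e6:7.1f} us")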

Another issue I have with MQA is that while 50 microseconds is great, it is not perfect and is not a unique level of performance. The human ear is sensitive to phase within 2 degrees, which works out to about 4 microseconds of timing acuity; 50 is still a long way from that. And 50 is possible without MQA. For example, if you have a DAC with a slow roll-off linear-phase filter, it will already have much improved performance over the standard 1400 microseconds. Couple that with an external upsampler/clock or a high-res track and you can match or even beat MQA's performance without the downside of the phase drop (and that's before mentioning MQA's treble aliasing). With my Audiolab M-DAC+ using the slow roll-off filter and an external upsampler I get around 54 microseconds of timing accuracy, on par with MQA but without the phase issues or upper-band aliasing.
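A quick sanity check on those microsecond figures, since converting a phase threshold into a time threshold depends on frequency (my own arithmetic, not a published measurement):

    # dt = (dphi / 360) / f : the same 2-degree phase error corresponds to very
    # different time errors at different frequencies.
    for f in (100.0, 1000.0, 1389.0, 10000.0):
        dt_us = (2.0 / 360.0) / f * 1e6
        print(f"2 degrees at {f:7.1f} Hz = {dt_us:7.2f} us")
    # 2 degrees equals ~4 us only around 1.4 kHz; at 100 Hz it is ~56 us,
    # at 10 kHz it is ~0.56 us.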

Going back to Chord DACs, they have accuracy way under 4 microseconds, down into the nanoseconds - meaning Rob Watts has cracked timing in his Chord DACs through his Pulse Array architecture and FPGA chips. The real goal after cracking timing is accuracy down to low levels, which gets better as you go up the range of Chord DACs. For example, DAVE is accurate in terms of transient reproduction down to about 85 dB, while Blu2 + DAVE is accurate down to 17 bits, i.e. better than 100 dB (recreating Red Book objectively perfectly).

Listen folks, MQA is great for the average person. But forget the marketing and spend your money on a Chord DAC instead of an overpriced MQA certified piece of junk.

Reply to this post

All that time-smear stuff is nonsense.

Sure, when the average hi-fi forum user speaks of it, it is nonsense. Many proclaim high-res to be night-and-day better than CD because of improved timing, but it's barely any different, I agree.

But when it comes to science and engineering done ri...#

Michael Walsh This is not an average hi-fi forum.

The idea that the sampling rate limits a system's ITD (interaural time difference) resolution is one of the most pervasive myths. 44/16 resolves timing down to the nanoseconds. Pre-ringing is always ultrasonic, etc.
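Here is a toy numpy demonstration of that point: a 50 ns interchannel delay - far below the 22,676 ns sample period - encoded into and recovered from dithered 16-bit, 44.1 kHz samples. Illustrative signal and numbers of my own, not a listening test.

    # Sketch: sub-sample timing survives 44.1/16 because timing lives in the
    # waveform's phase, not on the sample grid.
    import numpy as np

    fs = 44100.0
    N = 65536
    k = 1486                                   # DFT bin; the tone lands exactly on it
    f0 = fs * k / N                            # ~1000 Hz
    true_delay = 50e-9                         # 50 nanoseconds
    n = np.arange(N)
    left = np.sin(2 * np.pi * f0 * n / fs)
    right = np.sin(2 * np.pi * f0 * (n / fs - true_delay))

    def quant16(x, rng):
        """Requantize to 16 bits with TPDF dither, as a CD channel would be."""
        lsb = 1.0 / 32768.0
        dither = (rng.random(x.size) - rng.random(x.size)) * lsb
        return np.round((x + dither) / lsb) * lsb

    rng = np.random.default_rng(0)
    L = np.fft.rfft(quant16(left, rng))
    R = np.fft.rfft(quant16(right, rng))
    est = np.angle(L[k] * np.conj(R[k])) / (2 * np.pi * f0)
    print(f"true delay {true_delay * 1e9:.1f} ns, recovered {est * 1e9:.1f} ns")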

dave5 there is nothing wrong with the format. That isn't what I'm talking about. I'm explaining the limitations of modern DAC implementations - the time-domain measurements of the average designer's product.

Yes, I’ve dabbled.

Show me some pictures of the DAC you use, ETC measurements.

dave5 you have never measured a DAC via ETC have you?

Looks like a sinc function. Like it should. My A/D has a slight apodising filter, but the ringing is mostly symmetrical.

Show me a picture.

That is not of your DAC.

The discontinuity in the sinc function of the Chord looked odd. I know the filter has a lot of taps, but the transition should be smooth, not a step.

Is this better?

Yes, that is because of the WTA filter. The only way to reduce the number of FIR taps needed was Rob's custom algorithm. I don't know what his maths is built on, but from the measurements there is nothing wrong. The only problem he has with his algorithm ...#

It's just audiophile word-salad. As the sample rate increases, there is always a requirement for increased numerical accuracy, or very low frequencies (which barely change at all from sample to sample) will not be accurate. Known to anyone skilled in the art.

It genuinely does and can be measured. For example, the Blu2 DAVE is accurate down to 17 bits.

Now, whether this is audible I have no idea as I have not heard it. But from all the things I've read regarding the sensory limits of the ear and brain, esp...#

Reply to this post

michael4056 MQA aims to go below 10 microseconds, and the time-smear is unfortunately real: https://www.soundonsound.com/techniques/mqa-time-domain-accuracy-digital-audio-quality

matthias2339 the thinner the cable, the higher the resistance. AWG - the higher the number, the thinner the cable.

Yes, but I meant heavy gauge as in large diameter. I know that a higher number on the AWG scale means a thinner cable.

michael4056 Sure, but at least for me price per meter is also a very large factor, and most applications do not need star quads.

Kennett Ismael Ylitalo that's true, they do make a difference when needed though.

Matthias Nyberg oh, I see. Yes, resistance is lower with thicker cables - inductance is also higher.

Which is why you need a good geometry to counteract the increase and keep both low as possible.

michael4056 stewart4831 - I think I have someone who wants to try your challenge....

No, inductance is lower in thicker cables.
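For reference, here is a little Python sketch of how AWG maps onto resistance per metre, using the standard AWG diameter formula and an approximate copper resistivity (real cables also vary with stranding and purity):

    import math

    RHO_CU = 1.68e-8                                # resistivity of copper, ohm*m (approx.)

    def awg_resistance_per_m(awg):
        d_mm = 0.127 * 92 ** ((36 - awg) / 39)      # AWG number -> diameter in mm
        area_m2 = math.pi * (d_mm / 2000.0) ** 2    # cross-sectional area in m^2
        return RHO_CU / area_m2                     # ohms per metre, one conductor

    for awg in (12, 16, 20, 24):
        r = awg_resistance_per_m(awg)
        print(f"AWG {awg:2d}: ~{r * 1000:6.2f} milliohm per metre per conductor")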

I had the honour to meet Peter McGrath the well known recording engineer, in person last week during a demonstration of the new Wilson Audio Yvette loudspeaker. It meant a lot to me when he told me that when he heard the first MQA remaster of his pers...#

matthias2339 See my posting about the guy who did a null test in the digital domain of a 24/96 file and a 16/96 file. He took a 24-bit/96 kHz file and converted it to 16/96, then nulled them in the digital domain, and the difference was noise at -83 dB. So yeah, CD is good enough.
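If anyone wants to repeat that kind of null test on synthetic data, here is a rough numpy sketch. The signal is a made-up test tone rather than an actual 24/96 master, so the exact residual level will not match the -83 dB from that video; the point is that the difference is just a low-level noise floor.

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 96000
    t = np.arange(fs * 5) / fs
    hi = 0.5 * np.sin(2 * np.pi * 997 * t)       # stand-in for the high-bit-depth original

    lsb = 2.0 / 65536.0                          # 16-bit step over a +/-1 full scale
    dither = (rng.random(hi.size) - rng.random(hi.size)) * lsb
    lo = np.round((hi + dither) / lsb) * lsb     # the "16-bit" version

    residual = hi - lo                           # the digital null
    rms_db = 20 * np.log10(np.sqrt(np.mean(residual ** 2)))
    print(f"null residual: {rms_db:.1f} dBFS")   # a dithered-16-bit noise floor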

Yeah, I had just watched that a few days earlier ?

peter8 None of the tests were blinded or in any way objective. I have found this rule very consistent: if they list the cables used, stop reading.

If every new microphone setup, remastered album, new DAC system or cable type needed to be validated by double-blind tests before judging the increase in sound quality, not much progress would ever be made in the recording or audio industry. Just trust your ears, I would say.

You forget that those kinds of tests take less than an hour and are needed only once. If you rely on your ears and sighted long-term listening, you will spend tens of hours doing just one test.

There wouldn't be much progress in science if we had to take people's word for it, when they claim something, rather than placing the burden of proof on them. That said, we do not really need to blind test every speaker cable, because it has already be...#

Cables are not so much the problem, it is noise and jitter.

peter8 Jitter has not been a problem since the mid '90s. I can easily bet you have never in your life heard jitter-induced errors. I have - when they were introduced deliberately, or when clock sync just drifted too much. I'm quite sure you hear jitter as ...#

That's right, jitter in modern DACs is not audible. Jitter is a much bigger problem in turntables and tape decks. Not only do many audiophiles concentrate on useless improvements, such as lowering already inaudible jitter, they disregard hugely useful improvements such as room acoustics.

Yeah, that was 200 ns of jitter tested in 1962, when today we are well into single nanoseconds and picoseconds.
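Rough back-of-envelope on what those jitter numbers mean for a full-scale 20 kHz tone - a worst-case slope argument, illustrative only:

    import math

    def jitter_error_db(f_hz, sigma_s):
        # The steepest slope of a full-scale sine is 2*pi*f, so an RMS timing
        # error of sigma seconds produces an error roughly this far below the signal.
        return 20 * math.log10(2 * math.pi * f_hz * sigma_s)

    for sigma in (200e-9, 1e-9, 100e-12):
        print(f"{sigma * 1e12:9.0f} ps of jitter on a 20 kHz tone: "
              f"{jitter_error_db(20000, sigma):6.1f} dB")
    # ~ -32 dB for the 1962-era 200 ns figure, ~ -78 dB at 1 ns, ~ -98 dB at 100 ps.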

I agree on room acoustics, but source is important as well.

It is audible, unfortunately.

Jitter is part of the noise floor. The source is important, but fortunately ours perform at a level that was impossible even in a lab 50 years ago.

Try finding the 2 µs periodic jitter in those examples.. ...#

Have to go now guys, it was nice - enjoy music!

peter8 And yet another non-blinded test. You simply have not learned yet: these tests are ABSOLUTE RUBBISH! None of the effects they describe are even remotely possible, and they are not consistent with what we have learned using the scientific method. You have to start looking at your information sources and see that they are ALL sighted tests relying on anecdotal experiences that could ALL be false.

Heavily debunked, and iirc the author has issued a retraction.

I'm so tired of reading the MQA ads again and again. There is an MQA page on Facebook where you can post all of your glorifications of MQA.

Thanx for your "like" kennett7443!

Michael Walsh Late to this, but inductance is mainly geometry-driven; the native inductance of a straight wire is of no real concern at audio frequencies. The resistance of a 'typical' bit of speaker cable (a generic 79-strand) is around a couple of milliohms, and capaci...#

Andy McNally yes, true. I generally agree with you, but it depends on many more factors. For example, QED manufacture their cables to reduce resistance, capacitance, inductance and interference to a minimum - mainly through geometry.
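For anyone wondering what geometry actually buys you, here is a sketch using the textbook parallel-wire (zip-cord) line formulas with made-up dimensions - not measurements of QED or any other brand:

    import math

    MU0 = 4 * math.pi * 1e-7      # H/m
    EPS0 = 8.854e-12              # F/m

    def parallel_wire_LC(radius_mm, spacing_mm):
        # External inductance and capacitance per metre of two parallel round
        # conductors (air dielectric assumed); spacing is centre-to-centre.
        x = math.acosh(spacing_mm / (2 * radius_mm))
        return MU0 / math.pi * x, math.pi * EPS0 / x

    for r, d in ((1.0, 3.0), (1.5, 3.5)):
        L, C = parallel_wire_LC(r, d)
        print(f"radius {r} mm, spacing {d} mm: ~{L * 1e6:.2f} uH/m, ~{C * 1e12:.1f} pF/m")

Closer spacing and larger conductors trade inductance down and capacitance up, which is the lever cable geometry actually gives you.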

I use...#

As someone who obviously has no time for flim flam judging by your other posts, I'd be interested in how you think these differences may manifest themselves electrically. As discussed earlier, the prime culprits should not be anywhere near large enough to affect the audio band, so I'd be interested to hear what else might be at play here to create audible differences.

andy22506 what flim flam? Copy and paste a few quotes to back up your assessment of my speech in this group please.

Also, it could be a few things imo. Lower resistance would entail a higher damping factor in lower impedance swing loads. The lock-o...#

Calm down kiddo, I was referring to your lack of flim flam, which seems obvious re-reading my reply. Anyway, I don't think a few milliohms per metre is significant wrt the damping factor when you consider the overall loop impedance, and this is gener...#

Andy McNally ah I'm sorry, I did not read your reply thoroughly haha.

That's fair enough. The differences I heard were only subtle btw; the price difference, not so much! I want to investigate this myself by making my own speaker cables and testing th...#

Check out the Truth in AUDIO group discussion on damping factor and speaker cable RESISTANCE (admittedly a simplified though accurate model, due to the relatively low reactance). There is an amplifier called the In-Line Maraschino. It allows speaker wires only a few inches long. This has a very significant effect on realized damping factor. Many comments on that thread.

Damping factor is not much of a figure of merit, although you do hear a lot about it. If the speaker cables are of adequate gauge, they are pretty much a non-issue. This paper is somewhat technical, but shows just how little difference there is in output impedance and the actual time constants in the speaker system:

http://collinsaudio.com/Richard_Pierce_Damping_Factor.pdf

dave5, thanks for the link. I read through it, and his focus is on decay time, or at least that's what he boils everything down to. The numbers in the table were purposely rounded to two decimal places, and this leads the reader to what I beli...#

Looks like Tommy has blocked me.. it is for the better this way, though.

tommy76488 A DF of 100 assumes 0.08 ohms output impedance from 20 Hz to 20 kHz. I will quote an example of a Marantz 250-watt-per-channel amp made in the US in the '70s: damping factor quoted as 400 at 20 Hz, not specified at 20 kHz, but upon measurement I found th...#

tommy76488 I don’t think there is any agenda in that paper, it’s just the math. The woofer damping factor is dominated by the resistance of its voice-coil, the Zout of the amp only has a little contribution. Two decimal places is plenty for something like this.

dave5, "Two decimal places...." is an OPINION, not a fact. The coil is what's being driven. Any power that goes ANYWHERE ELSE is wasted as heat. The crossover matters, the coil matters, the wire matters. It all matters! And, yes, it's EASY to hear the difference better REALIZED damping factor makes, without even crossing an order of magnitude.

....and before you get on me about it, the coil does dissipate power, but that's where you want it dissipated because it's there where the non-heat power becomes sound.

The main point is this: a speaker system's damping is dominated by the resistance of the woofer's voice coil. As those calculations show, the differences are negligible.
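Here is a small sketch of that argument with made-up but plausible numbers - in the spirit of the Pierce paper linked above, not its exact figures:

    # The "effective" damping the cone sees is set by the total series resistance
    # in the loop, and the voice coil contributes most of it.
    R_VC = 6.0            # typical voice-coil DC resistance of an "8 ohm" woofer
    R_NOMINAL = 8.0

    def effective_damping(r_amp_out, r_cable_loop):
        return R_NOMINAL / (r_amp_out + r_cable_loop + R_VC)

    cases = [
        ("amp DF=400, 3 m of 12 AWG", 8.0 / 400, 2 * 3 * 0.0052),
        ("amp DF=50,  3 m of 12 AWG", 8.0 / 50,  2 * 3 * 0.0052),
        ("amp DF=400, 3 m of 24 AWG", 8.0 / 400, 2 * 3 * 0.0842),
    ]
    for label, r_out, r_loop in cases:
        print(f"{label}: effective damping ~{effective_damping(r_out, r_loop):.2f}")
    # All three land around 1.2-1.3: the voice coil swamps both amp and cable.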

tommy76488 an ideal capacitor or inductor cannot dissipate or waste power.

A loudspeaker can never be more than 50% efficient. Bell Labs wrote papers on horn speakers etc. ...#

It’s amazing to see the actual efficiency of speakers. Like only a couple percent of the energy in comes out as sound!

http://www.sengpielaudio.com/calculator-efficiency.htm
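For the curious, the conversion that calculator performs boils down to something like this, assuming the usual ~112 dB SPL reference for 1 acoustic watt at 1 m into half-space (I am not claiming these are the site's exact constants):

    def efficiency_percent(sensitivity_db_1w_1m):
        # sensitivity in dB SPL for 1 W at 1 m -> acoustic efficiency in percent
        return 100.0 * 10 ** ((sensitivity_db_1w_1m - 112.0) / 10.0)

    for sens in (84, 87, 90, 96, 104):
        print(f"{sens} dB/W/m  ->  ~{efficiency_percent(sens):.2f} % efficient")
    # A typical 87 dB hi-fi speaker is about 0.3 % efficient; even a 104 dB horn
    # is only around 16 %.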

dave5 I like that website. It's a great tool for the objective audio enthusiast!

michael4056 It's high in Google search results for a good reason. It has been my go-to site for these kinds of calculations, the gain factor calculator being my bookmark. I'd rather use that than an app.

dave5 The original Bell Labs paper from 1928 documents the making of the field-coil driver, the edgewise-wound aluminium voice coil, and the shape of the diaphragm and phase plug. Much the same design is still used to this day.

Then they go on...#

Reply to this post

Looks like I've been duped. I thought some of the issues with it were simply bad remasters. Could be true, or simply harmonics.

Reply to this post

I don't get the argument that the MQA folks won't detail what they are doing because they don't want to enable "competitors". Isn't the issue supposed to be guaranteeing "provenance"? What exactly would be the point of competing on this? Would anyone want something "traceable to Ralph's Bureau of Standards", or a theater system shown to be compliant with a "something like Dolby" logo? ?

Pretty sure the secrecy is so that no one can see whether it really does what it claims to do. I can't imagine ANY way they can reverse the effects of upstream aliasing - what kind of magic algorithm could possibly do that??

If it’s real, it will be protected by patents. If it’s bullshit, then it will be ‘protected’ by flim flam....

Yes, funny but correct.. and I believe 'flim flam' will always fail, but real quality will prevail. On top of that, Mr Stuart does not strike me as a bad guy at all.

"ruining"?

Ah, now that response makes sense! ?

My name is right there in the post. ‘Stuart’? ???

I’ve had worse, it’s just a flesh wound...

I am actually referring to the inventor of MQA, Mr Bob Stuart.. haha, but stewart4831, you also seem to be a genuine person with a sense of humor as well!

The sad thing is, I actually know Bob Stuart from his Audiolab days. He is a very talented engineer, but he seems to have gone to the dark side.....

You cannot even attempt to successfully reverse the shortcomings of the ADC used to produce a track - not without controlling the whole recording and playback chain.

stewart4831, a patent is not a certification of technical viability or achievement. It is merely a time stamp on the claims of the "invention". Once they are granted (no longer "pending"), they become publicly accessible. With the obfuscation...#

stewart4831 John Westlake who designed the M-DAC would not be happy with him.

Well, as a patent examiner of some 34 years now, I can agree that a patent doesn't signify technical achievement, it merely means that the invention is new and unobvious over what has been done before. Applicants can elect to not have their pending app...#

Beware of patent officials - if we aren't careful they will revolutionize the laws of physics.. again..

Wasn’t it the US Patent Office who wanted to shut up shop about a hundred years ago, because “everything had been invented”? ???

Great comments, Dirk and Stewart!

Reply to this post

Reply to this thread
