MQA, and the 86 MQA digital products reviewed here, follow this revisionist mantra, and take it to its ideal extreme. MQA boasts the shortest duration impulse response of any digital playback system to date, most closely approaching the temporal brevity of the input test impulse, and hence implicitly boasts the best time domain transient response performance of any digital system. The brand new January 2018 issue of Stereophile echoes and confirms this MQA boast. Even more important than Stereophile's impulse response test measurements is the fact that both Stereophile authors agree completely with the crucial point that the temporally briefest impulse response for the playback digital reconstruction filter, i.e. the one that most closely replicates the input test impulse's brevity, provides the best time domain transient response performance.

There's just one tiny problem. All of the above is in fact wrong. Indeed, it all is so far wrong that it is the very opposite of the truth. The actual truth, about the way that digital actually works, is that the best time domain transient response performance from the crucial playback reconstruction filter is provided by a filter with infinitely long impulse response (e.g. the old fashioned brickwall sinc filter that is totally rejected by the revisionist movement and MQA). The actual truth, about the way that digital actually works, is that pursuing an impulse response that closely replicates the input test impulse (in the world-standard-evaluative-benchmark impulse response test) actually provides far worse time domain transient response performance in the 'reconstructed' signal waveform, severely butchering the time domain signal waveform that is 'reconstructed'.

Worse yet, pursuing this filter design's impulse response to its ideal goal of the briefest possible duration, by most closely replicating the very brief input test impulse (as MQA boasts of getting very close to doing), actually provides the very worst possible time domain transient response distortion in the 'reconstructed' signal waveform. It provides the very worst possible distortion, in performing its reconstruction job of building a bridge between sample dots (a bridge that should correctly replicate the path and curve of the original signal between each pair of sample dots), because in truth it does not and cannot provide any reconstruction whatsoever between pairs of sample dots.

Instead, it simply falls back down to zero almost immediately after each sample dot. It does not and cannot build any reconstructed bridge between sample dots whatsoever, since its temporal duration is not long enough. Instead, its so-called 'ideal' (temporally very brief) impulse response dictates that it falls back down to zero immediately after each sample dot, so all it does is output and pass on each of the sample dots as an isolated impulse. In other words, this so-called 'ideal' impulse response reconstruction filter actually does nothing at all, and might as well not even be in the signal path. It certainly does not and cannot perform any reconstruction whatsoever (not even simple interpolation between sample dots), so it provides the worst possible distortion of that crucial correct reconstructed bridge path between sample dots (i.e. of the original signal path), by totally obliterating that bridge, and not providing any bridge whatsoever.

Specific Technical Analysis: Short vs. Long Filters in the Time Domain

The primary job of any digital playback product is to build a bridge between and among the temporally discrete digital sample dots, with the path and curve of each bridge section correctly (accurately) reconstructing the original path and curve traced out by the original pre-sampled signal waveform. These incoming data sample dots are sacred, inviolable anchor points for the reconstruction process, since these dots represent an accurate sampling of the original signal waveform at the instant this signal waveform was sampled to produce each sampling dot.
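This bridge-building has a concrete classical form: the Whittaker-Shannon (sinc) interpolation formula, whose reconstructed curve passes exactly through every sample dot, because sinc(0) = 1 while sinc is zero at every other integer, and which fills in a smooth bridge between the dots. A minimal sketch in Python/NumPy; the test signal frequency and sample count are arbitrary illustrative choices:

```python
import numpy as np

def sinc_reconstruct(samples, t):
    """Whittaker-Shannon reconstruction (unit sample period): each sample dot
    anchors one sinc, and the sum of all the sincs is the reconstructed bridge."""
    return sum(s * np.sinc(t - k) for k, s in enumerate(samples))

# An arbitrary bandlimited test signal, sampled at the integer instants
signal = lambda t: np.sin(2 * np.pi * 0.15 * t)
samples = signal(np.arange(16, dtype=float))

# Evaluated at the sample instants themselves, the reconstruction hits every
# dot exactly, since sinc(0) = 1 and sinc(k) = 0 for every other integer k.
at_dots = sinc_reconstruct(samples, np.arange(16, dtype=float))

# Evaluated between the dots, it fills in a smooth bridge curve.
between = sinc_reconstruct(samples, np.arange(16, dtype=float) + 0.5)
```

Note that each output point sums contributions from every sample dot, near and far: the sinc bridge is built from the whole pattern of samples, not just the two adjacent dots.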

Most digital engineering texts teach this job over-simplistically, as merely connecting the sample dots that are so frequently and closely sprinkled along the signal waveform path that they already correctly outline the shape and path of the original pre-sampled signal waveform.

But the sampling theorem promises us that we can perform very sparse sampling (merely 1 sample per half sine wave cycle), while still being able to correctly reconstruct the original pre-sampled signal waveform (given the correct reconstruction tool). This sparse sampling necessarily a priori means that virtually all of our sample dots (which of course are asynchronous with the signal being sampled) will not happen to randomly occur at the lucky point of say the sine wave's peak, thereby luckily capturing both the correct amplitude of that peak and the exact timing of that peak, for easy later correct reconstruction of that full sine wave.

Instead, virtually all of our sample dots will be unlucky samplings, at some other unknown and unpredictable point somewhere along that undulating sine wave. Thus, the sample dot itself, and also its immediate neighboring sample dots, do not represent nor contain the needed information to be able to correctly reconstruct the true peak amplitude, and the correct temporal instant, of this unluckily sampled sine wave peak.

Instead, this information is implicitly contained in the pattern of other sample dots that are farther away. This means that the reconstruction filter, in order to collect and use this needed information from sample dots far away, must have a wide field of view when collecting information, which dictates that this reconstruction filter must have a long duration coefficient function, not a short duration one.
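This wide-field-of-view requirement can be illustrated numerically. In the hypothetical sketch below, a sine at 0.45 cycles per sample (just under Nyquist) is sampled at a phase chosen so that no sample dot lands on a peak; a truncated sinc interpolator that sees only a couple of neighboring dots under-reconstructs the peak, while one with a wide field of view recovers it. The frequency, phase, and window widths are arbitrary choices for illustration:

```python
import numpy as np

f, phase = 0.45, 0.3   # cycles/sample just under Nyquist; phase dodges the peaks
t_peak = (0.25 - phase) / f          # instant where the true sine peaks at 1.0

n = np.arange(-200, 201)             # sample dots; none lands on that peak
samples = np.sin(2 * np.pi * (f * n + phase))

def estimate_peak(half_width):
    """Truncated sinc interpolation at t_peak, seeing only +/- half_width dots."""
    mask = np.abs(n) <= half_width
    return np.sum(samples[mask] * np.sinc(t_peak - n[mask]))

short_est = estimate_peak(2)    # narrow field of view: only nearby dots
long_est = estimate_peak(200)   # wide field of view: information from afar
```

The narrow-window estimate misses the true peak amplitude of 1.0 by several percent, while the wide-window estimate lands on it almost exactly: the information needed to rebuild the unluckily sampled peak really does reside in dots far away.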

In short, accurate reconstruction of the original signal waveform, which of course takes place in the time domain, obviously requires superb time domain transient response by the reconstruction filter, and we see now that this accurate reconstruction, this superb time domain transient response, is actually achieved by a long coefficient function (hence a long 'impulse response'), not a short one. This is the complete opposite of what the modern revisionist digital engineering movement, which professes such caring expertise and loyalty to time domain transient response, erroneously believes and practices. In truth, it is their short filters that butcher the time domain signal waveform, with many kinds of severe time domain distortions.

What is specifically wrong with their short filter design approach?

A playback reconstruction filter with a short coefficient function (hence a short 'impulse response') actually commits several types of severe errors and distortions.

First, typical short filter designs, including Craven and most of today's revisionist digital products, commit a gross error before they even leave the starting gate to begin their 'reconstruction' of the correct path between and among the sacred, inviolable sample dots themselves. These short filter designs not only will plot an erroneous, distorted time domain path between and among these sample dot anchor points, but they also ignore the sacred, inviolable sample dot anchor points themselves. Instead, they plot a freewheeling 'reconstructed' signal waveform path that sails off into space at their whim, without much regard to where the sample dots, that accurately sampled the original signal waveform, are actually located. Thus, the signal waveform 'reconstructed' by all these short filter designs is so distorted in the time domain that it does not even pass through the official anchor sample dots that accurately sampled the true original signal waveform.

Second, any and every short filter design necessarily a priori (by definition) is intrinsically incapable of casting the wide net that is required to correctly boost and thus correctly reconstruct every peak that happened to lie between the randomly asynchronous sample dots, and thus was unluckily sparsely sampled (which is true of virtually all peaks, in a sparse sampling system such as the sampling theorem establishes, especially in the 2 top octaves (5 kHz-20 kHz for audio)). This required wide net, intrinsic to a reconstruction filter having the correct long impulse response, brings in mathematical information from afar, from other peaks that were more luckily randomly sampled, at or near their peaks, and mathematically acts to correctly boost the reconstructed amplitude, of an unluckily sampled peak between sample dots, to its correct original amplitude and shape.

Third, any and every short filter design necessarily a priori has poor rejection of unwanted spurious ultrasonic images from the sampling process. The residual presence of this spurious ultrasonic garbage has plural unwanted consequences, all of which distort the time domain signal waveform 'reconstructed' by this short filter. Note that all these distortions are time domain distortions, ironically the very domain wherein a short filter is claimed to supposedly excel.
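This image-rejection difference is easy to quantify. In the sketch below, two truncated-sinc reconstruction filters for 4x oversampling are compared over a stopband region where spectral images of the sampling process would sit; all lengths and frequency choices are arbitrary illustrations, not any particular product's design:

```python
import numpy as np

L = 4  # oversampling ratio: reconstruction filter cutoff at 1/(2L) of the new rate

def truncated_sinc(half_len):
    """Ideal-lowpass (sinc) interpolation filter, truncated to +/- half_len taps."""
    m = np.arange(-half_len, half_len + 1)
    return np.sinc(m / L)

def worst_leak(h, band):
    """Worst-case |H(f)| over the given stopband frequencies, relative to |H(0)|."""
    m = np.arange(-(len(h) // 2), len(h) // 2 + 1)
    H = lambda f: np.abs(np.sum(h * np.exp(-2j * np.pi * f * m)))
    return max(H(f) for f in band) / H(0.0)

short_h = truncated_sinc(8)    # spans ~2 original sample intervals
long_h = truncated_sinc(512)   # spans ~128 original sample intervals

# Ultrasonic image region: images land above the cutoff (0.125 of the new
# rate); probe the band from 0.15 to 0.25 of the new rate.
band = np.linspace(0.15, 0.25, 201)
short_leak = worst_leak(short_h, band)
long_leak = worst_leak(long_h, band)
```

The short filter passes this spurious image band at more than ten percent of full amplitude, while the long filter holds it far lower: the short filter simply cannot reject the ultrasonic garbage whose consequences are itemized next.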

Consequence a: This high frequency spurious garbage might fold back down into the audible baseband, where it would audibly compete with and degrade the fidelity of the reconstructed baseband signal waveform.

Consequence b: Even if it does not radiate that low in frequency, this spurious image garbage could still cause beat interference patterns, of random and incoherent phase, between itself and the true baseband information, and these interference patterns would be generated throughout the audible range, including middle and low frequencies, where they would not only be more audible but would also linger for a longer duration. These random incoherent phase interference patterns would create a spurious distortion in the form of a phony and vague halo of richer ambience (just as they do in the reverberation field of a concert hall), and their lingering duration would create a further spurious distortion of making this phony, vague, richly ambient space sound physically larger (again just as long duration concert hall reverberations do). In short, phony concert hall distortion, with vague localization specificity.

Consequence c: The spurious ultrasonic image information causes little distortion squiggles in the reconstructed time domain signal waveform between sample dots, which sonically are heard as spurious extra information in the higher frequencies, adding phony (inaccurate) airy sounding noise to the signal's higher frequencies. This spurious information is essentially noise modulated by the (music) signal waveform, so it tracks the (music) signal, but it is not an accurate replica of the original signal waveform, instead sounding like airy fuzzy noise riding along with (embedded in) the (music) signal's intrinsic sound. Note that the correct reconstructed signal waveform path must always be a smooth simple curve between each pair of sample dots, in order to obey the requirements of the sampling theorem. If any extra squiggles are calculated and 'reconstructed' between sample dots, for example as a result of spurious illegal ultrasonic information beyond Nyquist having leaked by any short reconstruction filter, these are illegal time domain distortions of the time domain signal waveform, and their sonic effects are time domain signal distortions which are objectively wrong, even if they happen to subjectively euphonically please some listeners.

Consequence d: If the playback reconstruction filter has a very short impulse response, approaching the supposed engineering ideal of perfect replication of the input test impulse, then the filter itself already violates the sampling theorem. Its impulse response will already contain high frequencies beyond Nyquist, due to its very sharp corners and steep slopes. This sampling theorem violation in the filter function itself will cause spurious time domain garbage to be generated by the filter's convolution process with the incoming signal waveform, as surely as though the incoming signal waveform itself were to have violated the sampling theorem by containing frequencies above Nyquist.

Note that convolution accepts 2 functions as input, and it does not distinguish between the differing natures and sources of these 2 incoming functions, so it is completely indifferent to whether a sampling theorem Nyquist violation is contained in and arrives from the incoming signal waveform or from the digital filter's characteristic coefficient function (this indifference is simply proven by the fact that convolution consists of multiplication and addition, both of which are commutative). This further level of added time domain garbage, unique to filters with impulse response so 'ideally' short and temporally sudden that they intrinsically already violate the sampling theorem, is similar to the garbage added by spurious leakage of ultrasonic sampling images by all short filters (consequences a through c above), but is in addition to them, so its sonic degradations are similar in nature but make these total sonic degradations worse in degree.
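That commutativity is trivial to verify numerically. In this sketch the 'signal' and 'filter' are arbitrary random stand-ins; swapping which input plays which role changes nothing in the output:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)   # stand-in for the incoming sample stream
kernel = rng.standard_normal(9)    # stand-in for the filter's coefficient function

# Convolution is the same multiply-and-add sum either way round, so it cannot
# tell which of its two inputs carries a Nyquist violation.
a = np.convolve(signal, kernel)
b = np.convolve(kernel, signal)
```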

Fourth, the entire revisionist digital engineering movement is predicated on a simple mantra: the time domain impulse response of a digital filter (e.g. the playback reconstruction filter) should be as good as possible, i.e. it should replicate the temporally brief input test impulse as closely as possible. In other words, the shorter the impulse response of this filter, the closer to the universal engineering time domain ideal of perfect impulse response this filter will be, hence the better its time domain transient response performance will be, in performing its job of reconstructing the original signal waveform correctly, from the discrete sample dots. MQA claims to provide a shorter impulse response reconstruction than any other previous digital design, closer to this time domain ideal goal of impulse response brevity shared by the entire revisionist movement, hence presumably yielding reconstruction with better time domain transient response than any other previous digital design. In short, MQA claims to have approached this very short ideal goal better than anyone else, indeed so close that its nearly ideal impulse response is claimed to be even shorter than a digital sampling period.

But there's a huge, fatal flaw in this engineering ideal, and thus also in MQA. If the reconstruction filter is so short that its impulse response is shorter than a sample interval (as MQA's impulse response claims to be), then it also has the fatal time domain reconstruction defect that it cannot even bridge the gap between pairs of adjacent sample dots.

In order to cope with the complex job of correctly reconstructing the correct original curved signal waveform path, given the very sparse sampling promised us by the sampling theorem, a reconstruction filter design must have a long span and reach over many sampling intervals, to gather the information required to calculate exactly the correct path to exactly replicate the original signal waveform's path, as a bridge between (connecting) each pair of sample dots.

But what happens if the reconstruction filter's impulse response is very short, indeed so short that it is even shorter than 1 sampling interval (as MQA claims to be)? Then it cannot even bridge this inter-sample gap at all, not even with a distorted erroneous curved path. Instead, its impulse response, stimulated by one sample dot, collapses and falls to nearly zero before it even reaches the next sample dot. That's the worst possible time domain distortion of the time domain bridge that reconstruction is supposed to build correctly between and among sample dots. Other short filters that are moderately short do build a time domain bridge of sorts, but their bridge is distorted in its shape and path. But very short filters have such bad distortion of this time domain bridge that this bridge stops before it even reaches the next sample dot, so it throws you down into the river below.
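This collapse can be demonstrated directly. The sketch below views each inter-sample gap at 4x resolution using zero-stuffing, the standard oversampling model: a kernel briefer than one sample interval outputs exactly zero everywhere between the dots, while a long sinc kernel fills the gaps with a reconstructed bridge. The signal and lengths are arbitrary illustrative choices:

```python
import numpy as np

L = 4                                        # view each inter-sample gap at 4x resolution
x = np.sin(2 * np.pi * 0.2 * np.arange(32))  # the original sample dots

stuffed = np.zeros(len(x) * L)               # fine time grid...
stuffed[::L] = x                             # ...with the dots in place, zeros between

# 'Ideal' impulse response, briefer than one sample interval: a bare spike
spike_out = np.convolve(stuffed, np.array([1.0]))[:len(stuffed)]

# Long sinc impulse response, spanning many sample intervals
m = np.arange(-32 * L, 32 * L + 1)
sinc_out = np.convolve(stuffed, np.sinc(m / L))[len(m) // 2:len(m) // 2 + len(stuffed)]

on_dots = np.arange(len(stuffed)) % L == 0
between_spike = spike_out[~on_dots]          # spike filter's output between the dots
between_sinc = sinc_out[~on_dots]            # sinc filter's output between the dots
```

The spike-like filter merely passes each dot through and outputs exactly zero at every instant between dots, building no bridge at all; the long sinc filter fills those same instants with a waveform of full signal amplitude.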

Indeed, the hypothetically ideal time domain goal, shared by all engineers, of extremely brief perfect impulse response, is actually so fatally defective in the time domain that it produces the worst possible reconstruction distortion in the time domain, actually performing no time domain reconstruction whatsoever. It merely leaves us with isolated spikes at each sample dot, without even a hint of any bridge (signal waveform path) at all between sample dots - thus doing nothing, as though this supposed 'reconstruction' filter were not even present in the circuit.

This so-called 'ideal impulse response' filter gives us zero signal waveform path and zero amplitude, between and among these isolated spikes, thus totally obliterating the amplitude information and the curved path information of the original signal waveform that we are now trying to correctly reconstruct here between these sample dots. This total obliteration of the original signal information is surely the worst possible time domain distortion of the original pre-sampled signal waveform path, that we wanted to accurately reconstruct, between and among these isolated sample dots.

The Impulse Response Test

Throughout the world of science and engineering, the impulse response test is the sacrosanct gold standard evaluative benchmark, for testing the time domain transient response of the device under test (DUT), be that device an amplifier or a digital filter. This test's criterion for evaluative success is simple: the output of the DUT should very closely replicate the test input signal (a generally applicable criterion), where the input test signal in this case consists of an impulse that ideally is as brief as physically possible.
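In the digital FIR case this test is almost tautological: convolving a unit impulse with any FIR filter simply returns the filter's own coefficient function, so the 'measured' impulse response is nothing but the coefficients themselves. A minimal sketch, where the device under test is an arbitrary short truncated-sinc filter chosen purely for illustration:

```python
import numpy as np

# The device under test: an arbitrary short FIR filter (truncated sinc, 17 taps)
coeffs = np.sinc(np.arange(-8, 9) / 4.0)

# The input test signal: a unit impulse, as brief as the sample grid allows
impulse = np.zeros(33)
impulse[16] = 1.0

# The 'measured' impulse response is just the coefficient function, shifted
# to the impulse's position
response = np.convolve(impulse, coeffs)
```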

However, this creates a problem here. Obviously, when subjected to this gold standard evaluative benchmark impulse response test in digital, a long filter does not replicate the brief input test impulse well, and not as well as a short filter, so the long filter clearly should have poorer time domain transient response than the short filter. But our analysis above, based on performing the curve-fitting reconstruction task accurately, tells us the precise opposite, namely that the long filter has better time domain transient response than the short filter, because it can curve-fit more accurately and thereby reconstruct the original pre-sampled signal waveform with better time domain accuracy, hence better time domain transient response.
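The contradiction can be put in numbers. In the sketch below (all parameters are arbitrary illustrative choices), a short windowed-sinc kernel and a long sinc kernel reconstruct the same sparsely sampled near-Nyquist sine. The short kernel 'wins' the impulse test, having all of its energy concentrated within one sample interval of its center, yet it 'reconstructs' the waveform with the larger RMS error:

```python
import numpy as np

L = 8                                     # fine grid: 8 points per sample interval
t = np.arange(0, 256, 1 / L)
true = np.sin(2 * np.pi * 0.35 * t)       # near-Nyquist original waveform
stuffed = np.zeros_like(t)
stuffed[::L] = true[::L]                  # keep only the sample dots

def reconstruct(kernel):
    c = len(kernel) // 2
    return np.convolve(stuffed, kernel)[c:c + len(stuffed)]

m_short = np.arange(-L, L + 1)
short_k = np.sinc(m_short / L) * np.hanning(len(m_short))  # brief impulse response
m_long = np.arange(-32 * L, 32 * L + 1)
long_k = np.sinc(m_long / L)                               # long impulse response

def energy_within_one_interval(k):
    """Impulse-test 'score': fraction of the kernel's energy within +/- 1 sample."""
    c = len(k) // 2
    return np.sum(k[c - L:c + L + 1] ** 2) / np.sum(k ** 2)

trim = slice(40 * L, -40 * L)             # ignore edge transients
rms = lambda e: np.sqrt(np.mean(e ** 2))
err_short = rms(reconstruct(short_k)[trim] - true[trim])
err_long = rms(reconstruct(long_k)[trim] - true[trim])
```

The short kernel scores better on temporal energy concentration (the brevity the impulse test rewards) while producing the worse reconstruction error, which is precisely the contradiction the text describes.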

So which of these two opposite, contradictory evaluations is correct?

Later in this article on Digital Done Wrong, we devote an entire installment to analyzing the impulse response test, to showing what the answer to the above question is, and to explaining why. To answer the question in brief here, the impulse response test (especially the digital version) is universally very badly misunderstood, and very badly mis-applied, and very badly mis-interpreted - and in many, many ways. Indeed, the impulse response test actually sits on the verge of being an invalid test, especially in digital, when used for evaluative purposes (which is its primary usage worldwide).

The basic proof of the impulse response test's invalidity is simple (our analysis of the how and why is far more complex and lengthy). Note that a test's validity can be overthrown by just one counterexample, since, if the test cannot discriminate to weed out this counterexample, then we have no way of knowing how many other examples there might be where this test might be similarly invalid.

(Continued on page 175)