We need to employ a reconstruction algorithm that is genuinely powerful and sophisticated, an algorithm with the ability to gather information from other sample dots far away, dots that were sampled more luckily.

How can the digital reconstruction process look beyond the 2 sample dots immediately adjacent to the missed peak, to gather the required information implicitly encoded in the overall pattern of sample dots? Simple. When the playback reconstruction filter is building a bridge between a given pair of sample dots, hoping to correctly reconstruct the curved path of the original signal waveform, the filter's design must mathematically gather information from farther away in the overall pattern of sample dots, not just from the 2 sample dots immediately adjacent to the sampling interval it is presently bridging.

This is not rocket science. Every digital engineer should know this, and every digital author should know this. Somehow, today's many brilliant brains with PhDs in digital engineering can't see or comprehend this. Yet every age 4 child already knows it, having graduated from his age 3 limit of drawing only straight lines connecting the dots outlining that bunny rabbit head, and having learned instead to plot curved paths between dots, which requires curve-fitting. Curve-fitting necessarily a priori requires looking at, and gathering information from, sample dots beyond the immediately adjacent pair flanking the section of the curve presently being plotted.

So, how can a digital reconstruction filter design gather this required mathematical information, about the sample dot pattern well beyond the immediately adjacent pair? It must of course first be able to see far away, then it must be able to bring that mathematical information back from far away, and then it must use this information to calculate the correct curved path to reconstruct in the present sampling interval, where it is building a (hopefully correct) curved path bridge connecting a pair of sample dots. This is exactly what a human does when he manually curve-fits, and it is what every computer should do when it is programmed to curve-fit accurately.

The connect-the-dots model of how digital reconstruction allegedly works, universally taught in textbooks, is hopelessly naïve, simplistic, and misleading, actually applying only at very low frequencies, where many sample dots are very closely spaced, relative to the signal waveform changes, so that the sample dots actually do closely outline the signal waveform, and we can indeed reconstruct the signal waveform by merely connecting the sample dots.

But now, a crucial question: just how low is this very low frequency, up to which the connect-the-dots model is viable, but above which it is naively and over-simplistically wrong? What is the upper frequency limit above which the universally taught connect-the-dots model of reconstruction fails, and above which we need sophisticated and powerful curve-fitting algorithms?

Let's assume that we connect the dots via linear interpolation, since the short filter designs of the modern revisionist digital engineers cannot gather sample dot data beyond 2 sample intervals, and thus cannot perform any meaningful curve-fitting. If we are using a digital system with 24 bit depth resolution, we are naturally entitled to expect our reconstruction to be accurate to within 1 LSB at 24 bits, an error we will forgive.

So, our crucial question becomes: what is the upper frequency limit below which the universally taught connect-the-dots model actually works, given the typical modern revisionist short filter (not nearly as short as MQA) and a 24 bit resolution system, but above which that model utterly fails to explain how digital reconstruction can and does actually work?

We calculate this upper frequency limit, for the universally taught connect-the-dots model of digital reconstruction, as being 0.0000001 Hz. If the signal waveform being reconstructed contains no frequencies above 0.0000001 Hz, then the universally taught connect-the-dots model does indeed suffice. But if the signal waveform to be reconstructed contains any frequencies above 0.0000001 Hz, then we must look for, and learn, and employ, other models for reconstruction that are far more powerful and sophisticated, with a much farther time domain reach in gathering the curve-fitting information they require.
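
For readers who want to probe a limit of this kind themselves, here is a minimal sketch in Python. It simply measures the gap between a straight-line connect-the-dots bridge and the true curved path of a full-scale sine, and prints it alongside the size of 1 LSB at 24 bits. The 44.1 kHz sample rate and the mid-interval, peak-centered error criterion are our illustrative assumptions; different error criteria and assumptions will yield different crossover frequencies.

    import math

    def connect_the_dots_error(freq_hz, fs_hz=44100.0):
        """Gap at mid-interval between a straight-line bridge and the true curved
        path of a full-scale sine, for the worst-case alignment where the sine's
        peak falls exactly halfway between two sample dots."""
        half_step_rad = math.pi * freq_hz / fs_hz      # half a sampling interval, in radians of the sine
        return 1.0 - math.cos(half_step_rad)           # true peak (1.0) minus the straight chord's midpoint

    one_lsb_24bit = 2.0 / 2 ** 24   # 1 LSB of a 24 bit system, over a +/-1.0 full-scale range

    for f in (0.0000001, 0.1, 1.0, 100.0, 20000.0):
        print(f"{f:>12.7f} Hz   straight-line bridge error = {connect_the_dots_error(f):.3e}"
              f"   (1 LSB at 24 bits = {one_lsb_24bit:.3e})")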

How powerful and sophisticated must this correct model be? The answer is simple: this correct model of how digital actually works, and its reconstruction algorithm, must be so powerful and sophisticated that it can cover the entire spectral span, from 0.0000001 Hz all the way up to at least 20,000 Hz, and perform correct reconstruction within 1 LSB at 24 bit depth resolution.

How do we program the digital reconstruction filter's computer to endow it with these required curve-fitting abilities? A digital filter's design is embodied in its coefficient function, a function which extends over its X axis dimension (here the time dimension), perhaps for a brief (short) duration, or perhaps for a long duration. We can design the coefficient function to scan far away sample dots by deliberately making it long in duration rather than short. This long coefficient function will gather that dot pattern information from afar, and then use this information in its curve-fitting calculation to plot exactly where the correctly reconstructed curve of the original signal path belongs, in the specific sampling interval between the pair of sample dots the filter is presently bridging (this process is called convolution).
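
As a concrete illustration of what such a coefficient function can look like, here is a minimal Python sketch. It assumes the classic windowed-sinc kernel (a Blackman window applied to sin(pi x)/(pi x)); the window choice, the oversampled tabulation, and the two example lengths are our illustrative assumptions, not the only way to design such a function.

    import numpy as np

    def make_coefficient_function(half_length_intervals, oversample=64):
        """Tabulate a coefficient function, here a Blackman-windowed sinc, on a fine
        grid reaching half_length_intervals sample intervals to each side of the
        point being reconstructed."""
        n = half_length_intervals * oversample
        x = np.arange(-n, n + 1) / oversample      # position along the time axis, in sample intervals
        window = np.blackman(2 * n + 1)            # tapers the function smoothly to zero at its ends
        return np.sinc(x) * window                 # np.sinc(x) = sin(pi x) / (pi x)

    short_kernel = make_coefficient_function(2)    # a short coefficient function: reach of 2 intervals per side
    long_kernel = make_coefficient_function(100)   # a long coefficient function: reach of 100 intervals per side
    print(len(short_kernel), len(long_kernel))     # the long one holds far more coefficients, and sees far more dots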

How far away should the filter's coefficient function extend, to gather information from far away sample dots, in order to perform this curve-fitting reconstruction accurately? This task, of accurately calculating the correct curve-fitted signal waveform path, naturally gets more challenging as the signal frequency increases, since the sample dots become sparser at higher frequencies, relative to the signal path's twisting and turning features that need to be reconstructed. Thus, to perform this more challenging high frequency curve-fitting, we need the filter's coefficient function to gather information from more sample dots, i.e. from farther away, thereby requiring the coefficient function to be even longer in duration.

Moreover, the digital system's amplitude bit resolution plays a crucial role here. A 24 bit system naturally requires finer (greater) absolute reconstruction accuracy than a 16 bit system, and thus requires a longer duration coefficient function, in order to calculate the correct reconstructed original signal waveform path to within 1 LSB accuracy.

In short, the coefficient function needs to have a wide reach and grasp, in order to gather the information from enough other sample dots, far enough away, so as to be able to accurately calculate the correct curve-fitted path that the original signal waveform followed between sample dots. Thus, it must be a long coefficient function, not a short one.

How does digital correctly execute this convolution process, using its coefficient function to gather the information from enough far-away sample dots, so that its reconstructed curve-fitted path correctly replicates the curved path of the original signal waveform (just as the sampling theorem promises we can do)? Let's suppose that our reconstruction filter is presently curve-fitting (calculating) the correct reconstruction path between sample dots #101 and #102. The filter's computer loads, into a static memory array, the amplitude values of 100 sample dots to the left of dot #101, and also the amplitude values of 100 sample dots to the right of dot #102. Presto! It now has instant random access to all 200 flanking sample dot values, stored in this static array.

All the data is now statically in place for the filter's convolution process to calculate the correct curve-fitted path between sample dots #101 and #102, utilizing its moderately wide reach of 100 sample intervals on each side to gather, from all 200 other sample dots, the information required to perform an accurate curve-fitting reconstruction. The filter's calculating computer can randomly access any and all of this static array data, in any sequence it wishes, and at any speed it wishes.

How exactly does the filter gather the required information from all the other sample dots, both near and far, that have been pre-stored in the static array? It gathers this information from each of the 200 outboard sample dots, one at a time. It multiplies the amplitude value of each of those 200 sample dots by a coefficient, determined by the coefficient function that embodies the reconstruction filter's design. This coefficient function consists of a table of fractional values which, simply speaking, are scaled along the X axis dimension so that sample dot values far away (say dot #199) have less influence in deciding the correct curve-fitted path than sample dot values close by (say dot #103). This follows exactly the same logic and method that we humans would use when manually curve-fitting by eyeballing a train of sample dots.
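
In code, that gathering step might look like the following minimal sketch. The Hann-tapered sinc coefficients, the reach of 100 intervals per side, and the 10 kHz test sine are our illustrative assumptions; the point is simply the multiply-and-sum over the flanking dots held in the static array.

    import math

    def _sinc(x):
        return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

    def reconstruct_between_dots(dots, left_index, frac, half_reach=100):
        """Estimate the original waveform at fractional position frac (0.0 .. 1.0)
        between dots[left_index] and dots[left_index + 1], by multiplying each
        flanking sample dot by its coefficient and summing the products (convolution).
        Nearer dots get larger coefficients; influence dies away smoothly toward the
        edge of the filter's reach."""
        total = 0.0
        for k in range(left_index - half_reach + 1, left_index + half_reach + 1):
            distance = (left_index + frac) - k                                # signed distance, in sample intervals
            taper = 0.5 * (1.0 + math.cos(math.pi * distance / half_reach))   # Hann taper over the reach
            total += _sinc(distance) * taper * dots[k]
        return total

    # Example: sample dots of a 10 kHz sine at 44.1 kHz, reconstructing the point
    # midway between dot #101 and dot #102.
    fs, f = 44100.0, 10000.0
    dots = [math.sin(2 * math.pi * f * n / fs) for n in range(300)]
    print(reconstruct_between_dots(dots, 101, 0.5), math.sin(2 * math.pi * f * 101.5 / fs))

With a reach of 100 intervals per side, the printed estimate lands very close to the true mid-interval value of the sine; shrink half_reach and the estimate drifts away from the truth.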

Thus, the coefficient function will typically have larger fractional values for closer sample dots, and smaller fractional values for sample dots that are far away. Of course, the coefficient function's values for the far-away dots must still be nonzero, so that those far away sample dot values still have some influence in our hopefully accurate curve-fitting calculation of the correct reconstructed path of the original signal waveform.

If the coefficient function is designed to be short, then it will die out to zero before it achieves the required wide reach, of gathering information from all 200 other sample dots. It will utterly fail to reach, and fail to gather any curve-fitting information from, the farther away sample dots. And thus its calculation of the reconstructed curve-fitted signal waveform path, between sample dots #101 and #102, will be inaccurate.

It will be distorted.

It will be distorted in the time domain.

It will thus, necessarily a priori, have distorted time domain transient response performance.

Only a very long coefficient function has the wide reach to gather the information from far away sample dots that is required to accurately calculate the correct curve-fitted path, and thus the correct time domain reconstruction of the original pre-sampled signal waveform. Accurate curve-fitting for high frequencies (with their sparse, hence often unlucky, sampling), and also for high bit resolution digital systems, requires very, very, very long coefficient functions.
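
The effect of the coefficient function's reach can be checked numerically. The following minimal sketch (our own illustration, with an assumed Hann-tapered sinc kernel, a 20 kHz sine at 44.1 kHz, and a handful of assumed reach lengths) measures the worst mid-interval reconstruction error observed over many random signal phases, for progressively longer reaches.

    import numpy as np

    def worst_midpoint_error(half_reach, freq_hz=20000.0, fs_hz=44100.0, trials=2000):
        """Worst observed error when a Hann-tapered sinc of the given reach (in sample
        intervals per side) reconstructs the mid-interval point of a near-Nyquist sine,
        over many random phases."""
        rng = np.random.default_rng(0)
        n = np.arange(-half_reach + 1, half_reach + 1)          # offsets of the flanking sample dots
        distance = 0.5 - n                                      # distance of each dot from the mid-interval point
        coeffs = np.sinc(distance) * 0.5 * (1.0 + np.cos(np.pi * distance / half_reach))
        worst = 0.0
        for _ in range(trials):
            phase = rng.uniform(0.0, 2.0 * np.pi)
            dots = np.sin(2.0 * np.pi * freq_hz * n / fs_hz + phase)
            truth = np.sin(2.0 * np.pi * freq_hz * 0.5 / fs_hz + phase)
            worst = max(worst, abs(float(np.dot(coeffs, dots)) - truth))
        return worst

    for reach in (2, 8, 32, 128, 512):
        print(f"reach {reach:>4} intervals per side:  worst mid-interval error = {worst_midpoint_error(reach):.2e}")

The trend is the point: each increase in reach lets the filter see more flanking dots, and lands the mid-interval estimate closer to the true path of the near-Nyquist sine.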

In a later installment of this article (Digital Done Wrong), we'll show you a concrete example wherein a 24 bit digital system, reconstructing a simple sine wave just below Nyquist, requires the coefficient function's wide reach to gather information from sample dots that are 17 million sample intervals distant, in order to calculate a correct reconstruction within 1 LSB accuracy.

As you can see, we have just shown and proved how and why a long coefficient function, in a digital reconstruction filter, provides better time domain accuracy, whereas a too short coefficient function necessarily a priori distorts the time domain reconstructed signal waveform.

This is the way digital actually works.

This is the way digital necessarily a priori must work.

Thus, the digital engineering field's pursuit since 1984 of a short filter coefficient function is so misguided and so wrong that it is literally backwards, the very opposite of the truth about the way digital actually works. Digital engineers have espoused and pursued as their ideal a coefficient function that is ever shorter in length, erroneously thinking that they are thereby producing ever better time domain transient response from digital, when in fact they have been achieving the complete opposite, actually producing ever worse time domain reconstruction accuracy, hence ever worse time domain distortion and ever worse time domain transient response.

MQA boastfully claims to be the apotheosis of this digital engineering movement, achieving the shortest coefficient function of all, and thereby supposedly approaching the ideal of 'perfect impulse response' and hence virtually perfect time domain signal reconstruction accuracy. In point of fact it produces the complete opposite, namely the worst possible distortion of the reconstructed signal waveform in the time domain, and hence the worst possible time domain transient response.

Clearly, for all these digital engineers to have been so very wrong for 34 years, believing and practicing the very opposite of the truth about the way digital works here, they must not have a clue about how digital works in general.

MQA does stand uniquely alone from other short filter digital designs in one key way. Other short filter designs commit many, many sins, and produce many severe distortions in the time domain (which we'll discuss in detail in another installment of this article, Digital Done Wrong). But they at least do one required thing: they do bridge the time domain gap between dots #101 and #102, albeit with a very distorted, erroneous time domain reconstructed signal waveform path.

However, MQA stands uniquely alone in not even being able to do this. MQA boasts that its coefficient function is so short that it does not even span one sampling interval (in its backwards belief that shortest is best). This means that MQA cannot reconstruct a bridge of any sort between two adjacent sample dots, not even a wrong path, and not even the simplest possible bridge path of all, the straight line of age 3 connect-the-dots linear interpolation. Instead, MQA's 'reconstructed bridge' between sample dots dumps you down into the river, at nearly zero amplitude, before you even get to the next sample dot in sequence.
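
To make that geometry concrete, here is a small sketch. The triangular kernel below is a generic stand-in for 'a coefficient function narrower than one sampling interval', not MQA's actual coefficient function; the point is purely geometric: any kernel whose reach is less than half a sample interval per side contributes nothing at all at the midpoint between two dots.

    def midpoint_estimate(left_dot, right_dot, kernel_half_width):
        """Convolution estimate at the point midway between two adjacent sample dots,
        using a triangular kernel of the given half-width (in sample intervals).
        This is a generic illustration, not any specific product's coefficient function."""
        def kernel(distance):
            return max(0.0, 1.0 - abs(distance) / kernel_half_width)   # zero beyond the kernel's reach
        # Each of the two dots being bridged sits 0.5 sample intervals from the midpoint.
        return left_dot * kernel(0.5) + right_dot * kernel(0.5)

    print(midpoint_estimate(1.0, 1.0, kernel_half_width=1.0))   # reach of a full interval: the bridge holds (1.0)
    print(midpoint_estimate(1.0, 1.0, kernel_half_width=0.4))   # reach under half an interval: the bridge drops to 0.0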


Historical Background of Technical Analysis


The overall title of our serial installment article, Digital Done Wrong, is actually modest. We expose and analyze 100 blunders that today's digital engineers erroneously believe and practice. Many of these blunders are so severe that the beliefs and practices of digital engineers, and of the supporting media, are actually backwards, the very opposite of the truth about how digital actually works. The sheer number and severity of these digital engineering blunders make it clear that the digital engineering community doesn't have a clue about how digital actually works.

The scandalous irony is that you don't need to be a brilliant PhD digital engineer to know how to do digital right, to know how digital actually works, indeed how it must work, and thereby know how to avoid all those 100 severe blunders. All of this erudite knowledge is actually learned by all of us as mere children. In fact, by age 8 all of us have already mastered all the tools and techniques we need to do digital right, and to steer clear of all those 100 blunders that professional engineers make.

As one quick example, all of us, even as newborn infants, have already mastered a technique required for doing digital reconstruction correctly: collecting the needed 'unnatural non-causal pre-response' information from a future data point along our hand's travel path, so we can set the correct direction of our interpolated hand-reaching path before we even begin reaching out our newborn hand to touch our mother's face. Yet brilliant PhD digital engineers have forgotten this lesson we all learned as newborn infants, and so they are utterly unable to perform even simple linear interpolation, let alone the required correct curve-fitting reconstruction, from one digital sample dot to the next.

What is a concrete example of these 100 blunders? And what do we mean by saying that Stereophile's digital pronouncements, and also digital engineers' digital theories and consequent product designs, are so far wrong that they are backwards, the very opposite of the truth about digital? Here's just one example of the 100 severe blunders, an example that also shows us the history of the modern revisionist digital engineering movement, a movement that erroneously believes it is creating digital reconstruction filters optimized for superior time domain transient response, but which in actual reality does the complete opposite, butchering the 'reconstructed' time domain signal waveform. This modern revisionist digital engineering movement is practiced and respected worldwide, but it will be completely overthrown by the revelations of this article.

The impulse response test is a sacrosanct gold standard benchmark for science and engineering in general, including digital engineering. Its basic evaluative criterion is simple: the more closely the device under test (DUT) can replicate the input test impulse, the better the time domain transient performance of that device. The input test impulse is very brief in temporal duration, and it has no ringing and no temporal pre-response. Thus, when evaluating various candidate digital filter designs to use as a playback reconstruction filter, the filter with the best time domain transient performance would clearly seem to be the one whose impulse response (its output within the impulse response test) is temporally the briefest, with no ringing and no pre-response (seemingly making MQA the clear winner among today's state of the art filters).

In contrast, the old fashioned brickwall sinc (sinx/x) filter exhibits awful impulse response in this benchmark evaluative test; its impulse response has extremely long (indeed infinite) temporal duration, and it also commits the further heinous crimes of having violent ringing and also very strong (and long) pre-response (this pre-response obviously being unnatural and non-causal, like a bell ringing before it is struck).
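
For readers who want to see those numbers for themselves, here is a small sketch showing the textbook sinc kernel's symmetric pre- and post-response and its slow 1/|t| decay; the values printed are of the ideal sinc itself, not of any particular product's filter.

    import numpy as np

    # The textbook brickwall kernel: sinc(t) = sin(pi t) / (pi t), with t measured in
    # sample intervals. It is symmetric about t = 0, so it rings before the impulse as
    # well as after it, and its envelope decays only as 1/|t|, never quite dying out.
    for t in (-1000.5, -100.5, -10.5, -1.5, 0.0, 1.5, 10.5, 100.5, 1000.5):
        print(f"t = {t:>8.1f} sample intervals   sinc(t) = {np.sinc(t):+.6f}")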

Thus, it is with good reason that advanced and brilliant digital engineers, inspired by Lagadec's seminal 1984 paper on the time domain transient response errors committed by digital filters, started a revisionist revolution, with the noble intent of improving the time domain transient response of digital filters in general, and of digital playback reconstruction filters in particular. This revisionist digital engineering movement has steadily grown during the 34 years since 1984, and there are now many, many digital products on the market featuring a variety of playback reconstruction filter designs, all sharing the common mantra of rejecting the sinc brickwall filter (because of its awful impulse response) and designing a revisionist alternative reconstruction filter with a much shorter duration impulse response, hence the expectation of much better time domain transient performance from the reconstruction filter and process.

