This is a conversation with a sceptical prospective customer (or rather, not a customer after all), who is far from convinced by what I’m trying to explain.
Whenever I record the impulse response of any QUANTEC Room Simulation to run it through my convolution plug-in, its spatiality collapses altogether. To put a number on it: the 60 to 150 feet of depth of a sacred building have completely vanished into thin air. I have no idea what’s going wrong. Maybe some cleverly hidden copy protection?
It’s neither that you’re doing something wrong, nor that there’s any copy protection. It simply doesn’t work.
You can’t seriously claim that proven mathematical and physical methods like convolution and the Fourier transform suddenly fail as soon as they’re applied to your Room Simulation algorithm.
Nobody claimed that. It goes without saying that convolution works; you do hear flawless reverberation, don’t you? It’s just its spatial depth that you’re missing, which has gone flat somewhere in the course of your manipulation.
Flawless indeed. But where exactly has the spatiality fallen by the wayside?
Right from the start – when feeding the unit.
In plain language: first I feed the left input with a click and record the impulse response at both outputs. Then a click on the right, and again I record both outputs. What’s wrong with that?
So far you’ve recorded no more than two of what we might call “labyrinths”: one for 100% left, and one for 100% right. Now hurry up and proceed to sample center, slightly left, slightly right, and all the rest of it.
Wait a moment! I have two labyrinths – one for left, one for right. If I fed the unit a center signal, i.e. mono into both labyrinths at once, the output would always deliver the sum of both labyrinths. In general, this should hold for any panpot setting, right?
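To make the customer’s superposition argument concrete, here is a minimal numerical sketch. Everything in it is a stand-in assumption – four random impulse responses representing a hypothetical linear, time-invariant (LTI) stereo processor, not QUANTEC’s actual algorithm. For such a system, the 2×2 matrix of recorded IRs does predict the output for any panpot setting:

```python
import numpy as np

# Hypothetical stand-in "labyrinths": the four IRs of an imaginary
# LTI stereo reverb (left-in->left-out, left-in->right-out, etc.).
rng = np.random.default_rng(0)
h_LL, h_LR, h_RL, h_RR = (rng.standard_normal(256) for _ in range(4))

def process(in_L, in_R):
    """Convolution clone: run a stereo signal through the 2x2 IR matrix."""
    out_L = np.convolve(in_L, h_LL) + np.convolve(in_R, h_RL)
    out_R = np.convolve(in_L, h_LR) + np.convolve(in_R, h_RR)
    return out_L, out_R

# An arbitrary panpot position: 70% left, 30% right.
x = rng.standard_normal(1000)
g_L, g_R = 0.7, 0.3
zeros = np.zeros_like(x)

# Direct processing vs. the weighted sum of the two "labyrinth" responses.
direct = process(g_L * x, g_R * x)
left_only = process(x, zeros)
right_only = process(zeros, x)
summed = tuple(g_L * a + g_R * b for a, b in zip(left_only, right_only))

print(np.allclose(direct[0], summed[0]) and np.allclose(direct[1], summed[1]))
```

For a truly LTI system this prints `True`: superposition holds, which is exactly why the customer expects the two recorded labyrinths to suffice.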
In the context of Room Simulation, we don’t deal with two delay-line-based labyrinths, but with hundreds of thousands of resonators distributed throughout the room. That stunning transparency results from the perfect coordination of all those resonators – and only from that.
Feeding the unit with a sine-wave test signal in very fine frequency steps (<<1 Hz) would stimulate an unspecified subset of those resonators, each to a greater or lesser extent. Depending on the phase and amplitude conditions at the two inputs, quite a few resonators may not respond at all. Moreover, one cannot predict whether a specific resonator will respond to the left channel, to the right channel, or only to a specific phase or amplitude relationship between the two input channels. In other words: with room simulation, the crosstalk between the two labyrinths is every bit as jagged and bumpy as the amplitude and phase behavior of a single labyrinth. The operative point here is that those hundreds of thousands of resonators jump wildly with even the slightest frequency drift, while your two labyrinths stubbornly deliver their vector sum regardless of frequency – definitely a bit of a yawn.
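The “jumps wildly with the slightest frequency drift” point can be illustrated with a toy example. The resonator below is entirely hypothetical (a single high-Q second-order band-pass magnitude response, not anything from the Yardstick); it merely shows how a sub-Hz probe step, as described above, can swing a sharp resonance from near-silence to full response:

```python
import numpy as np

# Hypothetical sharp resonator: center 1 kHz, very high Q.
f0, Q = 1000.0, 5000.0

def resonator_gain(f):
    """Magnitude of a 2nd-order band-pass resonance probed at frequency f."""
    return 1.0 / np.sqrt(1.0 + Q**2 * (f / f0 - f0 / f) ** 2)

# Probe in 0.5 Hz steps around the resonance, as in the sine-wave test.
for f in np.arange(999.0, 1001.1, 0.5):
    print(f"{f:7.1f} Hz -> gain {resonator_gain(f):.4f}")
```

Within a single hertz of drift the gain moves between roughly 0.1 and 1.0 – a hint of why a dense population of such resonators reacts so violently to tiny frequency changes.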
I must admit that my approach would indeed force a majority of those resonators into lockstep. This may paralyze time-of-arrival stereophony, but what puzzles me is that intensity stereophony collapses likewise. Did I overlook another important detail?
Due to the complex crosstalk within a room, one may realistically imagine a resonator at a specific room position that responds to either the left or the right channel, but not to both. With a mono signal, genuine Room Simulation may deliver a resonance gap here, while your convolution clone still delivers the sum, e.g. a peak – as it does for every other resonator. Be aware that you haven’t captured such singularities while taking your fingerprints. Moreover, just 1 Hz higher, both approaches might match again, and 2 Hz higher, some completely unexpected behavior could occur. In short: in sampling the two room fingerprints, you’ve completely disregarded the “crosstalk domain”.
So, in the end, it looks to me as if there’s no feasible way to counterfeit your room model by means of a convolution plug-in?
Sure enough, there is one dedicated configuration where a convolution clone would be 100% identical.
Would you mind telling me?
The idea is simply to bypass the uncapturable “crosstalk domain”: always feed your Yardstick with a mono signal. Feed both the left and the right input with the click, record both outputs’ IRs, and then off to convolution!
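As a sketch of why this mono-only configuration is exact, here is the same toy LTI stand-in as before (random IRs, not QUANTEC’s algorithm): clicking both inputs at once captures the two summed output IRs, and for any mono source the convolution clone then matches the unit sample for sample:

```python
import numpy as np

# Toy stand-in for the unit: a 2-in/2-out LTI processor with random IRs.
rng = np.random.default_rng(1)
h_LL, h_LR, h_RL, h_RR = (rng.standard_normal(256) for _ in range(4))

def unit(in_L, in_R):
    """The 'Yardstick' stand-in: 2x2 IR matrix applied to a stereo input."""
    return (np.convolve(in_L, h_LL) + np.convolve(in_R, h_RL),
            np.convolve(in_L, h_LR) + np.convolve(in_R, h_RR))

# Capture step: ONE click on BOTH inputs simultaneously, record both outputs.
click = np.zeros(256)
click[0] = 1.0
h_mono_L, h_mono_R = unit(click, click)

# Clone step: convolve any mono source with the two captured IRs.
src = rng.standard_normal(1000)
clone_L = np.convolve(src, h_mono_L)
clone_R = np.convolve(src, h_mono_R)

# Reference: the unit itself, fed the same mono source on both inputs.
direct_L, direct_R = unit(src, src)
print(np.allclose(clone_L[:len(direct_L)], direct_L))
```

For a mono source the clone and the unit agree exactly, because the phase/amplitude relationship between the channels – the “crosstalk domain” – is pinned to a single point.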
And with this trick, the Yardstick and its convolved IR clone really sound exactly the same?
Absolutely – both are as flat as pancakes now.
- Timothy K Hamilton (header image)