• Peter Sorensen

    Joe Gilder wrote 2 years ago: “For this week’s video I’ll shoot a followup video showing that the polarity invert yields better results”.

    Did Joe ever make this Recording Bass video no. 2?

    • I’m not understanding your question.


      • Peter Sorensen

        Hi Joe … I was looking for your video “How to Not Destroy Your Bass Part two” but I simply could not find it.
        Now I have found the video.

  • J A

    Not for nothing, but Stephan: you and other sound engineers like yourself take the fun out of everything, and make it a dragging, negative experience. Where are your tutorials? Although I am not a sound engineer, I can say with certainty that if they in fact did exist, they would be full of things anyone else could disagree with. But at least some of us would do it respectfully.

    On a brighter note, I appreciate the time taken to discuss in detail the various aspects of mixing when recording with a mic and direct, the problems that may arise, and how to achieve better results. For beginners such as myself, it definitely helps steer my recordings in a better direction.

    Thanks Joe.

  • Jazzy Pidjay


    Maybe YOU really shouldn’t be giving advice to Joe Gilder.

    For sure, in real life, there are plenty of ways for a signal to be delayed
    enough to need such time-alignment correction,
    without even going into the digital world.

    Your explanation about wavelength is correct.
    The two inches that separate the loudspeaker from the microphone
    do indeed induce a constant time delay across all frequencies
    equally. So there’s absolutely nothing incorrect about trying
    to compensate for this delay once in the digital world.

    Now, what you don’t seem to know or understand is the fact
    that in real life, every loudspeaker exhibits some group delay.
    And for the bass frequencies, it can be pretty long!
    That’s pure physics, and it’s true for all kinds of enclosures:
    infinite baffle, vented, sealed, transmission line and so on…

    Put differently, every loudspeaker plays the lower register
    LATER than the upper frequencies. So this is not a constant delay.
    It’s a GROUP delay.

    Here is a typical graph


    So let’s take a realistic example.

    The first track is D.I.
    The second track is recorded at 10cm from the woofer of the bass cabinet.

    To simplify, let’s focus on one bass note, for example B1 at around 62Hz.
    And let’s say the loudspeaker+enclosure gives a group delay of 5ms at that particular frequency.
    If we consider the D.I. to be the time reference:

    The constant time delay is really insignificant, as you pointed out (0.10/344):
    about 0.29ms. Not really problematic.

    The group delay, on the other hand, is significant: 5ms.
    The fundamental of the bass note played by the loudspeaker at 62Hz will arrive 5ms later than the upper harmonics
    of the note. That’s equivalent to a distance of 1.72 meters. (And 5ms, in that case, is an AMAZINGLY
    GOOD group delay…)
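    This arithmetic can be checked in a few lines. A Python sketch; the 5ms group delay is the illustrative figure assumed in the example above, not a measured value:

```python
# Checking the numbers in the example above.
SPEED_OF_SOUND = 344.0   # m/s, approximate

mic_distance_m = 0.10    # mic at 10cm from the woofer
constant_delay = mic_distance_m / SPEED_OF_SOUND
print(f"constant delay: {constant_delay * 1000:.2f} ms")    # ~0.29 ms

group_delay_s = 0.005    # the 5ms group delay assumed at 62Hz
equivalent_distance = group_delay_s * SPEED_OF_SOUND
print(f"equivalent distance: {equivalent_distance:.2f} m")  # 1.72 m
```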

    So as you can see, maybe you should watch your words.

    Both solutions (delaying the signal, or flipping the polarity) can bring back some bass.

    – Flipping the polarity can help some notes and not others, because the phase
    is shifted equally across all frequencies. This alone cannot correct the group delay
    introduced by the loudspeaker, so you don’t avoid any comb filtering with this flip.

    – Delaying the DI track can help too, but like flipping the polarity, it’s not going to avoid
    comb filtering, because it’s a constant delay.

    Ultimately, you should measure the impulse response of the DI (good luck)
    and the impulse response of the miked loudspeaker. Then you will know the group delay difference
    between the two, and then, in the digital world, it would be possible with an allpass filter to alter
    the phase selectively for the low frequencies. But even then, delaying the whole signal will be
    necessary for the constant-delay part.
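    The allpass idea can be sketched with a first-order digital allpass, whose group delay is largest at low frequencies, loosely the loudspeaker-like behavior being corrected. The coefficient below is illustrative, not derived from any measurement:

```python
import numpy as np

# First-order allpass: H(z) = (z^-1 - a) / (1 - a*z^-1). Its magnitude is 1
# at every frequency, but its phase is frequency-dependent: with a = 0.9 the
# lows are delayed about 19 samples while the highs are delayed almost nothing.
a = 0.9  # illustrative coefficient

w = np.linspace(0.001, np.pi, 4096)          # frequency axis in rad/sample
z = np.exp(1j * w)
H = (1 / z - a) / (1 - a / z)

phase = np.unwrap(np.angle(H))
group_delay = -np.diff(phase) / np.diff(w)   # numerical -dphi/dw, in samples

print(f"group delay near DC:      {group_delay[0]:.1f} samples")
print(f"group delay near Nyquist: {group_delay[-1]:.2f} samples")
```

    In a real correction chain you would tune the filter to the measured group-delay difference; this only shows that an allpass can delay the lows without touching the magnitude.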

    Joe didn’t lose any credibility, and the solution in his video is no worse or better than
    your solution. Both CAN produce good results; it all depends on a lot of variables…
    NOT just wavelength and the speed of sound.

    Next time, you better show some respect…

  • jdrazin

    Who cares what the technical origins of the problem are? The problem is there, and how does it get fixed? There was a problem, and Joe showed us how to fix it. That’s all that matters. Joe, ignore that idiot. Life’s too short.

  • Lucas Iessi

    The one thing Mr. Stephan didn’t know is how the amped bass track was recorded. If the bass was recorded only with the DI first, and then later through an amp (reamped), that would add significant latency to the signal.

    According to my Logic 9, a buffer of 512 samples at 96kHz results in an output latency of 8.3 ms. And that is just the output latency, the time it will take for the DAW to send the signal. If you are reamping a track you can add to that latency value the audio driver latency and the D/A converter latency.
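    The raw buffer time itself is simple arithmetic (a sketch; a DAW-reported output latency such as the 8.3 ms figure above is typically larger than the bare buffer math because it folds in additional buffering stages):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Raw time span of one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

print(buffer_latency_ms(512, 96_000))   # ~5.33 ms per 512-sample buffer at 96kHz
print(buffer_latency_ms(512, 44_100))   # ~11.61 ms at 44.1kHz
```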

    Now, I have to thank Joe for this video. I did just that in my recent recordings, reamping a bass from the DAW after it was recorded with a DI. I realized later that the sound I was getting was not good, and immediately tried to switch the polarity on the amped bass track, without success. There was an immense amount of comb filtering, and some notes were in phase, some were out of phase. I began to think that the tracks would be useless in my mix, but after seeing Joe’s advice I gave it a try. Guess what, it worked. The bass now sounds deep, full, and no weird top end. And my analyzer now tells me that every single note is in phase in the bass tracks.

    So thank you very much Joe, and keep up the excellent work. You not only help us, you inspire us to just go out and get it done.

  • Neel

    Stephan – In conclusion, you really, really shouldn’t be commenting on other people’s posts (actually, you really, really shouldn’t be talking at all)

  • There is absolutely no way that a signal gets delayed enough in the two-inch trip from the speaker to the mic to have any phase problems with bass frequencies, whose wavelengths are several feet long. What we are seeing here is a *polarity* issue between the amp mic and the DI, and it absolutely is appropriately fixed through the use of what you inaccurately call the “phase” switch.

    * Notice how at the end you say “it’s sounding really chorusy”? It’s because after applying your delay, the two sources are comb filtering badly, in the same manner as a chorus effect. Put a spectrum analyzer in your chain and see for yourself.

    * Notice how the waveforms in the second note don’t line up nearly as well as in the first note? You were able to line things up perfectly for the first note, but since the second note is at a different frequency, a different amount of delay is necessary for it to line up.

    In conclusion, you really, really shouldn’t be giving advice to anyone.

    • Hi Stephan,

      Thanks for the comment, but the signal absolutely is delayed. You can see it right there on the waveform. Also, the tone sounds much better throughout the entire song now that I’ve time-aligned the tracks.
      How is that a bad thing?

      • The bad thing here is that you have identified the cause of the problem incorrectly, leading to your using the wrong solution to correct it. Problem solving is the number one skill an engineer must have, and if you cannot correctly identify even simple problems, larger problems will cause much worse troubles for you than they should.

        The first step in problem solving is correctly reasoning about the cause of the problem. In this case, a simple intuitive grasp of how long wavelengths are and how fast sound travels should tell you that there is no way that the time it takes for sound to travel a couple inches from a speaker to a microphone can cause enough of a delay to cancel out bass frequencies, and an intuitive grasp of the effects of bad time alignment should tell you that if your time alignment is so bad that bass frequencies are cancelling, the high frequencies would also be comb filtered to hell.

        I didn’t need to do the math to come to this conclusion, but I’ll do it for you to demonstrate: in order to cause a phase cancellation at 100 Hz, you need 5 milliseconds of delay (1/200th of a second, or the period of one half-wavelength at 100 Hz). The microphone would have to be about five and a half feet away to cause this much delay between the direct and the mic (the speed of sound is about 13.5 inches/millisecond). If your mic is a more reasonable two inches from the speaker, there will be less than a sixth of a millisecond of delay, which translates to about 6.5 *samples* of delay at 44.1 kHz. The *lowest* frequency that gets cancelled by this amount of delay is about 3.37 *kilohertz*.
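        Those figures are easy to verify in a few lines of Python (using the same rounded constants as above):

```python
# Checking the delay/cancellation arithmetic above.
SPEED_OF_SOUND_IN_PER_MS = 13.5   # inches per millisecond, approximate
SAMPLE_RATE = 44_100

# Cancelling 100 Hz takes a delay of half a period.
delay_for_100hz_ms = 0.5 * (1000 / 100)                  # 5.0 ms
mic_distance_ft = delay_for_100hz_ms * SPEED_OF_SOUND_IN_PER_MS / 12

# A mic two inches from the speaker instead:
two_inch_delay_ms = 2 / SPEED_OF_SOUND_IN_PER_MS         # ~0.15 ms
delay_samples = two_inch_delay_ms / 1000 * SAMPLE_RATE   # ~6.5 samples

# Lowest cancelled frequency: the one whose half-period equals that delay.
lowest_cancelled_hz = 1000 / (2 * two_inch_delay_ms)     # ~3.4 kHz

print(mic_distance_ft, delay_samples, lowest_cancelled_hz)
```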

        So, tl;dr: the explanation that time-alignment issues between the DI and the mic are causing cancellations in the bass frequencies can be completely ruled out, because the symptoms of time-alignment problems are completely different from what you’re experiencing.

        The other possibility is polarity – it wouldn’t be at all unusual for the bass amp or the cabinet to be wired so that positive voltage from the bass causes the cone to suck in, which would cause a negative voltage in the microphone in response. This is actually consistent with our evidence, since it appears the entire low end is being cancelled out (since the very low delay between the speaker and mic should leave the lows mostly in phase without the polarity inversion), but the high end is staying intact (the small amount of delay is just enough to put the highs out of phase, so when the polarity inverts they go back in phase again). You can confirm this using something like the Cricket polarity tester, which I carry in my engineering toolkit.

        You can also confirm all of these things for yourself by placing the same clip of pink noise onto two channels, putting a spectrum analyzer on your master (I recommend Voxengo SPAN), and looking at the phase cancellations which occur when you mess around with delay and polarity.
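        The same experiment can also be run numerically. A minimal sketch using pure-tone comb gains rather than an actual pink-noise session; the 7-sample delay (about 0.16 ms at 44.1 kHz, roughly a mic a couple of inches from the cone) is illustrative:

```python
import numpy as np

SR = 44_100
DELAY = 7   # samples of delay between the two copies (illustrative)

def comb_gain(freq_hz, polarity):
    """Gain when a signal is summed with a delayed, optionally inverted copy."""
    w = 2 * np.pi * freq_hz / SR
    return abs(1 + polarity * np.exp(-1j * w * DELAY))

# Normal polarity: the lows sum fine; the first notch is up at SR/(2*DELAY) = 3150 Hz.
print(comb_gain(62, +1), comb_gain(3150, +1))
# Inverted polarity: the notches move, and now it's the lows that cancel.
print(comb_gain(62, -1), comb_gain(3150, -1))
```

        With a tiny delay and normal polarity the bass sums almost perfectly; flip the polarity and the cancellation lands on the low end instead.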

        Oh, and the other bad thing is that you’re presenting faulty information in the form of an educational tutorial that people will trust. This is bad for the people who follow your faulty information, and bad for your reputation.

        • I went back and pulled up the mix and flipped the polarity on one of them instead of nudging it.
          I’ll admit, it does sound better. My nudged version sounds roughly the same to my ears, but the flipped polarity actually has a little more deep bass to it.
          That said, my nudged version still sounds good, too. And BOTH options sound better than the original. 🙂
          For this week’s video I’ll shoot a followup video showing that the polarity invert yields better results.
          Thanks for the comment and suggestion!

          May I offer a piece of advice back to you? A little niceness goes a long way. Perhaps next time you find fault with something someone is teaching online you can simply let them know your thoughts without telling them “you really, really shouldn’t be giving advice to anyone.” It makes it hard to take you seriously with the insults. I’ve created a full-time business and loyal following by helping people get better recordings. If I was simply misleading people for the last four years I wouldn’t still be here.
          There are too many audio engineers out there who live to insult and criticize. Let’s change that, eh?

          • The deeper bass that you might get when you flip the polarity does not necessarily mean you hit the phase sweet spot. I’ve tested this myself many times.
            Yet if it sounds good… it sounds good. Is it phase-correct? Probably not 😉

            • Good point.

              • Mario

                I can’t imagine a good engineer saying such a method is “wrong”… or “faulty” or whatever… I phase-align by hand; it’s a tedious thing that pays off a lot (metal, anyone?) but I can’t say I don’t enjoy it, so… I’ll just say MAN! this guy’s got a long “DISQUS” history and knows a lot about everything! Very amazing!

                //ontopic: Thanks for the tip Joe, as always.

          • Oh and Joe … You should ABSOLUTELY continue to advise people on recording and mixing! 🙂

      • I’m with Joe on this one. I did this myself recently… I even had two mics! (And two DI’s actually, one being a tube distortion)

        https://www.facebook.com/photo.php?fbid=10151283644389553&l=ef39e128d7 Lining up the phase is essential to get the tone right.

        I would have dragged the amp forward to the DI though… The DI is closer to the time of the player 😉 …. but that might not matter, depends.

        Snare top and bottom for instance… you flip the polarity on the bottom …. but you should also check the phase and align those transients if needed.
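        One way to check and align those transients is plain cross-correlation. A minimal sketch on toy signals (not a claim about how any particular DAW’s alignment tool works):

```python
import numpy as np

def estimate_delay_samples(a, b):
    """How many samples track `a` lags track `b` (negative means `a` leads)."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Toy example: pretend the bottom-snare mic is the top mic delayed 12 samples.
rng = np.random.default_rng(1)
top = rng.standard_normal(1000)
bottom = np.concatenate([np.zeros(12), top])[:1000]

lag = estimate_delay_samples(bottom, top)
print(lag)  # → 12
```

        Once you know the lag, nudging one track by that many samples lines the transients up; polarity still has to be checked separately.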

    • And you’re absolutely right that polarity and phase are two different things, but if you look closely at the waveform, you’ll see that flipping the polarity will give you roughly the same problem. The waveforms aren’t exactly opposite one another. One is slightly delayed, which is (by definition) a phase problem.
      Hope that helps!

    • dierock enroll

      Stephan, technically you might be right, but your people skills are wrong. The less-than-friendly attitude you display, which I have seen repeated in some audio engineers I’ve come across, is what drove me (a musician) to learn how to make better audio on my own.

      When I wanted to learn, Joe was there to teach, and so were his colleagues, who I bet have a lot of work because they know how to treat people in need of their services. In the course of a few years, my audio production skills have increased, mainly thanks to Mr. Joe Gilder, who certainly knows how to explain things to laymen such as myself and many others.

      I strongly suggest you revise your conclusion, and your ears as well: Joe’s solution sounds better.

    • Pete Woj

      HAHA. You carry around an “ENGINEERING TOOLKIT?!”

      Stephan, people like you, with your air of superiority, are not wanted on this website. I have no problem with any person wanting to freely chime into a chat thread or discussion, but the way you seem to berate Joe and/or others is nauseating. BASICALLY, STOP TALKING TO PEOPLE LIKE YOU’RE BETTER THAN THEM. This is not the place for an individual to speak to others in a rude, arrogant, or self-righteous manner as you have done. IF YOU DON’T HAVE ANYTHING NICE OR POSITIVE TO SAY, JUST DON’T SAY IT.

      I don’t care if you think Joe is wrong. I don’t care if you think Joe is right. But either way, show some RESPECT to a guy who’s taking time out of his day to help other people make the best home recordings they possibly can. Gearslutz, TMZ, and the Dungeons and Dragons Forum are OVER THERE.

      I don’t care if you were Michael Brauer… You’ve gotta GIVE RESPECT TO GET IT.

      Much props to you Joe, and Sigurdor for eloquently talking to/handling this Jabroni.

      • JMCD

        One of Stephan’s comments alludes to the fact that it could
        not be a time issue; he states, “a simple intuitive grasp of how long wavelengths are and how fast sound travels should tell you that there is no way that the time it takes for sound to travel a couple inches from a speaker to a microphone can cause enough of a delay”. However, consider that the
        direct signal is purely electronic and travels at roughly 186,000 miles per second, while the sound covering those 2 inches travels at approximately 760 miles an hour, vastly slower, and therefore would present a time difference. Also consider
        the fact that we are not recording the wavelength of the signal; we are recording the bass signal in cycles per second (i.e. frequency) in the DAW, which is purely a time-based signal. If we were recording wavelengths, we would be using antennas, not microphones.

        • I wish we used antennas instead of microphones. 🙂