Daniel sent me the following question yesterday via the Ask Joe form. If you have a question, please ask! I’ll do my best to answer it either here on the blog or over in the forums.

Here’s Daniel’s question (which is a great one, by the way):

Okay, so in researching proper recording techniques I stumble upon the issue of “phase” quite often. Most of what I’ve read deals with the electrical properties of phase/polarity, and most of it makes perfect sense. However, I have failed to see how this affects actual recordings. Enlighten me here, Joe. What practical knowledge do I need to know about phase when it comes to the recording process?

– Daniel

The Sweetwater website has a great glossary of audio terms. Here’s their definition/explanation of phase:

Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase,” or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation occurs, and at which frequencies, depends on the waveforms involved and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).

Okay. That’s great and all, but how does that apply to you and me when we’re trying to make a record? This is the part where I could go on a rant about polarity vs. phase and how so many manufacturers call the polarity button on a piece of gear a “phase” button. (Hint: Flipping the polarity of a waveform simply makes the positive parts negative and the negative parts positive. It makes a mirror image of the waveform. Phase has to do with timing issues.)
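The distinction is easy to demonstrate numerically. Here’s a minimal Python sketch (pure standard library; the 1,000-sample cycle length is an arbitrary choice) showing that a polarity flip equals a 180-degree shift only for a pure sine, while a waveform with harmonics responds differently to the two operations:

```python
import math

N = 1000  # samples in one cycle of the fundamental (arbitrary choice)

# For a pure sine, flipping polarity IS the same as a 180-degree phase shift.
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]
flipped = [-s for s in sine]                                           # polarity flip
shifted = [math.sin(2 * math.pi * n / N + math.pi) for n in range(N)]  # 180 degrees
assert all(abs(f - s) < 1e-9 for f, s in zip(flipped, shifted))

# Add a harmonic and the two operations no longer match: a half-period
# delay moves the 2nd harmonic a FULL period (back where it started),
# while a polarity flip mirrors everything.
wave = [math.sin(2 * math.pi * n / N) + 0.5 * math.sin(4 * math.pi * n / N)
        for n in range(N)]
flipped = [-w for w in wave]
delayed = wave[N // 2:] + wave[:N // 2]  # rotate by half the fundamental period
assert max(abs(f - d) for f, d in zip(flipped, delayed)) > 0.5
```

In other words, polarity inversion is a mirror image; a time shift is something else entirely, and only for a pure tone do the two happen to coincide.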

For the purpose of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s pretty slow compared to light. That’s what makes it awkward at a college football game when you try to clap along with the marching band across the stadium. You’re seeing them play, but you’re hearing them a half-second later. Awkward indeed.
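That half-second figure is just distance divided by speed. A quick sanity check (the 550-foot distance is a made-up stadium width, not from the article):

```python
SPEED_OF_SOUND_FT_PER_S = 1100  # rough figure used in this article

def sound_delay_s(distance_ft):
    """Seconds it takes sound to cover distance_ft."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S

# A band 550 feet away is heard half a second after you see it play.
assert abs(sound_delay_s(550) - 0.5) < 1e-9
```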

Since sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin.

Try something for me. Open up Pro Tools and drag in an audio file. Now duplicate that file to another track. Move the second file slightly to the right. I’m talking less than 10 ms. Now listen. What do you hear? Pretty gross, right? All those waveforms are no longer lined up; they’re no longer “in phase.” As a result, certain frequencies are being canceled out. They’re being cut. The resulting audio is thin and hazy.
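You can simulate the same experiment outside of Pro Tools. This sketch (plain Python; the 48 kHz sample rate and 1 ms delay are assumptions for the demo) sums a signal with a delayed copy of itself: a tone at the resulting comb filter’s first notch (500 Hz for a 1 ms delay) cancels almost completely, while a tone at 1 kHz, where the copies line back up, nearly doubles:

```python
import math

SR = 48_000         # assumed sample rate (Hz)
delay = SR // 1000  # 1 ms = 48 samples

def tone(freq_hz, n):
    return [math.sin(2 * math.pi * freq_hz * k / SR) for k in range(n)]

def sum_with_delayed_copy(sig, d):
    """Mix a signal with a copy of itself delayed by d samples."""
    return [sig[k] + (sig[k - d] if k >= d else 0.0) for k in range(len(sig))]

# 500 Hz is the first notch: the delayed copy is 180 degrees out of phase.
notch = sum_with_delayed_copy(tone(500, SR), delay)
assert max(abs(x) for x in notch[delay:]) < 1e-6  # near-total cancellation

# 1,000 Hz is delayed by exactly one full cycle, so the copies reinforce.
boosted = sum_with_delayed_copy(tone(1000, SR), delay)
assert max(abs(x) for x in boosted[delay:]) > 1.99  # roughly doubles
```

Every frequency between those extremes gets some in-between amount of boost or cut, which is exactly the “thin and hazy” comb-filtered sound you hear in the DAW.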

This can be a great effect for guitar. That’s why guitarists use phaser and flanger pedals. A flanger duplicates the guitar signal and delays the duplicate ever so slightly; a phaser shifts different frequencies by different amounts using all-pass filters. This is great for guitar, but not for anything else. 🙂

The biggest practical application is when you’re using multiple microphones on a single source. Let’s look at using two microphones to record acoustic guitar. If you put one microphone 3 inches away from the guitar, and the other microphone 5 inches away, the sound will reach each microphone at different moments in time. When you listen to these two microphones blended together, there’s a chance that it will sound “hollow” and “thin.”
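You can estimate how bad that gets with a little arithmetic. Using the article’s rough 1,100 ft/s figure, a 2-inch path difference works out to about 0.15 ms of extra travel time, which puts the deepest cancellation around 3.3 kHz, right in the acoustic guitar’s presence range (a sketch under idealized assumptions; real mics and rooms are messier):

```python
SPEED_IN_PER_S = 1100 * 12  # 1,100 ft/s converted to inches per second

def extra_delay_s(near_in, far_in):
    """Extra travel time to the farther mic, in seconds."""
    return (far_in - near_in) / SPEED_IN_PER_S

d = extra_delay_s(3, 5)        # mics at 3 in and 5 in from the guitar
first_notch_hz = 1 / (2 * d)   # frequency of deepest cancellation

assert abs(d * 1000 - 0.1515) < 0.001  # ~0.15 ms of delay
assert abs(first_notch_hz - 3300) < 1  # deepest notch near 3.3 kHz
```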

Phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don’t.

3 Quick Tips for Dealing with Phase

1. Microphone placement – Most phase issues you’ll deal with are simply from microphone placement. Take the time to listen to the microphones blended together before you start recording. Each microphone may sound fine by itself, but phase issues happen when you combine them. The easiest way to listen for this is to listen to them together in mono.

This doesn’t mean that microphones need to be the exact same distance from the source at all times. Sometimes you want one microphone farther away than another. To determine the appropriate distance, listen to the two together until it sounds great.

2. Plug-in Latency – Inside your recording software, some plug-ins induce latency, or delay, in your audio. This can cause subtle phase problems in your mix. If you put a plug-in with 10 ms of latency on one track and not on another, the second track will be out of phase with the first. This isn’t as big an issue as some people will try to convince you it is. It’s simply something you need to keep in mind. If you used two mics on your acoustic guitar, use the same plug-ins on each track so that they remain in phase with each other.

Most DAWs today compensate for plug-in latency anyway. Pro Tools LE and M-Powered don’t, but that’s another story. (I personally don’t think it’s a big deal.)
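If you model a latency-inducing plug-in as a pure delay, the matched-plug-ins advice in tip 2 is easy to verify: the plug-in on only one of two matched tracks shifts them apart, while the same plug-in on both leaves them aligned (a toy model; the 10 ms figure, 48 kHz rate, and plug-in-as-delay assumption are all illustrative):

```python
SR = 48_000
latency = int(SR * 0.010)  # hypothetical 10 ms of plug-in latency

def plugin(track, latency_samples):
    """Model a latency-inducing plug-in as a pure delay (zero-padded front)."""
    return [0.0] * latency_samples + track[:len(track) - latency_samples]

mic_1 = [float(n % 7) for n in range(SR)]  # arbitrary stand-in signal
mic_2 = list(mic_1)                        # second mic, same source

# Plug-in on one track only: the pair is now 10 ms out of phase.
assert plugin(mic_1, latency) != mic_2
# Same plug-in (same latency) on both tracks: they stay aligned.
assert plugin(mic_1, latency) == plugin(mic_2, latency)
```

Delay compensation in a DAW does essentially the same thing automatically: it delays every other track by each plug-in’s reported latency so everything lines back up.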

3. Linear Phase Processing – This isn’t really a tip per se. I just want you to be aware of the term “linear phase.” Most plug-ins process the low frequencies and the high frequencies at different speeds. The lows may come through the plug-in a little faster than the highs, for example. This is known as phase shift, and it can theoretically affect the clarity of the audio. Plug-in manufacturers have developed “linear-phase” EQs, etc. These are designed to combat phase shift. I wouldn’t worry too much if you don’t own these. I just wanted you to know about them.
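“Linear phase” has a precise meaning: every frequency is delayed by the same amount of time. A symmetric FIR impulse response, the building block of linear-phase EQs, has exactly this property, which the toy filter below checks numerically (pure Python; the five-tap filter is an arbitrary example, not a real EQ):

```python
import cmath

h = [1.0, 2.0, 3.0, 2.0, 1.0]  # symmetric impulse response (toy FIR filter)
center = (len(h) - 1) / 2      # 2 samples: the filter's constant delay

def freq_response(taps, w):
    """Frequency response H(w) of an FIR filter at radian frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(taps))

# Symmetric taps give H(w) = e^(-j*w*center) * A(w) with A(w) purely real:
# after removing a flat delay of `center` samples, no phase shift remains,
# so lows and highs pass through the filter at exactly the same speed.
for w in (0.1, 0.5, 1.0, 2.0):
    residual = freq_response(h, w) * cmath.exp(1j * w * center)
    assert abs(residual.imag) < 1e-9
```

A typical (minimum-phase) EQ doesn’t have this symmetry, so different frequencies come out at slightly different times, which is the phase shift linear-phase designs are built to avoid.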

As with just about everything, use your ears to determine what’s best. What tips do you have about phase? Let’s hear ’em!

[Photo Credit – Creativity103]

  • Rob Karlson

    Contemporary research has found sound to travel faster than light (a recent lab, I believe in Tennessee, undergrads)… light in fact is sound… our concept of light, for most, is limited to what we can see… when in fact EVERYTHING is a mind-interpreted image of sound waves! i.e. string theory

    • Rob Karlson

      Just sayin’, I wish you wouldn’t have used that cliché about the football game… I’m reading because I’m looking for audio production expertise, not something someone else told you that you think is cool!

      • Not a cliché, just an example of the concept that sound travels slowly compared to light.

  • Ferina Tjie

    thank you for the explanation Joe, it’s really helpful 🙂


  • Note that nudging often doesn’t fix phase issues on distant mics. Common problems on drum overheads (besides cancellation of certain frequencies) are image “lean,” flamming, and a general lack of cohesion with the close mics.

    The lean will still be there whether you nudge an OH left or right to be sample-accurate in terms of phase. Why? Because although the signal directly emanating from the source (the snare’s a good test) will be in phase, the reflections from other parts of the kit and especially the room will never be in phase – you just moved them out of phase, in fact. If you had a perfectly symmetrical room, kit, player (!), etc., then perhaps this technique would work. However, any difference between the entirety of what two different mics pick up will result in some degree of phase cancellation that cannot be fixed after the fact.

    • Well said! That makes sense. Although I couldn’t turn around and explain it to someone else without confusing everyone. πŸ™‚

      • Thanks Joe.

        It’s easier to think of it if you imagine a soundtrack session. They have LOTS of mics in relative proximity to one another, yet there isn’t a significant amount of phase cancellation when they’re summed. How is that possible? Wouldn’t you have even more bleed than a drum kit?

        With no forethought? Yes. Plenty more. However, it’s a combination of techniques that provides the solution.

        The first is baffling. The brass and percussion are first to be separated because of their relative volume (loud!) and there may be some other barriers between other sections. This can be overdone because although the idea is limited separation, it’s not sterility.

        The second is miking. Most of these sessions start with the distant mics as the foundation and fill in spots with other mics placed on specific sections. Decca Tree (equilateral triangle of mics used to capture the performers as a whole) provides the basis of what we hear, but that can become a little far away-sounding, so the recordist mics sections. If you had all section mics at equal volume plus the Decca, things would get somewhat phasey… But through careful blending of volume and tone, that’s not the case.

        Going even further, the Decca mics aren’t recording the sources exactly the same as the close mics. They’re recording more reflections, more of the room. So, if you have a recording of an instrument’s interaction with the space and another of the instrument rather close up (say 3–4 feet or less), they’re not going to cancel as much, because the waveforms won’t be exactly the same. Ever notice this with room mics on the drums? They usually don’t cancel all that much…

  • Matt

    http://www.dontcrack.com/freeware/downloads.php/id/4106/software/PhaseTone/

    PhaseTone is a great free plugin to help with phase correction and manipulation.

  • Joe

    One great trick I learned is to zoom in on two audio files (from the same source) and nudge the tracks to get the waveforms to match as closely as possible. This will fix the phase most of the time.
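That nudge-to-align trick can be automated with a brute-force cross-correlation: slide one track against the other and keep the shift that matches best (a rough sketch, not production code; real alignment tools do this far more efficiently):

```python
import math

def best_lag(a, b, max_lag):
    """How many samples later b arrives than a (positive = b is delayed)."""
    def score(lag):
        # Cross-correlation of a with b shifted by `lag` samples.
        return sum(a[n] * b[n + lag] for n in range(len(a))
                   if 0 <= n + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=score)

source = [math.sin(0.05 * n) for n in range(2000)]  # stand-in recording
far_mic = [0.0] * 30 + source[:-30]                 # arrives 30 samples late

# Nudge far_mic 30 samples earlier and the waveforms line up.
assert best_lag(source, far_mic, 100) == 30
```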

    Hey Joe, do you know if the Digi 003 or PT has a mono button feature, so you can listen to the sound in mono? Or should I just leave it panned up the middle?

    • Hey Joe. Yep, the 003 does have a Mono button. It’s very handy, AND it affects both the monitor outs AND the headphone outs. The Mackie Big Knob that I used to own does not.

  • Daniel Lewis

    Also, you would flip the polarity on one of the Mics if you were recording the front and the back of a guitar cab at the same time correct?

  • I’ve had a phase thing happen once, had no idea that was what was happening, until I tried to record again only with one mic. This is really great info Joe, thanks for posting!
    Quick observation & question – my Digimax has a “Ø” marked “Phase Reverse” — I assume this is another one of those gear things where they’re really reversing polarity. Would I basically use this button for things like a Fig 8 mic arrangement, or other situations where the mics need to be really close to one another?

    • Sorry – terminology mess up – I meant to say, would I use that button for “side-by-side mic’ing of fig-8 mics, etc.”?
      I’m still such a noob. πŸ˜‰

      • I’m not exactly sure what you’re asking, but I would say use the polarity button when you’re having phase issues. Perhaps flipping the polarity of one of the mics will help clear up any problems.

        The most common use of a polarity switch is when you’re miking the top and bottom of a snare drum. The two mics will be out of polarity with each other, because one will be picking up the “pull” of the sound, and one will be picking up the “push” of the sound.

        • Thx. I was mainly asking because I saw this technique where one vocal mic is upside down, directly above another mic, and the artist would play/sing into that mic arrangement. But the technique noted that you must reverse the polarity of one of the mics. So I was wondering if that’s what this switch on my Preamp was for. Sounds like this is the same thing as a mic’ing a snare drum on top and bottom. Good info, I’m learning a lot, sorry about my terminology confusion!

          • You’re referring to a mid-side (M/S) configuration, where a Figure-8 mic is paired with an omni or cardioid mic. You use two copies of the Figure-8 mic’s signal and flip the polarity on one of them. Here’s a link to a definition of it.