Here’s Daniel’s question (which is a great one, by the way):
Okay, so in researching proper recording techniques I stumble upon the issue of “Phase” quite often. Most of what I’ve read deals with the electrical properties of phase / polarity. Most of which makes perfect sense; however, I have failed to see how this affects actual recordings. Enlighten me here, Joe. What practical knowledge do I need to know about phase when it comes to the recording process?
The Sweetwater website has a great glossary of audio terms. Here’s their definition/explanation of phase:
Audio waveforms are cyclical; that is, they proceed through regular cycles or repetitions. Phase is defined as how far along its cycle a given waveform is. The measurement of phase is given in degrees, with 360 degrees being one complete cycle. One concern with phase becomes apparent when mixing together two waveforms. If these waveforms are “out of phase”, or delayed with respect to one another, there will be some cancellation in the resulting audio. This often produces what is described as a “hollow” sound. How much cancellation, and which frequencies it occurs at, depends on the waveforms involved and how far out of phase they are (two identical waveforms, 180 degrees out of phase, will cancel completely).
Okay. That’s great and all, but how does that apply to you and me when we’re trying to make a record? This is the part where I could go on a rant about polarity vs. phase and how so many manufacturers call the polarity button on a piece of gear a “phase” button. (Hint: Flipping the polarity of a waveform simply makes the positive parts negative and the negative parts positive. It makes a mirror image of the waveform. Phase has to do with timing issues.)
For the purpose of this article, we’re only dealing with phase. Here’s the deal: sound travels at roughly 1,100 feet per second. That’s pretty slow compared to light. That’s what makes it awkward at a college football game when you try to clap along with the marching band across the stadium. You’re seeing them play, but you’re hearing them a quarter-second or more later. Awkward indeed.
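If you want to put numbers on that, here’s a quick back-of-the-envelope sketch in Python using the ~1,100 ft/s figure above (the 300-foot stadium distance is just an assumed example):

```python
# Rough delay (in milliseconds) for sound to travel a given distance,
# using the article's figure of roughly 1,100 feet per second.
SPEED_OF_SOUND_FT_PER_S = 1100.0

def delay_ms(distance_ft):
    """Time in milliseconds for sound to cover distance_ft feet."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

# Across a hypothetical 300-foot stadium, the band's sound arrives late:
print(round(delay_ms(300), 1))  # 272.7 -- over a quarter of a second
```

That same arithmetic is why even a couple of inches of microphone spacing matters: small distances become small (but audible) delays.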
Since sound travels so slowly, you have to pay careful attention when recording. Why? Because if two signals are out of phase with each other, your recordings will sound thin.
Try something for me. Open up Pro Tools and drag in an audio file. Now duplicate that file to another track. Move the second file slightly to the right. I’m talking less than 10 ms. Now listen. What do you hear? Pretty gross, right? All those waveforms are no longer lined up; they’re no longer “in phase.” As a result, certain frequencies are being canceled out. They’re being cut. The resulting audio is thin and hazy.
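You can see the same cancellation happen with pure math. This little sketch (plain Python, no audio libraries) mixes a sine wave with a delayed copy of itself: delay it by half its period and the two cancel completely; delay it by a full period and they reinforce instead.

```python
import math

def mixed_peak(freq_hz, delay_s, sample_rate=48000, n_samples=4800):
    """Peak level of a sine wave summed with a delayed copy of itself."""
    peak = 0.0
    for i in range(n_samples):
        t = i / sample_rate
        original = math.sin(2 * math.pi * freq_hz * t)
        delayed = math.sin(2 * math.pi * freq_hz * (t - delay_s))
        peak = max(peak, abs(original + delayed))
    return peak

# A 1 kHz tone has a 1 ms period.
# Delay the copy by half a period (0.5 ms): total cancellation.
print(mixed_peak(1000, 0.0005) < 1e-6)   # True -- silence
# Delay it by a full period (1 ms): the copies reinforce, doubling the level.
print(mixed_peak(1000, 0.001) > 1.99)    # True -- twice as loud
```

Real audio is a pile of frequencies, not one sine wave, so a single delay cancels some frequencies and boosts others at once. That’s the thin, hazy “comb filter” sound.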
This can be a great effect for guitar. That’s why guitarists use a phase pedal. All that pedal is doing is duplicating the guitar signal, and delaying the duplicate signal ever so slightly. This is great for guitar, but not for anything else. 🙂
The biggest practical application is when you’re using multiple microphones on a single source. Let’s look at using two microphones to record acoustic guitar. If you put one microphone 3 inches away from the guitar, and the other microphone 5 inches away, the sound will reach each microphone at different moments in time. When you listen to these two microphones blended together, there’s a chance that it will sound “hollow” and “thin.”
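Just for fun, you can estimate where that hollowness lands. A sketch under the same ~1,100 ft/s assumption: the 2-inch path difference between the two mics becomes a delay, and the lowest fully-canceled frequency sits where that delay equals half a cycle.

```python
# ~1,100 ft/s converted to inches per second.
SPEED_IN_PER_S = 1100.0 * 12  # ~13,200 in/s

def first_null_hz(path_diff_in):
    """Lowest frequency fully canceled when two copies of a sound arrive
    with the given path-length difference (in inches). Deeper nulls repeat
    at odd multiples of this frequency -- the classic comb filter."""
    delay_s = path_diff_in / SPEED_IN_PER_S
    return 1.0 / (2.0 * delay_s)

# Mics at 3" and 5" from the guitar -> 2" path difference:
print(round(first_null_hz(5 - 3)))  # 3300 -- a notch right around 3.3 kHz
```

A notch in the low-3 kHz range sits right in the presence region of an acoustic guitar, which is why the blended pair can sound noticeably hollow even when each mic sounds fine alone.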
Phase issues are nothing more than timing issues. The sounds from multiple sources are ideally meant to reach your ear at the same time, but sometimes they don’t.
3 Quick Tips for Dealing with Phase
1. Microphone placement – Most phase issues you’ll deal with are simply from microphone placement. Take the time to listen to the microphones blended together before you start recording. Each microphone may sound fine by itself, but phase issues happen when you combine them. The easiest way to catch this is to listen to them together in mono.
This doesn’t mean that microphones need to be the exact same distance from the source at all times. Sometimes you want one microphone farther away than another. To determine the appropriate distance, listen to the two together until it sounds great.
2. Plug-in Latency – Inside your recording software, some plug-ins induce latency, or delay, in your audio. This can cause subtle phase problems in your mix. If you put a plug-in with 10 ms of latency on one track and not on another, the second track will be out of phase with the first. This isn’t as big an issue as some people will try to convince you it is. It’s simply something you need to keep in mind. If you used two mics on your acoustic guitar, then use the same plug-ins on each track, so that they remain in phase with each other.
Most DAWs today compensate for plug-in latency anyway. Pro Tools LE and M-Powered don’t, but that’s another story. (I personally don’t think it’s a big deal.)
3. Linear Phase Processing – This isn’t really a tip per se. I just want you to be aware of the term “linear phase.” Most plug-ins process the low frequencies and the high frequencies at different speeds. The lows may come through the plug-in a little faster than the highs, for example. This is known as phase shift, and it can theoretically affect the clarity of the audio. Plug-in manufacturers have developed “linear-phase” EQs, etc. These are designed to combat phase shift. I wouldn’t worry too much if you don’t own these. I just wanted you to know about them.
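Back to tip 2 for a second: what a DAW’s automatic delay compensation does under the hood can be sketched roughly like this. (This is a simplified illustration, not any DAW’s actual code; the function and variable names are made up.)

```python
# A minimal sketch of automatic delay compensation: if a plug-in reports
# N samples of latency on one track, pad every LESS-delayed track so that
# all tracks end up sharing the same worst-case delay and stay in phase.
def compensate(tracks, latencies_samples):
    """Pad each track (a list of samples) up to the worst-case latency."""
    worst = max(latencies_samples)
    return [
        [0.0] * (worst - latency) + track
        for track, latency in zip(tracks, latencies_samples)
    ]

mic_a = [0.5, 1.0, 0.5]  # through a plug-in reporting 2 samples of latency
mic_b = [0.5, 1.0, 0.5]  # no plug-in, zero latency
aligned = compensate([mic_a, mic_b], [2, 0])
# mic_b gets 2 samples of silence up front, so both tracks line up again.
```

When your DAW does this for you (as most modern ones do), mismatched plug-in latency stops being a phase problem, which is why tip 2 mostly matters on hosts without compensation.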
As with just about everything, use your ears to determine what’s best. What tips do you have about phase? Let’s hear ’em!
[Photo Credit – Creativity103]