The Bigger Picture

So far we’ve looked at why you shouldn’t EQ without listening and why you should limit how much you EQ in solo. But what about the bigger picture?

If you’re being a good little mix engineer, you’re making your EQ changes while listening to the entire mix. But how do you know whether to cut one instrument or another? Do you think about how making an EQ change here will affect the mix over there?

Let’s say you’ve got some muddy buildup at 200 Hz. Do you cut 200 Hz in the bass? The vocal? The acoustic guitar? The piano? The answer, of course, is that it depends. If there’s enough buildup, you may need to cut more than one track, maybe even all of them.

Mute Away

One thing I’ll do when trying to make this decision is mute each of the tracks in question, then slowly add each one back into the mix by un-muting it. Try adding them back in a different order. Which track (or tracks) really emphasizes that 200 Hz muddiness? That’s probably the one you need to cut.
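If you want to double-check what your ears are telling you, you can also measure how much of each track’s energy sits in that low-mid region. Here’s a minimal Python sketch, assuming you’ve bounced each track to its own WAV file; the file names and the 150–250 Hz band are just placeholders for whatever range you’re chasing, not part of any official workflow:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def low_mid_emphasis_db(path, lo=150.0, hi=250.0):
    """How much of a track's energy sits in a band, relative to its total energy (dB)."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)  # fold stereo to mono
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, x)
    band_rms = np.sqrt(np.mean(band ** 2))
    total_rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    return 20 * np.log10(band_rms / total_rms + 1e-12)

# Hypothetical bounces -- swap in your own exports
for name in ["bass.wav", "acoustic.wav", "piano.wav", "vocal.wav"]:
    print(f"{name}: {low_mid_emphasis_db(name):+.1f} dB of 150-250 Hz emphasis")
```

The track whose number comes back closest to 0 dB is carrying the most of its weight in that band, which will usually line up with whatever the mute test told you.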

Playing Nicely = Compromise

Most problems you face while mixing aren’t quite so cut and dried. Often it’s simply a matter of not being able to hear all the instruments at once. You’ve got the levels set properly, but you still can’t hear the piano over the electric guitar. Neither instrument sounds bad, but you can’t hear either one as clearly as you’d like.

It’s time to start compromising. First, ask which instrument needs to be heard. If it’s the electric guitar, then determine which frequency range is the “key” range you need to hear. If that’s 2-3 kHz, then consider cutting the piano at 2-3 kHz. What you’ll find is that the cut on the piano gives the electric guitar enough room to sit in the 2-3 kHz range without standing out or getting too harsh.

Most Popular Example

The most common place I go through this little compromise routine is between the kick drum and the bass. Each one sounds huge and massive on its own, but when I play them together it all gets muddy and indistinct. I usually home in on the kick drum first. If it’s really kicking at 80 Hz, then I’ll cut 80 Hz out of the bass to try to balance the two.
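If you’re curious what that kind of move looks like under the hood, here’s a rough sketch of a standard peaking (bell) EQ, using the well-known RBJ audio-EQ-cookbook biquad, dipping a bass track by 3 dB at 80 Hz. The file names, the 3 dB depth, and the Q of 1.0 are illustrative assumptions rather than a recipe; in practice you’d reach for the EQ in your DAW and tune it by ear:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(x, fs, f0=80.0, gain_db=-3.0, q=1.0):
    """Peaking (bell) EQ from the RBJ audio-EQ-cookbook, applied as a single biquad."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp])
    a = np.array([1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp])
    return lfilter(b / a[0], a / a[0], x)

# Hypothetical bass bounce -- cut 3 dB at 80 Hz to leave room for the kick
fs, bass = wavfile.read("bass.wav")
bass = bass.astype(np.float64)
if bass.ndim > 1:
    bass = bass.mean(axis=1)      # fold stereo to mono for simplicity
peak = np.max(np.abs(bass))
if peak > 0:
    bass /= peak                  # normalize to full scale before filtering
bass_eq = peaking_eq(bass, fs, f0=80.0, gain_db=-3.0, q=1.0)
wavfile.write("bass_80hz_cut.wav", fs, bass_eq.astype(np.float32))
```

Whether you cut the bass or the kick is the same judgment call described above: figure out which one owns that range, and carve a little out of the other.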

It sounds simple, but it takes a lot of practice, trial and error, and a LOT of listening. Over time, though, you’ll learn to love the synergistic relationship the tracks in your mix develop. No one track is the star of the show, but each track has a specific role to play in the mix. It’s your job to make sure every track plays its role…and plays it well.

Want more EQ training, including real-world examples? Click here.


  • Unless you can talk them into letting you record some “full” parts over the tracks they sent you, I’m not sure there’s anything you can do. Ugh…no fun, man.

    • letzter geist

      there are ways, of course, to make a dull mix sound “bigger” and “fuller” by careful panning and use of reverb, delay etc. maybe recording small parts in the lower and higher register (plucked bass, bass drum, tambourine, shaker) will fill in what your ears are after. just throwing out ideas.

  • letzter geist

    i think this goes hand in hand with eq-ing in solo. it’s hard to not be biased to one instrument or another and actually give or take what needs to be given or taken. i myself am a drummer, so i often start mixing the drums first. i get this amazing, bombastic, kickin’ drumset sound, but 99% of the time i will have to make adjustments (sometimes huge adjustments) to relative levels or eq to get it to sit and feel right with the rest of the song.
    eq-ing in itself is an art form that few (probably none!) have mastered, but of course, practice makes almost perfect! 🙂

    • That’s why I’ve stopped mixing drums first. I see it as kind of a waste of time, since sometimes drastic changes are needed later. I bring up all the tracks together, then mix from there.

  • dan

    Great post, Joe! Any tips on making mixes sound like they cover a wider frequency range? I’m having a lot of trouble with one particular mix; it just sounds small, like it only takes up a small space in the frequency range. I believe I understand what the problem is, just not how to remedy it. Hopefully this makes sense.

    Thanks, dan

    • Not sure what you’re asking. You can’t MAKE the frequencies be there. If they’re not present in the recording, they’re not going to be there in the mix. You need to record the sounds how you want them to sound in the mix.