I’ve just discovered this video and subsequent discussion, and
I’d like to add a comment from a math-based perspective. The bass note under discussion sounds like an E (played at the 2nd fret of the D string) with a frequency of about 82.4 Hz. This is confirmed by counting the roughly 58 complete cycles within each quarter note (noting your quantize grid set to 1/16 note) as they appear on your screen when you’re zoomed in. The time value of each quarter note is 60 seconds divided by your tempo of 85 bpm, or about 0.7059 seconds. Dividing the 58 cycles per quarter note by 0.7059 seconds yields a frequency of about 82.2 Hz, close enough to confirm that your note is indeed an E. This note has a wavelength of about 13.7 feet in dry air at room temperature, at an altitude not too far above sea level.
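The arithmetic above can be double-checked with a quick sketch (Python here just for illustration; the 343 m/s speed of sound is an assumed value for dry air near room temperature):

```python
# Verify the pitch estimate from cycles counted per quarter note vs. tempo.
TEMPO_BPM = 85
CYCLES_PER_QUARTER = 58
SPEED_OF_SOUND_M_S = 343.0   # assumed: dry air, ~room temperature, near sea level
M_TO_FT = 3.28084

quarter_note_s = 60.0 / TEMPO_BPM               # ~0.7059 s per quarter note
freq_hz = CYCLES_PER_QUARTER / quarter_note_s   # ~82.2 Hz, close to low E (82.41 Hz)

# Wavelength of a true low E at 82.41 Hz
wavelength_ft = (SPEED_OF_SOUND_M_S / 82.41) * M_TO_FT   # ~13.7 ft

print(f"{freq_hz:.1f} Hz, wavelength ~{wavelength_ft:.1f} ft")
```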
Comparing the two tracks, it appears that your DI track (lower trace) is almost half a wavelength earlier than the miked amp track (upper trace). If this were due only to the propagation delay of sound in air, the mic would have to have been about half a wavelength from the loudspeaker, or about 6 feet 10 inches. This seems as unlikely an explanation as the suggestion of a polarity reversal, which cannot account for the time difference of about 24.27 ms between the two traces. (This is also about five times greater than might be expected from group delay alone, as has been suggested in a comment on your original video.)
At a sample rate of 44.1 kHz, this observed difference is equivalent to about 1070 samples. This immediately raises a flag, to my mind at any rate, that there may be some sort of plug-in latency involved here, since this is in the realm of look-ahead dynamics processing latency that’s typical of current plug-ins.
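The sample-count conversion is simply the measured delay multiplied by the sample rate; as a sketch (same assumed 343 m/s speed of sound, and the distance figure is just an illustrative cross-check of why pure air delay is implausible):

```python
# Convert the observed inter-track delay into samples at the session rate.
SAMPLE_RATE_HZ = 44_100
DELAY_S = 0.02427            # ~24.27 ms measured between the two traces
SPEED_OF_SOUND_M_S = 343.0   # assumed
M_TO_FT = 3.28084

delay_samples = DELAY_S * SAMPLE_RATE_HZ        # ~1070 samples
# If this full delay were air propagation alone, the implied mic distance:
distance_ft = DELAY_S * SPEED_OF_SOUND_M_S * M_TO_FT   # ~27 ft, clearly not a mic'd cab

print(f"{delay_samples:.0f} samples, ~{distance_ft:.1f} ft of air path")
```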
Is there any possibility that the amp/loudspeaker/mic track was recorded through a compressor or amp simulator plug-in?
Just a suggestion made from the observable phenomena of your video. Thanks for the tips, and keep up the good work, Joe.
Unfortunately I don’t know how the track was recorded; I just received the raw tracks for mixing. But thanks for the super in-depth insight!
Both ways have their place. I will sometimes use a phase rotator (by Audiocation), which basically delays the track by different amounts based on the chosen rotation value. The idea is not only to align the waveform with the other track, but to experiment with various settings. Occasionally a rather nice result can be achieved without the need for any EQ.
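For anyone curious about what a phase rotator does under the hood: it is typically built from all-pass filters, which leave every frequency's level untouched but delay each frequency by a different amount. A minimal first-order all-pass sketch in plain Python (the coefficient `a` stands in for a rotation control; this is a generic illustration, not Audiocation's actual algorithm or parameter):

```python
def allpass_first_order(x, a):
    """First-order all-pass: H(z) = (a + z^-1) / (1 + a*z^-1).
    Unity gain at every frequency; phase shift (delay) varies with frequency."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for x_n in x:
        y_n = a * x_n + x_prev - a * y_prev
        y.append(y_n)
        x_prev, y_prev = x_n, y_n
    return y

# Sanity check: a constant (DC) signal passes through with gain 1,
# confirming the filter changes phase, not level.
out = allpass_first_order([1.0] * 200, a=0.5)
print(round(out[-1], 6))
```

Sweeping `a` shifts the phase response, which is why experimenting with the rotation value can line up waveforms that a plain time nudge can't.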
Great video! Do not let some close-minded “musician” opinion stop you from giving us, the people who are into home recording, awesome advice on how to make our mixes sound better! Thank you for all your videos and lessons!
Nice timing, Joe. No pun intended! I’m heading to Hilltop Studio in New Albany, IN today and a friend of mine is dealing with similar issues on some upright bass tracks. At least now I have some support on what I believe to be the issue. Lastly, I count myself as one of the early members of HSC and appreciate everything I have learned. I admire your entrepreneurial spirit and passion for what you do. The music I am producing now would not be at the level it is without the products and advice I have received from HSC.
Yeah dude, great response. There’s always more than one way to do lots o’ stuff. Pick one, use it if it works for you and carry on, or find something else that does. Granted, having the theoretical chops behind you is a great asset (yeah, I did the school thing myself, back in the day), but we’re still serving the song, so what works, works. On this week’s Pensado’s Place, listen to what Manny Marroquin says when it comes to mixing: as long as it works… (something about “no rules”)
Keep up the Good Work and God bless!
Great response, man! I know it’s very easy to get angry and defensive, but you didn’t, and that’s great! “Do not repay evil with evil or insult with insult, but with blessing, because to this you were called so that you may inherit a blessing.” 1 Peter 3:9