Mixing: What Is Frequency Masking & How To Reduce It


If you've ever struggled to balance two (or more) similar-sounding tracks together in your mixes, chances are you've had difficulty dealing with frequency masking. If your mixes aren't as clean, clear and punchy as you'd like them to be, you could have an issue with frequency masking. Let's discuss what this is and what we can do about it.

What is frequency masking, and how can I reduce it? Frequency masking happens when multiple sounds compete for certain frequency bands and become ill-defined within those bands. Strategies to combat frequency masking include balancing with faders and pan pots, EQ, sidechain compression and arrangement changes.

In this article, we'll take a deeper look at the phenomenon of masking and consider the top strategies for reducing it so that we can achieve more clarity and power in our mixes.


What Is Frequency Masking?

Frequency masking is a psychoacoustic phenomenon in which two or more sounds with similar frequency content obscure each other when they occur simultaneously.

The similar sound sources effectively “mask” each other, and our ears have difficulty perceiving them as distinct sound sources.

The idea isn't difficult to conceptualize, but it's a bit tricky to explain technically. Let's begin our description of frequency masking with a few examples. Bear with me here.

Imagine walking in the woods in autumn and hearing the leaves crunch beneath your feet. Now imagine walking in the woods in autumn while it's raining. The sound of the rain hitting the trees and the ground would make each step sound less pronounced.

As another experiential example, imagine trying to fall asleep with a clock ticking away. Now imagine turning on a fan. The sound of the fan would mask the sound of the clock, helping you fall asleep.

The same is true of similar instruments. If there's a significant overlap in frequency content, the tracks within our mixes are liable to mask each other.

In most cases, the louder sound will mask the quieter one, rendering it almost imperceptible or, at the very least, unclear.

Looking at this phenomenon from a more technical angle, I can explain frequency masking in a roundabout way by likening it to transient information, phase cancellation and distortion.

The transients of a signal are the initial attacks of the represented sound source. They carry the peak amplitude and much of the harmonic content, and they contribute heavily to the tone and timbre of an instrument. Each sound has a unique transient profile, with its own harmonics and amplitude envelopes.

Focusing on the harmonic content, if there are two or more tracks sounding at the same time with the same harmonic content, we'll get significant constructive buildup in those frequencies. However, the resulting mixing of harmonic profiles and their respective individual amplitude envelopes will effectively form a new harmonic profile with different amplitude envelopes.

Therefore, what we're left with is a new sound that blends the two (or more) tracks together in terms of tone/timbre, which is a big part of frequency masking.

In addition to the constructive phase interferences caused by two or more similar audio tracks or sound sources, we also have to take into account the destructive phase interferences. These happen when any of the frequencies in one track are out of phase with the same frequencies of the other track and result in phase cancellation.

Of course, this doesn't have to mean complete cancellation, though any reduction in harmonic content brought on by phase interferences will alter the sound of a track. This is another part of frequency masking.
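To make the constructive/destructive idea concrete, here's a minimal pure-Python sketch (the 125 Hz tone and 1 kHz sample rate are arbitrary illustrative values): summing two identical in-phase sine waves doubles the peak level, while summing the same waves 180° out of phase cancels them entirely.

```python
import math

SR = 1000  # sample rate in Hz (illustrative value)

def sine(freq_hz, phase_rad, num_samples=SR):
    """One second of a sine wave at the given frequency and starting phase."""
    return [math.sin(2 * math.pi * freq_hz * n / SR + phase_rad)
            for n in range(num_samples)]

def peak(signal):
    """Peak (maximum absolute) level of a signal."""
    return max(abs(s) for s in signal)

track_a = sine(125, 0.0)
track_b = sine(125, 0.0)      # identical frequency content, in phase
track_c = sine(125, math.pi)  # same content, 180 degrees out of phase

constructive = [x + y for x, y in zip(track_a, track_b)]
destructive = [x + y for x, y in zip(track_a, track_c)]

print(round(peak(track_a), 3))       # 1.0 -- one track alone
print(round(peak(constructive), 3))  # 2.0 -- in phase: the overlap builds up
print(round(peak(destructive), 3))   # 0.0 -- out of phase: full cancellation
```

Real tracks are never perfect copies of each other, so in practice we get a messy mixture of partial buildup and partial cancellation across the overlapping frequencies, which is exactly the smearing described above.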

Finally, I want to discuss the similarities between frequency masking and distortion. Now, distortion can act to enhance the presence and clarity of a track in small doses, but when it's used more aggressively, it can smear transients and reduce the definition of a track. Keep that in mind.

Distortion (and saturation) causes deformation of a waveform, particularly at the peak amplitudes of the signal (positive and negative).

So the first thing to note is that distortion will have a greater effect on the transients since they have the highest peaks. As I mentioned, this will cause changes to the tone/timbre of the track, similar to frequency masking.

The second thing to note is that, in altering the waveform, distortion effectively creates new harmonic content in the signal (typically, but not always, related to the original harmonic information). Likening this to frequency masking, blending a second similar track (or any number of them) with an original track effectively adds harmonic content to the summed/mixed output.
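Here's a sketch of that harmonic-creation point (the 50 Hz tone, clip level and naive single-bin DFT are all illustrative choices): hard-clipping a pure sine manufactures a 3rd harmonic that wasn't in the original signal at all.

```python
import math

SR = 1000  # sample rate in Hz (illustrative)
N = 1000   # one second of samples

# A clean 50 Hz sine, then the same sine hard-clipped at +/-0.5
clean = [0.9 * math.sin(2 * math.pi * 50 * n / SR) for n in range(N)]
clipped = [max(-0.5, min(0.5, s)) for s in clean]

def magnitude_at(signal, freq_hz):
    """Magnitude of a single naive-DFT bin (fine for a sketch)."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / SR)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / SR)
             for n, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# Clipping adds a 3rd harmonic (150 Hz) that the clean sine never had.
print(round(magnitude_at(clean, 150), 3))    # 0.0 -- nothing there before
print(round(magnitude_at(clipped, 150), 3))  # clearly nonzero after clipping
```

Symmetrical hard clipping like this adds odd harmonics specifically; other distortion curves add different harmonic series, but the principle (new content appearing above the fundamental) is the same.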

And so, in this way, we can liken frequency masking to distortion, especially in terms of worsening the definition of a signal.

So I hope that gives you some ideas about the phenomenon of frequency masking. Of course, a “phenomenon” is “a fact or situation that is observed to exist or happen, especially one whose cause or explanation is in question”, so it's tricky to give a direct definition. But if we think of it in the ways described above, we can understand why it happens and what we can do about it.

Here are a few common clashing elements to be aware of:

  • Kick drum and bass.
  • Bass guitar and guitar.
  • Guitars and pianos (both acoustic and electric).
  • Vocals and everything else.
  • Background vocals and pads.

If you’re having trouble identifying frequency masking across the greater context of the mix, try starting with these elements.

It’s critical to note that frequency masking is bound to happen even in great recordings. Even if every track was recorded spectacularly and sounds amazing on its own, there is still a high chance that frequency masking will require addressing through EQ or otherwise.

When we're working on a mix, we should be aware that it's not only the multitracks that contribute to frequency masking but also our effects returns. Be sure to mix delays, reverbs and other effects appropriately.

And with that, let's look into the main strategies I have for you to combat frequency masking in your mixes.


Reducing Frequency Masking With Faders

Naturally, frequency masking will be the most problematic for low-level tracks in the mix. Combine this with the fact that the majority of mixing has to do with achieving a proper level balance between the tracks, and we can see how our faders play a role in determining the amount of frequency masking in our mix.

If frequency masking is an issue and we need one track to stand out among the competition, we can opt to bring that track up in level and/or the competing tracks down in level.

Always listen to how your fader moves alter the overall balance of the mix. A lot can be done with faders alone to reduce frequency masking, but it's not all we have to combat this clarity-killing issue.
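As a quick sketch of the math behind a fader move (the ±2 dB amounts and toy sample values are arbitrary examples): fader positions in dB map to linear gain, so nudging the masked track up and the competing track down changes their relative levels.

```python
def fader(track, gain_db):
    """Apply a fader move specified in dB (a minimal linear-gain sketch)."""
    gain = 10 ** (gain_db / 20)
    return [sample * gain for sample in track]

lead_vocal = [0.5, -0.5, 0.5]  # toy samples, not real audio
rhythm_gtr = [0.5, -0.5, 0.5]

louder = fader(lead_vocal, +2.0)   # lift the masked track slightly
quieter = fader(rhythm_gtr, -2.0)  # pull the competing track back

print(round(louder[0], 3))   # 0.629 -- +2 dB is roughly x1.26 in gain
print(round(quieter[0], 3))  # 0.397 -- -2 dB is roughly x0.79 in gain
```

A 4 dB relative shift like this is often enough to decide which of two competing tracks our ears lock onto, without touching either track's tone.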

Balancing with faders and pan pots (which is my next point) is so important that I have an ebook dedicated to the process as part of my ‘Mixing With’ series, named ‘Mixing With Faders And Pan Pots’.


Reducing Frequency Masking With Panning

Another big part of balancing a stereo mix is panning, which allows us to balance the width of the mix and the directionality of elements within the mix. Panning tracks to different locations along the stereo panorama is one way to get some separation and, therefore, reduce the amount of frequency masking in the mix.

However, we aren't really doing anything about the issue at hand, which is the significant overlap of frequency content. Rather, we're just “moving the mess around” without truly cleaning it up.

Add on the fact that the masking will still be there if and when the mix is summed to mono, and we can see that, while panning is one strategy worth considering, I'd reckon it's the least effective.
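For reference, here's a sketch of a constant-power (-3 dB centre) pan law, one common convention for how a pan pot maps position to left/right gains (the sine/cosine mapping is one choice among several):

```python
import math

def constant_power_pan(pan):
    """Map pan position (-1 = hard left, 0 = centre, +1 = hard right)
    to (left_gain, right_gain) under a constant-power (-3 dB centre) law."""
    angle = (pan + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)
print(round(left, 3), round(right, 3))  # 0.707 0.707 -- centre is -3 dB per side

left, right = constant_power_pan(-1.0)
print(round(left, 3), round(right, 3))  # 1.0 0.0 -- hard left
```

The power (left² + right²) stays constant at every position, which keeps perceived loudness steady as a track moves across the image — but, as noted above, a mono sum folds everything back together, masking and all.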


Reducing Frequency Masking With EQ

EQ is really our primary tool when it comes to dealing with issues of frequency masking. After all, EQ is our processor for frequency-dependent balancing control, and frequency masking is typically concentrated around specific frequency bands (rather than across the entire audible spectrum of 20 Hz to 20,000 Hz).

Remember that mixing is about blending multiple tracks together, so there will inevitably be frequency masking going on. Our job is to keep it under control in a way that gets us an adequate level of clarity within our mix.

With that primer, let's get to some key tips for reducing frequency masking with EQ.

The first is perhaps the most important, and that is reducing masking in the low-end frequencies. The lower the frequency, the longer the wavelength and the more prone the frequency is to noticeable phase issues (this is true of destructive and constructive interference).

Having low-end noise across multiple tracks that doesn't add any musical information to the mix is a surefire way to mask the important low-end information (particularly in the kick drum and bass elements).

So then, high-pass filters can be our best friends when it comes to reducing low-end masking. They allow us to remove low-end information that doesn't contribute to the mix and make room for the low-end information that does.
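Here's a minimal first-order high-pass sketch of that idea (a real mixing EQ would use steeper, better-behaved filters; the 120 Hz cutoff and test tones are arbitrary): low-frequency rumble is knocked down while content above the cutoff passes through largely untouched.

```python
import math

def one_pole_highpass(signal, cutoff_hz, sample_rate):
    """Minimal first-order high-pass filter (a sketch, not a mixing-grade EQ)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [signal[0]]
    for n in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[n] - signal[n - 1]))
    return out

def peak(signal):
    return max(abs(s) for s in signal)

SR = 8000
rumble = [math.sin(2 * math.pi * 30 * n / SR) for n in range(SR)]     # 30 Hz rumble
content = [math.sin(2 * math.pi * 1000 * n / SR) for n in range(SR)]  # 1 kHz content

# High-pass at 120 Hz: the rumble is heavily attenuated; the content survives.
print(round(peak(one_pole_highpass(rumble, 120, SR)), 2))   # well below 1.0
print(round(peak(one_pole_highpass(content, 120, SR)), 2))  # close to 1.0
```

Run this on two or three bass-heavy tracks at once and the cumulative low-end cleanup is what opens up room for the kick and bass.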

The same can be said, to a lesser degree, of the high-end frequencies. Low-pass filters can help reduce noise from the top end of our tracks to make room for the tracks with useful top-end information.

However, the work that really demands our full attention is in the midrange, where the majority of the musical information is to be had (and, therefore, the most frequency masking).

To start, mixing is a balancing act, and every EQ move we make on one track affects the entire mix. Keep that in mind as you go about EQing tracks in the midrange.

My advice for reducing masking in the midrange frequencies is to consider the hierarchy of elements in the mix. What tracks deserve to be heard above others? What tracks should occupy which frequency bands? These are a few questions to ask yourself.

Once you've determined this information for your mix, I would recommend cutting from the “lesser” tracks rather than boosting the “more important” track(s). The former reduces competition (removes masking), while the latter increases competition. Either strategy will push the important element forward, but the latter carries a higher risk of overloading the frequency range in question.

Let's say there's a lot of masking going on in the 3-6 kHz range. This octave happens to have a lot of important information for vocal intelligibility as well as a lot of important harmonic content in instruments like guitars, keyboards, pianos, horns, and the like.

Continuing with this example, let's say we have a vocal that's being masked by guitars and keyboards. We could make the decision that the vocal is higher up on the hierarchy of mix elements (not a controversial idea in modern music production), and so we can go about EQ like this:

  • Make cuts to the guitar and keyboard (cutting only as much as needed to reduce masking while preserving tone) — these cuts don't necessarily have to span the full 3-6 kHz range in this example; they can be narrower or wider.
  • If need be, boost the vocal (again, as little as possible while preserving tone) at a frequency (or frequencies) that will allow it to cut through the mix.
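As a rough sketch of that cut in code (the biquad coefficients follow the widely used RBJ Audio EQ Cookbook form; the 4 kHz centre, -4 dB depth and Q of 1 are arbitrary example settings): a peaking cut pulls a competing track down in just the contested band.

```python
import math

def peaking_eq_coeffs(f0_hz, gain_db, q, sample_rate):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook form)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(signal, b, a):
    """Direct-form I biquad filter."""
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

SR = 48000
# A "guitar" test tone right in the vocal-presence band, and a -4 dB cut at 4 kHz
guitar = [math.sin(2 * math.pi * 4000 * n / SR) for n in range(SR)]
b, a = peaking_eq_coeffs(4000, -4.0, 1.0, SR)
cut = biquad(guitar, b, a)

tail = cut[SR // 2:]  # skip the filter's start-up transient
print(round(max(abs(s) for s in tail), 2))  # ~0.63, i.e. 10**(-4/20): a -4 dB cut
```

Energy well away from 4 kHz would pass through this filter nearly unchanged, which is the whole appeal of a narrow peaking cut over a broad shelf.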

This is one of the most difficult parts of mixing, especially in dense or poorly arranged mixes (we'll get to that shortly). Every EQ move we make on one track will affect every other track, and so dealing with frequency masking is part of a bigger, holistic approach to balancing the entire mix.

One extremely important frequency range to pay attention to is the low-midrange around 200-500 Hz. Too much masking in this range will cause our mix to sound “muddy”, while too little energy in this region will lead to a thin mix with no power.

The reason this range is so important to keep an ear on is that there are so many instruments and vocals that carry significant energy in this region, whether it's the fundamental frequency of the note or hit or the first few (most important) harmonics above the fundamental.

So then, the more tracks with significant energy in the range, the greater the build-up.

When it comes to clarifying the low-mids, consider the tracks that drive the mix (guitar and bass, most often) and keep them as they are (or even boost them if it suits the mix), and perhaps cut from other tracks. Note that we may be able to cut a bit more aggressively (if need be) from our percussion tracks in this range, especially if there are resonances to be taken down anyway.

And while I've made all these suggestions, it's equally as important not to overdo it with EQ. Every EQ move we make causes phase shift (unless we're using linear phase EQ) and alters the tone of the original recording, so be careful when balancing and reducing frequency masking with this powerful tool.


Reducing Frequency Masking With Sidechain Compression

Sidechain compression is a style of compression that utilizes a signal other than the input signal to control the gain reduction. Technically, all compressors have a sidechain signal path used to control the gain reduction. This sidechain is typically taken from the input of the compressor, but in “sidechain compression”, we take it as a completely independent signal.

Let's revisit a few earlier points from this article, notably:

  • Fader levels are useful for reducing frequency masking by making the more important track louder and the less important track quieter.
  • We ought to consider the hierarchy of mix elements — which tracks are, indeed, more important within the mix.

Armed with that information, we can discuss how sidechain compression can help reduce frequency masking.

Let's take the simple example of the lead vocal. It's nearly always the most important track in the mix and should be mixed as clearly as possible.

In this example, let's say the vocal, acoustic guitar and piano are all vying for important midrange energy, and noticeable frequency masking comes as a result.

We can consider inserting a compressor on the acoustic guitar, piano, or both (the “less important tracks”) and routing the vocal (the “more important track”) into the sidechain input of the compressor. We'll then set up the compressor parameters appropriately to ensure the vocal is triggering a good amount of gain reduction.

In this example, we have a situation where, when the vocal is present, the levels of the piano and/or acoustic guitar (depending on which tracks we inserted the compressor on) will be brought down, thereby reducing frequency masking by making the vocal louder in comparison. The magic of sidechain compression, though, is that the piano and/or acoustic guitar levels will return to their original balance when the vocal isn't present.

So that is how a bit of sidechain compression can help reduce frequency masking in the mix.
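Here's a heavily simplified sketch of that routing in code (the envelope coefficients, threshold and ratio are invented illustrative values; real compressors use proper attack/release timing and log-domain gain computers): the guitar plays at full level until the vocal appears, at which point its gain is pulled down.

```python
import math

def sidechain_duck(track, sidechain, threshold=0.1, ratio=4.0):
    """Duck `track` whenever the `sidechain` signal (e.g. the vocal) exceeds
    the threshold. A bare-bones hard-knee sketch, not a production compressor."""
    out = []
    env = 0.0
    for x, key in zip(track, sidechain):
        rect = abs(key)
        coef = 0.5 if rect > env else 0.001  # fast attack, slow release
        env += coef * (rect - env)           # one-pole envelope follower
        if env > threshold:
            # compress the overshoot above the threshold by the ratio
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out

SR = 8000
half = SR // 2
guitar = [0.8 * math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]
# The "vocal" is silent for the first half-second, then comes in
vocal = [0.0] * half + [0.8 * math.sin(2 * math.pi * 440 * n / SR)
                        for n in range(half)]

ducked = sidechain_duck(guitar, vocal)
while_silent = max(abs(s) for s in ducked[SR // 4:half])  # vocal absent
while_singing = max(abs(s) for s in ducked[-SR // 4:])    # vocal present

print(round(while_silent, 2))   # ~0.8 -- guitar untouched when the vocal is out
print(round(while_singing, 2))  # noticeably lower -- guitar ducked under the vocal
```

In a DAW, the same behaviour comes from inserting a compressor on the guitar bus and feeding the vocal into its external sidechain/key input.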


Reducing Frequency Masking With Arrangement

This one isn't necessarily about mixing, but frequency masking can be reduced with simple arrangement techniques. It's important to remember that great mixes come from great songs with great arrangement, recording and production.

It's also worth noting that the mute button is a mixing tool, too, and if we truly can't get certain arrangement elements to sit in the mix, our best option may indeed be to mute a track or two in order to serve the song to its fullest.

With that bit out of the way, if you are involved in the arrangement of the song, know that writing with too many similar instruments and vocals will lead to frequency masking, so plan accordingly.

Try choosing instruments that cover different registers and tones/timbres that work well together without becoming overly homogenous. Try writing parts with different voicing and in different octaves to break up the harmonic content between tracks, and just be aware of frequency masking throughout the process.

The mixing process will become much easier with perfect arrangement. Trust me!


How do I get rid of the noise in the tracks of my mix? Low-end noise (some rumble, handling noise, 60 Hz hum, etc.) can often be reduced with a high-pass filter. Hiss can be tamed with a low-pass filter. Midrange noise comes in many forms, and there are restoration plugins to deal with pops, clicks, clipping, HVAC, ambience, reverb and more.

What is EQ? EQ is the process of adjusting the balance between frequencies within an audio signal. This process increases or decreases the relative amplitudes of some frequency bands compared to other bands with filters, boosts and cuts. EQ is used in mixing, tone shaping, crossovers, feedback control and more.

To learn more about EQ, check out my article The Complete Guide To Audio Equalization & EQ Hardware/Software.

This article has been approved in accordance with the My New Microphone Editorial Policy.

Arthur

Arthur is the owner of Fox Media Tech and the author of My New Microphone. He's an audio engineer by trade and works on contract in his home country of Canada. When not blogging on MNM, he's likely hiking outdoors and blogging at Hikers' Movement (hikersmovement.com) or producing music. For more info, please check out his YouTube channel and his music.
