While many of us are cheering on the de-escalation of the so-called “loudness war”, it's still common practice for musicians and music fans to prefer loud mixes. While streaming services work to match the loudness levels of all songs, the loudness bias is still alive and well outside of streaming. So for many mixers, our job is to get our mixes sounding loud (at least loud enough to keep up with other commercially successful songs) without completely destroying the mix.
Here are the top 12 pro tips to get your mixes louder:
- Production/Arrangement
- High-Pass Filtering
- Mid-Range EQ Boosting
- Surgical EQ
- Distortion & Saturation
- Serial Compression
- Parallel Compression
- Multiband Compression
- Automation
- Reverb
- Use Limiters
- Experiment Cautiously With Clipping
Let's dive into each of these tips in greater detail and explore how to make your mixes louder. In this article, we'll also discuss the idea of loudness, the downsides of seeking loudness and the loudness war in general.
Note that while many of these strategies are useful in mixing, a few are likely best saved for mastering, notably tips 11 and 12.
Peak Levels, Headroom & Perceived Loudness
Before we get into our discussion on increasing the loudness of our mixes, we should seek to understand what loudness actually is and the terminology surrounding it.
Peak & RMS Levels
First, let's discuss peak and rms levels.
The peak level of an audio signal refers to the instantaneous measurement of the audio signal's level. In practice, we're mostly concerned with the highest peak(s) of an audio signal over time, which tend to happen on the transients.
The root mean square (rms) level of an audio signal is the square root of the mean of the squared signal values over a set window of time. Because audio waveforms swing between positive and negative values, a simple average would sit near zero; rms is used instead to give a meaningful “average” signal level.
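To make these two measurements concrete, here's a small sketch in Python (using NumPy and a hypothetical 440 Hz test tone; the sample rate and level are arbitrary) that computes both:

```python
import numpy as np

# Hypothetical one-second sine "track" at 44.1 kHz, peaking at -6 dBFS.
sr = 44100
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

peak = np.max(np.abs(signal))          # instantaneous maximum
rms = np.sqrt(np.mean(signal ** 2))    # root mean square ("average" level)

peak_dbfs = 20 * np.log10(peak)
rms_dbfs = 20 * np.log10(rms)

# For a pure sine, rms sits about 3 dB below peak (a crest factor of sqrt(2)).
print(f"peak: {peak_dbfs:.1f} dBFS, rms: {rms_dbfs:.1f} dBFS")
```

Real program material has a much larger gap between peak and rms than a sine does, which is precisely what the loudness techniques below work to narrow.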
Next, let's understand headroom.
Headroom, technically speaking, is the available level between an audio system's nominal level and its maximum level. This definition is key to understand when working with analog audio and equipment, where metering may not be available up to the maximum level.
In digital audio, where we have a defined hard ceiling at 0 dBFS, we understand headroom as the available space between a signal's peak levels and the 0 dBFS ceiling.
Trying to push audio signal levels beyond a system's maximum signal handling capabilities will result in distortion due to the flattening of the audio waveform's tops and/or bottoms. This flattening of the waveform is known as “clipping”.
Analog clipping tends to come on a bit more gradually and sounds a bit more musical. The distortion characteristics vary from one analog device to another. Digital clipping is more abrupt and harsh and is typically unwanted.
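Here's an illustrative sketch of digital clipping: a signal driven beyond the 0 dBFS ceiling simply has its overshoot flattened (the drive amount and frequency are arbitrary):

```python
import numpy as np

# A sine that "wants" to peak at about +3 dBFS (amplitude ~1.41) in a
# system whose ceiling is 1.0 (0 dBFS): the overshoot gets flattened.
sr = 44100
t = np.arange(sr) / sr
hot_signal = 1.41 * np.sin(2 * np.pi * 100 * t)

clipped = np.clip(hot_signal, -1.0, 1.0)   # abrupt digital-style clipping

# The waveform tops are now flat: many samples sit exactly at the ceiling.
flat_samples = np.sum(np.abs(clipped) == 1.0)
print(f"{flat_samples} samples flattened at the 0 dBFS ceiling")
```

The flat tops are new harmonic content, which is why digital clipping is audible as harsh distortion.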
In short, leaving some headroom available in the mix is advisable before mastering. The practice of mastering and increasing levels will trade this headroom for additional loudness.
To learn more about headroom, be sure to check out my article What Is Headroom In Audio? (Recording, Mixing & Mastering).
Let's move on to perceived loudness.
At the most basic level, a stronger audio signal (one with a higher rms value), played back through the same system, will sound louder than a weaker audio signal. However, that's not the entire story.
Perceived loudness is a complex psychoacoustic factor. Any time we discuss perception, we get into subjectivity and the physiology of a person's auditory system.
That being said, there are standards and accepted norms when it comes to what constitutes “loudness”.
The first thing to consider is that human hearing spans a generally accepted range of 20 Hz – 20,000 Hz and that we do not hear all frequencies equally within this range.
We're naturally more sensitive to the mid-range frequencies than the low and high-ends of the audible range. This means that the sound pressure level at the low and high-end frequencies must be relatively higher than at the mid-range frequencies in order for us to perceive them at the same loudness.
Our hearing sensitivity across the audible spectrum or our natural “frequency response” has been studied, and graphs have been plotted to show how we hear certain frequencies relative to others, taking sound pressure level or “volume” into account.
The Fletcher-Munson curves were published in 1933 to show this, while the more recent equal-loudness contours (by Robinson and Dadson) were published in 1956. Both show how we hear frequencies differently. Let's have a look:
Each line represents a loudness level in phons, a logarithmic unit of loudness level for tones and complex sounds. As you can see, we're most sensitive to sound in the 2 – 6 kHz range. Notice, too, that the response becomes more level as the sound pressure level increases.
Extending this psychoacoustic trait to audio, we can understand how we'd need greater signal level representation at the low-end and high-end in order to hear these frequency ranges once the audio is properly converted to sound.
As an additional point, many loudspeakers can't accurately reproduce the very low-end and high-end of the audible range, anyway, but I digress.
While the aforementioned rms level gives us an idea of the average levels of the audio and how loud it will be compared to other audio in the same system, it doesn't take into account the variations in our natural auditory response.
The best way to judge loudness is by listening. However, we also have LUFS (Loudness Units relative to Full Scale), which uses loudness units to help us understand the loudness of a signal or mix relative to full scale (0 dBFS). Note that this is a digital metering system.
Loudness units factor human perception and electrical signal intensity together in a standardized measurement. Two mixes that are at the same LUFS should, in theory, be perceived to be the exact same loudness.
LUFS is generally measured over set windows of time, including:
- Integrated LUFS: over the entire mix
- Short-term LUFS: over the last 3 seconds of audio
- Momentary LUFS: over the last 400 milliseconds of audio
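As a rough sketch of how these windows differ, here's a simplified version in Python. Note that real LUFS metering per ITU-R BS.1770 applies K-weighting filters and gating first, which are omitted here; the test signal and numbers are purely illustrative:

```python
import numpy as np

# Simplified sketch of the three measurement windows. Real LUFS (ITU-R
# BS.1770) applies K-weighting and gating first; this uses plain
# mean-square level just to show the windowing idea.
def window_loudness(signal, sr, window_seconds):
    n = int(sr * window_seconds)
    tail = signal[-n:]                          # the most recent N seconds
    mean_square = np.mean(tail ** 2)
    return -0.691 + 10 * np.log10(mean_square)  # BS.1770-style scaling

sr = 44100
t = np.arange(sr * 5) / sr                      # 5 seconds of test tone
audio = 0.25 * np.sin(2 * np.pi * 997 * t)

momentary = window_loudness(audio, sr, 0.4)     # last 400 ms
short_term = window_loudness(audio, sr, 3.0)    # last 3 seconds
integrated = window_loudness(audio, sr, 5.0)    # the whole (5 s) signal
```

On a steady test tone all three read about the same; on real music, momentary and short-term readings swing with the arrangement while the integrated value summarizes the whole song.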
Getting our mixes loud requires using up the headroom available in our system and increasing the average level without allowing the peaks to exceed the maximum ceiling. Additionally, we don't hear all frequencies equally, and certain frequency bands will eat up more headroom than others (this is especially the case with the low-end sub-bass frequencies).
Increasing The Perceived Loudness Of A Mix Comes At A Cost
Once we reach the ceiling of our system's maximum signal handling capabilities (0 dBFS in digital systems), any increase in the perceived loudness of a mix means bringing up the average level without bringing up the peaks.
In other words, beyond turning the mix up until the highest peak reaches the ceiling and no higher, we must shape the waveform of the mix's audio. This will have side effects on the mix.
So the goal of loudness processing, put differently, is to increase the perceived loudness to an adequate level without compromising the qualities that made the mix great to begin with.
When pushing to “loudness war” levels, it's often been the case that mix quality has suffered for the sake of making the record sound louder.
Increasing the perceived loudness of a mix or master tends to have the side effects of decreased dynamic range, distortion, lack of clarity, pumping, artifacts, and others.
Making your mixes loud for loudness' sake isn't necessarily the best practice, though it's still common practice.
Streaming services have largely adopted loudness standards that normalize all material to a set integrated LUFS value. This means that quiet mixes and loud mixes will all be automatically adjusted to the same perceived loudness level. Loud songs will be “turned down,” and quiet songs will be “turned up” so that there isn't massive variation between different tracks in a playlist.
Furthermore, there are growing complaints about overly loud mixes, especially now that normalization removes the loudness bias. Loud mixes are fatiguing on the ears and can lack definition, leading to less overall enjoyment for listeners.
For more info on normalization, check out my article Should You Normalize Audio? (When To & Not To Normalize).
On the other hand, though, after so many years of increasing loudness, the “sound” of loud mixes has become expected, so getting loud mixes is often the goal of our projects.
Of course, there are creative ways to bring a mix up to reasonably loud levels without destroying the quality of the mix. Let's get to them now!
Loudness Tip 1: Production/Arrangement
The first pro tip for getting louder mixes comes before mixing in the arrangement and writing stage.
In general, sparser mixes are easier to get loud because fewer elements are competing for their position in the frequency and stereo spectrums. Put differently, denser mixes are typically more difficult to get loud while still ensuring each element is balanced perfectly.
To get a mix as loud as possible with as few negative side effects as possible, we would theoretically want to fill up the entire frequency spectrum along with the entire stereo spectrum.
When multiple instruments compete for the same frequency ranges and stereo positions, we have to process them (often with EQ, compression and saturation) to have them fit into the mix and be sufficiently audible.
Of the processes mentioned earlier, EQ cuts can thin out certain tracks and make them sound weaker, compression can suck the life out of certain tracks, and saturation can over-distort certain tracks.
Furthermore, phase cancellation is always a concern, where certain frequencies can be over or underrepresented due to the phase relationships between all the different tracks in the mix. This is especially concerning for loudness when the stereo mix is summed to mono.
To combat these issues, it can be advantageous to think through the arrangement and production and choose instrumentation with different key frequency bands (where the bulk of the harmonic content is). That way, each instrument naturally has its own “slot” in the mix and can be mixed cleaner and louder, resulting in a cleaner and louder mix overall.
Furthermore, if I can get a bit philosophical with you, we could think of loudness in terms of relativity. What are the long-term dynamics of the mix? Is the song arranged so that there are big, climactic, loud parts and smaller, building, “quiet” parts?
A lack of long-term dynamics, where the “loudness” is static throughout the song, may suit some productions well. However, getting the entirety of a mix loud for loudness' sake isn't necessarily a great strategy.
First off, listeners typically have control over the volume knob, so they ultimately control how loud the playback will be. Second, there really isn't anything to base “loud” and “quiet” on without notable long-term dynamics. For example, if the final, climactic chorus is equally as loud as the intro and breakdown, is it really climactic, even if the LUFS meter reads high?
Arranging and producing songs with good long-term dynamics is key in getting perceived loudness in the sections of the songs that need to be big. Yes, bringing levels down in the “quieter sections” may bring down the integrated LUFS of the song as a whole. However, in terms of loudness and dynamics, it's almost always a good thing.
Related article: What Are The Differences Between Audio Mixing & Producing?
Loudness Tip 2: High-Pass Filtering
High-pass filtering is the most important EQ move you'll make in any given mix.
Earlier, we discussed the audible range of human hearing and the “natural frequency response” of the typical human auditory system. We understand that low-end frequencies require lots of energy to be heard and can eat up significant headroom in the mix.
Furthermore, the very low-end of the frequency spectrum isn't overly musical anyway. For instance, the fundamental frequency of a bass guitar's E string is 41 Hz, and the B string of a 5-string bass would be 31 Hz. However, the real tone of the bass guitar (and other instruments) comes from its harmonics (notably the first few), which are integer multiples of the fundamental.
So most instruments don't have a lot, if anything, going on in the low-end of the frequency spectrum. Even bass instruments don't necessarily have significant frequency content in the low-end.
Another factor to consider regarding the long wavelengths of low frequencies is that they're more susceptible to phase cancellation. Longer wavelengths complete their cycles over longer periods, so when phase cancellation does occur, it affects a longer portion of the waveform, causing a greater reduction in the overall strength of the low-end frequencies.
When two signals are perfectly in phase, there's a doubling of amplitude (+6 dB). On the contrary, when two signals are perfectly out of phase, they completely cancel each other out. To ensure proper loudness and clarity in the low-end, we have to have good phase cohesion, where any tracks with low-end work together, phase-wise, rather than cancel each other out.
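This doubling and cancellation is easy to verify numerically. A small sketch with two identical 50 Hz tones, summed in phase and with inverted polarity:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 50 * t)             # a 50 Hz "bass" track
b_in_phase = np.sin(2 * np.pi * 50 * t)    # an identical copy
b_out_phase = -np.sin(2 * np.pi * 50 * t)  # a polarity-inverted copy

def rms_db(x, ref):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / np.sqrt(np.mean(ref ** 2)))

reinforced = a + b_in_phase   # perfectly in phase: +6 dB
cancelled = a + b_out_phase   # perfectly out of phase: silence

print(f"in phase: {rms_db(reinforced, a):+.1f} dB")
print(f"out of phase, residual peak: {np.max(np.abs(cancelled))}")
```

Real tracks are never perfectly in or out of phase, so in practice the result lands somewhere between these two extremes, varying by frequency.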
That's not to say that high frequencies don't also cancel each other out due to phase cancellation. However, their cycles happen so fast that the phase relationships between different tracks in the mids and highs are less consequential.
These higher-frequency phase interactions are also much more difficult to do anything about, though we can play a significant and thoughtful role in bettering the phase relationships between different tracks in the low-end.
Let's consider these three points on low frequencies:
- Low-end frequencies require lots of energy to be heard.
- Low-end frequencies don't contain musical information in the majority of instruments.
- Low-end frequencies are more susceptible to phase issues.
Understanding these three truths about low-end frequencies, we can understand how high-pass filtering most of our tracks can help improve loudness.
First, it reduces the amount of energy in the low-end frequencies, thereby eating up less headroom and giving us the opportunity to push things louder.
Second, it eliminates unmusical information such as low-end rumble, mechanical noise, electromagnetic interference, and more. With this information gone, we can bring up the levels in the mix without bringing up the low-end rumble.
Third, eliminating the low-end of most tracks reduces the potential for phase cancellation, giving us more control over the solidity of the low-end and its phase cohesion. It's critical to note that high-pass filters, and EQ more generally, will shift the phase of a signal at and around the corner frequency. The greater the HPF's slope, the greater the phase shift.
I have a video detailing EQ's side effect of phase shifting. You can check it out here.
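As an illustrative sketch (not a substitute for listening), here's what high-pass filtering does to headroom when a track carries sub-bass rumble. The 80 Hz corner, filter order and test tones are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# A 4th-order Butterworth high-pass at a hypothetical 80 Hz corner,
# applied to a "track" of 40 Hz rumble plus a 1 kHz tone.
sr = 44100
t = np.arange(sr) / sr
rumble = 0.5 * np.sin(2 * np.pi * 40 * t)    # unwanted sub-bass energy
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)    # the "musical" content
track = rumble + tone

sos = butter(4, 80, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, track)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# The rumble is heavily attenuated while the tone passes, so the track
# demands less headroom for the same audible content.
print(f"rms before: {rms(track):.3f}, after: {rms(filtered):.3f}")
```

The drop in rms here is headroom reclaimed from inaudible (or at least unmusical) energy, which is exactly what lets us push the fader higher.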
It's also worth noting that many instruments can be high-pass filtered above their fundamentals if doing so will benefit the overall balance of the mix. Psychoacoustically, the human brain will effectively fill in a sound's fundamental based on the harmonics, even if it's not present in the sound or mix.
Sometimes high-pass filtering at a higher frequency than would sound natural in solo will actually make a track fit better in the mix and make way for more important lower-frequency tracks in their respective frequency bands.
All in all, high-pass filters are key for loudness and mixing in general.
Loudness Tip 3: Mid-Range EQ Boosting
When considering perceived loudness, we must refer to the aforementioned human auditory response. Looking at the Fletcher-Munson curves and equal-loudness contours, we can clearly see that humans are most sensitive to mid-range frequencies.
Compared to the low-end frequencies we touched on in the previous tip, mid-range frequencies don't take up as much energy in audio signals and sound reproduction.
So we have the situation where we're more sensitive to mid-range frequencies that don't eat up as much headroom as low-end frequencies. It makes sense, then, that we can get more perceived loudness without maxing out our levels as fast by boosting the mid-range.
Having a great balance in the mid-range is essential for a great mix, so take care to mix each track appropriately. Keep in mind that boosting any frequency too much will lead to issues, but giving a bit of extra level to the mid-range can help make the mix sound louder.
Loudness Tip 4: Surgical EQ
It may be the case that the tracks within the mix have one or more resonant frequencies that are over-represented. These frequencies tend to poke out in the mix and distract listeners.
Individual tracks with bad resonances will sound terrible before they're brought up to the level they ought to be in the mix balance.
Furthermore, resonances will also eat up headroom, especially when multiple tracks have been recorded in the same room with the same acoustic challenges. When multiple tracks have the same resonant frequencies, these frequencies will quickly add up in the mix.
What is commonly referred to as “surgical EQ” is an effective method of reducing these resonances. It's effectively a parametric EQ centred at the problem frequency with a narrow Q and a significant reduction in level.
Surgical EQ helps improve the sound of the tracks and the mix as a whole while also helping to increase loudness.
First, eliminating these problem frequencies from an individual track will make it sound better when pushed up in the balance. Second, if the mix as a whole has bad build-ups, reducing the problem frequencies of even a few tracks with surgical EQ can help the mix get pushed a bit louder (along with sounding much better).
A word of caution: it's common for people to suggest boosting a narrow band on the EQ and sweeping it across the frequency spectrum to search for problem frequencies. However, this is counterproductive in practice, as it causes every frequency to sound problematic.
Before searching for any problematic frequencies, listen first. If there's nothing offensive, don't go looking to be offended. It's best to leave it alone.
However, if you do hear that something's wrong, sweeping a gentle boost can help you find the issue.
Perhaps a better method is guessing where the problem is, cutting what you deem appropriate, and then A/Bing the EQ to hear whether there's an improvement or not. If not, try again with a different frequency. Repeat until the EQ has solved the problem.
Check out My New Microphone's recommended EQ plugins:
• Top 8 Best Graphic EQ Plugins For Your DAW
• Top 8 Best Passive EQ Emulation Plugins For Your DAW
• Top 10 Best Digital Parametric EQ Plugins For Your DAW
• Top 10 Best Dynamic EQ Plugins For Your DAW
• Top 10 Best Linear Phase EQ Plugins For Your DAW
• Top 10 Best Parametric EQ Emulation Plugins For DAWs
Loudness Tip 5: Distortion & Saturation
When we learn about audio and mixing, distortion is often painted as the enemy to be avoided at all costs. While unnecessary distortion in our devices is often best to be avoided, distortion (and saturation in particular) is an important tool in music production and mixing that shouldn't be ignored.
The technical definition of distortion is any deviation in the shape of an audio waveform between two points in a signal path. In this regard, EQ and compression distort the signals they process.
Distortion and saturation, as processes, are designed to shape the waveform of a signal to produce a desired effect.
This effect can quite obviously be taken to the extreme, as is the case with distorted guitar amps or audio equipment that is pushed well beyond its signal handling capabilities.
However, distortion, and more specifically saturation, has the effect of soft-knee compression and harmonic saturation.
Soft-knee [dynamic range] compression has a “rounded” threshold. The amount of compression applied to the signal increases gradually as the input signal rises in level until the maximum ratio is reached.
This is different from “hard-knee” compression, where the compressor engages fully as the signal exceeds the set threshold and disengages as the signal drops back down below the set threshold.
Harmonic saturation is the creation and amplification of harmonic content in a signal through subtle waveform distortion.
As mentioned earlier, the harmonics of a sound source are integer multiples of the fundamental that largely make up the timbre of the sound. However, harmonic generation through saturation will effectively introduce harmonics on whatever frequency content is already in the audio signal (including the fundamental, harmonics and even noise).
Both these effects (soft-knee compression and harmonic generation/saturation) happen as the tops of the audio waveform are gently flattened out via saturation or subtle distortion.
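We can see both effects with a simple tanh waveshaper, a common (though by no means the only) saturation curve. The drive amount and test tone here are arbitrary:

```python
import numpy as np

# tanh waveshaping as a stand-in for a saturator; drive is arbitrary.
sr = 44100
t = np.arange(sr) / sr
fundamental = 0.8 * np.sin(2 * np.pi * 100 * t)  # pure 100 Hz tone

drive = 2.0
# Divide by tanh(drive) so a full-scale input still maps near 1.0.
saturated = np.tanh(drive * fundamental) / np.tanh(drive)

# With a 1-second signal, rfft bin k corresponds to k Hz exactly.
spectrum = np.abs(np.fft.rfft(saturated))

# tanh is an odd function, so it generates odd harmonics: 300 Hz, 500 Hz...
third_vs_fund = spectrum[300] / spectrum[100]
print(f"3rd harmonic: {20 * np.log10(third_vs_fund):.1f} dB rel. fundamental")
```

A pure 100 Hz tone goes in; a tone with added 300 Hz, 500 Hz (and higher odd) harmonics comes out, with its waveform tops gently rounded.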
So how can this help increase the loudness of our mix? Let's focus on saturation, though subtle distortion can also be used.
In Tip 3, we discussed how subtle increases in the mid-range will have a greater effect on loudness than increases in the low-end without eating up as much headroom.
Well, EQ boosts in the mid-range can do the job but are only able to affect the frequency content already present in the audio. For this reason, EQ boosts can often sound a bit unnatural or, even worse, bring up noticeable noise levels without necessarily increasing much of the audio.
Saturation actually produces new frequencies in the audio signal. The added harmonics are largely produced in the mid-range, thereby increasing the perceived presence and loudness of a track (or the mix as a whole) without having a huge impact on overall levels.
For example, saturating a bass guitar can give it more presence in a mix than boosting the low-end while also eating up less headroom.
Furthermore, the subtle compression aspect of saturation can help tame the peaks of an audio signal without squashing the dynamics entirely, allowing for a smaller crest factor (the difference between the peak and average levels) and more loudness before clipping.
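Here's a quick numerical sketch of that crest-factor narrowing, again using a tanh curve as a stand-in for a saturator and a made-up “track” with transient spikes:

```python
import numpy as np

# Crest factor = peak level minus rms level, in dB. Saturation narrows it.
def crest_factor_db(x):
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

sr = 44100
t = np.arange(sr) / sr
# Hypothetical "track": a tone with sharp transient spikes on top.
track = 0.3 * np.sin(2 * np.pi * 200 * t)
track[::4410] += 0.6                            # periodic transient peaks

saturated = np.tanh(2.0 * track) / np.tanh(2.0)  # gentle tanh saturation

print(f"crest before: {crest_factor_db(track):.1f} dB, "
      f"after: {crest_factor_db(saturated):.1f} dB")
```

The spikes are rounded off more than the body of the signal, so the gap between peak and average level shrinks, leaving room to raise the overall level.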
Check out My New Microphone's recommended saturation plugins:
• Top 11 Best Saturation Plugins For Your DAW
Loudness Tip 6: Serial Compression
As the name suggests, serial compression is running a signal through multiple compressors in series (one after the other).
With modern digital signal processing and digital audio workstations, it's easy to insert multiple compressors on any given track. However, serial compression also happens across buses and sends. For example, an individual track can be compressed, sent to a bus with a second compressor, and finally sent to the mix bus, which may also have a compressor. This example has 3 compressors in series for that given track's audio to pass through.
For more information on inserts, check out my article Audio: What Are Inserts? (Mixing, Recording & More).
When done right, we can use serial compression to reduce the crest factor and increase the perceived loudness of elements within a mix, all without the typical unwanted artifacts of high levels of compression.
For example, we can often get away with 3 dB of gain reduction without noticeable distortion and pumping. These negative side effects would be much more noticeable at 9 dB of gain reduction. However, if we had 3 compressors in a row, each applying about 3 dB of gain reduction, we could get to the 9 dB with less noticeable side effects.
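Here's a toy numerical sketch of that idea. The compressor below is a crude static gain computer (no attack/release smoothing, which every real compressor has), so treat it as an illustration of the gain math only:

```python
import numpy as np

# Toy static compressor: threshold in dBFS, ratio as N:1. No attack or
# release smoothing -- gain math only, not a realistic compressor.
def compress(x, threshold_db, ratio):
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

sr = 44100
t = np.arange(sr) / sr
track = 0.9 * np.sin(2 * np.pi * 220 * t)

# Three gentle stages in series...
serial = track
for _ in range(3):
    serial = compress(serial, threshold_db=-6.0, ratio=1.5)

# ...versus one heavier stage reaching for similar total gain reduction.
one_heavy = compress(track, threshold_db=-6.0, ratio=3.0)

peak_db = lambda x: 20 * np.log10(np.max(np.abs(x)))
print(f"serial: {peak_db(serial):.2f} dBFS, heavy: {peak_db(one_heavy):.2f} dBFS")
```

The three gentle stages land within a fraction of a dB of the single heavy stage; in practice, spreading the work across stages is what keeps the artifacts subtler.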
With a more natural-sounding gain reduction on the peaks, we narrow the crest factor and can, therefore, push an element, bus, or mix as a whole louder.
I should mention a special note on mix bus compression. Utilizing a mix bus compressor means that any signal compressed before the mix bus will be part of a signal chain with serial compression.
As the name suggests, mix bus compression works on the entirety of the mix. It's a common technique to help tame transients, “glue” the mix together, and grant slightly more perceived loudness.
Great care must be taken with mix bus compression not to overprocess the mix as a whole. It's advisable to keep the ratio low (1.5:1 to 4:1 at the most) and to set the threshold so that gain reduction is kept to a maximum of about 3 dB.
Because this processing affects the entire mix, we must be careful to avoid pumping, distortion, and other artifacts that would ruin a mix that would sound great otherwise. Adjust the attack and release times to make the compressor work better with the rhythm and feel of the music and to avoid the notorious pumping effect.
Mix bus compression will tame transients at the expense of transient definition, so we should be listening to how the compressor affects the overall punchiness of the mix's drums and percussion.
Listen for increases in perceived loudness in the sides (left and right-panned elements), along with the relative increase in level in the quieter parts. Adjust the mix bus compressor as necessary to get the results you want.
I talk about using serial compression for loudness in more detail in one of my YouTube videos that you can check out here:
I also have a video dedicated to parallel compression that you can check out here.
As always, A/B the process by matching perceived levels and bypassing/engaging the compressor.
To learn more about A/B testing, check out my article A/B Testing & Its Importance In Mixing (With 5 Best Tests).
Loudness Tip 7: Parallel Compression
Parallel compression is the parallel processing of an audio signal with an uncompressed (or slightly compressed) “dry” version and a heavily compressed “wet” version.
The parallel splitting of the signal is most often achieved through a send/return, where one or more tracks can be sent to an auxiliary track for heavy compression before being summed back with the original track(s) at the mix bus. Alternatively, we can duplicate a track and process the copy. If our compressor of choice has wet/dry control, this effectively works as parallel processing as well.
For more info on auxiliary tracks, check out my article Mixing/Recording: What Are Auxiliary Tracks, Sends & Returns?
We effectively achieve upward compression by heavily compressing a separate version of an audio signal (or collection of audio signals) and mixing it in at lower levels.
Upward compression decreases dynamic range by increasing signal levels below the threshold rather than decreasing signal levels above the threshold (as is the case with typical compression).
We must remain mindful of peak levels with parallel compression, as they're liable to increase as the compressed version is mixed in. However, it does still have the effect of increasing the average level more than the peaks, thereby decreasing the crest factor and allowing for greater potential loudness.
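A minimal sketch of the idea, using a crude static compressor as the “heavy” parallel chain (real parallel compression uses fast attack/release settings and careful level matching; all values here are arbitrary):

```python
import numpy as np

# Crude static "heavy" compressor: 10:1 above a low threshold.
# Real compressors smooth gain over time; this is gain math only.
def heavy_compress(x, threshold=0.1, ratio=10.0):
    mag = np.maximum(np.abs(x), 1e-12)
    gain = np.where(mag > threshold, (threshold / mag) ** (1 - 1 / ratio), 1.0)
    return x * gain

sr = 44100
half = sr // 2
t = np.arange(half) / sr
quiet = 0.05 * np.sin(2 * np.pi * 220 * t)  # quiet verse-like passage
loud = 0.8 * np.sin(2 * np.pi * 220 * t)    # loud chorus-like passage
dry = np.concatenate([quiet, loud])

wet = heavy_compress(dry)
parallel = dry + 0.3 * wet                   # blend the wet copy underneath

rms = lambda x: np.sqrt(np.mean(x ** 2))
quiet_lift = rms(parallel[:half]) / rms(dry[:half])  # quiet-section lift
peak_lift = np.max(np.abs(parallel)) / np.max(np.abs(dry))

# The quiet passage rises more than the peaks do: upward compression.
print(f"quiet lift: {quiet_lift:.2f}x, peak lift: {peak_lift:.2f}x")
```

The quiet material gets a bigger lift than the peaks, which is the upward-compression character that makes parallel compression sound fuller without squashing transients.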
Note that the parallel send should be compressed hard and won't sound good by itself. However, as we mix it in, it will give weight and loudness to whatever track(s) we're sending to the parallel compressor.
Listen for potential phase issues between the compressed auxiliary bus and the original track(s). Latency and signal delay can cause unwanted phase issues (generally heard as comb filtering and a lack of low-end) that can cause more harm than good to the mix.
Other side effects include slightly less punch on transients (though not as much as “regular” mix bus compression), less control over the peak levels (which can lead to clipping if we're not careful), and greater relative level increases in sparser sections of the song.
I have a video going into more detail on parallel processing that you can check out here:
Check out My New Microphone's recommended compression plugins:
• Top 10 Best Optical Compressor Emulation Plugins
• Top 11 Best Digital Compressor Plugins For Your DAW
• Top 11 Best FET Compressor Emulation Plugins For Your DAW
• Top 11 Best Variable-Mu Compressor Emulation Plugins
• Top 11 Best VCA Compressor Emulation Plugins For Your DAW
Loudness Tip 8: Multiband Compression
Multiband compression is a dynamics processor that splits a signal into different frequency bands (often 3 or 4) and has independent compression controls for each band.
The ability to compress each band differently allows us to apply more or less compression as necessary. It also prevents a single high-energy band (typically the low-end) from triggering gain reduction, and audible pumping, across the entire signal, as happens with a regular full-band compressor when the overall signal level exceeds the threshold.
So multiband compression, like regular compression, can help increase loudness by reducing the crest factor. Only with multiband compression, we have much more control over the individual frequency bands.
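A two-band sketch of the idea (real multiband compressors use three or more phase-matched crossover bands, such as Linkwitz-Riley filters; the crossover frequency, threshold and ratio here are arbitrary):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Two-band split; only the low band gets compressed, so a loud bass
# note no longer pulls the whole signal's gain down.
sr = 44100
t = np.arange(sr) / sr
bass = 0.8 * np.sin(2 * np.pi * 60 * t)    # loud low-end content
hats = 0.2 * np.sin(2 * np.pi * 6000 * t)  # quieter high-end content
mix = bass + hats

lo = sosfilt(butter(4, 200, "lowpass", fs=sr, output="sos"), mix)
hi = sosfilt(butter(4, 200, "highpass", fs=sr, output="sos"), mix)

def static_compress(x, threshold=0.3, ratio=5.0):
    # crude static gain computer (no attack/release smoothing)
    mag = np.maximum(np.abs(x), 1e-12)
    gain = np.where(mag > threshold, (threshold / mag) ** (1 - 1 / ratio), 1.0)
    return x * gain

# Compress the low band only; the high band passes untouched.
out = static_compress(lo) + hi
```

The loud bass is brought under control while the high-frequency content is left exactly as it was, which a full-band compressor can't do.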
Listen for the potential alterations to the frequency balance as different bands experience different amounts of gain reduction. When each band has a different amount of gain reduction, the band with the least amount will effectively get a boost in level while the band with the most gain reduction will get a “cut”.
So while multiband compression on the mix bus gives us great power, it also gives us the responsibility of ensuring it doesn't throw off the balance of the entire mix. Fortunately, when applying mix bus compression, we're often only after a few dB of gain reduction, which works in our favour. However, we must be vigilant to ensure the balance is maintained.
Tying back in with Tip 4 for a moment, multiband compression is a common processor for de-essing and reducing other dynamic resonant frequencies that poke out periodically. If there are problem frequencies in a track that aren't always present (such as sibilance in vocals), we can narrow a band around the problem frequencies with a multiband compressor and only compress them when they're present.
This multiband compressor “de-essing”, in turn, allows us to boost the levels of the problematic track without having the resonance(s) boosted with it.
We'll discuss limiting in greater detail in Tip 11, but multiband limiting can also help increase loudness.
Furthermore, dynamic EQ can be set up to behave similarly to multiband compression and can, therefore, be used to help increase loudness as well.
Check out My New Microphone's recommended multiband compression plugins:
• Top 10 Best Multiband Compressor Plugins For Your DAW
Loudness Tip 9: Automation
Automation is the altering of parameters over time throughout the mix.
By automating relative levels of individual tracks and busses, we can maintain loudness across diverse sections of arrangements.
Conversely, automation of track levels can help widen the long-term dynamic range of a mix, giving us sections that are relatively loud and others that are relatively quiet.
As was mentioned earlier, “loudness” is largely relative (whether to other songs or between sections or even instruments within a song). Therefore, having good long-term dynamics will make the more climactic parts of the song sound louder. This can be achieved by arrangement alone (see Tip 1), though it can also be achieved with automation.
Beyond automating faders, we can automate EQ settings, compressor settings, saturation settings and more to ensure that each section of the song has its proper balance and loudness.
I have a video discussing my top 11 automation tips for mixing. Check it out here:
Loudness Tip 10: Reverb
This tip isn't usually expressed when learning about loudness in mixing. However, a tasteful amount of reverb on individual tracks and buses can improve the perceived loudness of a mix by adding musical ambience, especially between transients.
When using reverb, I always recommend setting up a send/return for the reverb effect to have the most control possible. We can also choose to send multiple tracks to a single reverb send to help glue things together while reducing CPU load (in DAWs).
It's often beneficial to high-pass filter the low-end from the reverb send to avoid unnecessary build-up and phase issues in the bass frequencies. Adjusting the reverb to taste can take time, and I suggest listening for how the reverb fits with the timing of the mix along with how it affects the sides of the mix.
Adding a sense of space with multiple reverbs can give dimension to different elements within the mix and ultimately make it sound louder.
Note that, in denser mixes, delay may be a better option than reverb to achieve the same effects as described above.
Check out My New Microphone's recommended reverb plugins:
• 12 Best Reverb Plugins (Spring, Plate, Algorithmic, Convolution)
Loudness Tip 11: Use Limiters
When it comes to increasing the loudness in mastering, the limiter is perhaps the most common tool. It's also useful in mixing, though we should be careful to understand our role and not peak limit the mix before mastering (unless we're also the mastering engineer).
Limiting is effectively compression with an ∞:1 ratio. In other words, limiters limit the maximum signal level at a set threshold.
Note that, like compressors, limiters have time-based controls: attack and release times. They act by attenuating the signal as a whole as the momentary signal level tries to surpass the threshold (this happens over a window of time rather than instantaneously within the waveform itself).
By attenuating the peaks so that they do not surpass a set threshold, limiting can have a massive effect on reducing the crest factor and getting extra loudness in tracks and the mix as a whole.
I tend to stick to around -3 dB of maximum gain reduction on the master limiter to avoid pumping, and I use serial limiting (like serial compression) if more gain reduction is required.
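To make the "compression with an ∞:1 ratio" idea concrete, here's a heavily simplified peak limiter sketch in Python (assuming NumPy). This is purely illustrative, not the algorithm used by any particular plugin: it follows the signal's envelope with a fast attack and slow release, then applies just enough gain reduction to keep the envelope at or below the threshold.

```python
import numpy as np

def simple_limiter(x, threshold=0.5, attack=0.001, release=0.05, sr=44100):
    """Illustrative peak limiter sketch (not production-ready).

    Follows the rectified signal with a fast-attack / slow-release
    envelope, then scales the signal so the envelope never exceeds
    the threshold (effectively an infinity-to-one ratio above it).
    """
    att = np.exp(-1.0 / (attack * sr))    # per-sample attack coefficient
    rel = np.exp(-1.0 / (release * sr))   # per-sample release coefficient
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        mag = abs(s)
        # Rise quickly when the signal exceeds the envelope, fall slowly otherwise
        coeff = att if mag > env else rel
        env = coeff * env + (1.0 - coeff) * mag
        # Attenuate only when the envelope would pass the threshold
        gain = min(1.0, threshold / env) if env > 0 else 1.0
        out[i] = s * gain
    return out
```

A full-scale sine run through this sketch settles at roughly the threshold level, while a signal already below the threshold passes through untouched: exactly the behavior that reduces crest factor and buys extra loudness.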
If you're tasked with mastering or at least “pseudo-mastering” your mix for monitoring on different playback systems, you'll likely be limiting the master bus (or mix bus).
Here's one of my videos explaining how to use limiting to get “competitive” levels in a mix for monitoring and referencing before mastering:
Check out My New Microphone's recommended limiter plugins:
• Top 10 Best Limiter Plugins For Your DAW
Loudness Tip 12: Experiment Cautiously With Clipping
It's generally best to avoid clipping when learning. Digital distortion tends to sound awful in most cases. However, rules are meant to be broken, and tasteful clipping can certainly increase perceived loudness. The issue, then, becomes about making the clipping sound good.
Some producers/mixers go as far as shaping the individual audio files to achieve prolonged clipping and transient shaping.
Unlike limiting, clipping happens instantaneously and cuts off the tops of the audio waveform at the clipping point (0 dBFS in digital systems). There's no attempt at attenuation with clipping. The transfer is perfectly linear up to the clipping point, beyond which the waveform is flattened until it drops below the clipping point once again.
This increases loudness at the expense of distortion. The slicing/flattening of a waveform will shape the waveform toward that of a square wave, which is the basic waveform with the greatest rms value (compared to the sine, triangle and sawtooth).
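We can demonstrate this RMS gain with a few lines of Python (assuming NumPy; the drive amount and frequency here are arbitrary illustrative values). Driving a sine wave 12 dB into a hard clipper at full scale flattens it toward a square wave, raising its RMS level well above a clean sine's, without raising its peak level.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 100 * t)  # full-scale sine, peak = 1.0

def rms(x):
    """Root mean square level of a signal."""
    return np.sqrt(np.mean(x ** 2))

# Drive the sine 12 dB (4x) into a hard clipper with a ceiling of 1.0
driven = np.clip(sine * 4.0, -1.0, 1.0)

print(rms(sine))    # ~0.707, the RMS of a full-scale sine
print(rms(driven))  # well above 0.9, approaching a square wave's RMS of 1.0
```

Both signals peak at 1.0, yet the clipped version carries roughly 2.5 dB more RMS energy: louder at the same peak level, at the cost of the distortion described below.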
Square waves have infinite odd-order harmonics, so we can expect a significant odd-order harmonic generation to happen as we digitally clip our signals.
But the ugly part of digital clipping comes from aliasing, which causes inharmonic distortion: artifacts fold back into the audible band at frequencies that are not harmonically related to the source. Like saturation, this applies not only to the fundamental but to the original harmonic content and even the noise, resulting in unmusical, noisy, inharmonic distortion.
A Brief Discussion On The Loudness War
The ongoing loudness war is the decades-long trend of making records louder than the competition: a sort of arms race to produce records that are as loud as possible.
The basic idea behind the loudness war is to take advantage of loudness bias, where people naturally tend to prefer louder music. If a louder song is played after a quieter song, it may sound better or more professional.
What the loudness war doesn't take into account is that listeners ultimately have control over the volume, so they can turn their music up or down depending on whether it's too quiet or too loud.
There is a lot of controversy surrounding the loudness war, with many engineers and listeners denouncing the practice while many industry heads, artists and listeners defend it.
Loudness is achieved using many of the techniques mentioned above, along with mastering focused on getting the greatest loudness possible.
As was mentioned earlier in this article, loudness comes at a cost. Many records that have become “victims of the loudness war” sound incredibly fatiguing, distorted and subjectively unenjoyable at the expense of being louder than the competition.
Today, many streaming services normalize audio by default in an effort to make the loudness discrepancies between songs as minimal as possible. This means that a super “loud” song will be automatically turned down to a set level, while a “quiet” song will be automatically turned up to the same set level.
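The arithmetic behind this normalization is simple. Here's a minimal sketch (the -14 LUFS target is a commonly cited streaming figure, but actual targets and measurement details vary by platform): the service measures a track's integrated loudness and applies a static gain offset to bring it to the target.

```python
# Illustrative sketch of streaming loudness normalization.
# -14 LUFS is a commonly cited target; real platforms differ in
# their targets and in how they measure integrated loudness.
TARGET_LUFS = -14.0

def normalization_gain_db(measured_lufs):
    """Static gain (in dB) applied to bring a track to the target loudness."""
    return TARGET_LUFS - measured_lufs

print(normalization_gain_db(-8.0))   # -6.0: a loud master gets turned down
print(normalization_gain_db(-20.0))  # +6.0: a quiet master gets turned up
```

Note that a master squashed to -8 LUFS gets turned down by 6 dB, so it plays back no louder than a more dynamic one, just with all the side effects of over-limiting baked in.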
With this normalization practice, loudness for loudness's sake actually hurts the quality of the song. Super loud audio now only has the negative aspects (fatiguing lack of dynamic range, distortion, poor definition, etc.) without the sense of being louder.
In my opinion, the loudness war sucks. However, it's important to understand how to compete. After all, it's better to be a warrior in a garden than a gardener in a war.
What volume should audio be listened to and mixed at? Critical listening should be done at various levels, though 80 – 85 dB SPL is the sweet spot, offering the best perceived frequency balance with a low risk of hearing damage. Low levels help us identify elements that sit too low in the mix, while high levels let us hear and feel the mix, albeit at a higher risk of hearing damage.
Related article: What Volume (In Decibels) Should Audio Be Mixed/Listened At?
Choosing the best audio plugins for your DAW can be a challenging task. For this reason, I've created My New Microphone's Comprehensive Audio Plugins Buyer's Guide. Check it out for help in determining your next audio plugin purchases.
Determining the best equalizer for your audio needs takes time, knowledge and effort. For this reason, I've created My New Microphone's Comprehensive Equalizer Buyer's Guide. Check it out for help in determining your next EQ purchases.
Determining the best compressor for your audio needs takes time, knowledge and effort. For this reason, I've created My New Microphone's Comprehensive Compressor Buyer's Guide. Check it out for help in determining your next dynamic range compressor purchases.