Loudness: How Loud Do You Go?
Updated: Oct 8, 2019
Making sense of the loudness in the modern music industry
With a mess of loudness standards across the various streaming platforms, conflicting advice from experts, all manner of wrong-headed and misinformed (however well-intended) opinions across blogs, social media, and forums, and a slew of pseudo-experts broadcasting across YouTube, it’s little wonder that Independent Music-Makers find themselves confused and unsure as to what the correct target loudness for their music should be.
Cut to the Chase: What’s the Number?
Over the course of this article, we will try to make sense of the various factors that influence the target loudness of a particular piece of music. This is not intended as an exhaustive technical breakdown, as it would be difficult to contain all the necessary detail in one article, but rather as a comprehensive introduction to a wide-ranging subject.
As will become clear, there is no simple, straightforward answer; rather, there is a series of compromises that must be made to ensure a particular piece of music translates as well as possible across the variety of potential distribution channels.
Anyone peddling a ‘one size fits all’ answer is quite simply wrong and should be politely ignored.
Part 1: What the LUFS?
The LUFS standard of loudness normalization was first introduced in television, as a way to combat the growing problem of adverts/commercials playing significantly louder than the programs they interrupted. Until then, broadcast (both television and radio) had relied upon a ‘peak limited’ system to control the maximum signal at output; given that too much signal could damage consumers’ televisions and radios, a peak-limited system made sense.
‘Peak’, however, plays a surprisingly small part in how we perceive loudness; and so its use in broadcast unwittingly gave rise to (or at least facilitated) the now infamous ‘Loudness Wars’. Major Labels, desperate to catch as many ears as possible for new releases, began to push RMS* levels; because RMS plays a much more active role in how we perceive loudness, those releases caught more ears, and with that came more sales and, in turn, more revenue.
*In simple terms, RMS (root mean square) refers to the short-term average level of a signal.
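As a minimal illustration of the footnote above, RMS can be computed directly from sample values. This sketch is purely illustrative and ignores the short sliding windows a real meter would use:

```python
import math

def rms_db(samples):
    """RMS level of a block of samples, in dB relative to full scale (dBFS)."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A full-scale square wave sits at 0 dBFS RMS; halving the amplitude drops it by ~6 dB.
print(round(rms_db([1.0, -1.0] * 512), 1))  # 0.0
print(round(rms_db([0.5, -0.5] * 512), 1))  # -6.0
```

A real meter computes this over short windows (hence ‘short-term level’) rather than over the whole file at once.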
A Quick Word about Dynamics
Dynamics are typically measured in one of two ways: the difference between peak and the average loudness of the entire piece of audio (PLR), or the difference between peak and the short-term loudness (PSR). PSR is most pertinent to this article, as it most directly relates to how we experience listening to music in real time. The greater the difference between peak and RMS*, the higher the PSR value and, in turn, the wider the dynamic range.
*The way in which PSR is calculated can vary depending on how the short-term loudness is measured (e.g. RMS vs Short Term LUFS), but for the sake of this example, we will consider it in terms of Peak vs RMS.
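Under the peak-vs-RMS reading above, PSR is simply the arithmetic difference between the two measurements. A minimal sketch (illustrative only; real PSR meters use short-term LUFS rather than plain RMS):

```python
import math

def peak_db(samples):
    """Highest sample peak, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Short-term level approximated as plain RMS, in dBFS."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

def psr(samples):
    """Peak-to-short-term ratio in dB: larger values mean wider dynamics."""
    return peak_db(samples) - rms_db(samples)

# A pure sine peaks ~3 dB above its RMS; heavy limiting pushes PSR toward 0.
sine = [math.sin(2 * math.pi * 5 * k / 1000) for k in range(1000)]
print(round(psr(sine), 2))  # 3.01
```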
As the Major Labels began to push RMS up against the ‘peak limited’ radio broadcasts, so the dynamic range of popular music began to reduce.
Seriously, What the LUFS?
The LUFS system of loudness normalization makes use of psychoacoustic algorithms to effectively ‘hear’ the audio signal in a way that approximates how humans perceive its loudness. It does this by taking into account factors beyond level alone, for example, frequency; and although there are other factors at work, frequency is perhaps the easiest to grasp. The frequency content of a signal has a significant impact on how humans perceive its loudness. In simple terms, the LUFS standard uses an EQ normalization curve to ‘re-calibrate’ the way in which the signal is measured so that it more closely matches human perception. Because of this, the precision of the applied EQ normalization curve (and of the other measurements involved) is central to achieving effective loudness assessment and, by extension, loudness normalization.
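At the core of the LUFS measurement (ITU-R BS.1770) is a simple mean-square calculation with a fixed offset. The sketch below shows only that core for a single mono channel, deliberately omitting the K-weighting EQ curve and the gating stages that do the actual ‘psychoacoustic’ work:

```python
import math

def integrated_loudness_sketch(samples):
    """Heavily simplified BS.1770-style loudness for one mono channel.

    The real standard first applies a K-weighting filter (a high-shelf boost
    plus a high-pass) and gates out quiet passages; both are omitted here,
    leaving only the core formula: L = -0.691 + 10 * log10(mean_square).
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10 * math.log10(mean_square)

# A full-scale square wave measures -0.691 LUFS under this simplified model.
print(round(integrated_loudness_sketch([1.0, -1.0] * 100), 3))  # -0.691
```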
A Quick Word about LUFS: A Moving Target
Given that scientific knowledge is constantly evolving, loudness normalization is likely to be a ‘moving target’ for years to come, as psychoacoustic algorithms are improved over time and the precision of assessment and normalization increases with each new update. Mastering Engineers will be obliged to keep pace with these changes as and when they arrive.
Part 2: Loudness & Streaming
With the advent of streaming media platforms, in particular Spotify with its playlist-based ecosystem and a listening experience much more akin to broadcast than to CD or vinyl, a standard of loudness normalization was required. The act of listening to a playlist is simply more ‘passive’ than that of putting on a CD or record; indeed, the argument can be made that while people drive, workout, and socialize to playlists, they are, in fact, not really listening to playlists.
Playlists are, at best, a soundtrack to our lives and, in reality, often little more than ‘sonic wallpaper’.
With this in mind, many listeners are simply not prepared to accept the need to adjust volume as they drive, workout, or socialize; and so, loudness normalization is required.
Depending on the context in which you are listening to a track, it will have been normalized in one of two ways. If, as many do, you are listening to a playlist, the individual tracks contained therein are normalized on a ‘track by track’ basis. If, on the other hand, you are listening to an album or EP, the tracks are normalized as a whole, taking into account the entire album or EP and, by doing so, preserving its internal dynamics.
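The difference between the two modes can be sketched as a gain calculation. This is illustrative only: the -14 target matches Spotify’s stated playback level, but the album-mode rule here (one shared gain derived from the loudest track) is a simple stand-in for a platform’s actual album-level measurement:

```python
def normalization_gains(track_loudnesses, target=-14.0, album_mode=False):
    """Gain in dB applied to each track to reach the target playback loudness.

    track_loudnesses: integrated loudness of each track, in LUFS.
    In album mode every track shares one gain, so a quiet interlude stays
    quiet relative to the loud single, preserving the release's internal dynamics.
    """
    if album_mode:
        shared_gain = target - max(track_loudnesses)
        return [shared_gain] * len(track_loudnesses)
    return [target - loudness for loudness in track_loudnesses]

quiet_interlude, loud_single = -20.0, -8.0
print(normalization_gains([quiet_interlude, loud_single]))                   # [6.0, -6.0]
print(normalization_gains([quiet_interlude, loud_single], album_mode=True))  # [-6.0, -6.0]
```

Note how track mode pulls both tracks to the same level, while album mode keeps the 12 dB gap between them intact.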
The Same but Different
Each major streaming platform employs its own system of loudness assessment and normalization. For example, Spotify, at the time of writing, uses a third-party application called ‘Replaygain*’, while Apple Music has its own proprietary application, ‘Apple Soundcheck’.
*Spotify intends to switch to the ITU-R BS.1770-4 standard of Loudness Normalization; and so, suggests mastering your music to this standard (as a way to futureproof your music on their platform).
A Quick Word about Replaygain
Replaygain predates streaming media and was originally designed to normalize the playback of file formats such as mp3. It makes use of metatag data to set a normalized playback level for each individual audio file, which is then read by compatible player software. The advantage of this system is that it requires no limiting; it effectively acts as a ‘fader rider’.
It is important to note that Replaygain does not follow the LUFS standard and has been criticized for basing its loudness calculations on an EQ normalization curve that is now considered inaccurate. Spotify themselves freely admit that, although it was the best option open to them at the time, it is now time for them to update to the LUFS standard.
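The ‘fader rider’ idea can be sketched as a two-stage process: an analysis pass writes a gain into the file’s metadata, and a compatible player simply scales playback by that gain. The tag name is real, but the -18 reference level and the analysis details here are simplifying assumptions, not the actual Replaygain specification:

```python
def write_tag(track, measured_db, reference_db=-18.0):
    """Analysis stage: store a playback gain (dB) in the file's metadata.

    reference_db is an illustrative reference level; the real Replaygain
    spec defines its own reference and its own loudness-analysis filter.
    """
    track["REPLAYGAIN_TRACK_GAIN"] = reference_db - measured_db
    return track

def play(track, sample):
    """Playback stage: scale audio by the tagged gain, a static 'fader ride'
    with no limiter anywhere in the chain."""
    gain_db = track.get("REPLAYGAIN_TRACK_GAIN", 0.0)
    return sample * 10 ** (gain_db / 20)

# A track measured at -8 dB is tagged with -10 dB of attenuation,
# so a full-scale sample plays back at roughly 0.316.
track = write_tag({}, measured_db=-8.0)
print(track["REPLAYGAIN_TRACK_GAIN"])  # -10.0
```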
Seriously, Cut to the Chase: What’s the Number?
Given the lack of a unified loudness standard across the major platforms, and considering that those standards will evolve over time (or, in the case of Spotify, soon change completely), it should be becoming clear why there is no simple, right answer to how loud you should master your music.
Once again, anyone peddling a ‘one size fits all’ answer is quite simply wrong and should be politely ignored.
Part 3: Loudness Vs Dynamics
A Tale of Two Mastering Engineers
The conflicting schools of thought currently dominating the field of mastering are perhaps best characterized by the two mastering engineers Ian Shepherd and Streaky. Both are extremely talented at their craft, generous with their knowledge, and enjoy popular support via their YouTube channels; where the two of them differ, however, is in their approach to loudness.
Ian Shepherd* is a longtime proponent of dynamics over loudness; and although he advocates no explicit target loudness, he does suggest going no louder than -9 short term loudness** in the loudest sections of your track (i.e. chorus, etc.). This approach typically results in an integrated loudness of around -11 to -14 LUFS, which broadly tallies with the normalized playback levels of Spotify (-14) and Apple Music (-16). His argument, in simple terms, is that there is no point in pushing your music louder at the expense of dynamics, as the various streaming platforms will simply turn it down; and once normalized, your music will feel flat and lifeless.
*Ian Shepherd’s YouTube Channel: https://www.youtube.com/user/masteringmedia
**Short term loudness can be thought of as 'smart RMS'. You can read more about Ian Shepherd's thoughts on -9 short term loudness here: http://productionadvice.co.uk/how-loud/
Streaky*, on the face of it, very much resides at the opposite end of the scale; extolling the virtues of pushing tracks to target loudness levels far in excess of Ian Shepherd’s suggested levels, and arguing that not only are we accustomed to that level of limiting, it is, in fact, the limiting itself that adds considerable excitement to the way in which we experience listening to the music.
*Streaky’s YouTube Channel: https://www.youtube.com/user/StreakyMasteringTV
With two experienced mastering engineers advocating opposing approaches to loudness and dynamics, it is difficult for Independent Music-Makers to decide how best to master their music.
Why Can’t We All Just Get Along?
There is, though, more common ground between Ian Shepherd and Streaky than would at first appear.
Streaky is quick to point out that limiting should not be applied at the expense of dynamics; but rather to ensure the track is competitive, while still retaining the fundamental dynamic characteristics. And Ian Shepherd freely acknowledges that the loudness of a track is often dictated (or at least influenced) by factors outside of the preservation of dynamics; namely those of market competitiveness and the taste of the artist and/or record label.
And both Ian Shepherd and Streaky make reference to different genres of music suiting different dynamic ranges and, by extension, loudness levels.
Musicality Vs Market Competitiveness
Taking the commonly acknowledged principles of these two high-profile, experienced, though philosophically opposed, mastering engineers, it would not be unreasonable to consider dynamics as a necessary, and inevitable, compromise between musicality and market competitiveness: a determination made on the basis of the ‘loudness potential’ of the track versus its intended target market.
Loudness potential refers to the inherent musical characteristics of the track’s production. Compositions that feature complex musical arrangements, multi-layered instrumentation and/or sound design, or are densely packed, require more dynamic space for listeners to be able to comfortably hear the various musical elements at work. The lower the frequency range in which these characteristics are a factor, the greater the dynamic space required.
Artists who intend (or are obliged) to push their music to aggressive loudness levels should spend considerable time researching productions, similar in style/genre to their own, that have translated successfully to the same aggressive loudness level. Failing to do so will severely limit the level to which your mastering engineer can push your track without seriously compromising its musicality.
The degree to which your music must be mastered to a competitive level will depend largely upon your chosen genre and, to some extent, whether the track is intended for a single release or part of an LP/EP.
Part 4: LUFS Too Good To Be True
With all music now normalized to the same level, loudness, competitive or otherwise, should no longer be a factor; except for the fact that, of course, not all music is normalized.
Spotify’s web browser and third-party player/jukebox applications are not currently subject to normalization, and premium subscribers on both Spotify and Apple Music, for example, can choose to switch normalization off. Platforms like Soundcloud and Bandcamp do not presently employ loudness normalization (Bandcamp is peak limited), while CD and, increasingly, vinyl offer further options for distribution.
Even without the consideration of the physical distribution formats (CD and vinyl), a significant proportion of your potential listeners will hear your music unnormalized; or, in other words, at the volume of your original upload (via Distrokid/CDbaby or similar). This goes some way to explaining why major labels are, in fact, asking for levels louder than ever*, a fact further borne out by the average loudness of tracks featured on Spotify’s Global Top 50.
*As top mastering engineer Mandy Parnell explains in this interview with Sound on Sound: https://youtu.be/Aot-sWlIDjU
Unfortunately, targeting LUFS levels of around -14 LUFS runs the risk of leaving your track sadly lacking for competitive loudness if played back unnormalized.
Mastering your tracks to a level quieter than approximately -14 LUFS will also result in your music being ‘gained up’ by Spotify, a process that can be destructive due to their gain staging and use of limiters.
Another Quick Word about Dynamics
In the wake of the understandable backlash against loudness and the years of loudness wars, it would be easy to fall into the trap of believing that dynamics are akin to quality. This is, however, not the case.
Dynamics are not Michelin stars, more doesn’t necessarily indicate better.
As has already been alluded to, different genres suit different dynamic ranges (and, by extension, target loudness); and although no absolutes can be drawn, as a general rule of thumb, EDM requires less dynamic range than indie music, which in turn requires less than jazz or classical.
Part 5: Going Down (Stream)?
On the face of it, mastering your music to as competitive a level as made necessary by your given genre, while maintaining the required dynamic range demanded by the loudness potential of your production, would seem the best approach to cover all necessary bases (both normalized and unnormalized); and if streaming media platforms delivered music via a lossless format such as WAV, that would, indeed, be the case.
Unfortunately, streaming platforms rely on lossy codecs (eg. OGG) which are far more susceptible to distortion than their lossless counterparts.
This phenomenon is known as ‘downstream clipping’ and occurs because of the way in which lossy codecs reduce file size; which is done by removing audio information, based upon psychoacoustic principles that model the limitations of human hearing; or rather, the limitations in the way in which our brains are able to perceive and distinguish between different sonic elements.
Put simply, there are only so many frequencies our brains can actively perceive at any one time; the remaining notes are, effectively, masked and therefore, considered by the psychoacoustic model to be disposable. The more aggressively file size compression is applied, the more aggressively audio information will be removed; and, in turn, the more ‘downstream clipping’ is likely to be a factor.
Although ‘downstream clipping’ can vary greatly in severity depending upon the content being converted, tracks with higher average loudness are much more likely to incur problems and, in turn, require greater headroom* to safeguard against them.
*Headroom refers to the difference between the loudest peak in a piece of music and the maximum output of the system through which it is being played (on digital systems, typically 0dB).
This creates a ‘double whammy’ effect: the louder you push your music, the greater the headroom you are obliged to work to, resulting in an even greater loss of dynamics. Not only this, but the increase in headroom paired with a high target loudness will likely result in limiters being pushed too hard, creating artifacts and thereby canceling out any benefit derived from the increased headroom (as it relates to reducing ‘downstream clipping’). Given that simply reducing headroom is counterproductive, and although other techniques can be applied to reduce the severity of ‘downstream clipping’, ultimately, reducing overall loudness is often necessary.
The use of auditioning applications, such as Nugen Audio's Mastercheck*, is essential for properly safeguarding against downstream clipping; while, at the same time, maximizing loudness and quality.
*Nugen Audio's Mastercheck: https://nugenaudio.com/mastercheck/
A Quick Word about the Generation of CD Mastering Engineers
It can be argued that a lot of skill went out of mastering when CDs came in; or rather, that the bad habits of lazy mastering engineers were no longer punished/exposed by the more demanding media of radio and vinyl.
Artists should be wary of working with this particular brand of mastering engineer, as they have been slow to adapt to the requirements of streaming media and, in all likelihood, lack the fundamental skills and/or work ethic to keep pace with the changing technology.
Part 6: What the LUFS is Going On?
The question of target loudness is expansive and encompasses many subjects that are both highly technical and hard to grasp (given their abstract nature). Let’s take stock of the various factors, and their implications, explored within this article.
The act of Mastering is often described as one characterized by compromise. It would be false to believe that Mastering is about making a track ‘perfect’. Mastering is, in fact, about successfully balancing the, often polar opposite, demands of both the fundamental musical form of the track itself and those of the relevant distribution channels; ranging from the technical to those dictated by market realities.
When it comes to target loudness, the following factors should be considered: the loudness of similar (competing) tracks already in the marketplace, and the range of their dynamics; the inherent loudness potential of the track itself; and the prevention, or at least reduction, of any downstream clipping (once the track has been converted to the various lossy codecs employed by the streaming platforms).
The Importance of Reference Tracks
Reference tracks play a vital part in determining the target loudness for your music; and although selecting a specific reference track can often be problematic for Artists, it is important that you (or your mastering engineer) do so. Working without a reference track is to work without any meaningful or relevant context; mastering without one is like throwing a dart while blindfolded.
When finding reference tracks, it is often best to select a target playlist (and in this regard, Spotify can be extremely useful); once selected, an application such as ‘Sort Your Music*’ can be used to reorder the chosen playlist, according to popularity. Starting with the most popular tracks first, find one or two tracks that have a similar loudness potential to your own.
*Sort Your Music App: http://sortyourmusic.playlistmachinery.com/index.html
It is important to select reference tracks of a similar loudness potential; failure to do so risks targeting loudness levels your track was simply not built to achieve.
Hands-On Solo and the Rise of the Machines
Advances in AI and plugins, along with the wealth of knowledge (or lack thereof) available on YouTube, now make ‘Home Mastering’ a real possibility for Artists; and auto-master services, such as Landr and eMastered, provide masters at extremely cheap rates with extremely fast turnarounds. All of which, on the face of it, argues strongly against the expense of engaging a professional mastering engineer. And in truth, depending on your budget and your motivations/expectations for making music, you may well find that home mastering or auto-mastering is sufficient for your needs and circumstances.
As we have found during the course of this article, target loudness is a complex subject; and it is just one of many subjects that professional mastering engineers must be familiar with, in order to do their job properly. Mastering is a highly specialist skill set, best applied by those who have chosen to specialize in it.
It is no more possible for an Artist to properly master their own tracks to a high professional standard than it is for a mastering engineer to learn the nuances of songwriting with any real success.
A Quick Word about Mix/Master
The same argument can be made against the growing trend for mix/master. And although the tools are the same, their application differs greatly; so too, the fundamental approach and ‘way of hearing’. And there are only so many hours in any given day. Time spent on mixing is time not spent on mastering.
Artists looking to engage a separate mix engineer and mastering engineer should be sure to engage specialists in each regard; a combined mix/master engineer, on the other hand, is typically best left to do all the work themselves.
Part 7: There is no Number
There is clearly no ‘one size fits all’ target loudness, nor is there a ‘one size fits all’ solution to who (or what) should master your music. For Artists with little or no commercial expectations for their music, and instead compelled purely by a need for self-expression, home mastering or auto-mastering may well be the best option. For those choosing to master their own music, I strongly recommend sticking to a target loudness of between -20 and -16 LUFS with a true peak of between -1 and -3 dB*, and, in doing so, ignoring the loudness level of your reference track(s). I would, however, pay attention to dynamics and try to match them as closely as feels right for your music.
*In the absence of proper auditioning, LUFS values of between -20 and -16 require a target headroom of up to -3 dB to safeguard against downstream clipping when converting to lossy codecs (this, of course, can be ignored if an auditioning application is employed).
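Those self-mastering targets can be sanity-checked with a rough meter. This sketch approximates loudness with a plain (unweighted, ungated) mean-square measurement and uses sample peak rather than true peak, so treat it as a coarse guide only, not a substitute for a proper LUFS/true-peak meter:

```python
import math

def check_self_master(samples):
    """Rough check against the suggested self-mastering targets:
    integrated loudness between -20 and -16 LUFS, peak no higher than -1 dB.
    Loudness is approximated without K-weighting or gating, and peak is
    sample peak rather than true peak, so this is only a coarse guide."""
    mean_square = sum(s * s for s in samples) / len(samples)
    loudness = -0.691 + 10 * math.log10(mean_square)
    peak = 20 * math.log10(max(abs(s) for s in samples))
    return {"loudness_ok": -20.0 <= loudness <= -16.0, "peak_ok": peak <= -1.0}

# A quiet sine (~-17.7 LUFS under this model, peak ~-14 dB) passes both checks.
quiet_mix = [0.2 * math.sin(2 * math.pi * 5 * k / 1000) for k in range(1000)]
print(check_self_master(quiet_mix))  # {'loudness_ok': True, 'peak_ok': True}
```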
For artists with greater (or grander) ambitions, engaging a professional mastering engineer is, in my opinion, the best option.
With commercial ambition for your music comes the necessity of commercial target loudness; until loudness normalization is implemented across all platforms and distribution channels, at a standardized level, ‘the pressure to push it’ will continue to walk hand in hand with commercially competitive releases. And while any monkey with a limiter can make something louder, preserving dynamics and fidelity while increasing loudness to competitive levels demands a specific and particular set of skills.
Make Up Your Own Mind
This article is meant as an introduction and summarization of the various factors that influence the target loudness of a particular track. Readers are encouraged to follow up with the recommended further reading included at the end of the article; and ultimately, make up their own minds as to what is right for their music.
Finally, and for the last time hopefully, anyone peddling a ‘one size fits all’ answer is quite simply wrong and should be politely ignored.
EBU Page on Loudness: https://tech.ebu.ch/loudness
ITU Standard for Loudness: https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.pdf
AES Recommendations for Loudness: http://www.aes.org/technical/documents/AESTD1004_1_15_10.pdf
BBC Standards for Radio: http://downloads.bbc.co.uk/radio/commissioning/TechnicalSpecificationRadio.pdf
Sound On Sound 'What Data Compression Does To Your Music': https://www.soundonsound.com/techniques/what-data-compression-does-your-music
Ian Shepherd's 'Mastering for Spotify? NO! (or: Streaming playback levels are NOT targets)' http://productionadvice.co.uk/no-lufs-targets/
Spotify FAQ for Mastering: https://artists.spotify.com/faq/mastering-and-loudness
---- This article has been amended to more accurately represent Ian Shepherd's approach to loudness. At the time of 'fact-checking' the article, Ian Shepherd's Production Advice website was unavailable. And though this is no excuse, it meant I had to work from memory when detailing his approach. I certainly had no intention of misrepresenting his approach. I take 'fact-checking' my writing very seriously and can only apologize to Ian Shepherd for the error.