Audio effects are at the core of what we do as music producers. They allow us to turn a group of raw, unprocessed recordings into a musical statement that is ready to be blasted from a radio, soundtrack the latest video game or rock the main stage at a festival.
As producers, we use audio effects to shape sound. Used correctly they can sculpt individual audio tracks into something hugely powerful, or spine-tinglingly gentle. More than that, we can use them to put our own creative stamp on a song – they can help us to develop our own unique musical voice.
In this article, we will explain what audio effects are, why we need them and what they are used for. We’ll guide you through the most common types of audio effects, explaining how you can use them to enhance your music. We’ll also explain how to hear audio effects in other people’s recordings – a powerful tool that you can use to analyse exactly how your favourite recordings were created.
Lee DeCarlo, who has worked with John Lennon and Black Sabbath, explains, “Effects are makeup. It’s cosmetic surgery. I can take a very great song, by a very great band, and mix it with no effects on it at all and it’ll sound good, but I can take the same song and mix it with effects and it’ll sound fantastic! That’s what effects are for”.
To be slightly more technical for a moment, audio effects are hardware or software devices that change how an audio signal sounds. They manipulate the signal by putting it through various processors, and this can be done live or in the studio. In live situations guitarists often put their instruments through effects pedals, while sound engineers will add effects to instrument and vocal signals. In the studio, you might use outboard equipment to manipulate your audio or you might use plugins inside your DAW.
There are two broad motivations for using audio effects on a track: technical and creative. When we use effects for technical reasons it’s because we want to do things like making musical performances more consistent, making our mixes sound bigger and more powerful, or making sure that all of the instruments in a mix are audible.
If we use effects for creative reasons it’s because we want an audience to feel something. Whether we want to create a sense of mystery with reverb and delay or create a thumping riff with distortion and compression that makes people get up on their feet, we are normally trying to accentuate and heighten things that already exist in the music. Effects allow us to do this.
There are so many different kinds of effects that it can seem a little overwhelming at first. To help you understand what these different effects are and how you might use them, we’ve first broken them down into broad categories, before doing a deep dive on the five audio effects that are most commonly used.
Time-based effects extend your audio through time; this category includes reverb, delay, and echo. These are sometimes also known as spatial effects, as they can give the impression that your audio is being played back in different physical spaces.
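To make the idea of a time-based effect concrete, here is a minimal sketch of a feedback delay – the building block of echo effects – written in Python. The function and parameter names (`feedback_delay`, `delay_samples` and so on) are our own illustrative choices, not the API of any real plugin:

```python
def feedback_delay(signal, delay_samples, feedback, repeats):
    """Mix `repeats` progressively quieter echoes into the dry signal."""
    out = list(signal)
    # Pad the output so the late echoes have room to land.
    out += [0.0] * (delay_samples * repeats)
    for n, sample in enumerate(signal):
        gain = feedback
        for r in range(1, repeats + 1):
            out[n + r * delay_samples] += sample * gain
            gain *= feedback  # each repeat is quieter than the last
    return out
```

With an impulse as input, each repeat appears `delay_samples` later and at half the previous level when `feedback` is 0.5.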
Modulation effects include chorus, tremolo, flanging, and phasing. All of these effects are created by modulating the source signal in one way or another, and they can all be used to add a sense of movement to your audio. Phasing cancels different frequencies on a treated signal that is then mixed with the source signal to create a relatively subtle effect. Both flanging and chorus work by adding a delayed version of the source signal to itself; flanging is when this delay is less than 5 milliseconds, while chorus is when the delay sits between 5 and 25 milliseconds. Tremolo is created by modulating the amplitude of a signal; this makes the level of the signal periodically rise and fall.
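As a rough illustration of the descriptions above, tremolo (amplitude modulation) and flanging (a short delay mixed back in) can each be sketched in a few lines of Python – the names and default values here are illustrative assumptions, not a real plugin interface:

```python
import math

def tremolo(signal, sample_rate, lfo_hz=5.0, depth=0.5):
    """Modulate the signal's amplitude with a slow sine-wave LFO."""
    out = []
    for n, x in enumerate(signal):
        lfo = math.sin(2 * math.pi * lfo_hz * n / sample_rate)
        # Map the LFO from [-1, 1] to a gain between (1 - depth) and 1.0.
        gain = 1.0 - depth * (0.5 + 0.5 * lfo)
        out.append(x * gain)
    return out

def flanger(signal, sample_rate, delay_ms=3.0, mix=0.5):
    """Add a copy delayed by under 5 ms (the flanging range) to the dry signal."""
    d = int(sample_rate * delay_ms / 1000.0)
    return [x + mix * (signal[n - d] if n >= d else 0.0)
            for n, x in enumerate(signal)]
```

A real flanger would also sweep the delay time with an LFO; this static version is closer to a simple comb filter, but it shows where the delayed copy comes from.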
The common spectral effects are panning and EQ. Panning allows us to move sounds across the stereo field from left to right. EQ allows us to selectively cut or boost certain frequency bands of our source signal. These effects help us to separate different instruments out from each other in our mixes so that we can hear everything clearly, and they have creative applications too.
Dynamic effects include compression and distortion. These effects make changes to the dynamics or volume level of a source signal – although they often color the signal in other ways too. More on this below.
Filters are related to EQs in that they allow us to remove certain frequencies from our source signal. They are a specific type of EQ, however, in that they tend to remove all frequencies above or below a certain point. This filtering can be relatively subtle or it can be extremely dramatic.
Filters can be used for creative purposes – a dramatic filter sweep is often used before the drop in EDM, for example. Filters have plenty of technical applications too: they are useful for getting rid of unwanted frequencies that clutter up our mixes, such as mic stand rumble on a vocal recording, or high-frequency hiss from a guitar amp.
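A low-pass filter of the kind described above can be sketched with a standard one-pole smoothing design – the function and parameter names are illustrative:

```python
import math

def one_pole_lowpass(signal, sample_rate, cutoff_hz):
    """Remove energy above cutoff_hz; lower cutoffs sound darker."""
    # Standard one-pole smoothing coefficient derived from the cutoff.
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += a * (x - y)  # each output moves a fraction toward the input
        out.append(y)
    return out
```

Sweep `cutoff_hz` down and the signal gets progressively darker; a high-pass filter works the same way in reverse, keeping the fast changes and discarding the slow ones.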
The five effects below will probably make an appearance on most of your mixes. We’ve therefore covered them in more detail so that you really understand what they are used for. We’ve also explained how you can hear these effects in other people’s mixes so that you can study how they have been used on your favorite recordings – the best way of learning!
Equalization (or EQ) is absolutely vital when it comes to creating a good mix. Although what it does is quite simple, mastering the skill of EQing can take some time. You can use EQ to boost or cut certain frequencies in your audio signal – think of this as ‘sculpting’ the sound.
Humans can hear sounds between 20Hz and 20,000Hz (20kHz); all the music that you make will therefore fall somewhere into this range of frequencies. Some instruments sit in the same places on this frequency spectrum and if you play them back together the result will be cluttered – they will get in the way of each other and you won’t be able to hear either instrument clearly. A kick drum and hi-hat sit at very different points on the frequency spectrum so if you were for some reason working on a mix that featured only a kick drum and hi-hats you might not need to reach for the EQ at all!
However, a kick drum and a bass guitar sit at very similar points on the frequency spectrum. A lead synth and a lead vocal will also occupy some very similar frequency zones. This is why EQing is important: when you combine multiple instruments that are prominent in the same areas, you can sculpt their sounds so that they get out of the way of each other. This leads to a better-balanced mix with much greater clarity, allowing us to hear each element clearly.
We can also use EQs for creative purposes. Perhaps we want our strings to sound richer, or our snare to sound thinner. We can do this simply because we like the sound of it, or because we are working in a genre that calls for this kind of audio quality.
Changes in EQ can be very subtle or very dramatic, and detecting small changes in EQ can take some practice. Cutting high frequencies will make a sound darker while cutting low frequencies will make a sound thinner. Cutting mid frequencies can make a sound more hollow while boosting them can make it more resonant – but all of this depends on the kind of material that is being EQ’d as well.
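To show what ‘boosting or cutting certain frequencies’ means in practice, here is a single peaking EQ band implemented with the widely published ‘Audio EQ Cookbook’ biquad formulas – the function and parameter names are our own illustrative choices:

```python
import math

def peaking_eq(signal, sample_rate, freq_hz, gain_db, q=1.0):
    """Boost (positive gain_db) or cut (negative) a band around freq_hz."""
    # Biquad coefficients from the "Audio EQ Cookbook" peaking-EQ design.
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A
    a0, a1, a2 = 1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        # Direct-form difference equation, normalised by a0.
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

A full EQ plugin is essentially several of these bands in series, each with its own frequency, gain, and bandwidth.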
We use panning to move elements around in the stereo mix. Panning works by turning a sound down in one or the other of the speakers. As you turn audio down in the right speaker, it sounds like it’s swinging to the left, and vice versa. We can use panning to position audio right across the stereo field. This is a really good way of decluttering a mix. By panning guitars and synths out wide, you can leave lots of extra room in the center of the mix for your vocals, allowing them to shine through clearly. A good mix will generally sound nice and wide, while also being well balanced – the instruments on the left-hand side are matched by different instruments on the right.
Panning can also be used creatively – this effect can be automated so that a sound source sounds like it’s moving. You can add real excitement to a mix in this way.
If a sound appears to be coming from your left speaker, then it is ‘hard-panned’ to that side. If it sounds like it’s coming from your right speaker, it’s ‘hard-panned’ to that side. If the sound appears to be coming from the exact mid-point between the two speakers, then it is panned centrally – this means that it has the same volume level in each of the two speakers. Of course, it is possible to pan a sound anywhere between these three points, and with practice, you will be able to pinpoint where a sound is panned very accurately within the stereo field.
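One common way panning is implemented is with a ‘constant-power’ pan law, where the left and right gains follow a cosine/sine curve so the overall loudness stays roughly steady as a source moves across the field. A sketch with illustrative names (`pos` runs from -1.0, hard left, to +1.0, hard right):

```python
import math

def pan_gains(pos):
    """Return (left_gain, right_gain) for a constant-power pan position."""
    angle = (pos + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

def pan(signal, pos):
    """Turn a mono signal into (left, right) sample pairs."""
    gl, gr = pan_gains(pos)
    return [(x * gl, x * gr) for x in signal]
```

At every position the two gains satisfy left² + right² = 1, which is what keeps the perceived level steady during a pan sweep.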
Short for ‘reverberation’, reverb is an effect designed to mimic the sonic reflections that occur when you make a sound in a real three-dimensional space. This effect allows us to make it sound like our music is playing back in a cathedral, or in a tiny basement room, or in a completely fantastical and impossible space.
Reverb is used to make dry, studio recordings sound more realistic. It can also blend elements together – imagine you have a song that is a combination of recordings you made at home, along with samples you downloaded, and software synthesisers. By applying the same reverb to all of these disparate elements, you can make it sound like they are all being played in the same room at the same time. This adds a real cohesion to the music.
Reverb also has plenty of creative uses – a voice that is drenched in reverb has a very different effect on us than one that is bone dry. Think about how the levels of reverb that you set can give your song the feeling that suits it best. You can also use reverb to pick out specific moments to draw attention to them – a single snare hit, or a single word for example.
What space do the instruments sound like they are playing in? Does the room sound big or small? Does it sound like it’s made of a reflective surface like concrete or stone – or a less reflective surface like wood? Bring your real-world experience to bear – we listen to sounds in rooms all the time! When you listen to the music, do you feel like you’re in a church or do you feel like you’re in a theatre? There are other types of reverb plugins such as spring and plate emulations. Instead of mimicking real spaces, these mimic old reverb hardware and they tend to have pretty distinctive sounds. Try them out on your own music and over time you will learn to recognise them by listening.
There are other things you can listen out for too. How long is the reverb tail (how long does it take for the sound of the reverb to completely disappear)? What is the mix like between the dry and the reverberant signals? You can judge this by listening to how close the sound source appears to be to the speakers: if it feels close to you then the reverb mix is lower in comparison to the dry signal; if the sound source appears to be further away, then the reverb mix is higher.
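Real reverbs are built from networks of delays and filters, but even a single feedback comb filter gives a crude sense of how a reverb tail and a dry/wet mix control work. This sketch uses illustrative names and stands in for far more sophisticated designs:

```python
def comb_reverb(signal, delay_samples, decay=0.5, wet=0.3):
    """Crude 'reverb': one feedback comb filter plus a dry/wet mix."""
    tail = delay_samples * 8  # room for the tail to ring out
    buf = list(signal) + [0.0] * tail
    # Feed the sound back in every `delay_samples`, quieter each bounce.
    for n in range(len(buf) - delay_samples):
        buf[n + delay_samples] += buf[n] * decay
    dry = list(signal) + [0.0] * tail
    # `wet` is the balance between the dry and reverberant signals.
    return [(1.0 - wet) * d + wet * w for d, w in zip(dry, buf)]
```

Longer `delay_samples` and higher `decay` values give a longer tail, while raising `wet` pushes the source further back in the room – exactly the listening cues described above.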
In very simple terms, compression is an automatic volume control; when a signal gets too loud, the compressor turns it down. Compressors shrink the dynamic range of an audio signal – that means that they make the gap between the loudest and quietest parts of the signal smaller than it was before.
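That ‘automatic volume control’ can be written down directly. This bare-bones sketch applies a static threshold and ratio to each sample; real compressors also smooth the gain changes with attack and release times, which we skip here, and the names are illustrative:

```python
def compress(signal, threshold=0.5, ratio=4.0):
    """Reduce the amount by which |sample| exceeds `threshold` by `ratio`."""
    out = []
    for x in signal:
        level = abs(x)
        if level > threshold:
            # Only the excess over the threshold is divided by the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0 else -level)
    return out
```

With a 4:1 ratio, a peak at 1.0 over a 0.5 threshold comes out at 0.625 – the gap between the loudest and quietest samples has shrunk, which is the ‘smaller dynamic range’ described above.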
There are a huge number of applications for this in music. Compression can make a performance sound more consistent – very important on drum or bass tracks if you want a consistent groove to underpin your song. Compression is also important on vocals – it allows a vocalist a great range of expression while keeping unpleasant jumps in volume to a minimum. A singer can whisper one line and shout the next, and a compressor will even those levels out so that the vocal sits at a consistent level in the mix.
If our various sound sources are more consistent in their levels it also helps us to mix them more easily, as we can set a level for each instrument and know that it won’t vary too much from that point during the course of the song.
Hearing compression can be a little tricky, but it is a skill that can be learned. It is hard to hear whether the dynamic range of a piece of music has been compressed, but it is more obvious on some instruments than on others. Try listening to the snare drum – are there big differences between the loudest and quietest hits? If not, then it is probably being compressed. You can also try listening to the sound of the ‘ambience’. This is the sound in between the hits, so things like hi-hat sizzle, or snare reverb tails etc. If a signal is being compressed then this ambience will become louder and more obvious.
Distortion was initially created by overdriving the circuits of analogue audio equipment. When this equipment was pushed beyond its intended limits, distortion was the result. So while this effect was actually initially created by people misusing their equipment, it very quickly became the sound of rock and roll and manufacturers started to build distortion capabilities into their hardware.
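That ‘pushed beyond its limits’ behaviour is often modelled in software as soft clipping: a drive gain pushes the signal into a saturating curve that flattens the peaks and adds harmonics. A minimal sketch (the tanh curve and the names are our illustrative choices):

```python
import math

def soft_clip(signal, drive=4.0):
    """Overdrive the signal into a tanh curve, squashing the peaks."""
    # Quiet samples pass through almost linearly (scaled by `drive`);
    # loud samples are squashed towards +/-1, like an overdriven circuit.
    return [math.tanh(drive * x) for x in signal]
```

Raising `drive` makes the effect more aggressive; hard clipping (simply chopping the waveform off at a fixed level) gives a harsher version of the same idea.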
The most obvious uses for distortion are creative – it’s normally used because it sounds cool! Maybe you want to add some aggression to a vocal or a synth, or maybe you need distortion on your guitar because you are making heavy metal music. In some genres, distortion is such a fundamental part of the sound of that kind of music, that using it is essential. There are subtle ways of using distortion that can also add something to your mixes too. Adding a small amount of analogue-modelling distortion can warm an element up. This process mimics the sound of vintage recording equipment and can give your music a pleasant retro quality; sometimes you might consider adding a little of this warmth to every single element in your mix.
Distortion can be used for technical reasons too – sometimes a little of this effect added to the top end of a mix element can bring out some of the detail. For example, try adding some distortion to only the high frequencies of your bass part – this can often add clarity.
Distortion sounds are many and varied. You can buy plugins that model guitar amp distortion, or the sound of an overdriven compressor, or that mimic bit rate distortion that occurs when you down-sample digital audio. These all sound very different from one another.
In some cases distortion can thicken the sound with harmonics, in some cases it oversaturates it so that detail is lost, while in others it can highlight specific details of the sound. The best thing you can do is to try and listen to different types of distortion as much as you can – you will then start to learn about the subtleties of what different kinds of distortion can add to a specific sound.
This article is intended as a broad overview of the different kinds of effects that are available to us as producers, with an explanation of how you might use these effects in your mixes. Ultimately, we hope this works as a jumping-off point for what is an incredibly deep subject! There is no rush to learn everything you can about effects right away, but there is a wealth of information out there when you feel like you want to know more. Each one of the effects we describe is worth taking the time to get to know – the more knowledge of this area you have, the more versatile your toolbox will be when it comes to mixing.