MIXING ARTICLES
Mixing Vocals: Sculpting the High End
By Dan Zorn
Published by Waves Audio
Source: www.waves.com/mixing-vocals-sculpting-the-high-end
The difference between a good mix and a great mix can often be the vocal sound. Learn how to properly sculpt the high end of vocals so that they’re present but not harsh, airy but smooth, poppy but not sibilant.
A great-sounding vocal separates seasoned mixers from beginners, and when mixing vocals, controlling the high end is especially important to achieving a “professional” sound. Doing this correctly can be tricky, as there is an art to making vocals present and “poppy” in a mix, yet also smooth and pleasing to the ears. Moreover, the human ear has evolved an acute sensitivity to high frequencies, so there’s not much room for error in the high register. The good news is that all of these challenges can be overcome by knowing what to listen for.
The Psychoacoustics of the High End
The range of human hearing is approximately 20 Hz to 20 kHz, and it fades as we age, starting with the highest frequencies. A teenager may be able to hear up to 19 kHz, while an elderly person may only hear up to 15 kHz, or even less. Furthermore, the focal point of our hearing, the frequency range we are most sensitive to, is roughly 3.5-4 kHz. This extra sensitivity exists because the ear canal acts as a resonator whose length corresponds to about a quarter of the wavelength of these frequencies. As a result, even at very low levels, these are the first frequencies we notice. For other frequencies in the hearing spectrum to be perceived as equally loud as those in the 3.5-4 kHz range, their physical level must be higher in dB. The Fletcher-Munson curves show exactly how much louder:
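To see roughly where that sensitivity peak comes from, you can model the ear canal as a tube closed at one end by the eardrum, which resonates at a quarter of its wavelength. A back-of-the-envelope sketch (the 2.5 cm canal length is a textbook average, not a figure from this article):

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def canal_resonance_hz(canal_length_m: float) -> float:
    """Fundamental resonance of a closed-open tube: f = c / (4 * L)."""
    return SPEED_OF_SOUND / (4.0 * canal_length_m)

# A typical adult ear canal is about 2.5 cm long:
print(round(canal_resonance_hz(0.025)))  # 3430 -- right in the sensitive band
```

The result lands within the 3.5-4 kHz region the article describes, which is why that band jumps out even at low listening levels.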
The Fletcher-Munson curves confirm that we are most sensitive to the upper mid-range, so any mix adjustments here must be made carefully. For example, on vocals, a 5 dB cut at 3.5 kHz is far more noticeable than a 5 dB cut at 100 Hz, so it is imperative to consider how differently we experience different frequencies.
Also, keep in mind that high frequencies are absorbed by air more readily than the longer wavelengths of low frequencies, so they travel shorter distances. In turn, the amount of high end in a vocal determines how far away the listener feels from the vocalist. Vocals with less high end will always sound further away, while vocals with a present high end confront the listener more closely. Hip-hop vocals, for example, should be particularly impactful and “in-your-face,” so making sure a rapper has enough top end is crucial to the musical emotion of the genre.
Different Areas of High End
High End can be a broad, loosely-defined term among those involved in music. For example, listeners may claim a vocal is bright, sibilant, or airy, so understanding the attributes and unique language used to describe high end is important to help us decide which specific area to address. Let’s dive in.
Intensity
Intensity refers to the lowest section of the high-frequency spectrum, approximately 1.5 kHz to 3 kHz. Subjective descriptions of the “intensity range” would be piercing, powerful, or just downright annoying. However, if you need vocals to cut through a mix without physically turning up the overall gain, this can be a good area to boost.
Harshness
The word harsh typically describes a shrill or cold sound; generally speaking, harshness lives in the 3-5 kHz range. Usually, this is the problematic area bothering listeners when they claim “the vocal sounds too bright,” even though “bright” can also describe the higher, airy bands.
Sibilance
Sibilance describes vocal consonants that sound too sharp or “ess-y.” Often, consonants like T, X, or S, or words containing a “sh” or “ch” sound, can be piercing. These are the main culprits, but other letters can have sibilant qualities too, depending on how they are pronounced. These quick sibilant “spikes” generally happen in the 4-10 kHz range, and can be annoying to the ears as well as interfere with other instruments. Sibilance must be tamed and controlled for a nice high end.
Air
Some mixers associate the air-bands around 8-9kHz and up with descriptions of “sheen” or “presence.” This area tends to give the listener a sense of “closeness” to the vocalist.
Polishing the High End
Let’s go over how to polish these high-end areas in a mix. Remember, for a full-sounding vocal, all of these frequency areas should be balanced with one another, and also against the music. Not only will every voice’s high end be different, but so will the accompanying instruments making up the highs of the mix (cymbals, hi-hats, etc.). If the instrumental already has a lot of high end, you may not need to add as much to the vocal, as there simply won't be room for it in the mix.
When mixing high frequencies in vocals, control is the name of the game. Traditionally speaking, controlling dynamics first, before treating problematic frequencies with EQ, tends to be an effective approach. The reason: if the vocals are very dynamic, you may react by using EQ to cut a frequency area that was overbearing in only certain parts of the performance. The sound will then be smooth when the vocals are loud, yet suddenly too dull, or hollow, in quieter sections.
There are, of course, exceptions to this rule, as many engineers will tell you: “Rules? What rules?” If you decide to brighten the vocals first, you may find it useful to control dynamics after EQ, since boosting first makes the build-ups and high-end issues easier to hear. By doing this, you may be able to dial in a compressor, multiband compressor, or de-esser more precisely.
You may even use some dynamic control both before and after EQ, which is totally fine if it sounds good. The point is to think critically about what exact problem needs to be addressed, and then create an effective, efficient chain of tools to get the job done.
Fixing Vocal Intensity
Frequency build-ups in the 1.5-3 kHz range can have numerous causes, many of which are fixable before mixing: the microphone or its placement, the preamp, the vocalist’s distance from the mic during a loud performance, or even the natural resonance of the vocalist’s voice. Regardless, even if all of the above is considered when tracking, dynamic treatment is often still needed in mixing.
A great way to tame this range dynamically (and transparently) is with a narrow-band de-esser. I like to use the Waves R-DeEsser in Split mode for this. Using the sidechain function, you can solo the frequency range you want to control, then adjust the threshold and reduction accordingly. The sidechain method works well if you know the exact frequency you want to tame, but sometimes it's easier to pull up a static EQ, like the Waves Q10, and sweep around with a medium-Q boost between 1.5 kHz and 3 kHz to pinpoint the problem frequency that jumps out the most. Then you can type that frequency into the de-esser and voilà: precise control that kicks in only when needed.
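If you prefer to let the computer do part of the sweeping, the same hunt can be sketched offline: take a spectrum of the offending section and find the hottest bin inside the 1.5-3 kHz window. A rough numpy sketch, using a synthetic signal purely to demonstrate the idea:

```python
import numpy as np

def loudest_bin_in_band(signal, sample_rate, f_lo=1500.0, f_hi=3000.0):
    """Frequency (Hz) with the most energy between f_lo and f_hi --
    a stand-in for manually sweeping an EQ boost to find the culprit."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic example: a loud 2.2 kHz tone among quieter tones
sr = 44100
t = np.arange(sr) / sr
sig = (0.2 * np.sin(2 * np.pi * 440 * t)
       + 1.0 * np.sin(2 * np.pi * 2200 * t)
       + 0.3 * np.sin(2 * np.pi * 5000 * t))
print(loudest_bin_in_band(sig, sr))  # ~2200 Hz
```

The number it returns is the one you would then type into the de-esser’s sidechain.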
Fixing Vocal Harshness
Too much reduction in the 3-5 kHz range will leave the vocals sounding dull, but overly dynamic information in this range can create an unpleasant listening experience. You have to use your ears and discretion here to identify the real problem area. Try listening to the mix at different volumes to gain perspective. Remember, our ears are most sensitive to these frequencies. Consider whether there’s just too much of this entire range, or whether there are smaller areas poking out that need more precise control.
A great all-in-one tool for controlling this range is the Waves C6, a multiband compressor that gives you six bands of individual, customizable control, ideal for maximum flexibility. If this area seems too dynamic and things tend to jump out at you, you can set one of the C6’s bands to cover 3-5 kHz, then set the threshold to attenuate only when things poke out too much. Furthermore, if you find you’ve tamed the dynamics but still notice build-up, you can always reduce the gain of the band within the C6 (effectively EQing it). The benefit of this method is that the initial compression lessens the amount of static gain you would otherwise have to remove with an EQ. Transparency is the name of the game!
Let’s say there’s still something not sitting right. Perhaps the issue is a small resonant build-up or “whistle” frequency. The C6 works great for this too, as it offers two additional side bands with the option for very narrow Qs, perfect for small whistle tones.
Fixing Vocal Sibilance
Similar to harshness, taming sibilance prior to EQ is best, so that you can turn up the high end later without these consonants jumping out.
Sibilance occurs lower in the spectrum for males than for females, due to the natural difference in the octaves male and female voices occupy. For males, 4-10 kHz is a good range to listen to, whereas female sibilance may start higher, at 5-6 kHz.
Adding a DeEsser is the classic way to reduce sibilant spikes. In addition to using narrow-band de-essing, the Waves R-DeEsser also allows you to attenuate everything above a set frequency in the form of a shelf. De-essing techniques often work, but sometimes even more control is necessary.
A great way to get rid of sibilant spikes outside of de-essing is to manually lower the volume of each sibilant peak with clip-gain edits or volume automation. This allows you to tailor the level and envelope of each sibilant spike on a word-by-word basis, since parts of a song may need more “esses” to poke through the mix, particularly against parts in instrumentals that are overly busy.
As of late, my preferred approach is to zoom in and chop the sibilant peaks onto a new track. Similar to manually lowering each sibilant peak, using a separate track allows you to process each sibilant individually, beyond just volume control. Treating sibilant peaks on a separate track, versus lowering clip-gain within one track, opens up new worlds for transparent processing: adding deeper EQ dips, de-essing, compressing or limiting harder, or even adding saturation to smear some of the transient spikes.
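If you want a starting point for where to chop, a crude detector can flag slices whose energy lives mostly above a sibilance split point. A numpy sketch with made-up test tones (real vocals would need overlapping windows and a tuned threshold, so treat this purely as an illustration):

```python
import numpy as np

def sibilant_frames(signal, sample_rate, frame_len=1024, split_hz=4000.0, ratio=1.0):
    """Flag frames whose energy above split_hz exceeds `ratio` times the
    energy below it -- a crude cue for which slices to move to the
    separate 'ess' track."""
    flags = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        high = spectrum[freqs >= split_hz].sum()
        low = spectrum[freqs < split_hz].sum()
        flags.append(bool(high > ratio * low))
    return flags

sr = 44100
t = np.arange(2048) / sr
vowel = np.sin(2 * np.pi * 200 * t)   # low-frequency "vowel" material
ess = np.sin(2 * np.pi * 7000 * t)    # high-frequency "ess" material
print(sibilant_frames(np.concatenate([vowel, ess]), sr))  # [False, False, True, True]
```

The flagged frames mark roughly where the clip-gain edits or the cut to the sibilance track would go.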
Fixing Vocal Presence
The upper “air” band is one of the more gratifying areas to work on. Unlike the other areas of the high-end spectrum, treating this area usually means highlighting it, or bringing it to life. You’ll usually find that boosting this range (above 8-9 kHz) adds pleasant sheen and definition to vocals, especially once you’ve dealt with the previous issues.
While the 9 kHz range is usually clear of sibilant spikes, certain vocalists will have frequencies that still poke through in this area. You can, of course, go back to any of the tools mentioned earlier to tame it, like a de-esser set at a higher frequency, or the top band of the C6 to suppress high transient spikes. However, another way to address this problem is to add a bit of tape saturation to “round out” the spikes of the upper register. With the Waves Kramer Master Tape, you’ll find that lowering the tape speed, raising the flux, adjusting the bias, and playing with various input and output levels can truly help smear some of those unwanted high-end transients. Then, when all is said and done, you can add further sheen with an EQ. Some of my go-to plugins for adding a nice top-end sheen to vocals are the Waves API 550 and SSL G-EQ.
Process Of Elimination
Getting the high end of vocals to sound present and poppy, yet smooth and soothing, can seem daunting at first. However, the real trick is to rely on nothing but your ears and to ask yourself questions. The more questions you ask, the more you can home in on what the real problem is and ultimately make smarter choices.
Is there too much overall high end, or is it just harsh in a narrow area?
Are the vocals too “in-your-face” across the whole performance, or in just one area?
Can I use the same tool to fix all issues, or do I need multiple methods for a more detailed approach?
Training your ears to identify specific issues does not come overnight. After all, if treating the high end of vocals was as simple as just “turning the treble down,” anyone would be able to do it.




How to Make Your Auto-Tune Vocals Sound Like the Pros
By Dan Zorn
Published by Reverb.com
Featured in ASCAP's Daily Newsletter
Source: www.reverb.com/news/how-to-make-your-auto-tune-vocals-sound-like-the-pros
What I find most fascinating about Antares Auto-Tune is that everyone and their mother knows what it is, despite the fact that it's just another digital audio plugin used in bedroom and professional studios alike. Even people who have no clue what an EQ or compressor does somehow at least know of the word "Auto-Tune" and even the general effect it has on the human voice.
But even though Auto-Tune has evolved to become this cultural phenomenon, very few artists or producers truly understand how to get it to sound like the way it sounds on major records.
In case you don't know what it is: Auto-Tune, in a nutshell, is pitch-correction software that lets the user set the key signature of a song so that the pitch of the incoming signal is corrected, in real time, to the closest note in that key. There are other pitch-correction programs that perform similar functions, such as Waves Tune, Waves Tune Real-Time, and Melodyne (which corrects pitch, but not in real time), but Auto-Tune has become the standard for real-time pitch correction.
Auto-Tune is traditionally used on vocals, although in some cases it can be used on certain instruments. For the sake of this article, we will be discussing Auto-Tune and its effect on the human voice. Listen to this early example from the "King of Auto-Tune," the one artist who did more to popularize its effect than any other: T-Pain.
T-Pain - "Buy U A Drank"
Working as a full-time engineer here at Studio 11 in Chicago, we deal with Auto-Tune on a daily basis, whether it's clients requesting that we put it on their voice, correcting pitch as a matter of course, or going for a specific creative effect. It's just a part of the arsenal we use every day, so over the years we have really gotten to know the ins and outs of the program, from its benefits to its limitations.
So let's delve further into what this software really is and can do, and in the process debunk certain myths around what the public or people who are new to Auto-Tune may think. If you were ever wondering why your Auto-Tune at home doesn't sound like the Auto-Tune you hear from your favorite artists, this is the article for you.
To set the record straight, as I get asked this a lot by clients and inquiring home producers: there really are no different "types" of Auto-Tune. Antares makes many different versions of Auto-Tune (Auto-Tune EFX, Auto-Tune Live, and Auto-Tune Pro) that have various options and different interfaces, but any of them can give you the effect you're after. Auto-Tune Pro does have a lot of cool features and updates, but you don't need "Pro" to sound pro.
I wanted to debunk this first, as some people come to me asking for "the Lil Durk Auto-Tune," or perhaps that classic "T-Pain Auto-Tune." Those effects come from the same plugin; the sound you hear depends on how the settings are configured and on the pitch of the incoming signal.
So if your Auto-Tune at home sounds different from what you hear on the radio, it's because of these factors, not because they have a magic version of Auto-Tune that works better than yours at home. You can achieve the exact same results.
In modern music Auto-Tune is really used with two different intentions. The first is to use it as a tool in a transparent manner, to correct someone's pitch. In this situation, the artist doesn't want to hear the effect work, they just want to hit the right notes. The second intent is to use it as an audible effect for the robotic vocals you can now hear all over the pop and rap charts.
If you're an artist making your own beats or trying to lay down raps over another producer's music, here are some common mistakes you should avoid.
But regardless of the intent, in order for Auto-Tune to sound its best, there are three main things that need to be set correctly.
1) The Correct Key of the Song
This is the most important part of the process, and honestly where most people fail. Bedroom producers, and even some engineers at professional studios who lack certain music theory fundamentals, have fallen into the trap of setting Auto-Tune to the wrong key. If a song is in C major, Auto-Tune will not work in D major, E major, etc., though it will work in C major's relative minor, A minor, since the two keys share the same notes. No other key will work correctly. It helps to educate yourself a bit about music theory and how to find the key of a song.
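For the curious, the core of key-based correction is simply "snap the detected pitch to the nearest note of the chosen scale." Here's a minimal sketch of that idea (simplified: real Auto-Tune also glides into the target note at the retune speed rather than jumping):

```python
import math

A4 = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of C major (same notes as A minor)

def snap_to_key(freq_hz, scale=C_MAJOR):
    """Return (note name, frequency) of the in-scale note nearest freq_hz."""
    midi = 69 + 12 * math.log2(freq_hz / A4)  # fractional MIDI note number
    candidates = [n for n in range(round(midi) - 6, round(midi) + 7) if n % 12 in scale]
    nearest = min(candidates, key=lambda n: abs(n - midi))
    return NOTE_NAMES[nearest % 12], A4 * 2 ** ((nearest - 69) / 12)

# A singer lands at 451 Hz -- sharp of A4 (440 Hz), but still closest to it:
print(snap_to_key(451.0))  # ('A', 440.0)
```

Note that the scale set for C major and A minor is identical, which is exactly why either key setting works for the same song.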
2) The Input Type
You have the option to choose from Bass Instrument, Instrument, Low Male, Alto/Tenor, and Soprano. Bass Instrument and Instrument are, of course, for instruments, so ignore them if you're going for a vocal effect. Low Male is for a singer singing in a very low octave (think Barry White), Alto/Tenor covers the most common vocal ranges, and Soprano is for very high-pitched vocalists. Setting the input type correctly helps Auto-Tune narrow down which octaves to focus on, so you'll get a more accurate result.
3) Retune Speed
This knob, while important, really depends on the pitch of the input source, which I will discuss next. Generally speaking, the faster the retune speed, the more quickly each note is pulled to pitch. A slower speed makes the effect more relaxed, letting some natural vibrato through without correcting the vocalist's pitch as quickly. Some view it as an "amount of Auto-Tune" knob, which isn't technically true: the amount of correction you hear is based on the original pitch, but you will hear more of the Auto-Tune effect the faster it's set.
So let's say you have all of these set correctly. You have the right key, you've chosen the right range for the singer, and the retune speed is at its medium default of 20 ms. You apply it to the singer expecting it to come out just like the pros. And while the voice does seem somewhat corrected, it's still not quite landing on the right pitches.
Here's why your Auto-Tune doesn't sound like the pros:
The pitch of the vocalist prior to Auto-Tune processing must be close enough to a note in the scale of the key of the song for Auto-Tune to work its best. In other words, the singer has to be at least near the right note for it to sound pleasing to the ears.
Whether you're going for a natural correction or the T-Pain warble, this point still stands. If the note the singer originally sings is nowhere near the correct note in the key, Auto-Tune will try to calculate as best it can and round up or down, depending on what note is closest. And that's when you get undesirable artifacts and hear notes you weren't expecting to hear. (Here is an example of how it sounds when the incoming pitch isn't close enough to the scale, resulting in an oddly corrected pitch.)
So if you put Auto-Tune on a voice and some areas sound good, some sound too robotic and a bit off, those are the areas that the singer needs to work on. Sometimes it can be difficult for non-singers to hear slight sharp or flat notes, or notes that aren't in the scale of the song, so Auto-Tune in many cases can actually help point out the problem areas.
This is why major artists who use Auto-Tune sound really good: chances are they can sing pretty well before Auto-Tune is even applied. The Weeknd is a great example. He is obviously a very talented singer who has no problem hitting notes, and yet his go-to mixer, Illangelo, has said that he always uses at least a little bit of Auto-Tune on the vocals.
If you or the singer in your studio is no Weeknd, you can correct the pitch manually beforehand with a program like Melodyne, or even with built-in pitch correction tools in your DAW, where you can actually go in and change the pitch of each syllable manually. So if you find yourself in a situation where you or an artist you are working with really want Auto-Tune on their vocals, but it's not sounding right after following all the steps, look into correcting the pitch before you run it through Auto-Tune.
If you get the notes closer to the scale, you'll find Auto-Tune's correction much more pleasing to the ears. For good reason, T-Pain is brought up a lot when discussing Auto-Tune. Do you want to know why he sounds so good? It's not a special Auto-Tune he's using; it's because he can really sing without it. Check it out:
Hopefully this helps further your understanding and use of Antares Auto-Tune, and debunks some of the myths around it. Spend some time learning basic music theory to train your ear to identify the keys of songs and hear which notes are flat and which are sharp. Once you do, you'll find you'll want to use Auto-Tune on every song, because let's face it: nearly a decade after Jay-Z declared the death of Auto-Tune on "D.O.A.," it still sounds cool.

How to Mix Rap Vocals: Basic Techniques and Effects
By Dan Zorn
Published by Reverb.com
If you search online for "how to mix rap vocals," you will be scrolling through pages and pages of people telling you all different things. That’s because everyone has a different way to do it. Some are more accurate, some not—you just have to take them with a grain of salt and try them for yourself.
I'm an engineer at Studio 11 here in Chicago, a studio that has specialized in hip-hop and rap for more than 20 years (having worked with a young Kanye West, Lupe Fiasco, Crucial Conflict, Lil Durk, and more), and we pretty much have it down to a science.
Here are just some general rules of thumb when mixing rap vocals that you can try for yourself—coming straight from the horse’s mouth.
Make Room in the Beat
You may ask, "But what does the beat have to do with mixing rap vocals?" Everything!
If you are working with vocals on top of an instrumental, the beat will come in one of two main forms: either as tracked-out stems or as a single stereo file (a WAV or MP3). Of course, having the stems (separated kick, 808, bass, clap, hi-hats, etc.) allows greater control over the music. But either way, one thing remains the same no matter which format you're working with: make room for your vocals.
If I do have the stems, I generally like to mix the entire beat before adding the vocals. Some people like to start with drums, bass, and vocals—that's cool too. But for me, I always end up changing the mix of these instruments once I add the other elements of the beat anyway. While I’m doing this, I keep in mind that the vocal is going to need some of the space that’s occupied by the other instruments. The goal is to keep space open in the low-mids and high frequencies so that the voice has room to sit within the beat, not on top.
If all you have is a stereo instrumental track (what used to be called a "two-track"), it will be harder to make room for the vocals without affecting a lot of instruments within the beat itself that maybe you don’t want to touch. But you can still use an EQ to carve out some spots that interfere with the vocal’s body and articulation.
The body of a vocal sound, where the warmth and weight reside, will be found in the low-mids (about 300 Hz to 600 Hz). Some rappers have a deep voice, some a higher voice, so exactly where you should dip the beat’s frequencies will change. A plugin like the FabFilter Pro-Q works great for hearing and visualizing these areas.
A rapper’s bite and the clarity of syllables come from transients and other high-frequency information. Again, this will depend on the beat and the vocalist, but generally speaking, a dip somewhere around 2.5 kHz to 7 kHz is a good starting point. Use your ears: if loud hi-hats at 7 kHz or claps at 2.5 kHz are stepping on the words, scoop around those frequencies. Sometimes you may need to go even lower, around 1 kHz to 1.5 kHz, if there is a piano or another instrument mixed too loud.
Bring Compression to the Vocals
I like to compress hard, but fool the ear into thinking I’m not. You’ll see people on YouTube compressing vocals very lightly because their college professor told them that was the correct way. While over-compressing can be an issue, don’t be afraid to see the meters moving: rap vocals are very percussive and dynamic, so you need to get those peaks under control.
Listen to the vocals without any compression first. Find any words or lines that are noticeably louder than the rest. Spend some time just moving the gain levels down during those spikes.
Once you even out the vocals, bring in the compressor. There are many compressors that can do the job, but my personal go-to compressor for controlling vocal dynamics is the Waves Renaissance Compressor. It’s not super fancy, but it has a unique color, especially once you start turning up the makeup gain.
Start with a low compression ratio, about 1.5:1 to 2:1, and dial back the threshold. On the compressor’s input meter, you’ll generally want the needle hovering around the peaks of the transients, sometimes right on the line or a little below if you're trying to compress a little harder (which is good for really thin or harsh voices that need less high end and more warmth). Adjust the ratio to taste.
If a rapper has a very percussive delivery, with lots of hard consonants, lower the threshold and use a slightly higher ratio (say, 4:1) to clamp down those harsh transients. If an artist’s volume fluctuates gently up and down, a lower ratio and lower threshold will smooth out the delivery.
Next, play with the attack if the vocals are sounding a little dull after compression—try to see if you can get those initial transients to poke through, then have the compressor clamp down on the decay of the words.
Since rap is fairly quick, you generally want the compressor to release before the next word hits, so keep the release time pretty short. Otherwise, the transients of the next syllable won’t come through, resulting in a duller/softer sound. I like to measure out the time of a 32nd note in Pro Tools and enter that in for a starting point, then move to taste. As you shorten the release even further, you'll start to hear more high-end information come through, and a vocal will begin to have more teeth.
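The 32nd-note starting point is easy to compute by hand instead of measuring it in the DAW: a quarter note lasts 60,000 / BPM milliseconds, and a 32nd note is an eighth of that. A quick sketch:

```python
def note_length_ms(bpm: float, division: int) -> float:
    """Duration in ms of one 1/division note; a quarter note (1/4) is 60000/bpm."""
    return (60000.0 / bpm) * (4.0 / division)

# At 140 BPM, a 32nd-note release lands around 54 ms:
print(round(note_length_ms(140, 32), 1))  # 53.6
```

From there, shorten or lengthen to taste as the article describes; the same function gives 1/8 and 1/16 note values for tempo-synced delays later on.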
"De-Ess" To Eliminate Hiss
Don’t be afraid to de-ess fairly aggressively. Getting rid of sibilant frequencies, the hard hissing sounds that hurt our ears, is imperative to making a vocal sound smooth. Sometimes I’ll put two or even three de-essers in the chain. Most standard de-essers work here; I use the stock Digidesign de-esser, the Waves DeEsser, or the Waves Renaissance DeEsser.
The reason you want to de-ess hard, aside from getting rid of those sibilant spikes, is because nine times out of ten, you’re going to be adding some sort of high-shelf with the EQ to give the vocals more presence. If you don’t de-ess, it’ll be harder to get the vocals upfront in the mix without hurting the ears.
Get Rid of Unwanted Frequency Buildups
There is a whole debate of whether to EQ first or compress first, but it’s honestly whichever sounds better to you. I like to do a little before and a little after, as the compressor will change the frequency response of the vocal anyway.
Start off with wider scoops to get rid of any glaring frequency build-ups. I use quite a few EQs for this: Waves Renaissance EQ, Console 1 by Softube, and FabFilter Pro-Q, to name a few, but even the stock EQs that come with your DAW will work just fine.
Generally, I start by rolling off some low end (about 130 Hz to 275 Hz), but not so much that I lose the voice’s warmth and fullness. Dip some low-mid "boxiness" around 500 Hz and some harsh sibilance around 3 kHz to 5 kHz. Then add a high shelf to taste. (Sometimes frequencies will also build up around 200 Hz to 300 Hz and around 1.6 kHz to 2.9 kHz, so pay attention to these areas as well.)
After creating a general EQ curve, listen for resonances in the voice. These typically sound like a whistle or a single constant frequency poking through the speakers, and they often remain consistent throughout a vocal take.
To find them, boost a band of EQ with a fairly narrow Q (not too narrow, or everything will sound like it's resonating) and a moderate amount of gain, then sweep. When you hit an area where one frequency sounds significantly louder than those around it, dip it down to taste. I like to use the Waves Q series, as the Q can get pretty tight and, to my ears, the EQ is very precise, perfect for small notches.
To finish, listen to the vocal against the music and see if it needs a boost anywhere. If the vocal seems a bit buried, a boost around 1 kHz to 1.5 kHz will often help it come through. While I prefer accurate, uncolored EQs for cutting frequencies, I like a bit more color (an analog unit or analog-emulating plugin) when boosting, to add some harmonic excitement.
Add Body and Bite with Saturation
Having a dedicated unit or plugin for saturation works great on vocals (especially really thin-sounding ones). Depending on how you set it, saturation can add thickness and body, bring a bit more bite to the top end, or even smooth the top end out.
I'm forever in search of the perfect saturation tool, but I typically use the Digidesign Lo-Fi, the Waves Renaissance Axx (a compressor, but with a lot of grit), Waves Kramer Tape, and my current favorite for vocals: the overdrive on the Softube Console 1.
Rap vocals don't always need saturation, as sometimes an MC will be coming in hot on the mic and the preamp will naturally overdrive a bit, but if you have a recording with a dull/weak vocal that just needs a little dirty analog vibe, look no further than saturation. You’ll be surprised how far it will take you.
Use a Small Amount of Reverb
Reverb in rap music is actually a little controversial. Historically speaking (and with some exceptions), rap never really used it until fairly recently. Rap vocals have always been an expression of rhythm, and a reverb's decay tends to step on the natural percussiveness and immediacy of an artist’s performance, negating the "in your face" intensity that rap music often aims to achieve.
Having said this, on a lot of modern rap records, thanks to the accessibility of these tools and the desire to create new types of vocal sounds, reverb is often used at some point. When I use reverb on a lead verse, it's usually a very small amount, just enough that the listener can feel it more than hear it. Sometimes the wet/dry mix is at just 1%.
I hear it more as a "glue" tool than an intentional effect, as a small amount of reverb does tend to make the vocal sit in the mix a little better to my ears. It also takes me out of the zone of hearing a purely dry vocal that was recorded in a dead-sounding booth, which can seem a little unnatural to our ears. Play around with different reverbs, and see which ones work the best for you.
Add Excitement with Delay
Once, when one of my favorite engineers, Andrew Scheps, was asked what his favorite reverb was, he quickly responded, "Delay." While the effect can be used in different ways (and can be tricky to master), what Andrew is saying here is that using delay can be a way to make a lead vocal sound wet or pushed back in a mix, without actually using reverb. (As I mentioned above, reverb can murk up a rapper's performance.)
Experiment with tempo-synced delays. My favorite at the moment is Waves H-Delay, but stock plugins work well too. If set tastefully, a subtle 1/8 note or even 1/16 note delay on a lead vocal can really help fill in those empty spaces, without having the delay step on the unaffected vocals.
When using delay, I like to create an auxiliary channel for a couple of reasons. The first is that any delay I add won't affect the original lead. The second is that I can send multiple channels to the same delay without hogging CPU. Lastly, using an aux channel lets me process the actual echo differently: after sending a vocal to the aux channel, I can process it with the delay and other plugins to create a more musical echo.
By adding an EQ here and rolling off low and high frequencies, the echo won’t interfere with the body or articulation of the vocals. (One reason I love H-Delay is because it has these EQ filters built right in.)
Another way to make the delay duck out of the way of the vocals is to add a compressor after it, and side-chain the lead vocal to the compressor’s input. This way, whenever the main vocal is active in the mix, the delay is compressed and lower in volume, but as soon as a vocal cuts out, the echo from the delay creeps back in.
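As a rough illustration of that side-chain behavior, here is a toy Python sketch (not any plugin's actual algorithm): a crude peak follower on the lead vocal drives gain reduction on the delay return, so the echo only comes back up when the vocal goes quiet.

```python
# Toy sketch of side-chain ducking: the lead vocal's envelope pushes the
# delay return's gain down, so echoes only bloom in the gaps between words.
# (Illustrative only -- real compressors use smoothed attack/release.)

def envelope(signal, release=0.5):
    """Crude peak follower: tracks |x|, decaying by `release` per sample."""
    env, out = 0.0, []
    for x in signal:
        env = max(abs(x), env * release)
        out.append(env)
    return out

def duck(delay_return, sidechain, threshold=0.2, depth=0.8):
    """Attenuate the delay return by `depth` while the sidechain is hot."""
    ducked = []
    for d, env in zip(delay_return, envelope(sidechain)):
        gain = 1.0 - depth if env > threshold else 1.0
        ducked.append(d * gain)
    return ducked

vocal = [0.9, 0.9, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0]  # vocal stops halfway
echo = [0.5] * 8                                   # steady delay return
print(duck(echo, vocal))  # ducked while the vocal is hot, back up after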
Process Additional Vocals
In-and-outs—also known as dubs or stacks—are recorded after the lead is already laid down, when the artist adds a second vocal to certain parts of the verse. These will add emphasis to punch lines and key phrases (usually at the ends of each bar) or help to bring clarity to syllables that may not have been fully pronounced the first time around. Because a rapper will be able to take breaths in between lines, they can really make sure that they nail the parts.
The doubled lines can also help give the verse some sort of movement. Instead of a single vocal track, which can be relatively stagnant in volume (especially after compression), it adds a sort of back-and-forth dynamic. To give them their own character and space in the mix, you can compress these second vocals a little harder than the lead and pan them to add a chorus-type effect or stereo depth.
Ad-libs, or additional words thrown in between phrases of the verse, will be treated differently, depending on the rapper. A lot of trap artists I record prefer a lot of effects here, but a lot of old-school hip-hop artists don't like any effects at all. But unless the artist specifies that they want the ad-libs dry, I typically put them in some sort of telephone filter created with a bandpass EQ—that is, strip away the lows and highs and isolate the upper-mids.
One of my favorite things to do is to experiment with different effects on top of the telephone filter. Phasers, flangers, distortion, delays, and reverbs each lend a unique sound to the ad-libs, and can put them even more in a specific frequency pocket. I personally like the sound of a drastic reverb with a long tail, because it gives the ad-lib its own 3-dimensional space. This creates a cool overall effect, with the drier lead vocal up front, quieter in-and-outs on the sides, and wet ad-libs in the back. Experiment and see what kind of new effect chains you can come up with.
​
It's important to remember that if there were a formula that worked every time for mixing rap vocals, everyone would be able to do it. The truth is, it really depends on how the vocals sound to begin with. If they sound great without anything on them, chances are you won't have to do as much. And if not, at least you now have a guide for how to fix them.
Follow these steps and you’ll have rap vocals that sound like they were done in a professional studio.







8 Things to Check Before Sending Your Mix to Master
By Dan Zorn
Published By Reverb.com
​
If you’ve been mixing music for any amount of time, you’ve probably had the “Oh, I wish I had done that” thought after hearing the mix in the car. Too much bass, vocals too loud, sounds swimming in mud: these are all super common problems that bedroom and professional engineers alike experience from time to time.
The key to avoiding multiple rounds of mix revisions is a pre-bounce checklist. But before we start digging into the list, I highly recommend testing your mix on smaller, less expensive speakers and also giving it a listen from the next room (or outside the door). Sometimes we can get so caught up in front of the speakers that we lose sight of the whole picture, causing us to miss things that need addressing.
Once you feel that you’ve accurately judged your mix, it’s time to take a look at some commonly missed issues that arise in a mix before you send it off for mastering.
1) Are the vocals too loud?
Vocals are arguably the most important part of any song, but that doesn’t mean that they should necessarily be dominating the other elements in the mix. There are certain styles of music that have more “give” when it comes to the overall level of the vocal — think pop music, for example — but for the most part, you want your vocals present yet sitting within the mix.
A good way to see if your vocals are too loud is to turn the volume of your monitors way down, to a level you can comfortably talk over. When listening at this volume, levels are far easier to judge as too loud or too quiet. When things are loud, our ears aren’t as sensitive to small changes in volume, and everything begins to sound like it's at the same level. Plus, everything sounds good loud, so bring the volume way down, and use your critical ears.
Oftentimes, the volume of the vocals will not sound right because the relationship between the vocals and snare (or clap) isn’t set correctly. If you listen to professionally done mixes, you will notice that, generally speaking, the snare and vocals are within the same ballpark level-wise. Sometimes they will be above and sometimes below, but generally, they’re within a couple dB of each other.
​
The snare is one of those elements that dictates how close or far we are from the music, due to the fact it occupies a very sensitive human frequency range. So if it's buried below the vocals, it will sound like the vocals are closer in the foreground. If the snare is too loud above the vocals, the vocals will seem further away in the background. If the vocals are further in the background of the mix, it won't create that desirable “intimate” feel that good vocal mixes have, and the emotion will become lost. Spend some time dialing in this relationship, and you’ll find that your vocals will sit much better in a track.
Maybe you can’t seem to find the right volume pocket for the vocals because other instruments are in the way of this imaginary pocket. Aside from clearing space in other sounds in the speech articulation range (2.5-5kHz), try reducing a little 300-500Hz in some of the other sounds (think guitars, synths, even bass).
This is where warm vocal harmonics typically live, so reducing this range in other instruments leaves more space for the vocals to pop through without you having to crank their volume (which would inevitably make them too loud during the quieter parts of the song). From my experience, creating a specific frequency pocket for the vocals is crucial to getting them to sit correctly in a song.
2) Are the vocals smooth enough?
You’ve probably been told to mix quietly to get the best result, and this is true for most things. But the one thing mixing quietly can’t show you is the overall smoothness of vocals. Turn it up before you print and see if any of the vocals sound too harsh. When doing this, you can hear if you need to de-ess a little harder, or perhaps you missed some frequency spots that need attention. A lot of times, vocals will sound loud or unpleasant because there are rogue resonant frequencies in the 2-5kHz range (coincidentally, the frequency range at which babies cry).
This is a wide range, so listen carefully to see if you can pinpoint which annoying frequencies are poking their heads through too much. There will almost always be something that needs a narrow EQ notch within this range. You’ll be surprised how much smoother a vocal will sound when these unpleasant frequencies are taken care of.
Doing this will also help with the first check in the checklist, and helps the vocals “sit” better in the mix.
Example of a small cut to reduce a harsh resonant frequency. Hint: Move the band around to find the offending frequencies, but the 3-5kHz area is typically a good starting point.
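If you're curious what such a narrow cut looks like numerically, here's a Python sketch using the widely published RBJ "Audio EQ Cookbook" peaking-filter formulas (the helper names are my own). A high Q keeps the cut narrow, so only the offending resonance is touched:

```python
import math, cmath

# Narrow peaking-EQ cut built from the RBJ "Audio EQ Cookbook" biquad
# formulas, plus a helper to read off the filter's gain at any frequency.

def peaking_biquad(f0, gain_db, q, fs=44100.0):
    """Return (b, a) coefficients for a peaking EQ centered at f0 Hz."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs=44100.0):
    """Filter magnitude response in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)   # z^-1 on the unit circle
    h = (b[0] + b[1] * z + b[2] * z ** 2) / (a[0] + a[1] * z + a[2] * z ** 2)
    return 20.0 * math.log10(abs(h))

b, a = peaking_biquad(3500.0, -4.0, q=8.0)  # narrow 4dB cut at 3.5kHz
print(magnitude_db(b, a, 3500.0))   # ~ -4 dB right at the center
print(magnitude_db(b, a, 1000.0))   # nearly 0 dB well away from it
```

With a Q of 8, the cut is the full 4 dB right at 3.5 kHz but nearly nothing an octave or two away, which is exactly the surgical behavior you want when taming a single resonance.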
3) How balanced is the frequency response of the whole song?
How is the overall frequency curve of the song? Can the low-end be pushed a little more? Can the track use a little more mids? Maybe you scooped out some 300-500Hz in the music to get the vocal to fit. Does the master need a little bit more there to make up for the deficit?
Try to listen for “holes” or pockets in the mix that need to be filled in, or perhaps some frequency buildups you didn’t consciously pay attention to that need attention. It’s good practice to stand in another room or outside the door for this so that you can hear everything together.
Can the track be lifted or excited with saturation or harmonic distortion? Sometimes a little saturation can go a long way in filling in some holes (and can make the whole song appear louder to boot).
4) How are the overall dynamics of the song?
Throughout a mix, it's common to use multiple stages of compression. Maybe you have some compression on the channel, some on the bus, some on the master. Chances are that you’ve heard when compression has gone too far — that awful squashed, pumping sound — but it's far more difficult to hear if you haven’t added enough compression.
Listen to parts that seem to jump out in volume too much. A bit of gentle compression or limiting can tame some of these peaks. Or if the song seems to dip too low in overall volume at certain parts, a nice low ratio compressor on the master can gel everything together nicely. I find that multi-band compression is best for this, and if used correctly, it can turn a good mix into a more finished-sounding product.
5) How is the stereo field?
Do things fit in their own space in the stereo field? Is the mix too wide or too narrow? It’s common to get so caught up in EQ, compression, and effects that you forget that simply panning something is enough to separate it from other instruments.
If a mix needs to be wider, experiment with some spatial enhancers or with subtle chorus or delay effects to give it some wideness. A little goes a long way here. Also, this is a good point to check whether one side of the stereo signal is too heavy and whether you need to balance the left and right channels more.
6) Did you edit all the clicks and pops?
Many newer engineers just crossfade over a pop or click, and though this may work once or twice (if you’re lucky and the crossfade is large enough), the reliable way to get rid of pops and clicks is to splice the waveform at a zero crossing: the point where the wave passes through zero between its peaks and dips.
Where to cut on a wave to prevent clicks/pops
In other words, cut the waveform in between the compression/rarefaction (or peak and dip), and you will eliminate any pops or clicks. It’s always good to zoom in very close to all your splice points to see if the edits are cut correctly, as it may be difficult to see when fully zoomed out.
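The idea can be sketched in a few lines of Python (a toy helper, not a feature of any DAW): given an intended edit point, snap it to the nearest sample where the waveform touches or crosses zero.

```python
# Sketch: nudge an edit point to the nearest zero crossing so the splice
# doesn't jump mid-wave and produce an audible click or pop.

def nearest_zero_crossing(samples, index):
    """Return the sample index closest to `index` where the waveform
    sits on zero or changes sign."""
    best = None
    for i, s in enumerate(samples):
        crossing = (s == 0.0) or (i + 1 < len(samples) and s * samples[i + 1] < 0)
        if crossing and (best is None or abs(i - index) < abs(best - index)):
            best = i
    return best

# A sine-like wave: zero crossings at indices 0, 4, and 8
wave = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7, 0.0]
print(nearest_zero_crossing(wave, 7))   # -> 8, the closest safe cut point
```

A DAW's "snap to zero crossing" edit option does essentially this for you; the point is that the cut lands where the wave is at zero amplitude, so the splice introduces no discontinuity.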
Sometimes when all the tracks are playing you may not hear a click as easily, but certain speaker systems could emphasize it when the song is played outside the studio. Nothing is worse than a digital pop that could have been avoided in the first place.
7) Does the track ever sound boring or have a lull point? Is there something you can do to help?
Some songs just don't carry themselves well, either due to poor arrangement or just from using uninteresting sounds. If it is your song, try to see if you can solve any lull moments by either shortening sections or switching up the arrangement slightly. The best way to do this is to be brutally honest with yourself, starting from the top and stopping when it seems like it gets boring.
If you’re mixing music for other people, sometimes cutting sections out isn’t your place as an engineer. But nine times out of ten, there is something you can still do to help. If a chorus, for example, seems to drag or sounds a little boring as is, experiment with artificially creating a harmony on the second half to add interest for the listener. (Melodyne, or even basic pitch-shifting, is a good go-to for this.)
Maybe you can duplicate the vocals to a new track and put a distortion on it to change the tone on the second half, so that when that part comes, it has more impact. The tricks are endless, and if done correctly, they can really help move a song along and prevent moments that lose the audience's attention.
8) Did you check the ending of the track before you bounce?
Before you send off your track for mastering, make sure the fade out at the end feels appropriate, and that you leave a little extra space at the end. If some of your analog-modeled plugins are creating noise — I swear some of these emulations are noisier than the actual units — leaving a couple of extra seconds of noise allows the mastering engineer to analyze that noise and, to an extent, cancel it out with noise reduction software afterward.
You will find that taking the short time to double-check your mix will save you a lot of headaches in the future and will help you nail the mix the first time. Use a checklist that works for you, and get in a habit of always checking it off before you print. I can assure you that you (and your clients) will be much happier.
​


How to Apply Radio-Ready Reverb and Delay on Vocals
By Dan Zorn
Published By Reverb.com
​
Ever hear a song on the radio with a drastic reverb on the artist's lead vocals that sounds really amazing (Post Malone, for example), and then, when you try to replicate that reverb at home, it just sounds washy and kind of amateurish?
While everyone knows about reverb, not everyone knows how to tastefully set it to make it work in the mix, as it’s not always just a case of "set and go." Delay, another common effect used on vocals, can also be similarly hard to get right.
Professionals who have been mixing for a long time have learned how to really dial in effects like reverb and delay so that they create a pleasing effect without being overbearing or stepping on any other part of the mix.
It can be a tricky subject to discuss when an effect is too much, as it's very subjective: Some people want a very heavily effected vocal sound, while some want it more natural. For the sake of discussion, I will mainly be referring to a type of natural vocal sound you may hear on a lead vocal on the radio. (Later, we'll discuss how to use these effects on background vocals.)
Once you understand the fundamentals of how to make reverb and delay work naturally in a mix, you'll be able to match the quality heard in so many radio hits—and then, if you wish, you can push your mix further into a more experimental sound.
As a full-time engineer at Chicago's Studio 11, I'm often working with hip-hop vocals, which, traditionally speaking, are very percussive, quick, and dry. But even in these situations, you can use a short-tail reverb to get the vocals to gel with the other instruments and get them away from that dead, dry-in-the-booth sound. Even if the wet/dry is on 1%, it does make a big difference in the context of the mix.
Whether you're working in hip-hop or any other genre, one key technique for keeping reverb out of the way of vocals is setting the correct tail, the length of time the effect will be applied once triggered.
​
If the artist is speaking fast, I’ll usually put a shorter reverb with the decay of the reverb's tail stopping before the next word so that it pulses in rhythm with the tempo of the rap. There are a couple ways to do this.
One is to measure the time between the transients of two words in milliseconds to get a precise decay time. Most DAWs can do this. In Pro Tools, selecting the space between the words will show its length in milliseconds in the Transport window. You can also base the milliseconds on the tempo of the track (selecting a 1/8 note, 1/16 note, etc. on the grid) to get a rough overall decay time.
Another way to do this is to use your ears. I like to crank the reverb wet/dry all the way up to hear this, while listening specifically to the decay of the reverb tail. Then, when it sounds like it’s pulsing in beat with the lyrics—with the tail stopping before the next word—you can adjust the wet/dry according to taste.
Either way, these methods will help ensure a subtle, tasteful, short reverb on vocals.
​
Setting a Reverb's Pre-Delay
Another important aspect of reverb is the pre-delay, which is basically when the reverb actually comes in. I think of pre-delay like the attack of a compressor: if you set it longer, the original sound is able to poke through first before being effected. Think of it as fooling the ear, letting you add more of that nice reverb sound without muffling the intelligibility of the lyrics so much.
This can be ideal for a medium to long reverb, where you want that slightly drowned out effect, but also don’t want to lose the clarity of the words. Setting a pre-delay to come in a little after the transient of the words lets the initial attack come through clearly.
The pre-delay can also be set according to the tempo of the track, like the reverb's decay time. Using the same methods for finding decay times, you can measure a desired time for the pre-delay slap to come in. I usually measure a shorter time on the grid (something like 1/64, or 1/128 note) and set the pre-delay to that.
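As a quick worked example of the arithmetic (the 140 BPM tempo below is just a hypothetical), both the decay and the pre-delay can be read off the grid:

```python
# Grid-based reverb timing sketch: decay sized to stop before the next
# word, pre-delay set to a short note division, both derived from tempo.

bpm = 140.0
beat_ms = 60000.0 / bpm            # one quarter note in ms

decay_ms = beat_ms / 2.0           # a 1/8-note gap between words
pre_delay_ms = beat_ms / 16.0      # a 1/64 note (one quarter / 16)

print(round(decay_ms, 2), round(pre_delay_ms, 2))
```

So at 140 BPM, a tail of roughly 214 ms pulses with 1/8-note phrasing, and a pre-delay around 27 ms lets the word's transient through before the reverb blooms.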
​
Using Long Reverbs
Long reverbs can add a great sense of space and texture to a mix. These can be a little trickier to set, as they have more of a dominant presence, unlike the subtle glue of a short or medium reverb. So it's worth spending a little more time selecting the best-sounding reverb that works with the song, adjusting the decay and pre-delay accordingly, and molding that reverb so it doesn’t step on anything you don’t want it to in the mix.
There are some specific reverbs that most professional engineers choose, whether in outboard or plugin form, that just naturally have pleasing qualities to them: think AMS, Lexicon, and Eventide for hardware; Audio Ease Altiverb, Avid Reverb One, and Waves Renaissance Reverb or TrueVerb for plugins. While I agree that these units have a nice charm to them, you can still get a big-budget reverb sound with the stock reverbs in your DAW; you just have to know what to look for and how to treat them.
So, aside from the aforementioned issue of long reverb tails stepping on the vocals' timing, there is another way a reverb can step on vocals (and other parts of the mix if you're not careful): its overall tonal quality.
All reverbs generally have their own frequency buildups and flaws, especially convolution reverbs that model a specific room or place (church, cave, bathroom, etc.). Take a church, for example: that room might have a lot of low-end buildup, so you will want to remove some of it with EQ so as not to step on the lows of the vocals and the overall low-end in the mix. (There are exceptions, of course; sometimes it can add what's missing in the vocals. But nine times out of 10, an untreated convolution reverb will add unwanted boominess.)
​
Then, on the other side of the spectrum, a room may also have a lot of high-end flutter, depending on the material used. Sometimes this high-end is nice, but sometimes it can step on the articulation of the vocals, especially in "essy" ranges, or on things like hi-hats that you want in that same higher frequency range.
There are also buildups that can happen in the mid-range, say, between 300Hz to 500Hz, or around 2kHz. You just have to use your ears to see what part of the reverb you don’t like.
The way to get rid of these buildups (or enhance the parts you do like) is to use either the built-in EQ on the reverb (a lot of plugins have one, but some stock ones don’t) or a standalone EQ. For a longer reverb, you would want to set that up on an auxiliary channel and send the vocal to it via a send on the vocal channel. This way, you are able to process just the "wet" reverb without affecting the EQ of the vocals. Pop an EQ after the reverb on the return track and start listening to how it blends with the vocals and mix. (Hint: It's easier to hear these buildups when the send is set very high, so you hear more of the reverb.)
Once you’ve tailored the sound of the reverb, you can bring the send back down to taste. The end result should be a reverb that complements the vocals and doesn’t step on anything in the mix. Then, seeing as you have the reverb on a separate channel, why stop with EQ? Try adding a phaser, chorus, compressor, or spatial plugin to get an even more interesting reverb. The combinations are endless.
​
When asked once what his favorite reverb was, well-known engineer Andrew Scheps said, "Delay." Delay is a great way to add an overall "wetness" to the vocals without actually swamping them in reverb. A combination of delay and a lesser amount of reverb is a great way to really wet up those vocals without it seeming like too much.
Delay, however, has issues similar to reverb's: it's very important to set the timing correctly. I would start by syncing the plugin with the track's tempo. Luckily, nine out of 10 delay plugins have this ability. Then, set the note division according to taste. For general purposes, 1/4, 1/8, and 1/16 notes should cover it. But there are times when you may want to go higher for a desired "flutter" effect, or even use more interesting time divisions, like triplets or dotted-note repeats, but that's another topic in itself, as it can get quite experimental.
For a lead vocal, I would stick with simple time divisions for maximum clarity that pulses with the beat. On ad-libs or background vocals, feel free to switch those up, as those occur less frequently and can be a little weirder and more drastic. (I actually love the sound of 1/8 triplets on ad-lib delays. More on that later.)
Setting a Delay's Feedback
Most delays have a feedback knob or setting, and this is where most people go wrong. Having too much feedback can step all over the next words in a song and actually decrease the clarity. Or, maybe there is one spot in the song, like at the end of the last chorus, that sounds cool with a longer feedback, but then when you go back to the middle, where other words crowd the chorus, problems will arise.
Feedback is something to set by ear. Generally speaking, unless you want large spaces between the words (for some creative purpose), you're going to want a shorter feedback. Some plugins even offer "negative feedback," which basically means the delay still has that familiar extended echo, but the phase of the delayed signal is flipped, so that it doesn’t constructively add volume to the next word.
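To see why the polarity flip matters, here's a toy Python model of a feedback delay applied to a single impulse (illustrative only, not how any particular plugin is implemented): with negative feedback, every other repeat comes back inverted, so the echoes are less likely to stack on top of the words that follow.

```python
# Toy feedback delay: each repeat is the previous one scaled by the
# feedback amount and dropped D samples later. This matches the ideal
# feedback-loop response, truncated after a fixed number of echoes.

def feedback_delay(signal, delay_samples, feedback, echoes=4):
    out = list(signal) + [0.0] * delay_samples * echoes
    tap = list(signal)
    for n in range(1, echoes + 1):
        tap = [s * feedback for s in tap]       # each repeat is quieter
        for i, s in enumerate(tap):
            out[i + n * delay_samples] += s     # land it D samples later
    return out

impulse = [1.0]
print(feedback_delay(impulse, 2, 0.5))    # echoes: 0.5, 0.25, 0.125, ...
print(feedback_delay(impulse, 2, -0.5))   # echoes alternate: -0.5, 0.25, -0.125, ...
```

The repeats decay at the same rate either way; the negative setting only flips the sign of every other echo, which is exactly the "doesn't add volume to the next word" behavior described above.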
​
Cleaning Up a Delay Signal
After setting the feedback to taste, there is the issue, similar to the tonality of the reverb stepping on other parts of the mix, of the overall frequency curve of the actual echo itself. Some plugins have built-in low-pass and high-pass filters, like my all-time favorite, Waves H-Delay. But others may not have that option, in which case you'll have to do the same thing as with the reverb and set up a separate auxiliary send and return. If so, just follow the same steps we outlined above for reverb.
There also is another way to make sure the echoes don’t step on the lead vocals: Add a compressor after the delay and side-chain that to the lead vocal via a send. Doing this is pretty effective, as when the vocals are "active" or heard, the delays are being squashed and lowered; then, when the vocalist stops—even momentarily—the natural decay of the vocal's envelope will let the delay creep back in during those quieter moments between words.
So far, these techniques have been concerned with adding a "static" delay, or, in other words, a delay that remains the same across the whole vocal to create a sense of space and overall bounce. But there are also times where you just want a longer or different delay on one word.
The traditional way to do this is to set up another auxiliary track with your desired delay settings, then automate a separate send on and off for those specific words. The benefit to doing this, of course, is that you can set different amounts of delay you want on specific words—maybe a subtle delay on one and then a drastic delay on the next.
But another quick way to do this that doesn’t require time-consuming automation is to copy the vocal channel, add a delay to it at 100 percent wetness, turn the channel down a bit, and then just copy the words you want delayed and drop them in that track. It's a great way to quickly add different delays without going through the time to automate sends.
Using Reverb and Delay on Backing Vocals
Up until now, all of these techniques have been catered toward lead vocals, but what about background vocals like stacks or ad-libs? These are a bit more subjective, but what I like to do is create a contrast between the sound of the lead and the sound of the second vocal—not only to separate them tonally, but also to create some sense of depth or stereo spread to the song.
On stacks (otherwise known as double-tracked vocals, or dubs), generally a longer delay tends to clutter things, but a shorter stereo slap delay with one side set different from the other can actually create an expansive, chorus-type effect, which pushes the doubled vocals to the sides of the stereo field.
Reverb on stacks is also up to you, but if the lead is wet already, you may want to leave the reverb on doubled vocals drier, just to add a bit more articulation. As you’d imagine, a long reverb on the doubled vocals with another long reverb on the lead can make things a bit washy. But there are no set rules.
On ad-libs, however, it’s a whole other story. If the lead is fairly dry with a subtle reverb, I like to throw the ad-libs into outer space. What I mean by outer space is a long, drastic reverb that creates a large sense of depth in the mix. With the lead up front and stacks pushed a little back and to the sides, the ad-libs can go far away in the distance. That said, medium or short reverbs will, of course, work just fine. And if the lead has a washy reverb, sometimes a drier ad-lib creates more interest.
Feel free to get creative with the delays on the ad-libs too, as generally they don’t happen as much in the song, so they're not in as much danger of stepping on other parts of the mix. Changing the delay time to triplets or dotted notes can add an interesting rhythm to the empty spaces. Play around and see which ones work the best for the song.
Multiband Compression: The One-Stop Solution For Mixing Vocals
By Dan Zorn
Published By Reverb.com
​
Compression, a go-to tool when mixing vocals, is commonly used to treat the dynamics of the artist, as there are times when the performer can be really soft and then very loud. But not all compressors treat vocals equally.
Most compressors are what's called full-band compressors, which attenuate the entire signal as a whole, as determined by the threshold, regardless of its frequency content. There are ways to compress certain bands more than others, depending on how you set the threshold, ratio, attack, and release, but overall a full-band compressor responds to the loudest part of the signal and compresses the whole thing uniformly to get a more even level.
Equalization is a second go-to tool for vocals, but it has its limits. It can be easy to overuse—making a voice sound unnatural. But even when not overdone, it can be difficult to make EQ sound neutral, especially in moments when the performer changes their volume or pitch.
Of course, you can get great vocals by combining a full-band compressor to tame the dynamics and EQ to shape the tone. But what if there was a way you could compress and EQ all at once, in a way that results in an overall more natural sound? There is! Multiband compression.
Multiband compression has actually been around for a few decades, originally in the form of hardware units that were a tad tricky to set. Now that multiband compressors are available as plugins, they are easier than ever to access and use. (Not to mention you can use as many in a session as your CPU allows.) So what's so special about multiband compression, and how can it serve as both an EQ and a compressor at the same time? Let's delve in with an example scenario.
Let's say you have a singer that you want to mix. In this hypothetical song, the singer starts the verse at a soft, low pitch, then gets louder and higher in pitch as the verse progresses. In those louder moments, you notice their voice becomes harsh in the upper mids (let's say around 2500Hz). The traditional approach would be to pull up an EQ, sweep around until you find that frequency, then apply a small cut in that area. The problem, however, is that the beginning, when they were singing low and soft, now sounds extra muffled, because that part of the performance did need that 2500Hz.
So then, while listening to that first soft part, you realize there is just too much muddy buildup at 350Hz, so you make a cut. Then you go back to the loud part and realize that because of that cut, the vocal now sounds thin: when the singer shifted up in pitch, the warmth and body of the vocal that started around 200Hz shifted up to 350Hz, which is now cut. And the muddy buildup that sat at 350Hz when the singer was at a lower pitch is now muddying things up around 500Hz.
You see the conundrum: the problem areas change throughout the singer's performance, so a single cut may work for one part but create problems in another.
Another way to look at it, aside from frequency variation, is an artist moving around on the mic too much. During some parts, they may be close to the mic, creating a boomy proximity effect. During louder parts, they may back away from the mic, resulting in a less boomy sound. So if you cut that boomy frequency range, the louder parts will start to sound thin.
This is where multiband compression comes into play.
There are various multiband compressors out there: Waves C4, C6, Linear Phase Multiband, hardware multibands, and the stock versions that come with certain DAWs. They all revolve around the same principle, but for the sake of this tutorial I will be referring to the Waves C6, as in my opinion it has the most flexibility and sounds very musical and transparent when set correctly.
​
A multiband compressor is essentially a few compressors in one, but unlike a traditional full-band compressor, its bands are frequency-dependent. In the case of the Waves C6, there are six customizable bands that you can set at different crossover frequencies.
So let's go back to the previous example. Instead of applying a static cut at 2500Hz, you can set one band there to attenuate that range only when it passes a certain threshold, that is, only when that area has too much buildup and needs to be pulled down. Applying a second band to the 350Hz muddiness, set to attenuate only when that frequency range gets to be too much, will similarly leave the more balanced moments of the performance untouched.
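To make the principle concrete, here's a toy per-band gain computer in Python. This is a conceptual sketch of downward compression, not the actual C6 algorithm, and the thresholds and ratios are made-up numbers for the example above:

```python
# Toy multiband gain computer: each band gets its own threshold/ratio,
# and a band is only turned down when *its* level exceeds *its* threshold.
# (Conceptual sketch of the principle, not the Waves C6 algorithm.)

def band_gain_db(level_db, threshold_db, ratio):
    """Downward compression: dB of gain reduction for one band."""
    if level_db <= threshold_db:
        return 0.0                    # below threshold: band untouched
    over = level_db - threshold_db
    return -(over - over / ratio)     # attenuate the overshoot

# Two problem areas from the example: 350Hz mud and 2500Hz harshness.
bands = {"350Hz": (-18.0, 3.0), "2500Hz": (-20.0, 4.0)}  # (threshold, ratio)

# Soft verse: both bands sit below their thresholds -> no cuts at all
print({f: band_gain_db(-25.0, t, r) for f, (t, r) in bands.items()})

# Loud chorus: the 2500Hz band jumps to -8dB -> only that band is cut
print(band_gain_db(-8.0, *bands["2500Hz"]))
```

In the soft passage every band reports 0 dB of reduction, exactly like having no EQ cut at all; in the harsh passage the 2500Hz band alone is pulled down by 9 dB. That conditional behavior is what a static EQ cut can't do.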
After all, EQ is really just turning a certain band down in dB, so you can think of multiband compression as cutting that range only at the times it needs it, as opposed to throughout the entire performance. (I would also note that there is such a thing as a dynamic EQ, which likewise attenuates a specific frequency range depending on a set threshold. There are some sonic differences and reasons to choose one type of processing over the other, but the principles are very much the same. Or, of course, you can simply automate your standard EQ in the places you need it.)
That said, a multiband compressor will allow you to control issues arising from the performer's distance to the mic and parts where they are too loud or too soft, while also shaping the overall tone of the vocal sound. By killing three birds with one stone you will be using less processing (sparing your CPU), and your vocal will sound more pure, having run through one processor instead of three or four. It may seem daunting or tricky to set at first, but if you spend time really listening to the various areas of the performance and train yourself to identify which areas need attention at which specific moments, you'll be well on your way to creating that big-budget, natural vocal sound.
What You're Getting Wrong When Mixing Hip-Hop Beats
By Dan Zorn
Published by Reverb.com
Working at Studio 11 here in Chicago, a studio that specializes in hip-hop and R&B, we hear a lot of beats from producers on a daily basis—and work with artists to turn those beats into songs. Whether the artist made the beat themselves or purchased it from another producer, we've heard it all.
More often than not, we see common mix issues that we have to fix in order to make the instrumental work for the vocalist. Acknowledging these issues and fixing them before sending a beat to an artist—or knowing them before you finish your own mix—can really help the recording and final mixing process.
In this article I will be referring to standard, stereo MP3 or WAV files, seeing how these are the most common formats to record vocals over. Here are some of the mistakes that come across our board all the time:
The Hi-Hats Are Too Loud
Something I hear all too often is hi-hats and cymbals that are way above the other instruments. This can be due to the fact that the producer had poor monitors or a bad mixing environment—or they just weren't paying attention. The main problem with hi-hats being too loud, aside from it drowning out the high-end frequencies of other instruments, is that they make it difficult to make vocals really pop over the beat.
If there is a lot of high-frequency content in the hi-hats, the articulation or "teeth" frequencies of the voice (between 3.5 kHz and 7 kHz) have no room to really poke through. Therefore, hearing the words will be harder.
Many times, we'll scoop some of that range out with an EQ. The problem with that approach, however, is that we will be cutting out things that we don't want to cut out—like the top-end of a snare drum that really cracks or a lead synth that helps carry the song. To take care of the hats, we're muffling other parts of the track the producer intended to be there, which is a shame, considering that the problem of piercing cymbals could have been solved just by turning them down in the original mix.
When preparing a track before sending it to an artist or a production house like ours, take a step back and ask yourself if the hi-hats or cymbals are too loud. Are they stepping on the articulation of the other instruments? You'll be surprised how much more room there will be if the hi-hats and cymbals are at an appropriate level. Try taking them down and hear for yourself.
The Bass Is Too Low
Bass really is hard to nail if you're a bedroom producer. You really need a calibrated pair of monitors and a good room to hear what's going on down there. But there are some ways to write basslines that work with the song and are properly blended in the mix.
The first common issue I hear a lot is bass that's written in too low of an octave. If bass is written too low, it becomes hard to hear on small speakers. (Also, due to those frequencies' long wavelengths, the lower they are the more headroom they eat up in a mix.)
Because a too-low bass is hard to hear, it needs to be cranked up in dB to really be heard in the mix. This then eats up dynamic space in the mix and makes it tough to get a loud overall mix in the mastering stage without over-compressing everything.
Try the bass an octave above and see if it still has the same impact. A lot of times, shifting bass up an octave and rolling off some of the top end will achieve a similarly powerful bass sound as the octave below.
If you're absolutely sold on the low octave and don't want to change it, consider adding some saturation or subtle distortion—something to give it additional harmonics, especially in the mid- and high-frequency ranges. That way you can turn the bass down in dB, and because of the newly added harmonics, it will still seem as loud as it was before, when it was cranked in volume.
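The idea that added harmonics let you lower the fader can be illustrated numerically. Below is a minimal tanh-style saturator sketch (not any particular plug-in's algorithm); it raises the signal's RMS level, and therefore its apparent loudness, without raising its peak:

```python
import math

def soft_clip(x, drive=2.0):
    """Hypothetical tanh saturator: adds odd harmonics and raises RMS
    while keeping the peak at or below 1.0."""
    return math.tanh(drive * x) / math.tanh(drive)

n = 1000
# A clean 50-cycle sine over n samples, peaking at 1.0
clean = [math.sin(2 * math.pi * 50 * t / n) for t in range(n)]
driven = [soft_clip(s) for s in clean]

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The saturated signal measures louder (higher RMS) at the same peak level
print(round(rms(clean), 3), round(rms(driven), 3))
```

In practice this is why a saturated 808 can be pulled down in dB and still feel as loud: the energy moves into mid and high harmonics that small speakers actually reproduce.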
The Bass Notes Are Simply Wrong
Another issue that pertains to basslines, especially those written in octaves that are too low, is that the notes are simply wrong. Because our ears aren't as sensitive to changes in pitch at the lowest of the low frequencies—as compared, for example, to a lead instrument in the high-midrange—you may accidentally write notes that are not in the key of the song.
Some producers write 808s in super low octaves on their home setup. Then, when they bring them into our studio and listen on our calibrated setup with a subwoofer, you can hear the dissonance from the incorrect notes. You may not know what it is at first; something about them just won't hit right. It very well could be your note choice. You have to check your work.
The way I always recommend to get around this issue is to write the bassline in the octave you want it in—then, once you like it, shift it up two to three octaves. Now you can hear it better and confirm that the notes are correct. You'll often find that the note you thought sounded right really isn't supposed to be a C, but a C#, for example. Once you have the correct notes, transpose the phrase back down to your desired octave.
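If your DAW exposes the bassline as MIDI notes, the shift-up, check, shift-back-down workflow is just adding and subtracting semitones. The note numbers below are hypothetical:

```python
# A hypothetical bassline as MIDI note numbers (C1 = 24): too low to judge by ear
bassline = [24, 27, 31, 24]

def transpose(notes, semitones):
    """Shift every note by a number of semitones (12 = one octave)."""
    return [n + semitones for n in notes]

audition = transpose(bassline, 24)    # up two octaves to hear it clearly
audition[3] += 1                      # the last note was really a C#, not a C
corrected = transpose(audition, -24)  # back down to the desired octave
print(corrected)                      # [24, 27, 31, 25]
```

The shift up and the shift back down cancel exactly, so only the deliberate correction survives.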
Now, you may ask, if you can't hear the pitch changes down there, why is it important to get this right? The problem is you'll be surprised when you hear it play on a large system, or a calibrated system like ours at Studio 11, where you will hear what's really going on down there. Plus, songs just sound better when all the parts are in musical harmony with the key of the song. It has more emotional impact.
The Bass Is Too Loud
Everyone loves bass. But you just can't turn it up too loud in the mix. Aside from creating headroom issues—that is, not allowing the space for other parts of the mix—a jacked-up bassline is going to make a mix sound boomy. When it comes time to record vocals over the beat, the vocal engineer will most likely make a cut down there. And, like we saw with the hi-hat issue above, there could be other important things in the same frequency range, like the kick drum, that might be turned down with it.
There's No Room for Vocals
Most songs that we receive simply have too much build-up in the 250Hz to 600Hz range. This covers up the "body" of the vocals and can make it tough to really make a vocal sit within the beat instead of on top of it. Mids are the most common range for build-up, because most instruments that aren't bass or cymbals have these frequencies. Take a second look here and ask yourself, "Are there too many things clashing in this range?" Chances are, yes.
We also receive a lot of beats that are just too loud and too compressed, meaning that all the quiet parts and all the loud parts are smashed together and cranked in volume. When a beat is super compressed, it can be very fatiguing to the ear. Everything starts to sound like one sound. And if a beat is already maxed out, volume-wise, where can you go from there?
But ear fatigue isn't the only issue. It's also hard for vocals to fit within the beat dynamically when the instrumental sections are too compressed. Vocals naturally rise and fall in amplitude. So when the beat dynamics are squashed, the vocals struggle to stand out, as there is very little empty space left in the mix for another sound to sit. Therefore, we have to cut even more of the beat out with EQ to make it work.
The Arrangement Is Bad
A common issue that comes into the studio is having to rearrange songs to cater to a format that artists can write to. There are some exceptions to the rule (and this does depend on the genre or sub-genre), but generally speaking, having separate sections for the hook and the verse is a must.
Sometimes we get beats that have no structure to them, so the artist struggles with where to focus their attention. They may have written a hook, but it's over the same part of the beat where the verse is, so it doesn't have quite the impact they want it to have.
Another problem that arises occasionally is verse or hook sections of the beat with an odd number of bars (11 bars, for example—no kidding). Try to stick to the common 12- or 16-bar verses. When a verse is short by a bar, it feels like the hook comes in too soon, and it can throw off the momentum of the song.
Your Samples Are Between Keys
This is also something that arises on a daily basis. A lot of hip-hop beats are sample-based, and a lot of them are ripped from vinyl. The problem with vinyl, though, is that while it can add a cool texture to the song, unless the pitch fader is locked at zero and the platter speed of the turntable is steady, it's very easy to record a sample whose pitch falls in between keys. (These inconsistent keys can also happen because the musicians in the '60s or '70s tuned to themselves instead of to tuners. You may rip the vinyl correctly, but their C isn't necessarily going to match your C.)
One way around this is to follow suit and detune your instruments so that they match the sample. The problem with this is that now the whole song is out of key, which makes using Autotune, Melodyne, or other pitch-based mixing software very difficult and sometimes impossible, as these programs expect standard tuning.
Another source for a song being out-of-key is using 432Hz tuning vs. standard 440Hz tuning. I know a lot of producers that claim 432Hz has its benefits, which is fine and dandy. But keep in mind if you want to make money off your beats and have people use them (especially when said people want to use Autotune), it can create issues.
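For the curious, the gap between the two tunings is easy to quantify: the pitch offset in cents (100 cents per semitone) is 1200 × log2(f2/f1). A 432 Hz A sits about a third of a semitone below the standard 440 Hz A, exactly the kind of in-between offset that trips up pitch software:

```python
import math

def cents_between(f_ref, f):
    """Pitch offset of f relative to f_ref, in cents (100 cents = 1 semitone)."""
    return 1200 * math.log2(f / f_ref)

offset = cents_between(440.0, 432.0)
print(round(offset, 1))  # -31.8: roughly a third of a semitone flat
```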
Other Miscellaneous Issues
Pops or Clicks from Bad Edits: Make sure all cuts are made at a zero crossing in the waveform and fades are added before printing a final mix. I hear this issue at least once a day, where an 808 is cut mid-waveform, which causes a small "pop." It's much harder to remove the pop after the fact than it is when creating the beat. Edit those pops!
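If a cut can't land exactly on a zero crossing, a short fade does the same job. Here's a minimal sketch of the idea; the 32-sample fade length is an arbitrary choice:

```python
def fade_edges(samples, fade_len=32):
    """Apply short linear fades at a clip's edges so a cut made
    mid-waveform doesn't jump straight to a nonzero value (the 'pop')."""
    out = list(samples)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        g = i / n
        out[i] *= g        # fade in
        out[-1 - i] *= g   # fade out
    return out

clip = [0.8] * 100          # a clip cut mid-waveform: jumps straight to 0.8
faded = fade_edges(clip)
print(faded[0], faded[-1])  # both edges now start and end at 0.0
```

Most DAWs apply these micro-fades automatically on edit points; the sketch just shows why they kill the click.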
BPMs Aren't Whole or Half Numbers: Many producers work at a different tempo than the actual BPM of the song. For example, I know a lot of producers who work in double-time to get more spaces on the grid—working at 140 bpm, for example, for a beat that is 70 bpm. This is fine, as when an engineer records to the beat, they'll lock it in at its true tempo of half the value (70). But this approach causes problems when producers work at something like 140.678 bpm. That takes more time for the engineer to find, as it's such a specific number. Chances are you won't really hear the .678 difference in tempo anyway, so just make it a whole number.
Too Many Drops Before Vocals: I hear a lot of beats that have drops in them already. This is cool if there are only a couple of them, but when there are a lot, it can really make writing difficult for the artist, as they are now stuck with those drops in those specific places. Traditionally, in hip-hop and rap culture, a drop is an effective way to emphasize a punchline or an important part of a phrase. If drops are in random places, it forces the rapper to change their lyrics to match them, which limits them in a sense. It's usually best to leave the drops for the recording engineer—after the lyrics are laid down—so that the drops can be custom-tailored to the artist's lyrical content.
These are just some of the common issues that arise from my experience. The point is to take a step back and really listen to the beat and how the artist is going to use it. Put yourself in the shoes of a vocalist and imagine how they would interpret the song, including their writing process. If you do this, you will undoubtedly be successful.
Glossary of Studio Terms Every Artist Should Know!
By Dan Zorn
Take:
Perhaps a more obvious one to some, but a "take" is a single run-through of a recorded phrase. For example, if someone says, "I don't like the first take as much as the second take," it means you tried the same thing twice, and they like the second time around better. And if it's still not perfect, that's when you…
Punch (In/Out):
To "punch" a take means to redo a certain part of a phrase. For example, if you sang the line "Blackbird singing in the dead of night" (which you shouldn't, because that's blatant copyright infringement) and you didn't like the way you sang "Blackbird singing," you can punch, or redo, only that part without having to redo the entire phrase. This means the engineer hits the record button for those words only (overwriting the previous lines), or takes the bad parts out and puts you on another track to record on during that part.
Comp:
To consolidate multiple takes together in an attempt to use the best parts of each one, creating one good take.
Double:
To layer another identical (or closely similar) take on top of a previously recorded one. This is typically used to make parts sound "bigger" or more full. Occasionally the engineer or artist may even want to "triple" for an even larger sound. This process is also known as overdubbing (adding additional parts to existing parts).
Ins and Outs:
To layer a second vocal take on top of another take, but only doubling selected parts. This is used when one wants to emphasize or embellish certain words. Ins and outs can occur anytime throughout the verses and hooks, but generally speaking they emphasize the key words in a sentence. Think back to grade school when your English teacher asked you to go up to the board and underline the nouns and verbs, or perhaps the "subjects" or "action" words, but leave out the "articles" and filler words. It's kind of like that.
Ad Libs:
Adding a track of extra vocals on top of a previously recorded phrase. Unlike ins and outs, ad libs don't have to consist of the same words, or even the same lyrical rhythm, but can have different lyrics and flow altogether. Ad libs create a "call and response" with the lyrics. For example, if the line is "Baby, you're the one I want to be with," an ad lib that would fit that part would be something along the lines of "Only you" or "It's true." Or if the line is "Gotta get that money," an example of an ad lib would be "Yessir" or "Gotta get it."
Reverb/Delay:
Many artists who are unfamiliar with the studio world often get these terms mixed up. They are related, but create different end results. Reverb, or reverberation, imitates a space. It's a familiar sound that we all know from churches, halls, bathrooms, and concert venues. Reverb is used to make a sound seem further away, or perhaps hidden behind other layers. Delay, on the other hand, or what some artists call "echo," is an effect that takes a sound and repeats it, usually in time with the beat. Think of yelling "Who is the king of Siam?" into the Grand Canyon; the repeating sound that you get back is what's known as delay.
Scratch Track:
A scratch track is a guide track. A scratch track, or scratch take, is recorded while performing to get an idea of how everything will sound together, but nine times out of ten it will get re-recorded with more attention to detail and performance. Vocalists in bands may want to do this when playing together, either to get an idea of how the vocals will sound in the track or to help the players identify the different parts of a song. Which segues to…
Hook/Verse/Bridge:
On a fundamental level, it's important to know the difference between these three sections of a song. The hook, also known as the chorus, is the part of the song that repeats a few times throughout and is usually shorter than the verse (think "Billie Jean is not my lover…"). The verses are the longer parts in between the hooks that change throughout; there are usually 2-3 verses in a song (sometimes with different artists on each verse), and they are also where most of the lyrical content lies. A bridge is something that doesn't occur in all songs, but in most genres, especially pop music, it is the part (typically towards the end, after a verse) that changes before the song goes back into the chorus. A bridge serves to break up the flow of the song and to build suspense before the hook comes back in.
Harmony:
To harmonize means to sing or play another line on top of a previously recorded phrase that doesn't consist of the same notes, or that uses the same notes in a different octave.
From the Top, Front or Back:
This is pretty self-explanatory, but just in case: "from the top" or "the front" means from the start of the song, and "the back" means towards the end of the song.
Hot/Clip:
In the studio, if the engineer says the sound is too hot, he isn't complimenting your track; he's usually referring to something that is about to overload, or already is. This happens when artists are louder than the microphone can handle (or an instrument is turned up too loud for the recording input), and is usually followed by "clipping," or unwanted distortion.
Fly:
Sometimes you'll hear the engineer say, "Let me fly the hook." Fly simply means "duplicate." Seeing how the hook is usually only recorded once, it is necessary to "fly" it to the next part where the hook is supposed to come back in.
Flat/Sharp:
If the engineer says you're flat, it means you are slightly below the correct pitch of the note. If the engineer says you're sharp, it means you are slightly above it. When an engineer says this, it generally means to be conscious of your pitch.
------------------------------------------------------------
If you take your art seriously, it's beneficial to understand the language of the people who are an integral part of the process. You can be a brilliant filmmaker, for example, but if you don't know the terminology your actors understand, you will never get your point across effectively (or it will take you much longer to do so). Similarly, if you understand your studio lingo, it will help your engineer understand you and vice versa (not to mention save you a bit of time and money in a session). And in a world where time is money, every minute counts!
10 Useful Tips for the Modern Audio Engineer
By Dan Zorn
While good-sounding audio has existed for a while, the process that goes into making it good has changed greatly. So here is a list of some helpful tips for the 2015 audio engineer.
1) Know Your Plug-Ins
Many studios and engineers have a lot of options for how to process sound. But because of the recent increase in processing power, flexibility, and affordability, people in the modern era now predominantly mix their tracks with plug-ins. With so much at hand (especially with massive bundles like the Waves suites), it may be tempting to throw on a plug-in you haven't used before because you want to try new things. But if you haven't used it before, it's ill-advised to use it in a song (especially on a client's time) until you know the ins and outs of the way it operates and sounds. If you want to use something new, take time off the clock to play around with it and learn how the plug-in sounds. Because each plug-in has a unique character and different types of settings, you will need to spend the time to get it to sound good before plopping it on a channel and turning knobs. For example, if you've never used the Waves C4 before but heard from a friend that it's awesome on the mix bus, you will first need to know how that particular plug-in works. While you may know what multiband compression is and have used other products, chances are it's not going to sound the same and will require some time to get to know. So on the clock, go with what you've used before, and in your spare time expand your arsenal of tools by getting to know them well.
2) Don't Be Deceived by the Look of Plug-Ins (Use Your Ears!)
I read a post a while ago from a guy who was interested in turning off the GUIs of his plug-ins because they were "deceiving" him. And he's right. It's very easy to be persuaded by the look of a plug-in rather than the sound. A perfect example is the Waves Puigtec EQ or the Waves Kramer tape simulation.
There's no doubt that they both look cool and emulate very reputable studio gear, but are you choosing to use them for the specific sound, or because you subconsciously like the "vintage" look? I can recall a particular scenario a little while back where I pulled up the impressive animated Kramer Tape plug-in to get a tape-saturated sound on my bass, then went back to it later to find that the basic-looking built-in Saturator plug-in in Ableton had a sound that was more what the bass needed. Don't assume a plug-in will automatically work for the track just because it looks cool.
3) Learn Some Music Theory/Rhythm Basics
If you are an engineer who is recording music at a studio, it's probably a good idea to learn about the music you're recording, right? It seems obvious, but there are countless engineers who don't know when something sounds off-key or out of time because they are only focused on the way it sounds, not the way it should be played. You should at all times be listening for both. Many engineers have the stance that it's up to the band or the producer to know what is supposed to be played musically, but when it's a technical problem, like an off-key note or a weirdly shuffled drum hit, it's up to the engineer to ask the artist to redo the take or fix the problem. After all, at the end of the day, the goal is to create a great-sounding song, and it will only go so far if it sounds good but isn't played right. Your clients will be happy you care. Trust me.
4) Monitor at Low Volumes for More Accurate Mixing
The biggest problem when listening to mixes at loud volumes is that, for the most part, everything sounds balanced and, as a result, automatically "good." At loud decibels sound becomes compressed: parts that are quiet are heard more easily, and parts that are loud seem on the same plane as the quiet parts. Also, at loud volumes, especially in non-absorbent spaces, there can be a lot of reflections that bounce back and impair the accuracy of your perception. Bass builds up, causing room modes and phase issues, and can quickly skew the mix. But if you mix at low levels, you eliminate the reflections and the deceiving flattened equal-loudness curve.
5) Spend Some Time Getting to Know the Music You Record
Spending some time researching the type of music you are recording in advance can help greatly in the session. For example, if you are recording trap or drill rap, listen to the big trap and drill singles that are out and take notice of what they are doing. If the artist is still under the radar, chances are they are in some way trying to emulate the sound of the ones who made it big, so you should do your part from an engineering standpoint and know what sound they are going for. When dealing with rap music, for example, things like the placement of drops (cutting the beat out), pitch effects, when to use Autotune, stutter vocal effects, and telephone filtering are all things the artist will want but won't necessarily know how to explain (or won't be thinking of). If you beat them to the punch, or surprise them with a cool-sounding effect, they will show you a lot of respect for really trying to make their song a hit. If you do these things, they will no longer view you as someone who is working for an hourly rate, but as someone who is dedicated to their song.
6) Roll Off Unwanted Highs
You may know that rolling off low frequencies on most tracks that aren't bass or kick can improve intelligibility in the mix, but what some people don't think about is that the same thing applies to high frequencies. Some sounds don't need high-end content, especially when it's fighting with other sounds in the same frequency ranges (vocals, guitars). Putting lowpass filters on certain instruments can make room for other things to come through. In an interview, legendary engineer Chris Lord-Alge said that for a recent song he put a lowpass filter on his drum bus so his vocals would come through more. Granted, this is a bit out of the ordinary, but the point is he's making space in the high end, instead of just the low end.
7) Be Careful About Over-compressing
By the time a song is done, it's going to be compressed various times. For example, the vocal may have a compressor on the individual channel and on the vocal bus, possibly another on the master, and then be compressed and limited again during mastering. Seeing how there are a lot of stages where the dynamics get squashed, make sure you don't overdo any of them, because the end result is cumulative and also very obvious.
8) Check Translation on Multiple Systems
This may be something you've heard before (or something you should at least have figured out by now), but checking your mixes on different playback systems is a good way to judge how your song will translate and whether you made the right decisions. If I have the opportunity, I usually check my mixes on both my laptop and my headphones. The laptop is a good reference to check on, because that's where a lot of your audience is going to be hearing your songs. It's hard to gauge sub bass on a laptop, but you should at least be able to hear the upper harmonics of the bass and kick. And if you can't hear any bass, there's a good chance it's sitting too low (spectrum-wise) in the mix, or it's too quiet. Personally, I also listen on my Sony MDR-7506 headphones, because I listen to a lot of great-sounding music on them and know how they are supposed to sound. With my headphones I know right away if there is too much low end in the song, so they're a good tool to utilize.
9) Watch Interviews to Learn from Your Peers and Elders
It's now 2015, and information is at your fingertips. The internet is chock-full of useful information and should be utilized as often as possible! Watch tutorials on how to use a plug-in, watch "in the studio" videos, and, perhaps most importantly, watch the web series Pensado's Place on YouTube: https://www.youtube.com/user/PensadosPlace. Dave Pensado, an award-winning engineer, sits down one-on-one with some of the greatest and most famed audio engineers of our time and picks apart their approach and engineering process for mixing and tracking. Another reason it's good to watch how other people do things is that it shows you another way to do them. You may have always been stuck on a certain way of doing something until you see that there is another way that works just as well (if not better). This is good to know because when something goes wrong, like gear or a plug-in suddenly not working, you know another way to do it.
10) Hold a High Standard for Yourself and Your Mixes
My mixing technique has definitely changed over the years and, like yours, will continue to change. Although I have to say, the one thing that has changed the most about my engineering is my standard of what a "great" mix is. For example, instead of setting things and leaving them, I utilize automation much more to make every section work perfectly (instead of lazily finding a middle ground). I take the time to go through all the components of the track and make sure they sound good alone, grouped, and in the context of the whole mix. So if you have a high standard for what a great mix is, and you have the technical knowledge, there's no reason why your tracks can't sound amazing. Unfortunately, recognizing great-sounding audio versus just good-sounding audio is not something that comes easily; it takes a lot of patience and perseverance. Listening to well-mixed songs on high-fidelity formats like vinyl or CD (versus poorly encoded MP3s) will give you a good example of what great recordings sound like. After all, you can't make things sound great unless you know what "great" is.
Tuning Electronic Drums to Fit Your Track
By Dan Zorn
For this tutorial, I'm going to demonstrate the importance of tuning your drum samples to fit the track you're working on. I'm going to be using Ableton Live, but the theory applies to any DAW.
So you've found some good-sounding drums and a good melody that fits the song. Everything seems to be grooving and working together, but perhaps there is something you may have missed that can make it sound even better…
Tune your drums!
“But I don’t have a drum key to tune my digital samples.” Don’t worry, that’s not what I mean. Here is a scenario to demonstrate how tuning your drums can enhance your track:
Let's start with the kick drum. Say you have a melody that sounds great and a kick drum that you love. They sound okay together, but it doesn't have that "wow" factor that you hear on the dance floor. Instead of spending a bunch of time going through different kick samples (which can be a very long procedure), try experimenting by tuning your drums. In Ableton it's a very simple procedure, and I imagine it's a similar process in other DAWs: all you have to do is click on the sample and utilize the "transpose" option. By transposing a kick drum's fundamental pitch up or down, you will find that more than half the time the kick will begin to groove better with the key of the song. Sometimes you will find that the original pitch of the sample is in the key of the song, but many times, by transposing it up 1-3 notes, or more likely down 1-3 notes, you will find a pitch that works better.
Here is a secret I've found if you are having trouble identifying the main note of the kick drum. Because humans hear mid- and high-range frequencies with more accuracy, take your kick drum and transpose it up a whole octave (12 semitones). Now go up or down a few notes until you find the one that fits the key of the song better, then bring it back down 12 from there. I don't know if this is something everyone who tunes drums does, but it's a little trick that has helped me tremendously over the years.
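The math behind the octave trick is simple equal-temperament arithmetic: going up 12 semitones exactly doubles the frequency, so whatever offset you find up high maps cleanly back down. A sketch, with a hypothetical kick fundamental:

```python
import math

def transpose_freq(f, semitones):
    """Equal temperament: each semitone multiplies the frequency by 2^(1/12)."""
    return f * 2 ** (semitones / 12)

kick = 55.0                           # hypothetical kick fundamental (A1)
up = transpose_freq(kick, 12)         # audition an octave up: 110.0 Hz
nudged = transpose_freq(up, 2)        # say it fits the key 2 semitones higher
final = transpose_freq(nudged, -12)   # bring it back down an octave
print(up, round(final, 2))            # 110.0, ~61.74 Hz
```

Because the octave up and the octave down cancel exactly, the only change that survives is the 2-semitone nudge you chose by ear.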
Now that you realize you can tune your kick drum, why stop there? Try it with the snare or clap. You may find a spot where the snare drum "resolves" better. Maybe the hi-hats sound better tuned lower than higher (where they seem harsh).
Another reason transposing is good is that you can effectively change the EQ curve of the drums without extreme EQing. Remember, if the sound isn't good to begin with, no amount of EQing can make it work. Tuning a kick drum down (if it works with the key of the song) can make room for the bass on top, or vice versa. Maybe tuning your hats up leaves more space for the synth pads to sit behind them. Be creative with it, but know that you can use it as a mixing tool too.

At this point you may ask, why not just keep cycling through samples until you find one that works with the song? Well, many times I'll find that I like the character of a sample but it doesn't work with the song, or I don't like the character of the sound but it works great from a mix perspective in the context of everything else. The point is to give you extra flexibility with samples, so you don't have to spend hours cycling through them to find the one that's perfectly in tune with the track. (Having said that, I do recommend you spend at least some time cycling through sounds to find the highest-quality sample that sounds closest to the end result you want. Having to pitch and shape the sound is added work, and if you get lucky and don't have to do any of this, of course that's preferable.) But chances are you will have to pitch and shape your drums at some point, so it's good to get into the habit of experimenting with it.
I'd also like to point out that it's okay to leave some dissonance in your drum tuning. Not everything has to be in tune. Sometimes it creates a good tension, and a percussion element that isn't entirely in tune with the rest can stand out more in the mix (think of detuning to make sounds stand out). So experiment and see what sounds better.
Another aspect of tuning your drums is editing the envelope. Similar to how a drummer would add padding and mute the drums so they don't sustain as long, you can do this digitally in your DAW. Once again using Ableton as an example, you can take away some of the natural sustain of a sample by adjusting the sustain in the ADSR envelope editor. Very often with snares (especially real snares), the initial pitch (the attack) is different from the pitch of the sustain. Maybe you find that the pitch of the sustain bends up or down and doesn't sound in key with the track. In that case, try shortening the decay/sustain of the sample and see if it yields better results.
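To make the envelope idea concrete, here's a rough Python sketch of a linear ADSR envelope you could multiply a sample by. This is my own invented illustration (the function name and default times are arbitrary), not Ableton's actual implementation:

```python
def adsr(n_samples, sr, attack=0.005, decay=0.05, sustain=0.3, release=0.08):
    """Generate a simple linear ADSR envelope as a list of gains (0..1).

    Shortening decay/sustain tames a snare whose tail drifts out of key,
    as described above. Times are in seconds; sustain is a gain level.
    """
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(n_samples - a - d - r, 0)
    env = []
    env += [i / max(a, 1) for i in range(a)]                      # attack: 0 -> 1
    env += [1 - (1 - sustain) * i / max(d, 1) for i in range(d)]  # decay: 1 -> sustain
    env += [sustain] * s                                          # sustain plateau
    env += [sustain * (1 - i / max(r, 1)) for i in range(r)]      # release: sustain -> 0
    return env[:n_samples]

# Apply it to a sample (a list of floats) by multiplying element-wise:
# shaped = [x * g for x, g in zip(sample, adsr(len(sample), 44100))]
```

Lowering `sustain` or shortening `decay` is the digital equivalent of the drummer's muting pad: the pitchy tail gets quieter before it can clash with the key of the track.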
The bottom line is that you shouldn't just find good samples and leave it at that without making them fit the rest of the song. Chances are they can sound much better with the track. By simply adjusting the pitch and envelope of your samples, you can really make or break the drums on a song. So play around with it, experiment, and have fun!
How to Give Your Electronic Drums More Feeling
By Dan Zorn
​
Whether you're making house, techno, hip hop, or any other form of electronic music, chances are you're going to be using electronic drum samples that you've sampled yourself or downloaded from the internet. The way you sequence these samples in your DAW may vary depending on your preference and what software and tools you have. You can use built-in step sequencers, play your samples out on a MIDI keyboard, pencil in notes one by one, or move around audio clips. However, no matter what your method is, a lot of the time electronic drums can sound stale or robotic if the necessary steps aren't taken. This is because computers and sequencers repeat the exact same digital sample, and because we humans are very capable of picking up minute differences (or a lack thereof, in this case), the drums quickly become plain and boring. So I'm going to give you a few tips on how to create those differences and make your drums sound more human, less robotic, and full of feeling.
I mentioned that there are a lot of ways to enter your samples into your DAW. I've tried all methods of sequencing drums myself, and have recently found a formula that works for me and hopefully for you too. Instead of sticking to just one form of entering samples, using a combination of MIDI drums, penciling in notes, and placing audio samples is the best way to get what you want. Every method has its pros and cons, so it's good to learn the ins and outs of all of them so you can use each when necessary. So without further ado, I'm going to walk you through a hypothetical situation of building up some drums and point out tips on how to make them sound less robotic.
1) Kick Drum
​
Start by finding a good-sounding kick drum. When trying to humanize a kick drum, there's actually not much you can or should do. A kick drum is mostly sub, lows, and low-mid frequencies, and because we "feel" more than we "hear" those frequencies, any subtle variation of pitch, velocity, or duration will be very tough to hear and will almost always go unnoticed. So this is when I just pull up a kick drum audio sample and drag it into the edit window. Once you find a good kick sample, drop it in for a few measures, then loop it.
2) Snare Drum/Clap
​
This is an element that can go both ways. You can choose to keep it robotic (sounding more like a drum machine), or change it up and add some feeling. It all depends on the groove of the song. Unlike the kick drum, we can detect small volume, duration, and pitch differences in this frequency range (mids and high mids), so whatever changes you make will be heard more easily. If you do decide to humanize it, here are some things you can do. Instead of penciling it in or dragging in the audio clip, try playing it on a MIDI keyboard (or even your laptop keys) without quantizing. The small timing inconsistencies will make it sound more like a real person is playing. Because let's face it, even for a professional drummer, achieving perfect timing is very difficult. As a result, some notes will be close to dead on, some will be ahead, and some will lag behind. This is called adding "shuffle" to the track. On top of that, if your keyboard is velocity sensitive (and the velocity on your drum rack is engaged), you will create subtle changes in volume as the song goes on. (You can also edit the velocities after the performance to really get the perfect groove.) Also try experimenting with changing the pitch a few cents or semitones every so often to vary it even further. You can experiment with changing the decay of each hit too: maybe on every other note you shorten the decay a little, and make the last one of the measure extra long? The possibilities are endless!
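If you'd rather add these imperfections after the fact than perform them live, the same effect can be faked with small random offsets. Here's a hedged Python sketch; the `humanize` function, its default ranges, and the 120 BPM example grid are all my own assumptions, not a DAW feature:

```python
import random

def humanize(notes, timing_ms=12.0, vel_spread=14, seed=None):
    """Add small random timing and velocity offsets to (time_ms, velocity) notes.

    Mimics playing parts in without quantizing: each hit lands a little early
    or late, and a little louder or softer, than a perfect grid would.
    """
    rng = random.Random(seed)
    out = []
    for t, v in notes:
        t2 = max(0.0, t + rng.uniform(-timing_ms, timing_ms))   # shuffle the timing
        v2 = min(127, max(1, v + rng.randint(-vel_spread, vel_spread)))  # vary velocity
        out.append((t2, v2))
    return out

# A bar of 8th-note claps at 120 BPM (250 ms grid), all at velocity 100:
grid = [(i * 250.0, 100) for i in range(8)]
loose = humanize(grid, seed=42)
```

A dozen milliseconds of drift and about 10% of velocity spread is roughly the scale of human imprecision; much more and the part starts to sound sloppy rather than played.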
3) Hi Hats
​
Similar to the snare drum, playing hats out on the keyboard will result in a more live sound. Although if you're playing faster notes, like 1/16th notes, you may want to quantize, but not 100%. It's good to leave some notes on the beat and some off. Also, within the quantize function you can often choose from a variety of quantizing patterns. Certain DAWs have a pull-down menu of different grooves: swing, dotted note, shuffle, and MPC-style feels can all add some human love to those hi-hats! And aside from timing changes, playing the notes out on a keyboard is going to add subtle velocity changes. You can continue to shape the sound of the hi-hats by occasionally altering the decay too. Then, if the track calls for it, try adding an effect that helps it move even more, like a subtle, slow phaser. If mixed in correctly, over time the phaser will add a very pleasing change to the otherwise robotic sound.
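Partial quantization with a swing feel can be sketched in a few lines. This is a simplified model of what a DAW's groove menu does, under my own assumptions (the `quantize` function, its defaults, and the MPC-style off-beat swing are illustrative, not any particular DAW's algorithm):

```python
def quantize(time_ms, grid_ms=125.0, strength=0.6, swing=0.08):
    """Pull a hit partway toward a swung grid.

    strength 0.0 leaves the hit alone, 1.0 snaps it fully; swing pushes every
    second grid slot late by a fraction of the grid (an MPC-style shuffle).
    """
    slot = round(time_ms / grid_ms)
    target = slot * grid_ms
    if slot % 2 == 1:                  # off-beats get the swing
        target += swing * grid_ms
    return time_ms + strength * (target - time_ms)
```

At `strength=0.6` a sloppy hit moves most of the way to the grid but keeps some of its original looseness, which is exactly the "quantize, but not 100%" idea above.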
4) Percussion/cymbals
​
Now that you have a basic groove, you can add the percussion and cymbals however you'd like. Again, you can experiment with velocity, pitch, and timing changes, but as you add more elements, be wary of how they clash with the sounds that are already there. If you drag in a conga, for example, and have it playing in the same place as the hi-hats or snares, make sure they are shuffled the same. In other words, putting a conga on every fourth beat when a hi-hat also comes in every fourth beat but is shuffled slightly off time will result in a strange phased sound and will clash greatly. The fix is to move them so they are aligned on the same plane.
5) Automation
Now that you have your drum groove and it's sounding live and interesting, you want to continue by processing it in a similar fashion. Think of how an actual drummer would play a kit during a song, and try to bring that knowledge into your drums. The goal isn't to make your drum samples sound like a real drum kit, but to make it sound like a human is playing the samples. So begin automating parts. Maybe during the chorus the hi-hat velocities get louder and more epic, then quieter again in the verse. Try to mold the drums to the dynamics of the song. Maybe automate the wet/dry of the reverb so every 8 bars or so the clap has a big splash sound. Creating movement and variation are the two primary goals of making drums sound less robotic, so be creative and have fun with it!
Tips To Get the Most Out of Your Studio Time
By Dan Zorn
Let's face it, we're not rich. We work hard for our money, and when it comes to studio time we want to get the most out of the few hours we've booked. In the studio, I've seen a lot of artists who work super efficiently, but I've also seen a lot of artists who use their studio time poorly. So here is a list of tips on what not to do, and some things that can help you get the most out of your time.
1) Rehearse, rehearse, rehearse!
This applies to all artists, bands and rappers alike. The tighter you have your grooves down, the less time we have to spend doing overdubs. For rappers, unless part of your aesthetic is freestyling all your verses (Lil Wayne, Common), I highly suggest writing your lyrics down and practicing them so you don't have to do too many do-overs. Sometimes artists will just write their verses down on their iPhone, then never practice reading them. You must actually practice them! Otherwise (and I've seen it), when you get in the booth you'll have to do a lot of rewriting because you didn't read the verses out loud and count the syllables correctly.
2) Come early if you can
This way you have time to feel the vibe of the studio and do any vocal exercises you may want to do before getting in the booth, and the engineer can start up the session, which takes a few minutes anyway. And if you can't make the session, always give the engineer a warning ahead of time. It reflects poorly on you if you don't show up without telling anyone, and it aggravates the engineer who wasted his time coming down to the studio.
3) Get to know the studio lingo
Knowing terms like "overdub," "from the top," "fly," "in/out," "ad lib," "punch," and "stack" can make communication with your engineer much easier. It also helps to know a little music theory. Just basic terms like bars, phrases, and measures can go a long way. So instead of saying "can you, uh, do that thing with that part," you can say "can you punch me in towards the end of the measure." Any time spent struggling to communicate with your engineer is potential time you could be recording!
4) Put your phone on silent
If you do this, you won't have to waste time redoing a take that your phone went off during, and you won't be tempted to waste studio time talking to whoever calls you.
5) Don't come with your whole posse, unless they are all in the group.
If you are coming to the studio, come with 1-2 people. Chances are they will distract you, and waste your time. I’ve seen it happen too many times to count.
Follow these steps and you will surely get the most out of your studio time!
Using Effects, Effectively
By Dan Zorn
Overview: A very common mistake among amateur producers and engineers is using effects (i.e. reverb, delay, compression, chorus, flanger, etc.) when it isn't necessary, or in many cases overusing them. In this article, I'd like to address certain situations where effects hinder the mix, and others where they can benefit it.
Reverb
The first thing that comes to mind when thinking about overused effects is the ever-classic scenario of "too much reverb." Drenching your tracks with reverb can really take away from the intimacy of a performance, and it often adds unwanted clutter to the final mix. A lot of times people just add reverb on instinct because it sounds cool, or as a way to cover up an otherwise poor recording (myself included when I first started). Just because adding reverb makes a sound larger and more epic on its own doesn't mean it will in the context of the mix.
When you consider what reverb actually is in the physical world, it helps you understand how to use it better in the digital world. In basic terms, reverberation is created when sound reflects (quickly) off the surfaces of a space; the reflections combine to build an "echo" of the sound before it decays and gets absorbed by its surroundings. Depending on the conditions of the room, the reverb from a source can sound drastically different. Consider the following situation: you have a portable speaker connected to a CD player holding a recording of a dry, unaffected vocalist, and you have a microphone to record the sound of the vocalist coming out of the speaker. Depending on what kind of space you record the speaker in, you are going to capture different characteristics and impulse responses from that room. In the digital world, this is what convolution reverb is: mathematically capturing the impulse response of a room so a reverb sounds like that specific space. So let's say you record this in a medium-to-large cathedral. What are you going to get? You will end up with a fairly washed-out, warm-sounding recording with a long decay time. Because the space is large, the reflections that come back to you from the walls and ceilings favor longer wavelengths (i.e. lower frequencies), and they take a long time to decay because the reflections travel farther between surfaces. In the opposite scenario, if you record the speaker in a small bathroom, you are going to get a quick decay with a mix of higher frequencies (which have shorter wavelengths). So just by changing the space, you get different tones, decay times, and spatial cues. What does this mean in terms of using reverb in a mix?
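The "convolution" in convolution reverb is an ordinary mathematical convolution of the dry signal with the room's impulse response. Real plug-ins use FFT-based convolution for speed; this direct-form Python sketch (with a toy four-tap "room" of my own invention) just shows the idea that every input sample triggers a scaled copy of the IR:

```python
def convolve(dry, impulse_response):
    """Naive convolution: the digital analogue of playing a dry signal into
    a room and capturing its reflections (i.e. convolution reverb)."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h   # each input sample emits a scaled copy of the IR
    return out

# A one-sample "clap" through a toy decaying room:
ir = [1.0, 0.6, 0.36, 0.2]        # each reflection quieter than the last
wet = convolve([1.0, 0.0, 0.0], ir)
```

A single impulse comes out as the impulse response itself, which is exactly why capturing a room's response to a short click or sweep is enough to recreate that room digitally.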
Before I answer that, first consider whether the sound needs reverb at all. A lot of the time, a dry sound works in the mix. Or if the space you recorded in already has sufficient reverb, leave it alone. No need to overdo it. Just ask: is this what the mix needs? And if the answer is undoubtedly yes, then it's time to choose which verb to use.
There is a lot of talk amongst engineers about whether you should use more than one reverb in a song. The idea is that when you use more than one reverb, you are putting the sounds in different spaces, which makes it harder for the listener's ears to localize sounds in the stereo field. But finding one reverb that works on all instruments is often tough (and in your humble narrator's opinion, this matters more for live music than for electronic music), so using more than one isn't entirely taboo if done correctly. Say you've sent all the tracks you want reverb on to a single medium hall reverb, but you notice the vocal needs more. When you turn the send up on the vocal, it sounds too far back and lost; when you turn it back down, it sounds too close. There seems to be no middle ground where the vocal sits right. Basically what's happening is that everything else sounds good with that specific reverb, but its decay and darkness are too much for the vocal. The fix? Try pulling up a brighter reverb with a shorter decay time on the vocal track. This will give the vocal some space with the right amount of brightness and intimacy; then bus a small amount of it to the original reverb. This will make the vocal stand out, give it a little more air, and also glue it to the room that everything else is sent to.
More tips on reverb settings:
Decay Time:
I mention decay time a lot in this section because it's very important to set up correctly! Always keep in mind whether the decay of a reverb is too long or too short. If it is too long, it will step on the next transient and create a muddled mess. A shorter decay will keep the tails from stepping on each other and leave some space in between to preserve dynamics (especially after everything gets squashed in mastering). But if it's too short, you won't hear the effect of the reverb as much. So it's about finding a balance that works both aesthetically and technically.
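You can ballpark a sensible ceiling for decay time straight from the tempo, since the gap between hits is just 60/BPM seconds per beat. This little helper is my own rule-of-thumb sketch, not a studio standard; the 0.8 headroom factor is an arbitrary starting point:

```python
def max_decay_seconds(bpm, note_interval_beats=1.0, headroom=0.8):
    """Rough ceiling for a reverb decay so the tail dies before the next hit.

    headroom < 1 leaves a little silence between the tail and the next
    transient, which helps preserve dynamics after mastering compression.
    """
    seconds_per_beat = 60.0 / bpm
    return seconds_per_beat * note_interval_beats * headroom

# At 120 BPM, hits every beat arrive 0.5 s apart, so keep decay under ~0.4 s.
```

Treat the number as a starting point for your ears, not a rule: a tail that deliberately rings across hits can be a valid aesthetic choice.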
Avoiding Mud:
Many reverbs, such as the RVerb by Waves, come with a built-in EQ. When you add reverb to low frequencies, the long wavelengths reverberate and tend to interfere with and mask each other. What you get is a mud puddle, so it is wise to roll off the low end on the reverb so that it mainly affects the mids and some highs. A lot of reverbs also roll off high frequencies automatically, or have a setting to do so. This is because, as I explained earlier, medium and large rooms sound naturally dark due to the longer (and lower) frequencies that reflect back. Rolling off unwanted frequencies on your reverb can also add more headroom to your final mix ;)
Wet/Dry:
This may be obvious, but be subtle! When the effect is the dominant part of the track, it can easily sound overprocessed or amateur. Less is more!
So now that we understand reverb, let's dive into the do's and don'ts of delays.
Delay
Delay is similar to reverb in a lot of ways, and depending on the delay time and its character, it can occasionally substitute for reverb. In fact, delays are often preferable to reverb because 1) they add glue and cohesion without a reverb tail to interfere, 2) they don't take over the stereo field while still sounding effected, and 3) they can sound upfront and intimate while still sounding wet. Number 1 is fairly obvious: as with reverb, sending multiple tracks to the same delay adds cohesion amongst the tracks. Secondly, because a delay is mono (unless you're using a stereo delay), it doesn't spread across the stereo channels, yet still sounds effected and wet. This is good because you can now use the wide stereo field for something else, such as reverb or other wide-panned elements. And lastly, because the sound isn't being washed into a reverberant space, the delay manages to sound mostly upfront in the mix.
Delay Time:
For the most part you can use your ears to hear which delay times work and which don't. Aside from the basic 1/4, 1/8th, and 1/16th note intervals, many delays also have dotted and triplet timings. When cycling through times and you come across these oddballs, use your ears and make sure the result doesn't sound too cluttered or simply out of groove with the song. I find from personal experience that dotted 1/8th notes often work well in a typical 4/4 track. This is because they don't really interfere with anything else in the track, seeing as normally nothing else is on a dotted-note timing. So from a mix perspective, it works out great. But there have been times where it doesn't quite swing with the drums, or it's competing with something else in its place, so just use your ears.
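When a delay plug-in only offers a time knob in milliseconds, you can compute tempo-synced values yourself: a whole note lasts four beats, a dotted note is 1.5x its plain value, and a triplet is 2/3 of it. A small Python sketch (the function name and division convention are my own):

```python
def delay_ms(bpm, division=0.25, dotted=False, triplet=False):
    """Convert a tempo and note division to a delay time in milliseconds.

    division is a fraction of a whole note (0.25 = quarter, 0.125 = eighth).
    """
    whole_note_ms = 4 * 60000.0 / bpm   # four beats per 4/4 bar
    ms = whole_note_ms * division
    if dotted:
        ms *= 1.5
    if triplet:
        ms *= 2.0 / 3.0
    return ms

# At 120 BPM: quarter = 500 ms, dotted eighth = 375 ms, eighth triplet ~167 ms.
```

That 375 ms dotted eighth at 120 BPM is the classic "works in most 4/4 tracks" value the paragraph above describes.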
Compression:
No matter what plug-in or hardware compressor you use, all compressors essentially perform the same function: they limit the peaks of the audio signal and bring up the quiet parts. There are a variety of applications for compression. It can be used to smooth out volume differences, bring up parts that are inaudible, reduce hard transients that kill headroom, glue tracks together, bring tracks more forward, or even act as an effect in its own right. Compression can really add a lot to a vocal line, for example. The human voice has a lot of wonderful subtleties and nuances that sometimes get lost in the mix, and with the right amount of compression you can bring these characteristics out to create a more intimate sound. Compression can also level things out, such as a bass line or a really dynamic guitarist. Used in the right amount, compression can work wonders for a track. However, if used incorrectly or overused, it can destroy one.
When considering compression on a track, it's vital to ask if the track needs it. Many producers and engineers compress way too much, and as a result kill all the dynamics of a given track, and with it the entire mix. In fact, when I first started out in my teens, I habitually compressed everything because, like reverb, it made things sound "full" and "epic." But I quickly realized that when you make everything sound epic, nothing sounds epic. This is also why they tell you not to EQ or adjust effects while a track is soloed: the listener will never hear that channel soloed, so focus instead on how it sounds in the mix. This is how I approach compression. Sometimes I think something sounds overcompressed when I hear it on its own, but in the context of the mix, it sounds just right. Or conversely, sometimes I want to compress a sound when I hear it alone to make it more full, but in the mix, other sounds fill in that fullness. As a general rule, it's better to undercompress than overcompress, seeing as more compression will be added during mastering.
So let's say you think something needs compression, so you pull up a compressor and right away go to the pull-down menu for presets. That's a start, but you're not done quite yet.
Presets:
There is no one setting that will work on everything. Any time you compress, you have to consider a variety of characteristics of the sound. Are there a lot of hard transients? Are there quiet parts that need to be brought up? Is the sound thin, and does it need to be fatter? Let's say, for example, you want to compress a snare drum. Normally you'd open up a compressor in your DAW, click on the "snare" preset, and go from there. While this is a good place to start, it's important to continue tweaking the settings to the specific snare sound and what it's doing in the track. For example, the "snare" preset doesn't know how fast your snares are being hit in succession, which you need to know to set the release time. With a fast succession of snare hits and a long release, you are going to suck the dynamics out of the hits that follow, because the compressor is stepping on the transients, and therefore the attack, of the snare shots. Let's say you dial in the snare and then decide to compress the kick, so you go to the "kick" preset. Another thing presets don't account for is the envelope of a sound, which matters for the attack setting on the compressor. If the kick has a quick sustain and decay, setting the compressor's attack to come in right after the transient will compress and bring up the sustain of the kick before it decays. The result is that it will sound fatter, because the part that was quiet (the decay) has now been brought up. But if the attack of the compressor is too short, it will destroy the transient click and attack of the kick (say goodbye to intelligibility and punch!). And while the ratio in the "kick" preset may be close, you may still want to adjust it to taste. Does it need more compression, or limiting, or something in between?
Adjusting the ratio from a softer 2:1-4:1 to a harder 6:1-10:1 can make a big difference to the peaks and the quiet parts of the signal. Generally speaking, softer ratios such as 3:1 will bring up quieter parts while compressing the peaks, and harder ratios will focus more on compressing, or limiting, the peaks. So first identify whether a track needs compression at all, and if it does, tweak the settings to fit the specific sound instead of settling for the preset!
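What threshold and ratio actually do can be captured in one small function. This is the static gain curve of an idealized hard-knee compressor in Python, a simplified sketch of my own (real compressors add attack/release smoothing and often a soft knee on top of this):

```python
def compress_db(level_db, threshold_db=-10.0, ratio=4.0):
    """Static gain curve of a hard-knee compressor.

    Signal under the threshold passes untouched; above it, every `ratio` dB
    of input overshoot is reduced to 1 dB of output overshoot.
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 0 dB peak through threshold -10 dB at 4:1 comes out at -7.5 dB
# (10 dB over the threshold becomes 2.5 dB over).
```

Raising the ratio toward 10:1 and beyond pushes the curve toward limiting: the output barely rises no matter how far the input exceeds the threshold, which matches the "harder ratios focus on the peaks" point above.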
Other Effects: Chorus, Flanger, Phaser
These are the more esoteric effects that can be used not only for experimental purposes, but also to enhance a mix. For example, if a bass sounds too narrow or dull, a subtle chorus can work wonders by adding harmonics and spreading it a little wider in the stereo field. A chorus can also work on vocals, and if done right, it won't even sound like you're using a chorus (not in the traditional sense, at least). Similarly, a flanger or a phaser can add a cool change to a hi-hat or guitar that seems stale and robotic. Adding movement via these effects makes the track more interesting to listen to, and can even help some tracks stand out from each other. I like to use auto-pan plug-ins on certain percussion and vocals, spreading them just a little bit left and right at a 1/4-note interval. This way, instead of sounding flat and deadpanned in the middle, they dance around the center a little and make for a more interesting listen.
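That gentle auto-pan can be modeled as a slow sine LFO driving an equal-power pan law. This Python sketch is my own illustration (the function, its `depth` default, and the beat-based phase are assumptions, not any plug-in's algorithm):

```python
import math

def autopan_gains(beat_position, depth=0.25, rate_beats=1.0):
    """Equal-power left/right gains for a gentle sine auto-pan.

    depth 0.25 keeps the sound dancing near the center, as described above,
    rather than swinging hard left and right; rate_beats=1.0 is a 1/4-note LFO.
    """
    pan = depth * math.sin(2 * math.pi * beat_position / rate_beats)  # -depth..+depth
    angle = (pan + 1) * math.pi / 4            # map -1..1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)    # (left gain, right gain)

left, right = autopan_gains(0.0)   # at the start of the beat: dead center
```

The equal-power law keeps the overall loudness steady as the sound drifts, so the movement reads as placement rather than a volume wobble.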
Hopefully through these examples I've shown you a thing or two about the importance of using effects to your advantage. The point of this article is to show that effects can be used as tools to create interest and enhance a mix, rather than just being added because they sound "cool." Whether it's reverb, delay, compression, or anything else, using an effect inappropriately can take the audience out of the experience of the song. So remember to always use effects in moderation, and most importantly, only when they are needed.
Getting Bass to Translate
By Dan Zorn
Getting bass to translate is one of the toughest things to accomplish as an engineer or producer. After many years of working with various genres (and making countless mistakes), I have finally compiled a list of tips that will help you get your bass to sit right in the mix and be heard on any playback system (including those wretched MacBook speakers). But before we delve deeper, we first must understand a bit about the playback systems themselves, and how our hearing affects the way we perceive "bass."
On a fundamental level, humans are able to hear sound because our ears (through a complex series of processes) pick up air molecule displacements (vibrations) and convert them into electrical impulses, which our brain then translates into "sound." On paper, humans are capable of hearing frequencies from 20 Hz to 20 kHz. However, that's "perfect" hearing. Most of us do not have perfect hearing, and on top of that, we begin to lose sensitivity to certain frequency ranges as we get older. We pick up vibrations through hair cells in the inner ear, and as we age, some of these cells begin to deteriorate. The first hair cells to go are typically the ones at the base of the cochlea, the part of our hearing responsible for detecting high-frequency content. So depending on your age as you're reading this, you can have a very different hearing response from someone much younger or older than you. The reason your old man can't hear you isn't necessarily that he is losing all of his hearing, but most likely that he has lost some of those higher-frequency hair cells (typically where the articulation of the human voice sits).
Because of the way humans have evolved, we are most sensitive to mid frequencies around the human voice (2-5 kHz), and will hear these over other frequencies at the same SPL. This concept is illustrated by the Fletcher-Munson curves, and understanding it can help you greatly when mixing, and specifically when dealing with bass.
[Figure: Fletcher-Munson equal-loudness curves]
The Fletcher-Munson curves show how our "frequency response" changes with volume. As shown in the graph above, when 1 kHz is played at 60 dB, it takes about 80 dB for 50 Hz to sound like the same perceived "volume": a 20 dB difference. As the overall level increases, the relationship flattens out: 1 kHz at 110 dB will sound the same as 50 Hz played only 10 dB louder. So what does this mean for you? If you listen to your mix at a loud volume, everything is going to sound equal and balanced. Bass, mids, and highs will seem in their place, but it's a trick! Once you turn the level down, suddenly the bass (and some highs) get lost in the mix. You may not have chosen to turn the bass up when monitoring loud because it sounded present, but played back at a quiet or reasonable level, it gets lost. On top of the frequency response flattening out, mixing at a loud volume gives the song a greater impact, and this "greater" impact fools your ears into being satisfied. They aren't satisfied because things are clear in the mix; they are satisfied because the music is cranked and your body can "feel" the bass. You aren't going to think anything is really wrong with the mix if it's loud. The solution? Monitor at low levels, and you won't trick your mind into thinking things are balanced when they aren't, especially when dealing with bass.
Another reason to monitor at low levels is that it won't excite the acoustics of your room as much. If something is cranked and you're in your not-so-perfect-sounding bedroom, that will show up in your mix: low frequencies will be boosted, standing waves will cause strange phase issues, and your mix will be all wrong. Listening at a low level reduces the problems caused by room acoustics and gives you a more direct, unaffected sound.
If that isn't enough, yet another reason to monitor at low volumes is so your ears don't fatigue. If you spend enough time working on a mix at high volumes, you will certainly begin to lose sensitivity to certain frequency ranges. Mids will become washed out and hard to distinguish, highs will seem less harsh, and your decision making will be less accurate. We've all had at least one song where we thought we nailed the mix, then after checking it the next day thought, "Man, what the hell was I doing?" Well, that's ear fatigue, and it can greatly damage the quality of your final mix (and especially your hearing). And after all, without your hearing, you'd be out of a job! If you want more information on what loud sounds can do to your hearing, from the perspective of a former engineer turned hearing specialist, check out this website: http://www.heartomorrow.org/
Monitoring at low volumes isn't a new concept at all, and has been a "secret" of mix engineers for decades. The "secret" is based on the idea that if it sounds good quiet, it'll sound good loud, but if it sounds good loud, it won't necessarily sound good quiet. After all, a good mix sounds good at all volumes, on all playback systems. So next time you listen to a professionally mixed and mastered song on a laptop or cheap playback system, listen to how the bass is audible and clear. You will find that even on your cheap 15-dollar portable whatever, you can still hear the bass, crystal clear. Why is that, you ask? Keep reading and you'll see.
It's 2014, and we have entered an era where people are no longer listening to your mixes on vinyl through a good home stereo system. Now your audience is listening to your music as MP3s through their MacBook speakers, cheap iPod earphones, and iHome docks. There is even more demand to get your bass to translate because, with the exception of the earbuds, these are playback systems that generally struggle to reproduce fundamental bass frequencies. Here are two frequency response graphs that illustrate this lack of low end. The top graph is the frequency response of a MacBook's speakers, and the bottom is for a Sony laptop.
[Figure: MacBook Pro speaker frequency response]
[Figure: Sony laptop speaker frequency response]
Looking at these graphs, one can see right away that there is a serious roll-off of low end from around 200-300 Hz and below. As you may know, this is where most "bass" or low end sits in a mix. So why can you still hear the bass on professionally made albums through your laptop speakers? It's because the artist and engineer learned to compensate for this issue. Similar to what I said before about monitoring quietly versus loudly: if your bass sounds good through crappy speakers, it will sound good through great speakers. This is why you will see many professional studios, and even some home studios, using "unflattering" speakers to run their mixes through. Speakers like the Yamaha NS-10s are a staple in the recording industry not because they sound good, but because they sound "bad."
There are two major steps to getting a good bass sound that will translate, and it all starts with the artist. (And if you're the engineer, don't worry, you still have options.)
For the Artist:
Proper Bass Arrangement
Many good artists are aware of this problem and consciously try to avoid it during the writing process. Something amateurs often do when making music, whether it's hip hop, house, rock, or what have you, is to choose a bass that is bone-rattlingly low because it sounds "epic." While this may sound epic in your Beats by Dre headphones, it's not going to sound good anywhere else. Trust me. A good way to avoid a lost bass is to play either an octave higher than you normally would, or if that's too high, a different arrangement somewhere in between. As we discussed earlier, our ears are less sensitive to lower frequencies and tend to gravitate toward ones higher in the spectrum, so a bass with more upper-mid and high-end content is going to cut through more easily. But that's not bass anymore, you say? Actually, even a bass line played in an upper register will still contain a lot of low-end content and will also point your ear to the fundamental bass frequency (a nice little trick you can thank your ears for).
Sub Bass
As a general rule, for most genres I'd say stay away from writing in a sub bass line if you're worried about bass translation. This is not to say sub bass can't be used; it all depends on where you are using it. If you have a bass with a lot of low end content already, adding a sub is only going to make things worse. If you have a bass that hardly contains any low-mid “meat”, you can put a sub on there. And if you do, filter out the highs and low mids so that, when combined with the actual bass, it won't sound muddy. Sub can easily destroy a mix if it isn't sitting in the right place, so take the time to make sure it does.
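If you're curious what that filtering actually does, here's a minimal Python sketch of my own (not taken from any plug-in): a single one-pole low-pass that keeps the sub content and rolls off the mids. The 120 Hz cutoff is an arbitrary choice for illustration, and the gentle 6 dB/octave slope is far shallower than what you'd use on a real sub layer.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    # One-pole smoothing filter: gentle 6 dB/octave roll-off above the cutoff
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

sr = 44100
n = sr // 10
sub = [math.sin(2 * math.pi * 60 * t / sr) for t in range(n)]    # 60 Hz sub tone
mid = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(n)]  # 1 kHz "mud"
peak = lambda xs: max(abs(v) for v in xs[len(xs) // 2:])         # skip settle time

sub_out = peak(one_pole_lowpass(sub, 120))  # passes nearly intact
mid_out = peak(one_pole_lowpass(mid, 120))  # strongly attenuated
```

Run through the filter, the 60 Hz tone survives almost untouched while the 1 kHz content drops away, which is exactly the "sub stays out of the low mids" behavior described above.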
For the Engineer:
If you're an engineer and you get a bass line that is too low to be heard on any small playback system and can't be changed, don't panic; there are a few things you can do.
1) Apply MaxxBass.
The engineers over at Waves recognized the issue of bass translation and made a plug-in specifically designed to help bass translate, called MaxxBass. Without getting too technical, MaxxBass basically duplicates the signal, adds new upper harmonics to it, then mixes it back in with your original bass. It's essentially adding more audible frequencies to your bass to make it more audible to the listener.
2) Add Harmonic Distortion
Similar to what MaxxBass is doing, a great way to get your bass heard is to add frequencies to it that are more easily heard. By adding a bit crusher or sample-rate reducer, you can add upper harmonics that will give your bass a lift in the mids and highs and make it stand out in the mix more, all without actually boosting its volume. Or try running your bass through a saturator or a subtle distortion effect; used in parallel, it can produce wonders for a bass.
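Here's a toy illustration of that idea, a simple tanh waveshaper I made up for this example, used in parallel: a pure 80 Hz sine goes in, and measurable energy appears at the 3rd harmonic (240 Hz), which our ears pick up far more easily. The drive and mix amounts are arbitrary.

```python
import math

def saturate(samples, drive=3.0, mix=0.3):
    # Parallel waveshaping: the tanh curve adds odd upper harmonics
    return [(1 - mix) * x + mix * math.tanh(drive * x) / math.tanh(drive)
            for x in samples]

def tone_level(samples, freq, sample_rate=44100):
    # Correlate against a sin/cos pair to estimate amplitude at one frequency
    s = sum(x * math.sin(2 * math.pi * freq * t / sample_rate)
            for t, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * t / sample_rate)
            for t, x in enumerate(samples))
    return 2 * math.hypot(s, c) / len(samples)

sr = 44100
tone = [math.sin(2 * math.pi * 80 * t / sr) for t in range(sr // 4)]  # pure 80 Hz
wet = saturate(tone)

dry_3rd = tone_level(tone, 240)  # 3rd harmonic: absent in the pure sine
wet_3rd = tone_level(wet, 240)   # appears after waveshaping
```

The fundamental is still there; the new 240 Hz content is what lets a small speaker imply the bass it can't reproduce.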
3) Run Your Bass through a Transformer.
When lower frequencies pass through a transformer, especially an old one, the audio signal behaves more like “DC”, or slower-moving current. Transformers don't pass DC, so as the signal passes through, various things begin to happen: saturation, new harmonics, and interesting phase changes are added to the signal. The end result is a little more edge in the midrange, and the frequencies that were too low to be audible will have been shaped in a way that makes them sound colored and, surprisingly, louder! The added saturation and color of the transformer will shift your bass a little higher and allow your ears to fill in the fundamental frequency. (Studio 11 offers individual track processing, so if you want to run your bass through one of our many units with transformers, it'll only cost you about 10 bucks. Hint, hint ;)
4) Compression
Compressing bass can increase its overall subjective volume and will help keep a more constant level throughout your track. Compression also naturally brings out subtleties, such as the sound of the pick and guitar slaps, that are more audible to our ears. Increasing these will increase the overall presence of the bass. Compress away!
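For the curious, the basic math of a compressor looks something like this sketch. It's only the static gain curve (real units add attack/release smoothing), and the threshold, ratio, and makeup values are made-up illustrative settings:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    # Static gain curve only: real compressors also smooth with attack/release
    thr = 10 ** (threshold_db / 20)
    makeup = 10 ** (makeup_db / 20)
    out = []
    for x in samples:
        level = abs(x)
        if level > thr:
            # Above threshold, the overshoot is scaled back by the ratio
            level = thr * (level / thr) ** (1.0 / ratio)
            x = math.copysign(level, x)
        out.append(x * makeup)
    return out

loud = compress([0.9, -0.9])     # peaks pulled down, then makeup applied
quiet = compress([0.05, -0.05])  # below threshold: only makeup gain applied
```

Notice the two effects the article describes: the loud peaks come down, and the quiet material comes up via the makeup gain, so the overall level feels more constant.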
5) Move Things Out of the Way
Sometimes the kick drum may have too much low end content and will mask the bass. By sidechaining the bass to the kick, you can greatly increase the intelligibility of the two instruments, and as a result your bass will sound more present. Also, if the kick is in the way, roll off some of its low end content. When combined with the bass, the low end of the bass will fill in what the kick is missing and your ears will assume it's coming from the kick.
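A bare-bones sketch of the sidechain idea, hugely simplified: a real sidechain compressor follows a smoothed envelope rather than switching per sample, and the depth and threshold numbers here are made up.

```python
def duck(bass, kick, depth=0.6, threshold=0.3):
    # Whenever the kick is hot, pull the bass down by `depth`
    out = []
    for b, k in zip(bass, kick):
        gain = 1.0 - depth if abs(k) > threshold else 1.0
        out.append(b * gain)
    return out

bass = [0.5, 0.5, 0.5, 0.5]
kick = [0.9, 0.8, 0.1, 0.0]  # kick hit lands on the first two samples
ducked = duck(bass, kick)
```

While the kick hits, the bass steps out of the way; between hits, it returns to full level, which is exactly the intelligibility trade the article describes.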
6) EQ
Try simply boosting upper frequencies and taking out some mud to improve intelligibility. It doesn't hurt to roll off some sub on the bass either. Rolling off frequencies we can't hear well anyway (like 30 Hz) will only make the bass cleaner, and boosting upper frequencies, around 1 kHz for example, will bring out the presence of the bass more.
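And the "roll off the stuff we can't hear" advice in code form, again using a toy one-pole filter of my own design (a real high-pass would be steeper than this 6 dB/octave slope, and the 30 Hz cutoff is just an example):

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=44100):
    # High-pass built as (input minus one-pole low-pass): gentle sub roll-off
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, lp = [], 0.0
    for x in samples:
        lp = (1.0 - a) * x + a * lp
        out.append(x - lp)
    return out

sr = 44100
rumble = [math.sin(2 * math.pi * 20 * t / sr) for t in range(sr // 2)]  # inaudible sub rumble
note = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr // 2)]   # actual bass note
peak = lambda xs: max(abs(v) for v in xs[len(xs) // 2:])                # skip settle time

rumble_out = peak(one_pole_highpass(rumble, 30))  # attenuated
note_out = peak(one_pole_highpass(note, 30))      # passes nearly intact
```

The 20 Hz rumble loses several dB while the 200 Hz note sails through, cleaning up the bottom without touching the notes people actually hear.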
Try listening to your finished product on various playback systems to get an average of how the bass sounds. Check it on a laptop. If you can hear your bass clearly on a MacBook, for instance, in my opinion you've nailed it. One of my favorite tests is to hear how it sounds on my iPod earbuds: because I listen to a lot of music through these while I'm out and about, I have a good reference for how my bass stands next to other recordings. Or if you have a car, that's always a good test too. The point is to try it all over, so that you can make adjustments as necessary. Follow these tips and you'll be well on your way to crafting bass that translates!
Tips On How to Get An Internship At A Recording Studio
By Dan Zorn
Like many of you, I started off as a musician, playing instruments and recording them through a cheap microphone on my parents' desktop computer. I was fascinated by the idea of capturing sound, of being able to store and play back something that had happened. I became obsessed, and wanted to learn as much as I could about it. So after high school I attended The New England Institute of Art in Boston, MA, where I studied Audio Technology. After getting my Associate's Degree there, I moved to Chicago and finished my Bachelor's Degree in Audio Engineering at Columbia College. Since then I have been involved in numerous recordings and interned at two major studios before landing a position at Studio 11, located here in the heart of Chicago. Through my experience I have learned a thing or two about internships that will be helpful for you, coming from someone who was just in your spot.
—————————————————————————————————————————————————————–
The music industry has gone through some changes over the years, and it is no surprise that the recording industry has too. The invention of the computer has radically changed the way we write, arrange and record music. People can produce full albums from home, and now have all the tools to “enhance” an otherwise poor performance. In the days of tape, you had to be a great musician to be in the studio, because if you weren't, it was a waste of time and money. Fixing mistakes wasn't as easy as it is today, when we can nudge, undo, redo, pitch correct, time shift, etc. As a result of this new digital era, where everyone has the means to record, mix and master, there are many more “artists” putting things out on the internet. So while there is more music than there ever was, there is also more crap, or poorly produced albums, to wade through, because not everyone can make things sound “good”. That's where you come in. By reading this, I'm assuming you have put time into learning about audio, whether through training, education or personal research. If you have come this far, you are already light years ahead of the bedroom producer doing this as just a hobby, and you want to take things to the next level and do it professionally. After all, there is a large difference between a home recording and a professional one. Many artists may not know how to explain the difference between a home recording and a studio recording, but they certainly recognize it when they hear it. I always think of an engineer in terms of a film producer: if he does his job correctly, you will not know he was even there. You are aiming to be the ever-important but completely transparent part of the artist's music and vision.
So if you want to take it to the next level and eventually become an engineer at a studio, you must intern first. It is important to know, and something I realized during my time as an intern, that studios rarely hire engineers who haven't interned with them first. They may allow you to freelance out of their studio, especially if you've been in the game for a while, but they won't throw you work or put you on staff without your being their intern first. The reason is that it takes time to understand how their particular studio is run. Every studio has different quirks and signal flow, and while most concepts remain constant between studios, some take getting used to. It would be unfavorable, for example, for a studio to hire someone who came from a studio working on an Avid C24, mixing entirely ITB, to a 96-channel SSL with all outboard gear, recording to tape. You're going to be a little out of your element, and it will take time to get used to the change. No matter how experienced you are, there is always a bit of a learning curve in a new environment with different mics, outboard gear, plug-ins, DAWs, live rooms, etc., so it is easier to have someone who already knows this stuff. So without further ado, I will offer you 1) tips on how to get an intern position and 2) tips on how to keep the position.
If you are currently going to school or are near graduation, do not wait until after you graduate to find an internship. That's what I did, and it set me back greatly. The reason it's smart to look for an internship while you're in school is that most internships are unpaid. While you are in school, your loans are deferred and you can afford to work for free (unless you are paying for school as you go, in which case it may be a bit harder, but still doable). It may come as no surprise that after graduation the real world kicks in, and unless you have millionaire parents willing to throw bundles of cash your way, you're like me and have to pay rent, bills, and the dreadful student loans! So, long story short, start looking junior or senior year of school, and if you're lucky, you may land a job around or shortly after graduation.
When you start looking, don't worry too much about your resume. Studio managers don't care that you worked at Guitar Center for a year and a half, nor do they really care too much about the band you are in or that you “engineered, mixed and mastered yourself”. What they are looking for in an intern is passion: someone who can think for themselves, someone who is driven, and someone who has a desire to learn. Don't get me wrong, having experience is a plus, but there is a big difference between working on your home setup and working at a professional studio with unlimited routing possibilities. So it's key to realize early on that you don't know everything, and that you can learn a thing or two from these guys who have been in the industry for 15-20 years. So start calling all the studios in the area and send emails out. If you send enough, at least a few are going to respond.
When I finished my schooling in Chicago, I was a bit intimidated to apply for internships, because I assumed that all the other classmates of my large graduating class had already hopped on it. How wrong I was! Everyone must have been thinking the same thing as me and assumed all the intern positions were taken. I sent out maybe 10 emails and got replies from about half of them saying they were interested in meeting me for an interview. By the end of the process, I actually had options of where I wanted to go, and I chose accordingly. My only advice when writing the email is to show you're passionate and want to learn. Make an impression. Mention your schooling or training and maybe some things you've done, but show that you're eager to learn. Once you show this, if they are looking for an intern, they will send you an email back saying they want to meet up for a casual meet and greet.
Now, emailing and setting up an interview is the easy part; it's the one-on-one interview that requires the most effort. Before I landed my position at Studio 11, I met with a handful of engineers from different studios. By the end, I had a pretty good idea of what they were looking for in an intern. Here are some things I learned about the meet and greet:
Don’t Treat it Like a Corporate Interview
Engineers in the music industry chose a job in music because they didn't want to do a 9-5 corporate job. So don't show up in a suit, or even a dress shirt. Wear what you would wear on a normal day. After all, this is Rock N' Roll, baby.
Treat the Interview Like a Conversation
Being yourself is the most important thing you can do in this industry. If you act like you are some seasoned veteran when you are only a junior still in school, or have never touched a console, they will see right through you. Talk to them; ask them about their story. Every engineer who has been in the business for a while has a story they are more than happy to tell. Who have you worked with? What kind of music do you listen to? Which segues me into...
Be Attentive
Be attentive to what kind of person the engineer or engineers are from the start. This will let you know what you are in for, and what you can and cannot do. I had an interview with an engineer who sat me in the back of the studio with him as he rolled up a joint and got high. From this I gathered that he was not uptight and had a relaxed attitude about how to run things. I also had an interview with an engineer who brought me into his office and immediately began on how he likes to “run a tight ship”. So pay attention to the type of person your employer is, and from there you can judge what kind of vibe you are working with.
Show Them that You Want to Learn
If you're anything like me, you're probably a gear head. And even if you've never touched a piece of gear, you already know how it works and the signature sounds that come from it. When you get to the studio, walk around right away and closely examine the gear they have. This will show the engineer that you are enthusiastic about gear (a quality any successful engineer has), and you won't be pretending, because you actually are interested in this stuff!
—————————————————————————————————————————————————————–
So let's say you really jive with the chief engineer and he tells you he wants you there 3 days a week. Cool. Now that you've landed the job, here are some tips I learned that will help you keep it.
Act Professional
Every client that comes in is going to assume you work there. They will either assume you are an assistant or a second engineer. Some will understand you are an intern, but will still think highly of you, because you made it this far, so you must be doing something right. So don't be texting or taking pictures to show on your Instagram, bonehead! (something that can get you in trouble legally if it's the wrong client). Another reason to act professional is that you never know who is going to come in. It could be a crappy teenage metal band, or it could be a multi-award-winning producer who needs to get a few songs mixed while he's in town.
Pay Attention
This may seem obvious, but if a client says “Man, my throat is parched”, get him or her some water! A happy client will come back and remember that the staff was friendly and attentive. Or if the client says to the engineer through the talkback, “Can you raise this mic up a little?”, get your ass up and adjust the mic before the engineer does it himself or tells you to. If you miss this because you are texting on your phone or in la la land, the engineer has full authority to beat you over the head with an RE20.
Ask Questions
You are there to learn, and the best way to learn is to ask questions. But know where and when to ask them; timing is crucial. If you see that the engineer has pulled up an EQ and is listening intently, sweeping the frequency range, then it's probably not the right time to chime in with a question. However, if the engineer is bouncing down or printing a song, where he can't do anything else anyway, ask away.
Know Your Boundaries
If there is one thing you shouldn't do in the studio, it is question an engineer's judgment, especially one who is significantly older and more experienced than you! Don't ever tell him he should compress something, or do this or that because it'll sound better. Believe me, he knows what he's doing; he didn't hire you for your advice. The only time this dynamic can change is if he is trying to troubleshoot something that you may know about; then by all means speak up.
Get to Know the Engineers
You've gotten to know the engineers a little during the interview, but go further with it. They are human too, and were once in your position. After a session, offer to pay for a drink. The closer you are, the more they can trust you, and the sooner you will be able to begin touching some gear and running sessions.
Do Research
Never heard of that Orban compressor? Heard of the PuigTec MEQ5 but don't know how to use it? Look that shit up, brotha! Take a picture of all the outboard gear and hardware and look it up when you get home. Check out the specs, how people use it, and when they use it. This serves a couple of purposes. One is that when you see the engineer turn a knob, you know what he is doing and can take note of when and how to use it in the future. Second, the faster you learn the equipment, the sooner he will see this and let you touch some gear yourself. Also, if you have any questions about the console, all studios have a manual for you to look at. If you're smart, you'll photocopy it and study it when you're at home.
Learn the Patch Bay Early
The seemingly most intimidating part of a new studio at first is the patch bay. It is the center for all ins and outs, and is in many ways the heart of the studio. If you are fumbling with the patch bay, it'll affect your efficiency as a potential assistant or engineer in the future. The sooner you learn it, the sooner it will free up time for actual engineering. I know patching in cords makes you feel like you're working as a lab-coat engineer at RCA Labs in the '50s, but it's not engineering; it's basic signal flow.
Come in as Often as You Can
This one is more obvious, but if you are only scheduled for 2 or 3 days a week and are able to come in more, ask the engineer if you can just sit in on some sessions, even if other interns are there and they don't technically need you. As long as you're not in their way, chances are they won't mind, and you'll be able to soak up more information faster.
—————————————————————————————————————————————————————–
If you go out of your way to please the engineer and clients often enough, they are going to notice. Think for yourself: don't wait around for the engineer to tell you to put the mics back when they are done recording, and don't wait around to clean up after a session. As an intern you are exchanging your time for knowledge, and if the knowledge pertains to your dream job, then I think it's a pretty damn good exchange. Go the extra mile, come in more than you should, be driven, be honest, and you shall succeed!
Organizing Before The Mixdown
By Dan Zorn
​
So you've done it. You made, or received, your first song with over 50 tracks, and it's ready to be mixed down. You have 30 drum channels, 10 synths, 10 vocals, 2 bass tracks, and 10 SFX/ambient tracks. Where do you start?
A big part of mixing that often gets overlooked is the part where you are supposed to enjoy it (it's why you're in this business in the first place, right?). Mixing should never be frustrating, and should always keep moving while the juices are flowing. Because let's face it, it's no fun if you are spending more time looking for a sound in a sea of tracks than actually mixing the song. So it's wise to first organize the tracks in some sort of order to avoid wasted time and confusion.
Everyone has their own process, but it seems commonplace that engineers (including myself) usually start with drums at the top of the session and proceed down from there. Traditionally it will look something like this (from top to bottom): kicks, snares, (claps), hi-hats, toms, (overheads), cymbals, percussion, bass. The rest of the tracks, such as guitars, strings, synths, piano, and vocals, tend to vary more by personal preference. I tend to arrange my tracks based on the order I'm going to mix them in. Drums typically get treated first (because drums are the backbone of a song and need to sound good first), then I proceed to bass to make it sit right with the kick drum, then melody or vocals. Again, there is no correct way to do this (in fact, I know some engineers who start with vocals first and carve around them, because they deem vocals the most important part of the song). So your order doesn't have to be exactly this, but it helps if you have a formula for all your sessions, so that after a while you can identify the location of a track without even thinking about it. Next, clearly label the tracks. I'd even recommend color coding them for easier identification (most DAWs will let you do this). And similar to keeping a consistent order of tracks from session to session, it's also a good idea to keep a consistent color for each instrument group (for example, my drums are always red, instruments green, and vocals yellow).
So now you have the order of the tracks, but you still have over 50 tracks to keep count of and work together. First, take a look at what you have and decide if the song actually needs all of those tracks. For example, do you really need 3 layered hi-hats? Are they important to the song? If it's your song, take a listen and strip out what you don't need (however, if it's for a client, be wary of deleting tracks without asking first). If you explain that a track isn't adding anything and gets in the way of other sounds, chances are they won't mind if you get rid of it. In mixing, less is always more, which is why my next step is usually to combine similar sounds together, via a bus to a single mono or stereo track, by grouping them, or by assigning them to the same output to create a submix. Before doing this, it is important to listen for sounds that sit in the same frequency range and can be processed similarly. For instance, it would be unwise to group together a bass and a vocal track, because they are going to be processed quite differently during the mixdown. The first things to consolidate into a single track are all the channels in your DAW containing overdubs of the same instrument. So if you have 3 guitar tracks recorded with the same or similar tone, with the same mic, etc., level them together, then consolidate them to one track. Next, look for tracks like similar background vocal harmonies, hi-hats, and similar-sounding percussion (bongos, congas) and consolidate them.
The key here is to reduce the track count and stay as organized as possible, so you can focus more on mixing, not finding. Some engineers may prefer to have the most available options for the mixdown, keep everything separate, and not worry as much about the organization of the session, which is fine. Whatever works best for you is the best way. But unless you are working on a 50+ channel console or control surface and have a photographic memory, staying organized and consolidating tracks to keep the track count down is less time consuming, visually easier, and will lead to a more efficient mixdown.
What Volume Should I Mix At?
By Dan Zorn
​
If you are an audio engineer, or an aspiring one, you've probably heard that it's best to mix at low volumes. This is because, according to the Fletcher-Munson curves, our sensitivity to different frequencies changes with volume. Generally speaking, we can hear speech frequencies (2-5 kHz) over low or very high frequencies at the same dB, and the louder the mix gets, the less of a subjective difference there is between these ranges.
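If you want to play with this idea numerically, the standardized A-weighting curve (defined in IEC 61672, a close cousin of the equal-loudness contours rather than the Fletcher-Munson data itself) captures the same overall shape:

```python
import math

def a_weight_db(freq):
    # IEC 61672 A-weighting: relative sensitivity of the ear vs. frequency,
    # normalized so the value at 1 kHz is approximately 0 dB
    f2 = freq * freq
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2))
    return 20.0 * math.log10(ra) + 2.0

low = a_weight_db(50)    # strongly negative: we are insensitive down here
mid = a_weight_db(3000)  # slightly positive: near the ear's focal range
ref = a_weight_db(1000)  # ~0 dB by construction
```

A 50 Hz tone comes out around 30 dB "quieter" to the ear than the same level at 1 kHz, while 3 kHz actually reads a touch louder, which is exactly why low end needs so much more physical level to feel balanced.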
Figure A: The Fletcher-Munson curves. If you look at the 50 dB contour, for example, the volume needs to be much higher than 50 dB in the low and high frequencies in order to match the perceived volume at around 1 kHz.
When mixing at a low volume, you are eliminating the illusion that the frequency spectrum is balanced, which is what we perceive when the volume is loud (notice the compression, or flattening out, of the contours at high dB levels in Figure A). For example, you may think the kick drum sounds tight and punchy next to the bass when the volume is loud, but when you turn the volume down you realize the kick hasn't been processed enough and gets lost in the mix. So judgments about arrangement, EQ, and compression are much better made at low volumes. If mixing at low volumes reveals more accurate results, why turn it up at all? Well, there is one thing that is difficult to hear at low volumes: sibilance. If you aren't familiar with the term, sibilance is the result of exaggerated “s”, “ch” or “sh” sounds from a vocalist, which causes the frequency response to peak anywhere from 4-10 kHz (sometimes higher). A highly sibilant vocal may look something like this on a frequency analyzer:
Figure B: A noticeable sibilant spike around 8-10 kHz.
Hearing these sounds throughout a song is very harsh and fatiguing to the ears over time, so they must always be controlled or tamed in some way. At a low volume, however, not many things sound fatiguing or harsh to our ears. You can listen to a 5 kHz tone for hours at a low volume, but on a loud system it will drive you nuts within seconds. So when listening for harshness, turn up those speakers, and start de-essing and EQing away!
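If you'd rather measure a sibilant spike than eyeball an analyzer, the classic Goertzel algorithm reads the energy at a single frequency. Here's a toy sketch using my own synthetic "ess" stand-in rather than real vocal audio:

```python
import math

def goertzel_level(samples, freq, sample_rate=44100):
    # Goertzel algorithm: amplitude estimate at one frequency bin,
    # handy for spotting a sibilant spike without a full FFT
    w = 2.0 * math.pi * freq / sample_rate
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0)) * 2.0 / len(samples)

sr = 44100
n = sr // 10
ess = [0.5 * math.sin(2 * math.pi * 8000 * t / sr) for t in range(n)]  # hiss stand-in
body = [0.5 * math.sin(2 * math.pi * 300 * t / sr) for t in range(n)]  # vocal-body stand-in
```

Probing the 8 kHz bin, the "ess" reads strongly while the vocal-body tone barely registers, the numeric equivalent of the spike in Figure B.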
​
Taming Vocals Without Compression
By Dan Zorn
​
During any given session, there is a good chance you will experience times when the artist gets really quiet and parts where they get really loud. While dynamics are good to have in a song, having too much can hinder the mix and result in parts getting lost or parts being too loud. Often when this situation arises, new engineers slap a compressor on the channel to tame some of the peaks and bring up some of the quiet parts. However, more often than not this can do more harm than good, because it doesn't work for all the parts on the channel. I'm going to show you a way to fix this problem without relying solely on compression to do it for you.
When recording vocals, it's common for the vocalist to move around, whisper, sing, rap, and yell, all within the same take. If you are constantly getting overly dynamic recordings, from quiet to loud to clipping, first take the time to address it at the source. Start by giving your client a heads-up to try to stay within a certain distance of the microphone for the best quality. If you mention that it will give a better result, chances are they will have no problem trying to keep their distance in check. Of course they will still move around, and things will still sound overly dynamic, but if it's even 10% better than before, it helps.
Secondly, check the settings on your preamp. Many preamps and channel strips have a compressor built in. “But wait, I thought you said taming without compression!” Well, compression on the front end, before the signal is converted into your DAW, not only becomes part of the recording chain before mixing, but can prevent clipping and work wonders when it comes time to mix the song. Use compression at the source, not when it's too late.
Here at Studio 11, we have 2 main preamps that we use for vocals: the Manley VoxBox and the Drawmer 1969 Mercenary Edition tube preamp. Both have compressors built in, and as a result the compressor becomes an integral part of the front-end chain. If your preamp doesn't have one, look into putting a compressor in your chain (but be careful to get one without too much coloration, or you can do more harm than good). If you do have a compressor, a gentle ratio and a fast attack with a few dB of gain reduction can work wonders. The fast attack will grab the loud transients to avoid clipping, tame some of the mid-sized transients, and bring up the quiet parts. If all is done correctly, you will end up with a hotter signal for your A/D converter (the hotter the better for conversion) and a more solid, thicker-looking waveform. But there will still be parts that are too quiet. As I mentioned before, the easy (and lazy) fix is to slap a compressor on there to bring out some of those softer parts. The problem is that it may work for some of the sound, but when there are very loud parts, it's going to sound very audibly compressed and lifeless (which you don't want). It's a good rule of thumb to always use compression as a tool, not an effect. In most cases compression should be transparent and shouldn't take you away from the performance of the vocals. Of course there are cases where overcompression can be used as an effect (think the “All In” setting on the Universal Audio 1176), but in most cases subtlety is best.
​
​
So instead of jumping right to the compressor, bring out that simple Gain plug-in you forgot existed in AudioSuite in Pro Tools (or, if you're in Ableton Live, split the clip and adjust the volume of each section manually). Go through the track, using your ears and looking at the size of the waveforms, and bring up the quieter parts so that they are on the same level as the rest. Of course, I'm not saying the entire vocal track should be equally loud, because that would be boring and dynamicless, but put the parts on a similar volume plane so they are always audible. Keep in mind there will be parts that are supposed to be quieter (think intros, bridges, outros), but the parts in the middle of phrases, i.e. words that were sung quietly when the vocalist moved away from the mic, or a word that was much louder than the others, can be gained up or down accordingly.
If you prefer a smoother, more continuous volume adjustment (instead of gaining sections), reach for volume automation. Riding the volume to level out quiet parts can lead to a very rewarding result, and will certainly be better than slapping a compressor on to “fix” those spots. Once everything is leveled out and audible, you can then add a compressor with a gentle ratio to glue it even further (if you even need to), or to bring out quiet transients you couldn't reach via the gain tool or automation. Following these steps will help your vocals feel more present in the mix, and will help you avoid that dreadful “overcompressed” sound you hear so often. Just try it out: play around with front-end compression, gain, and volume automation, and you'll be well on your way to taming those vocals without sucking the life out of them.
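For the programmatically inclined, the section-gain idea boils down to something like this sketch. It's my own toy logic with arbitrary window and target values; clip gain in a DAW does the same job by hand and by ear:

```python
import math

def ride_gain(samples, window=1024, target_rms=0.2, max_boost=4.0):
    # Measure each section's RMS; lift quiet sections toward the target,
    # leave loud sections alone (capped so near-silence isn't blasted up)
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(x * x for x in chunk) / len(chunk))
        if rms < 1e-4:
            gain = 1.0  # effectively silence: leave it be
        else:
            gain = min(max(target_rms / rms, 1.0), max_boost)
        out.extend(x * gain for x in chunk)
    return out

quiet = [0.05 * math.sin(2 * math.pi * t / 64) for t in range(1024)]  # soft word
loud = [0.5 * math.sin(2 * math.pi * t / 64) for t in range(1024)]    # normal phrase
```

The quiet passage gets lifted toward the rest of the take while the loud passage is untouched, which is exactly the "similar volume plane" goal, without a compressor squashing the loud parts.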
How To Make Your Digital Tracks Sound Analog...In The Digital Domain
By Dan Zorn
​
Let's face it: with the convenience and quality of modern DAWs, plug-ins, and virtual instruments, it's hard to justify spending a fortune on physical analog gear. Digital (computer) processing has become so good that using the right plug-ins and techniques can yield a surprising (and convincing) analog sound. I'm going to show you just a few ways to bring the pleasing qualities of analog into that sterile digital recording.
​
Adding Warmth: Analog Modeling Plug-Ins, EQ, De-Essing
​
One way to add warmth to your digital sounds is to run them through analog modeling plug-ins. Instead of grabbing that stock Pro Tools compressor, try plug-ins like the CLA-2A from Waves (modeled after the famous LA-2A compressor), or the SSL bus compressor. You’ll find these compressors react a little differently than a stock digital compressor, and tend to add more coloration, more natural saturation, and a bit more noise (all characteristic of analog) to the signal. For EQ, Waves also offers an API parametric EQ, modeled after the modules in API’s analog consoles, that sounds great. Actual analog EQs add harmonic distortion (more on this later) simply because they are slightly nonlinear, whereas a purely digital EQ can introduce harmonics unrelated to the fundamental that sound unnatural when pushed hard. You’ll notice, though, that when designing the digital versions of these EQs, the API EQ in particular, the developers took the nonlinear harmonic distortion of the actual unit into consideration, so the result more closely resembles an “analog” sound. Another simple way to warm up a track is to subtractively EQ some high-end content. Rolling off or reducing unwanted highs (specifically the harsh 4 kHz to 8 kHz range) can add the smoothness that analog processing naturally gives. Also experiment with de-essing things other than vocals, such as cymbals and guitars (the Renaissance DeEsser works great), for a similar smoothing effect. Take some time to learn the characteristics of each plug-in; you’ll be able to utilize and control them much more when you do.
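To make the “roll off some highs” idea concrete, here is a minimal one-pole low-pass filter in Python with numpy. This is a sketch under my own assumptions: the 8 kHz cutoff and the function name are illustrative, and it is far cruder than any plug-in EQ. A first-order filter like this attenuates highs gently, roughly 6 dB per octave, which is closer in spirit to a soft analog roll-off than a surgical digital cut.

```python
import numpy as np

def one_pole_lowpass(x, sr, cutoff_hz=8000.0):
    """Gentle first-order low-pass (~6 dB/octave above the cutoff)."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)  # standard one-pole coefficient
    y = np.empty_like(x)
    prev = 0.0
    for i, sample in enumerate(x):
        prev = (1.0 - a) * sample + a * prev   # smooth toward the input
        y[i] = prev
    return y

# A 12 kHz tone comes out noticeably quieter than a 500 Hz tone
sr = 44100
t = np.arange(sr // 10) / sr
dull = one_pole_lowpass(np.sin(2 * np.pi * 12000 * t), sr)
bright_kept = one_pole_lowpass(np.sin(2 * np.pi * 500 * t), sr)
```

In a mix you would do this with a shelf or gentle bell in your EQ of choice; the sketch just shows why the move darkens a harsh track without gutting the mids.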
Adding Harmonics: Harmonic Distortion, Emulated Tape Saturation
A common problem with digital sounds is that they sometimes lack the harmonics that analog naturally adds from its physical circuitry. Even simply running a sound through an analog console will add pleasing harmonics related to the fundamental pitch by the time it reaches the end of the circuit. There are a couple of ways to add harmonics to a track in the digital domain. A go-to plug-in I use to add a bit of harmonics or grit is Lo-Fi by Waves. Even with just a small amount of its saturation, distortion, or bit/sample-rate reduction, you can introduce new harmonics that give the sound much more analog character. Another favorite of mine is the Kramer Tape emulation. It achieves a similar sound to Lo-Fi in a slightly different way (tape compression instead of bit and sample-rate reduction), adding tons of the warm harmonics a real tape machine would reproduce. Adding harmonics in this way can also be described as adding distortion to the signal. Some people think of distortion as a negative thing in audio (especially in the digital domain), but used subtly it can not only make a sound richer and more pleasing to the ears, it can also increase its subjective loudness (since the added harmonics fall closer to the frequency range our ears are most sensitive to). ...So if you want to make your mixes appear louder...hint hint.
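The “saturation creates harmonics” claim is easy to demonstrate. Below is a minimal numpy sketch (my own toy waveshaper, not how Lo-Fi or Kramer Tape work internally) that pushes a pure sine through a tanh curve; looking at the FFT afterward shows new odd harmonics (3rd, 5th, ...) that simply were not in the input.

```python
import numpy as np

def soft_saturate(x, drive=4.0):
    """tanh waveshaper: a crude stand-in for analog-style saturation.
    The smooth nonlinearity generates odd harmonics of the input."""
    return np.tanh(drive * x) / np.tanh(drive)  # renormalize the peak

sr = 44100
t = np.arange(sr) / sr                      # 1 second -> 1 Hz FFT bins
tone = 0.5 * np.sin(2 * np.pi * 441 * t)    # 441 Hz lands exactly on a bin
sat = soft_saturate(tone)

clean_spec = np.abs(np.fft.rfft(tone)) / len(tone)
sat_spec = np.abs(np.fft.rfft(sat)) / len(sat)
# clean_spec has energy only at 441 Hz; sat_spec grows peaks at 1323 Hz, 2205 Hz, ...
```

Real analog circuits (and good emulations of them) shape these harmonics in more complex, level-dependent ways, but the principle is the same: the nonlinearity adds content that is musically related to the fundamental.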
Increasing The Noise Floor
In a modern digital system, the noise floor is practically complete silence. This can be a good thing if you want crystal-clean sounds, but one of the pleasing qualities of analog is that it is not clean; it is, in fact, a little dirty. So bring up a sample, or generate a sound, of white noise, vinyl noise, or background hum, and put it in the background of the song so that it is just barely audible. The way I usually add noise is through a free plug-in by iZotope called Vinyl. Just put Vinyl on a separate track (so you have better control over it), increase the “mechanical noise,” roll off some of the low frequencies with an EQ, and let it sit quietly in the background. Aside from the noise sounding aesthetically pleasing and adding to the overall harmonic content of a song (increasing subjective fullness), it also fills in the gaps where a song cuts out to silence (drops, breaks). Complete silence in the digital domain just tends to sound strange to our ears, especially before or after a full spectrum of sound. This weirdness is most noticeable on headphones, where your ears are blocked off from outside noise (for a split second you would swear you were wearing earplugs!). We like a noise floor; we hear one every second of every day. It not only adds an analog quality, but also a very desirable human element.
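The noise-bed trick itself is trivial to sketch in code. Here is a minimal numpy version (my own helper, nothing to do with iZotope Vinyl) that mixes white noise under a track at a fixed RMS level, around -60 dBFS, so silent gaps are never truly silent:

```python
import numpy as np

def add_noise_floor(signal, level_db=-60.0, seed=0):
    """Mix faint white noise under a track so 'silence' is never absolute.
    level_db sets the RMS of the noise bed relative to full scale."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(signal))
    noise *= 10.0 ** (level_db / 20.0) / np.sqrt(np.mean(noise ** 2))
    return signal + noise

# Even a stretch of pure digital silence now carries a quiet hiss
out = add_noise_floor(np.zeros(44100))
```

In practice you would high-pass the noise, as suggested above, and keep it on its own fader; the -60 dB default here is just an illustrative starting point, quiet enough to feel like a floor rather than a hiss problem.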
These are the basics of making your digital tracks sound more analog using digital plug-ins. There are even more things you can do (adding tape wow and flutter, for example), but these are good starting points for bringing a little more analog flavor into your digital tracks. Play around with different combinations, listen to analog recordings, and try to mimic them; you’ll find you had the tools to do it this whole time.