When working with the endless vocal-effect options in today’s average digital audio workstation (DAW), it can be tempting to go overboard. It’s like having a huge, free buffet in front of you: of course you’re going to want some of everything. But that doesn’t mean you need to put chocolate on pizza or eat four plates until you get sick. Many artists get away with heavy layers of effects on their vocals. Take Radiohead, for example. Their seminal album Kid A opens with “Everything In Its Right Place,” in which singer Thom Yorke’s voice is reversed, looped, pitched up and down, and drenched in a variety of distorting effects. Yet above all of those vocal FX sits Yorke’s clear, human, emotive singing voice. So, when experimenting with effects the way Radiohead does, be on the lookout for these five signs that your vocals have too much processing.
1. The vocal mix is muddy
One of the first signs to look for when using heavy effects on your vocals is how those effects balance in the overall mix of your track. If you’re listening back on speakers and things sound muddy, or frequencies seem to be clashing, check your effects rack. Turn off effects like reverb, delay, and distortion one by one and test the mix. If it sounds cleaner without certain effects, consider removing them from your vocals, toning them down, or figuring out why they’re muddying the mix. For example, if the culprit is reverb, try EQ-ing the reverb so only the frequencies you want come through; if it’s distortion, try softening its tone so it’s less abrasive.
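If you like to tinker outside the DAW, the reverb-EQ tip above can be sketched in code. The following is a minimal, hedged example using NumPy and SciPy, assuming a one-second mono buffer standing in for a reverb return; the 300 Hz cutoff and 4th-order filter are illustrative starting points, not a universal recipe, and `highpass_reverb_return` is a hypothetical helper name, not a real DAW API.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_reverb_return(audio, sr, cutoff_hz=300.0, order=4):
    """High-pass filter a reverb return so its low end doesn't
    pile up under the dry vocal and muddy the mix.
    cutoff_hz is an illustrative default; tune by ear."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

# Demo: a fake one-second "reverb tail" containing low-end rumble
# (an 80 Hz sine) plus airy content (a 2 kHz sine).
sr = 44100
t = np.arange(sr) / sr
wet = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

# After filtering, the 80 Hz rumble is heavily attenuated while
# the 2 kHz "air" passes through almost untouched.
filtered = highpass_reverb_return(wet, sr)
```

The same idea applies inside any DAW: insert an EQ on the reverb return (not the dry vocal) and roll off everything below roughly 200 to 400 Hz, so the reverb adds space without stacking low-mid energy under the voice.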
2. You can’t understand the lyrics
Since we already referenced Radiohead, let’s use them as an example again. Thom Yorke is a beloved singer, but fans know it’s not always easy to understand what he’s saying, even when his voice is bare of effects. So, before we make our next point, we acknowledge that unintelligible lyrics can come down to singing style, not necessarily too much vocal processing. On the other hand, let’s assume you have pretty good diction and are a clear singer. If people who don’t know the lyrics have to ask what you’re saying, that isn’t a good sign. As much as effects like delay might make your falsetto sound ethereal and pretty, they can also make what you’re actually saying hard to decipher.
3. The emotion is lacking
One of the reasons people flock to singers like Adele or Christina Aguilera (whether you’re a fan or not, let’s examine them for academic purposes) is the emotion in their voices. You can hear what they’re going through in their vibrato, their vocal fry, their screams, their soft moments. If your voice is buried under an ocean of effects, those subtleties can get lost, leaving the listener feeling like they can’t connect with what you’re singing about. Think about it: have you ever heard a song in a language you don’t speak but still felt what the vocalist was singing? It happens all the time, which is why we call music the universal language. Before you go crazy with the vocal effects, make sure the emotion you want to capture is still present.
4. The vocals blend too much with other instruments
Bands like Sigur Rós have several songs where singer Jónsi more or less uses his voice as another instrument. He isn’t singing any lyrics; rather, he’s singing wordless falsetto, usually under an ocean of effects. If that’s your goal, then carry on! However, for most artists, from pop to funk to indie rock, you want your vocals front and center. When you get too liberal with digital vocal processing, your vocals can get so blurred in the mix that it’s not always clear to the listener whether they’re hearing a singer or just some random vocal sample that isn’t meant to be heard prominently. If you can’t tell the difference between a shoegaze-y guitar and a reverb-blurred falsetto, you might want to dial back the vocal processing.
5. They don’t feel human
Many popular artists use vocoders, Auto-Tune, and other robotic effects on their vocals, but even when they sound a bit like cyborgs, they still feel human. Of course, the feeling we get from music is subjective, but when Kanye West screams at the end of “Blood On The Leaves,” you can tell it’s a human screaming, and really putting everything into it, even if it sort of sounds like Auto-Tune gargling. So, unless you’re purposefully trying to sound and feel like a robot, make sure that even with various vocal effects there is enough room for your human voice to come through, whether that’s the strain in your voice, your dialect, or your accent. Anyone can have a computer speak words (as Radiohead popularized on “Fitter Happier”), but only you and your voice can sound like you. Don’t let the temptation to become a full-on robot erase your own unique feeling and energy.
Sam Friedman is an electronic producer and singer-songwriter based in Brooklyn, creating music as Nerve Leak. Praised by major publications such as The FADER, his unique blend of experimental and pop music has earned him hundreds of thousands of streams across the web.