
Knowing how to correct in post-production the issues that were created in the field will not only make you a stronger producer, it will make your video sound, and look, that much more professional. Poor knowledge of audio holds many producers back from creating a truly polished finished work, and the difference between good and merely okay audio can be the deciding factor in viewer engagement.

What is a graphic EQ?

The graphic equalizer, also known as a graphic EQ, is a powerful tool for audio production. You might find yourself dealing with a distracting hum from a refrigerator that will ruin a perfect take if you can't remedy it. Knowing how to use the graphic EQ to solve this problem will save you from having to re-shoot.
Many people have encountered a graphic EQ at some point in their lives, but few use one correctly. When you adjust the bass or treble on your car stereo, for instance, you're adjusting a corresponding band of EQ. The main difference between your car stereo and a graphic EQ used for audio production is the precision with which each frequency can be adjusted.

Defining a Band

A car stereo, generally speaking, has bass, mid and high-frequency adjustments; that would be considered a three-band EQ. The Q, or range of frequencies that makes up a band, is the main difference between a five-, 10-, 20- or 30-band EQ. A 30-band graphic EQ has 30 different sliders, each representing a given frequency range. These manipulate the volume of each frequency range within an audio clip, allowing you to raise or lower it according to the desired effect.

The more bands in an EQ, the narrower the Q, or range, that each fader controls. The average range of human hearing spans 20Hz to 20kHz. Because we produce audio for humans to hear, that's the range a graphic equalizer slices up.
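To see how band count and Q interact, it helps to know that professional hardware units commonly use 31 bands, each one-third of an octave wide. Here's a minimal sketch computing those center frequencies, assuming ISO-style third-octave spacing around a 1kHz reference (the spacing scheme is an illustration, not something prescribed by the article):

```python
# Sketch: center frequencies of a third-octave graphic EQ.
# 31 bands, each one-third of an octave wide, spaced around 1 kHz
# (ISO-style spacing assumed for illustration).
centers = [1000 * 2 ** (n / 3) for n in range(-17, 14)]  # 31 values

print(len(centers))        # 31 bands
print(round(centers[0]))   # ~20 Hz, the bottom of human hearing
print(round(centers[-1]))  # ~20 kHz, the top of human hearing
```

Notice that the bands neatly tile the 20Hz–20kHz range of human hearing, which is exactly why that slicing is used.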

Hey, that hertz!

A hertz, or Hz, is the unit of frequency and is defined as one cycle per second. A cycle is one complete rise and fall of a sound wave. Think of a lake: you throw in a rock, and the ripples spreading from the splash behave much like sound waves. The spacing between one wave and the next corresponds to frequency. In the world of audio, a low frequency means more space between the waves, whereas a high frequency means the waves arrive closer together.

To get a good idea of how frequency ranges work, examine a piano. Each key produces a given frequency, and pianos are tuned according to these frequencies. Near the very middle of the keyboard is Middle C, and the A note just above it is 440Hz. If you were to hum a constant note matching that A, your voice would be at 440Hz. Musical instruments are tuned to this standard. Instruments such as bass guitar occupy a lower frequency range, while instruments like a violin or flute sit in the high range. The larger the Hz number, the higher the pitch: a sound at 1kHz is higher than one at 50Hz and, in line with that, the lower the frequency, the lower the pitch.
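The A440 standard makes piano frequencies easy to compute: every key is one semitone (a factor of the twelfth root of two) above its neighbor, and each octave doubles the frequency. A quick sketch, assuming the usual 88-key numbering in which key 49 is the A above middle C:

```python
# Sketch: frequency of any piano key under A440 tuning, assuming the
# standard 88-key numbering (key 49 = A above middle C, key 40 = middle C).
def key_freq(n: int) -> float:
    return 440.0 * 2 ** ((n - 49) / 12)   # each key is one semitone apart

print(round(key_freq(49), 2))  # 440.0  -- the tuning reference
print(round(key_freq(40), 2))  # 261.63 -- middle C
print(round(key_freq(1), 2))   # 27.5   -- the lowest A, near the bottom of hearing
```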

Finding Your Range

Every sound has a frequency range: a male voice, a female voice, the exhaust note of a motorcycle and every musical instrument. Knowing which frequencies to remove to eliminate an offending sound is key. The constant hum of a water cooler or the chirp of an annoying cricket lives within a specific range of frequencies. Identify the offending frequency by pulling down one band at a time until the noise is lessened or removed.
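Sweeping bands by ear works, but software can also point you straight at the culprit: a Fourier transform shows how loud each frequency is. A hedged sketch with NumPy, using a synthetic clip rather than a real recording (a 440Hz tone stands in for the wanted sound, with a quieter 60Hz mains-style hum mixed underneath):

```python
import numpy as np

# Sketch: locating a hum numerically. The clip is synthetic: a 440 Hz
# tone (the wanted sound) plus a quieter 60 Hz mains-style hum.
sr = 8000
t = np.arange(sr) / sr                               # one second of audio
clip = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

spectrum = np.abs(np.fft.rfft(clip))                 # loudness per frequency bin
freqs = np.fft.rfftfreq(len(clip), 1 / sr)           # bin index -> Hz
loudest = sorted(round(float(f), 1)                  # two strongest frequencies
                 for f in freqs[np.argsort(spectrum)[-2:]])
print(loudest)  # [60.0, 440.0] -- the hum shows up immediately
```

Once you know the hum sits near 60Hz, you know which EQ band to pull down.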

It's best practice to subtract frequencies rather than add to them. Boosting a frequency raises the overall level of that sound and can quickly create muddy audio. Many graphic EQs also don't boost transparently, meaning they color the sound source, and that coloration is typically unwanted. That said, if you want more presence, you might add a bit to certain frequencies so the audio cuts through your mix. Make additions and removals in small increments; it's very easy to muddy the sound by boosting frequencies or to change its character by removing them.

Problems like hum might exist in a very small frequency range, but things like the human voice occupy a much wider one. For example, typical male speech lives between 85Hz and 180Hz, and female speech between 165Hz and 255Hz. Knowing this is very helpful. Say you're interviewing a man and you want to get rid of all unwanted frequencies: you can roll off everything above 180Hz and below 85Hz, removing the frequencies outside those that make up his speech. However, this can affect the quality of the voice, so in many cases it's better to be subtle, lowering frequencies gradually so the cut ramps up or down like a slope rather than a cliff, which gives a more natural-sounding removal. And since some people have high voices and some low, the range can shift; treat these numbers as a starting point, not the law of the land.
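The roll-off described above can be sketched in code. This version uses a crude FFT mask that cuts everything outside the 85-180Hz range the article gives for male speech; a real EQ move would slope gently rather than cut like this. The clip is again synthetic: a 120Hz tone stands in for the voice, with a 60Hz hum below its range.

```python
import numpy as np

# Sketch: a brick-wall version of the roll-off. A 120 Hz tone stands in
# for the male voice; a 60 Hz hum is the unwanted noise below its range.
sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 120 * t)
clip = voice + 0.5 * np.sin(2 * np.pi * 60 * t)   # voice plus hum

spectrum = np.fft.rfft(clip)
freqs = np.fft.rfftfreq(len(clip), 1 / sr)
spectrum[(freqs < 85) | (freqs > 180)] = 0        # keep only 85-180 Hz
cleaned = np.fft.irfft(spectrum, len(clip))

# The hum is gone and the "voice" survives almost untouched.
print(np.allclose(cleaned, voice, atol=1e-8))  # True
```

A gentler, more natural-sounding version would taper the mask toward zero instead of zeroing bins outright.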

Understanding what things live in each frequency range is a huge advantage when combining or mixing many different sound sources together. Mixing your audio well creates a soundscape and allows you to bring focus to one thing over another. An analogy to this from the video world is depth of field. By having a defined focal plane, you are telling the viewer to look at one thing over another. The same is true for audio.


Having too many sounds within the same frequency range creates a chaotic soundscape. Think of a large room with many people all talking at the same time. With so many sounds happening simultaneously and within the same frequency range, hearing nuance is very difficult or impossible. By working the frequencies of your audio sources, you can create space for each to live, allowing for all sound sources to be heard as a cohesive whole.

Mixing for Clarity

Your primary goal is to give your audio better clarity. Audio that lacks clarity is considered muddy: it contains frequencies it doesn't need, and raising the volume of a sound source raises the volume of every frequency within it, unwanted ones included. To gain clarity, remove the unnecessary frequencies from your sound source. As in our example of the male voice, roll off the frequencies outside the range you are trying to accentuate.
Many audio editing programs include a graphic EQ along with a slew of presets. Starting with a preset gives you a baseline you can then adjust to meet your desired outcome. And once you have found settings that successfully remove a problem, saving them lets you get rid of the same kind of sound quickly at a later date.

Increased Precision

The graphic EQ's brother from another mother is the parametric EQ. It works on the same principle as a graphic EQ, but the Q of each filter is user-selected: the range of frequencies you manipulate can be whatever you'd like it to be. Want to remove all frequencies below or above a given point? It's simple with a parametric EQ. Choose where you want the cut to begin, then lower everything beyond it. You can also notch out narrow frequency ranges the same way.

Summing it Up

A graphic EQ is a very simple tool that is easily misunderstood and misused. Take a less-is-more approach, and you will find that just a bit of EQ change goes a long way toward giving your audio clarity. Good audio production alongside good video production is key to professional work; one without the other is best summed up like this: you are only as strong as your weakest link.

Chris Monlux is Videomaker’s Associate Multimedia Editor and Video Producer.