V6N6 7.14.15
Amateur podcasters can call them what they want, but between us broadcasters, we know those so-called subscribers are really listeners with earbuds and a cellphone.
And that means we can reach them like we usually do -- through their ears.
No one knows those ears better than broadcasters. We know about good content and good sound. What’s new to us are the codecs and the listening environments and devices used for podcasts. To explain what it all means, we asked our audio pros Jeff Keith and Mike Erickson to give us a quick sound check on podcasting.
Oh, the places they go, the things they do
The iPhone, Android and other smartphones are skewed to the vocal range for obvious reasons. Factor in the codec bit-rate reduction needed to get that sound to those earbuds, not to mention all that background noise your listeners are subjected to while listening on the move, and there’s no way you should hand them a full dynamic range of sound.
Removing program content that can’t be heard by these devices will improve the subjective quality of your audio. Jeff suggests that anything below 100Hz and above 12kHz won’t be missed. In fact, he says, “Removing those frequencies might actually help your sound, due to reduced or removed ‘codec teasers’ such as hiss or hum.”
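If you’re prepping podcast audio in software rather than a hardware processor, that band-limiting is easy to script. Here’s a minimal sketch in Python with scipy; the file names, filter order and 16-bit PCM assumption are ours for illustration, not anything specific to Jeff’s setup.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# Read the raw podcast mix (hypothetical file name), assuming 16-bit PCM.
fs, audio = wavfile.read("podcast_raw.wav")
audio = audio.astype(np.float32) / 32768.0

# 4th-order Butterworth band-pass from 100 Hz to 12 kHz: rolls off rumble
# and hum below 100 Hz and the hissy top end above 12 kHz that the codec
# would otherwise spend bits trying to encode.
sos = butter(4, [100, 12000], btype="bandpass", fs=fs, output="sos")
filtered = sosfilt(sos, audio, axis=0)

wavfile.write("podcast_bandlimited.wav", fs, (filtered * 32767.0).astype(np.int16))
```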
For all the other unwanted noise that creeps in during pauses in programming or when the AC kicks on during a recording, you’ll need a noise gate, same as in any other program production. Any good mic processor (such as our M1, M2 or M4-IP mic processors) should have a noise gate to keep the noise floor from rising during pauses in vocal content. This, too, will give the codec less nonsense to work with and turn into noise.
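If there’s no mic processor in the chain, a crude gate can be done in software as well. The sketch below is a bare-bones Python illustration; the threshold, attenuation and frame size are just starting points, and it skips the attack/release smoothing a real gate would apply.

```python
import numpy as np

def noise_gate(audio, fs, threshold_db=-50.0, attenuation_db=-20.0, frame_ms=10):
    """audio: mono float array in the -1.0..1.0 range.
    Frames whose RMS falls below the threshold get pulled down by
    attenuation_db so the noise floor doesn't ride up during pauses."""
    frame = max(1, int(fs * frame_ms / 1000))
    gain = 10.0 ** (attenuation_db / 20.0)
    gated = audio.copy()
    for start in range(0, len(audio), frame):
        chunk = audio[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2) + 1e-12)
        if 20.0 * np.log10(rms) < threshold_db:
            gated[start:start + frame] = chunk * gain
    return gated
```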
Processing to the codec
Unlike processing your on-air signal, where modulation control is the goal, processing for podcasting is all about controlling what the codec sees. This is why it’s important to give the codec consistent levels and a balanced left and right, especially at lower codec bitrates. Jeff recommends switching from stereo to mono for podcasts at bitrates less than 48kbps in order to preserve audio quality. The ideal is to maintain consistency going in, although often some audio processing can be helpful to smooth out level variations that can cause the codec to overwork. Avoid overly boosted highs, any noticeable hiss or hum, and distortion due to badly clipped audio, all of which adds to the codec’s work (and bit) load.
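To put the mono fold-down and level consistency in concrete terms, here’s a small Python sketch; the -20 dB RMS target is a placeholder of ours, not a recommendation from Jeff or Mike.

```python
import numpy as np

def prep_for_low_bitrate(stereo, target_rms_db=-20.0):
    """stereo: float array shaped (samples, 2), values in -1.0..1.0.
    Fold to mono and nudge the overall level toward a consistent RMS
    so the encoder sees steady levels going in."""
    # Averaging L and R (rather than summing) keeps the fold-down from clipping.
    mono = stereo.mean(axis=1)
    rms = np.sqrt(np.mean(mono ** 2) + 1e-12)
    gain = 10.0 ** (target_rms_db / 20.0) / rms
    # Cap the makeup gain so peaks stay below full scale -- hard clipping is
    # exactly the kind of distortion you don't want to hand the codec.
    gain = min(gain, 0.98 / (np.max(np.abs(mono)) + 1e-12))
    return mono * gain
```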
Adding a trace of AGC or compression can add a measure of “presence” to a podcast, but be careful. Keep in mind that many of your podcast listeners will be listening on headphones, and too much compression this close to the ear could cause fatigue. Others will be listening to longer-form podcasts through their sound system in the car, all the more reason why processing that isn’t fatiguing is important.
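For the software-only crowd, a gentle 2:1 compressor looks something like the Python sketch below; the threshold, ratio and time constants are illustrative starting points, not a prescription.

```python
import numpy as np

def gentle_compressor(audio, fs, threshold_db=-18.0, ratio=2.0,
                      attack_ms=10.0, release_ms=200.0):
    """audio: mono float array in the -1.0..1.0 range."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(audio)
    for i, sample in enumerate(audio):
        level = abs(sample)
        # Envelope follower: fast attack, slower release.
        coeff = attack if level > env else release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Reduce gain only above the threshold, by the chosen ratio.
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        out[i] = sample * (10.0 ** (gain_db / 20.0))
    return out
```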
How aggressively should you set the processing for podcasting? Mike says just enough to raise the audio above any ambient noise for listeners who don't have noise-cancelling headphones, but not so much that you remove all trace of quality for those who are downloading low-bitrate podcasts.
Most any audio processor that you have in the chain will work. But if you have a choice, use a processor like our Aura8-IP processing BLADE (which has eight separate multiband processors, one of which you can use for podcasting). It lets you selectively add AGC, compression or limiting by bypassing the other sections, rather than requiring all three functions to operate interdependently. This selectivity makes it a little less tricky to get the right amount and type of processing needed.
For more information on processing for the Internet, download Jeff Keith’s white paper, "Audio Transfer over the Internet." Wheatstone also introduced a new Audioarts console made for podcasting (our new Audioarts 08 has USB and a balanced or unbalanced stereo mixing bus) that is worth checking out if you plan to set up a separate sound booth or studio for podcasts.