TV News January 2019

WHEAT:NEWS TV  JANUARY 2019  Volume 6, Number 1

Sneak Peek: WTOP Installation

Here is a quick preview of the massive newsroom RadioDNA is building for WTOP. The facility launches in early February. Stay tuned for more updates! Thanks to Rob Goldberg of RadioDNA for the video!

If you want to know where you’re going,
you have to first know where you’ve been

Below, we take a quick look back at the stories and the people that defined the year and point to what’s ahead for our industry in 2019.

 

All for One. One for All.

WCCB WEATHER

October 2018: There’s nothing quite like a Category 4 hurricane to prove that you can broadcast live from just about anywhere there’s a dry studio and an IP connection. Here’s how Bahakel Communications kept the information flowing for its three Carolina stations in three separate communities during the worst of Hurricane Florence.

READ: All for One. One for All.

On most days in the Carolinas, Bahakel Communications produces newscasts and weather reports from its main newsroom in Charlotte for WCCB-TV, Charlotte; WOLO-TV, Columbia, SC; and WFXB-TV, Myrtle Beach, SC.

But, on other days when, say, a hurricane pays a visit, live broadcasts can take place just about anywhere there’s a dry studio and an IP connection.

Bahakel Communications has an IP-networked Dimension Three audio console that handles mixing, mic control and IFB for three stations: one in Charlotte, one in Columbia and one in Myrtle Beach, SC. The console is part of the AES67-compatible WheatNet-IP audio network and is located in the main newsroom in Charlotte, which is almost 100 miles from Columbia and more than 150 miles from Myrtle Beach. 

All studio cameras, mics, IFB, prompter, weather system, and set monitors in the Columbia newsroom are linked to Charlotte by IP. In Charlotte, it all goes through production and master control, and is then re-encoded with all the sub-channels and sent back to Columbia via IP transport for transmission. The same setup is used for some programming for WFXB-TV in Myrtle Beach. Mid-day weather is presented from the Columbia newsroom, through WCCB-TV production, and fed back via IP to WFXB-TV for air. 

At any time, news and weather reports can originate from any of the three locations for all or any of the stations. Should staff need to evacuate one studio location, they can continue to broadcast updates from another studio location. This was the case when Hurricane Matthew required a mass evacuation in Myrtle Beach. As WFXB-TV’s building sat empty and dark, the staff was able to remotely produce live updates for the Myrtle Beach community from the main studio in Charlotte.

For Hurricane Florence, most of the live coverage was done out of the WCCB-TV studio in Charlotte, and our friends at Bahakel Communications tell us that all three stations weathered the storm without incident. 

5 Key Findings for Commissioning AES67 (July 2018)

BladeFest 5

July 2018: Over the summer, we decided to take AES67 out for a spin. We set up a trial run with AES67 devices from Genelec, Ward-Beck, Dante, and Axia into a WheatNet-IP system of 12 mixing consoles, 62 hardware BLADEs (I/O access units), 100 software BLADEs, talent stations, SideBoards, Smart Switch panels, and software including three different vendors’ automation systems. It was all tied together through Cisco and Dell switches. Here is what we learned.

Read: Five Key Findings for Commissioning AES67

By Andy Calvanese 
Wheatstone Vice President/Technology

By now, you’ve heard that AES67 is part of the SMPTE 2110-30 standard and that all the major IP audio vendors offer this audio transport standard as part of their system. 

The AES67 format will be useful for streaming audio between the control room and the master control and there’s good reason to believe that it will effectively eliminate the practice of HD/SDI audio embedding/de-embedding with video, and all the hardware that goes along with HD/SDI workflows.  

There’s been a great deal of talk about AES67, but that is as far as it’s gone for most broadcasters – essentially a new standard still sitting on the dealer lot waiting for a test drive. 

How easy will it be to commission AES67 in your plant?

We decided to take AES67 out for a spin to find out. Earlier this summer we did a trial run of AES67 through a large WheatNet-IP system staged at the Wheatstone factory in New Bern, North Carolina, during what we call a BLADEFest. (BLADEs are the I/O access units that make up the WheatNet-IP audio network.) We do BLADEFests periodically to test our system under real-world conditions, and for this one, we added in a few AES67 devices while we were at it.

We added AES67 devices from Genelec, Ward-Beck, Dante, and Axia into the WheatNet-IP system of 12 mixing consoles, 62 hardware BLADEs (I/O access units), 100 software BLADEs, talent stations, SideBoards, Smart Switch panels, and software including three different vendors’ automation systems. It was all tied together through Cisco and Dell switches. 

We ran the system through a series of automated torture tests that included completely rebooting the system and verifying proper operation afterward. We’re happy to say that after more than 160 reboots, not a single connection failure or loss of audio occurred. We also learned a great deal about commissioning AES67 in a plant. Here are a few major findings.

Finding #1. To use AES67 devices, your system must have a PTPv2 clock reference device, preferably synced to GPS for absolute timing reference.

AES67 specifies version 2 of the IEEE 1588 Precision Time Protocol, or PTP, a protocol so precise that, under ideal conditions, timing accuracy of better than 1 microsecond can be achieved. While some AES67 devices can provide PTP timing signals that might suffice for a small system, an ordinary crystal oscillator in a PC or I/O device is nowhere near accurate and stable enough to provide an absolute timing reference for a larger system, hence the need for a standalone master clock generator.
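To put the gap in perspective, here is a quick back-of-the-envelope check. The ±50 ppm figure is a typical crystal tolerance we are assuming for illustration, not a spec for any particular device:

# Rough comparison of a free-running crystal against the PTP target.
crystal_ppm = 50                       # assumed crystal tolerance, parts per million
drift_per_second = crystal_ppm * 1e-6  # seconds of drift per second of runtime

print(f"Crystal drift: {drift_per_second * 1e6:.0f} microseconds per second")  # 50
print(f"Drift per hour: {drift_per_second * 3600 * 1e3:.0f} milliseconds")     # 180

A free-running oscillator blows past the 1-microsecond PTP target in a fraction of a second, which is why a dedicated grandmaster clock, preferably GPS-disciplined, is worth the investment.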

For even greater timing accuracy, you can use PTP-compliant switches. These are significantly more expensive and are not needed for normal audio distribution, but they are necessary for applications that require absolute phase accuracy for audio signals distributed across complex networks with multiple switch hops.

Once the PTPv2 clock is running, it’s possible to begin connecting AES67 devices to the network.

Finding #2. Before connecting AES67 devices, map out an IP and stream multicast address plan with all devices on the same IP subnet. 

Each AoIP vendor has its own way of allocating addresses; a plan will ensure there’s no overlap and that AES67 devices are on the same IP subnet, since multicasting does not normally cross subnet boundaries. Start with the AES67 devices that are least common or least flexible in specifying or changing multicast addresses.
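Here is a minimal sketch of what such a plan might look like in Python. Every address, range, and device name below is made up for illustration; substitute your own subnet and your vendors’ actual multicast ranges:

# Sketch of an address-plan sanity check (all values are placeholders).
import ipaddress

subnet = ipaddress.ip_network("192.168.87.0/24")  # one subnet for all AES67 devices

devices = {                                # device control/interface addresses
    "wheatnet-blade-01": "192.168.87.10",
    "genelec-monitor":   "192.168.87.40",
    "dante-gateway":     "192.168.87.60",
}
for name, addr in devices.items():
    assert ipaddress.ip_address(addr) in subnet, f"{name} is off-subnet"

streams = {                                # per-stream multicast addresses
    "wheatnet-src-1": "239.192.0.1",
    "wheatnet-src-2": "239.192.0.2",
    "dante-src-1":    "239.69.0.1",
}
assert len(set(streams.values())) == len(streams), "multicast address overlap"
print("Address plan is consistent.")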

Wheatstone AES67 1

Finding #3. When adding an AES67 device to the network, set the system sample rate at 48kHz unless you know the device sample rate.
AES67 does not require devices to support 44.1kHz, and many do not. You’ll most likely find this setting and others in the admin software that comes with the network system, such as the WheatNet-IP audio network’s Navigator software, an interface screen of which is shown here.
 

Wheatstone AES67 2

Finding #4. When adding an AES67 device to the network, pay attention to packet timing incompatibilities.
WheatNet-IP uses 1/4 ms packet timing for minimum latency. Most AES67 devices also support 1/4 ms packet timing but some, such as Dante, do not. For those devices that do not use 1/4 ms packet timing, we enabled the AES67 1 ms Support option in WheatNet-IP Navigator, as shown here. 
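To see the incompatibility in concrete terms, here is a quick calculation of samples per packet at the 48kHz system rate discussed above; the numbers are simple arithmetic, not vendor specs:

# Samples per RTP packet at 48 kHz for the two common AES67 packet times.
sample_rate = 48_000                          # Hz

for ptime_ms in (0.25, 1.0):                  # 1/4 ms vs. 1 ms packet timing
    samples = sample_rate * ptime_ms / 1000
    print(f"{ptime_ms} ms packets -> {samples:.0f} samples per packet")

# 0.25 ms -> 12 samples per packet; 1 ms -> 48 samples per packet.
# Shorter packets mean lower latency but four times the packet rate.

A receiver expecting one packet size cannot decode the other, which is why the 1 ms support option has to be enabled for those devices.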

Finding #5. Some AES67 devices do not offer an easy way to manually manage streaming details, although they can often read those details in the form of an SDP file.
In our case, we created SDP files by simply right-clicking on the desired source stream’s name in the Navigator crosspoint grid and opening a window that let us create the file. 

Below are a few sample SDP files from WheatNet-IP and Dante showing multicast address, packet timing, sample rate and stream formats.   
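As a rough guide to reading them, here is a generic sketch of the fields such a file carries; every value below (addresses, port, session name, clock identity) is an illustrative placeholder, not taken from the actual WheatNet-IP or Dante files:

# A generic AES67-style SDP, built as a Python string so the key
# lines can be annotated (all values are illustrative placeholders).
sdp = """\
v=0
o=- 1311738121 1311738121 IN IP4 192.168.87.10
s=Example Source Stream
c=IN IP4 239.192.0.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:0.25
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
"""
# c=           the stream's multicast address
# a=rtpmap     stream format: 24-bit linear audio, 48 kHz, 2 channels
# a=ptime      packet timing in milliseconds (0.25 here; 1 for 1 ms streams)
# a=ts-refclk  the PTPv2 grandmaster the stream is clocked against
print(sdp)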

Wheatstone AES67 3a

Overall, commissioning AES67 in most broadcast plants should be a non-event as broadcasters begin adopting the SMPTE 2110 suite of standards.

 

Plugfest 2018. We're Not Talking.

DannyAtPlugfest

Pictured: Danny Teunissen - all things Wheat in The Netherlands and beyond - behind the racks at Plugfest, which we are not talking about...

September 2018: In August, we attended the plugfest in Wuppertal, Germany, which made huge strides in IP interoperability and AES67 compatibility. The results were heard at the IP Showcase during the IBC show in Amsterdam – one more example of what we as an industry can accomplish together.

Read: Plugfest 2018. We're Not Talking.

 

The first rule of Plugfest 2018 is to not talk about Plugfest 2018 so we won’t tell you that Wheatstone was there with the AES67 goods. Nor will we mention the huge strides we made in IP interoperability, or how much we enjoyed getting together with other industry manufacturers to talk IP in Wuppertal, Germany, last month. 

What we can tell you is that you’ll get to hear and see the results at the upcoming IBC show.

It’ll all be laid out for you at the IP Showcase in room E106/107. You’ll get instruction, case studies, and demonstrations of what we’re talking about – or rather, not talking about. This is the third IP Showcase at IBC and it promises to capture the momentum behind the migration to standards-based IP infrastructure for real-time professional media applications. 

IP Showcase is hosted by major technical and standards organizations within the broadcast industry: Audio Engineering Society (AES), Alliance for IP Media Solutions (AIMS), Advanced Media Workflow Association (AMWA), European Broadcasting Union (EBU), Society of Motion Picture and Television Engineers® (SMPTE®), and Video Services Forum (VSF).

Our AES67-compatible WheatNet-IP audio network units will be there doing their part and demonstrating AES67 compatibility in this overall showcase system based on SMPTE ST 2110 standards. AES67 is a critical part of the move to IP because, as the IP audio transport standard specified in SMPTE ST 2110, it can eliminate the practice of HD/SDI audio embedding/de-embedding with video and all the hardware that goes along with HD/SDI workflows. AES67 is an IP audio multicast transport standard that uses the Precision Time Protocol IEEE 1588 as the master clock reference.

Be sure to look for our I/O BLADEs at the IP Showcase and while you’re at IBC, stop by Wheatstone stand 8.C91 and let us know what you think.

 

Best of Show Awards

Lightning Combo

April 2018: When we get an award for the products that we sweated over all those months leading up to the NAB Show, it means a great deal to us. For the 2018 NAB Show, we received three awards that represented our key technological achievements. As we start our sprint to this year’s NAB, we’re reminded again that all the products we produce are judged first and foremost by the people who use them. 

Read: Best of Show Awards. Wheat Gets Three.

Every NAB show feels like a finish line. We design, we build, we make changes, we add features, we tweak features. We push ourselves as fast and as hard as we can to bring our individual and collective vision to the industry. We see the first day of the show coming and start counting backward to the cutoff point - the point where we need to pack up what we’ve been working on and take it to the public. But rather than being the finish line, it’s only the next starting line.

We believe we make a difference in the broadcast community. We have ideas - some of which become products, some of which become blueprints for the way forward. We look at the broadcast community every day and evaluate the way it works and the way we work in it, always with a single objective – to improve things. The ways we work; the audio we hear; the ways we interact.

And when we finally get to the show, the excitement of being able to share what we’ve been working on becomes the driving force. We pull out all the stops and make sure we’re presenting it in the best possible way so that you – the broadcast community – can partner with us to implement it all.

While our reward is seeing the difference we make, we’d be lying if we said it didn’t feel great to be recognized by the media in the industry. Next to customer acceptance, nothing’s better for a manufacturer’s self-esteem than receiving a NewBay Best of Show Award. It’s the closest this industry gets to an Oscar and it says that a new product has been evaluated by a panel of engineers and industry experts, and selected based on innovation, feature set, cost efficiency and performance in serving the industry.

So, as we counted down the minutes to the close of NAB 2018, we held our breath, hoping to get at least one Best of Show award.

And guess what? We received three:

Our new ScreenBuilder 2.0 virtual environment creation tool took one from TV Technology.

ScreenBuilder Award Combo

Our new PR&E EMX AoIP console received one from Radio magazine.

EMX AwardCombo

And, yep, even our analog console, the Audioarts Lightning standalone successor to the venerable R-55E, took one from Radio World (see top image in this story).

Stereo Miking Technique. (March 2018)

MS TOP IMAGE

March 2018: We solved our share of broadcast related problems during the year. Here’s a miking technique we told you about for capturing a clean, wide stereo image that can also play nicely through those TVs in airports, kitchens and restaurants that have mono speakers.  

Read: Stereo Miking Technique

Playing the middle against the sides

By Scott Johnson

Every broadcast engineer finds himself in this situation from time to time. Say your morning news show has brought in a local jazz combo who’ll play into a break. You want a nice, clean, wide stereo image, but you want it to be mono-compatible for all those TVs in airports, kitchens, and restaurants that have mono speakers.

MS Patterns SPACED PAIR

One common technique is the spaced pair, where you place two microphones far apart, just as the early experimenters with stereo sound did. This results in an unnaturally wide stereo listening experience, which we might like. But when it’s collapsed to mono, it’s a mess because of phase cancellation. So that one’s out.


MS Patterns ORTF

Another method is the near-coincident pair, or ORTF method. (The letters stand for the Office de Radiodiffusion Télévision Française, where the method was invented.) This technique uses two cardioid microphones with their capsules about as far apart as human ears are, and angled at 110 degrees. It produces a very natural sound and is fairly compatible with mono, but the distance between the microphones can cause cancellation at some frequencies.


MS Patterns COINCIDENT

Then there’s the coincident pair. Angling two cardioid mics at 90 degrees with their capsules right on top of each other gives us an expansive pickup pattern, and also prevents virtually all phase cancellation issues since all sounds arrive at both microphones at exactly the same time. The disadvantage here is that the stereo image doesn’t sound as wide or natural. The mic patterns overlap quite a bit, and our ears aren’t perfectly coincident.


MS Patterns MID SIDE

So how do we get the wonderful width and sense of depth of an ORTF pair, or even the exaggerated image of a spaced pair, but achieve the same perfect mono coherence as the coincident pair? The best way is with a technique called mid-side (variously abbreviated as M-S, M/S, or just MS) miking. The technique was developed by EMI recording engineer Alan Blumlein, circa 1933, and it’s ideal for broadcast use because while it produces a great stereo image, it’s also totally mono-compatible.


To do this, we’re going to need two microphones. One of them should be a bidirectional microphone. A dual-diaphragm condenser like the AKG C414 is ideal, but any bidirectional microphone will suffice if it’s of reasonable quality.

The other microphone is usually a cardioid. It should be of comparable quality to the bidirectional microphone and is also usually a condenser. A Neumann KM184 is a great choice, but again, microphone selection is not critical to the technique’s success. Only the patterns matter.

The bidirectional mic is placed on a stand with its capsule facing left and right. Generally, the “front” or “positive” side of the mic faces left and we will assume that here.

The second mic is placed facing the source. For reasons of phase coherence, it should be as coincident with the other mic as possible; ideally, its capsule should rest right above the bidirectional’s capsule.

MIC SETUP 030218 2560px

So now we have cables from two microphones headed back to the console, and that’s where we do the interesting part: matrixing these middle and sides signals to a left/right pair. We’ll need three faders to do this.

MS Patterns MID SIDE

On fader 1, we’ll assign the middle (cardioid) microphone. We’ll set it to a nominal level and pan it center on our stereo bus.

On fader 2, we’ll assign the bidirectional (sides) microphone. We’ll pan it to the left on our stereo bus.

On fader 3, we’ll assign the sides microphone again. (On an analog console, we can do this with a Y cable, or by way of the patch bay.) On this fader, we REVERSE the phase (polarity) of the incoming signal and pan it to the right.

Now all we have to do is set the trims on faders 2 and 3 to a nominal level, ensure they’re both set to exactly the same level, and bring up all three faders. You’ll be capturing a very wide, rich stereo field thanks to the combined patterns of the three microphones.

Here’s the best part. Try moving the level of the middle mic. As you pull it down, you’ll sense that the stereo image widens. If you push it up higher, the stereo image will narrow.

How does it work? For purposes of discussion, let’s call the middle mic M and the sides mic S.

The left channel of the stereo bus is receiving M + S, meaning that the sounds arriving at the left side of the sides microphone are being added to the middle mic signal.

The right channel is seeing M – S, meaning that sounds arriving at the right (back) side of the sides mic are being added to the middle mic signal. We flipped the polarity on this fader so that the sounds arriving at the back (right side) of the bidirectional mic, which are naturally of opposite polarity from the front, will again have the same polarity as the middle mic and add properly.

Signals arriving from dead center will enter both the front and back sides of the bidirectional mic at the same time, producing opposite signals that cancel, so those sounds are picked up only by the middle mic and are fed to both sides of the stereo bus.

But more importantly, because the two side mic faders are precisely level-matched and opposite in polarity, their contributions to the mix are exact opposites, and if the stereo bus is summed to mono, the sides signals will cancel out, leaving a perfectly clean mono signal from the middle mic. No mono compatibility problems can arise.
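You can verify the arithmetic in a few lines of Python; the sine waves here are just synthetic stand-ins for the two microphone signals:

# Numeric sketch of the M-S matrix using synthetic mid (M) and side (S) signals.
import numpy as np

t = np.linspace(0, 1, 48_000, endpoint=False)
M = np.sin(2 * np.pi * 440 * t)         # middle (cardioid) mic, stand-in signal
S = 0.5 * np.sin(2 * np.pi * 330 * t)   # sides (bidirectional) mic, stand-in signal

left = M + S          # fader 2: sides panned left, normal polarity
right = M - S         # fader 3: sides panned right, polarity reversed

mono = left + right   # summing the stereo bus to mono...
assert np.allclose(mono, 2 * M)  # ...the sides signal cancels completely

# Raising M relative to S narrows the stereo image; lowering it widens the image.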

There are a couple of variations on the technique. You can try using an omnidirectional microphone for the middle channel, which makes the entire array essentially a stereo omni microphone. This can accentuate the pickup of room tone and reverberation if your room has good acoustics. You can also use a second bidirectional microphone for the middle channel with similar results, extending the pickup pattern.

The technique isn’t just for music, either. M-S miking a live shot, for example, or a speech before a live audience, has a tendency to put the viewer right in the middle of the crowd, improving overall fidelity of the experience. It’s even a good way of capturing crowd or environmental sounds that will later be mixed with a voice-over, dialogue, or an interview as stereo nat sound.

Mid-Side miking can save the engineer a great deal of time and aggravation, capturing a clean, clear, dimensional stereo image that’s ideal for television broadcast. It’s a technique every audio engineer should have tucked away in his bag of tricks.

Scott Johnson is a systems engineer and webmaster for Wheatstone. He has plenty of mic techniques up his sleeve as a lifelong audio engineer. When he’s not experimenting with Wheatstone mixers and mic processors, he can be found at the local community theater mixing sound for the latest production.

 

Sportscast Like A Pro

SportscasterKit

May 2018: It’s apparent that small sporting events are becoming a bigger part of the broadcast mix. We wanted to know how it’s done these days, so we called up Mike Janes with the NBA Portland Trail Blazers and got a few good tips on covering collegiate sporting events like a pro.

Read: Sportscast Like A Pro

Collegiate sportscasting, we’ve been told, is a contact sport. It’s all about being on the field and in the moment. No one knows how to do this better than pros like Mike Janes, the Vice President of Engineering and Technology for the NBA Portland Trail Blazers, who has been a student of the game for more than 20 years.

He says that you don’t have to be in the big leagues to produce compelling sportscasts. Even if sporting events are a small part of your larger broadcast mix, you can still cover college football, basketball or baseball games like a pro with these few simple tips.

1. Add to the picture with multiple microphones. “If you have only one camera for, say, a small college football game, it makes sense that you’ll be shooting from the stands. But you have a lot of area down below on the field to cover, so use microphones to add dimension to that picture,” advises Janes, who recommends shotgun mics for this purpose. His crew places microphones on each side of the basketball court, both underneath the nets and on the ground pointing away from the basket to catch effects. They also place a mic at the free throw line and an XY pair on center court. “Audio is one of the more underappreciated aspects of what we do,” says Janes. (Full disclosure: the Portland Trail Blazers use our WheatNet-IP audio network with an E-6 IP mixing console at their production studio in Portland.)

2. Keep everyone in the loop. Things can change in an instant. Everyone on your team needs to be in the loop, and that includes the technical director at the home studio, the announcers on the ballfield, and everyone in between. The Trail Blazers use a complex intercom system that loops in more than 30 production crew members at one time, including the graphics designer who generates logos and other graphics as the game rolls. Your intercom doesn’t have to be as complex, but it does need to be a reliable backchannel of communication that includes all crew members. Mixing consoles made for broadcast have talkback features that can be useful for this purpose. Even better are IP consoles, whose IP audio network can serve as the backbone for a simple, easy-to-set-up IFB system, among other things.

3. Be able to call the shots. It’s one thing to report from the steps of the Capitol Building, but it’s something entirely different to cover the play-by-play of a game. In broadcast sports, the key is knowing where the ball is going next – and that requires a good understanding of the game. “Most of our guys are very tuned into the sports they’re covering,” comments Janes.

4. Have backups of backups. When things go wrong in sportscasting, they tend to go wrong in a hurry. You’ll want spares, and spares of spares. That includes routing equipment and paths, and codecs, mics, mixers, headsets, and power cords. “I learned the hard way; have backups of backups,” says Janes.

Stay up to date on the world of broadcast radio/television.
Click here to subscribe to our monthly newsletter.

Got feedback or questions? Click my name below to send us an e-mail. You can also use the links at the top or bottom of the page to follow us on popular social networking sites and the tabs will take you to our most often visited pages.

-- Scott Johnson, Editor
