The BBC And IP: A Match Made In Cardiff

By TVNewsCheck on June 28, 2018

Faced with rebuilding its major hub in Cardiff, Wales, eight years ago, the BBC decided to make the leap to an IP infrastructure on the production side even though key standards were still unsettled and hardware and software were difficult to find.

Mike Ellis
Cardiff Project

Looking back on that decision today, BBC Enterprise Architect Mike Ellis believes it was the right one and that when the project is finally completed late next year, the BBC will have the facility it needs to provide multiple services on multiple platforms well into the future.

“I have never subscribed to the idea that IP will be cheaper. What I subscribe to is that IP will be better. It will be more flexible. With an IP-based infrastructure, we can instantly change facilities from one format to another.

“So, our facilities won’t need a ground-up rebuild in five years’ time with all new cables everywhere, all new equipment everywhere. With IP, you just change the bandwidth where you need to change the bandwidth, change the endpoints where you need to change the endpoints, and the whole lot should work together seamlessly.”

The Cardiff facility will be a hybrid, Ellis says. Playout for the linear HD channels will remain SDI, but production will run on IP.

“All the studios, all the edit suites, all the post-production facilities — they are all IP because that’s where we can exploit that flexibility to change formats quickly and produce content that can be personalized for delivery forms [over the internet].”

Ellis says equipment is finally catching up with the BBC’s ambitions thanks to the publication of the SMPTE 2110 standard.

Ellis says he is working with a “very wide range” of vendors and that some are ready for IP and some are not. “Some, I think, will want to stick with SDI until the last dying day.

“One of the hardest challenges we have had so far is finding a monitor you can actually plug an IP signal into. The best you might get is an HDMI connector.”

Unfortunately, Ellis says, standardization doesn’t absolutely ensure interoperability between gear from different manufacturers. In fact, he says, interoperability has been a problem, particularly for audio.

“So, we are currently trying to find solutions around that which involve boxes that will convert from one flavor of 2110 to another flavor of 2110. The long-term goal hopefully is that we can support all the options in the standard, not just a subset.”

And there are also “all sorts of problems” with the way 2110 is handling the audio and video separately, Ellis said.

“We need to ensure the timing is brought back together so that the audio and the video match and you get the lip-sync right, but without overly constraining that.

“If you put two cameras in two different studios and two microphones in each of those studios, it doesn’t matter whether the camera in studio one matches the microphone in studio two because they are not the same content. And some of the problems in the standards are, well, we must delay everything to the latest arrival time.”
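Ellis’s point can be sketched in a few lines of Python. The flow names and latency figures below are illustrative assumptions, not BBC data: in ST 2110 each essence (audio, video) is a separate flow timestamped against a common clock, so a receiver only needs to delay each program’s flows to that program’s own latest arrival, not to the worst arrival anywhere in the plant.

```python
# Assumed, illustrative receiver-side latencies in milliseconds.
flows = {
    # program -> {essence: measured arrival latency at the receiver}
    "studio1": {"video": 18.0, "audio": 2.0},
    "studio2": {"video": 5.0, "audio": 1.5},
}

def per_program_delays(flows):
    """Delay each essence only to its own program's latest arrival."""
    delays = {}
    for program, essences in flows.items():
        latest = max(essences.values())
        delays[program] = {e: latest - t for e, t in essences.items()}
    return delays

def global_delays(flows):
    """The over-constrained approach Ellis criticizes:
    delay everything to the single latest arrival overall."""
    latest = max(t for es in flows.values() for t in es.values())
    return {p: {e: latest - t for e, t in es.items()}
            for p, es in flows.items()}

print(per_program_delays(flows))  # studio2 video needs no added delay
print(global_delays(flows))       # studio2 video waits 13 ms for studio1
```

With per-program alignment, studio two’s camera is never held up by studio one’s slower video path; the global rule adds 13 ms of needless delay to it.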

In testing various systems, Ellis says, problems have arisen as the BBC tries to scale up.

“We entered a partnership with Grass Valley, who is fronting a Cisco routing switching solution. It’s a very good solution. It does seem to work, but every time we build a little bit more on we find another limitation buried somewhere in the system.

“We have actually redesigned the network structure three times now, twice on paper, once having built it. At that point, fortunately, it was only a software tweak that was needed, not a hardware tweak. But we are realizing there are hidden limitations there that a small-scale proof of concept won’t necessarily reveal. Not until you build out a large system do you find out just how bad it can get.”

IP and the cloud often go hand-in-hand, but that is not the case in Cardiff. The technology was still in its infancy when the BBC did its initial planning for the site and Ellis remains wary about latency, especially in live production.

Amazon’s closest cloud center is in Dublin, which is 100 miles west of Cardiff, he says. But a round trip between the two facilities is actually 700 miles because of the way the BBC networks are structured.

“Any time you are moving data over a large distance, you are going to have latency. Shading is one very good example. Talkback is another one. Getting content across from the microphone to the director to the earpiece of the presenter and then back again without the presenter hearing a little bit of their own voice delayed by that round trip can be a problem.

“As soon as you get that delayed echo, the presenter might as well not be a presenter. They can’t speak because they are distracted by what they are hearing in their ears.”
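The physics behind Ellis’s concern is easy to check with a back-of-envelope calculation. Light in fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond; the figure used below is that common rule of thumb, applied to the 700-mile routed round trip the article describes.

```python
# Rule-of-thumb propagation speed in optical fiber (~2/3 of c).
FIBRE_KM_PER_MS = 200.0
KM_PER_MILE = 1.609

def propagation_ms(path_km):
    """Propagation delay over a fiber path of the given length."""
    return path_km / FIBRE_KM_PER_MS

# The 700-mile routed round trip between Cardiff and the Dublin cloud region.
round_trip_km = 700 * KM_PER_MILE
print(f"round-trip propagation: {propagation_ms(round_trip_km):.1f} ms")
```

That is only the physical floor; switching, queuing and processing at each hop add more on top, which is why a talkback loop through a distant cloud region can put audible delay in a presenter’s ear.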

Ellis says that he saw first-hand another latency issue when he visited the production facility of the Pac-12 athletic conference in San Francisco. The conference tried to use the cloud to remotely produce games at venues around California, but found that it could not make camera adjustments quickly enough. So, it had to go back to local camera operators.

Broadcasters really can’t do anything about geography, Ellis says. “What you can do is be intelligent with where you put the processing, which is why Amazon is actually a very bad choice in my mind because they won’t let you put their cloud services on your premises. You can’t put it close to you. Other cloud providers, you can put it close to you.”

Cybersecurity is, of course, a big concern, Ellis says, and the BBC is getting help from the National Cyber Security Centre, a government organization. “They came up with some ideas for things you really, really, really do not want to put anywhere near the internet.

“So actually, our live media network will not be connected to the network for the rest of the BBC at all. There will be no internet on it at all. The control network will connect to it and will connect to the internet indirectly via a series of firewalls to provide the protections we need.”

Ellis says the post-production environment, which is already file-based, gives him sleepless nights. “The thing that worries me most is someone finding a way into our media store, deleting content or putting something in the middle of a program.

“I am going to look at the beginning, I am going to look at the end and I might spot check two or three points in the middle. But they could get five, 10 minutes of something they want to say in the middle of a program and no one is going to have eyes on that until it goes out.”
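The gap Ellis describes, spot checks that miss a tampered middle section, is exactly what per-chunk digests close. The sketch below is an illustrative assumption of one way to do it (the chunk size and the in-memory stand-in for a program file are hypothetical, not BBC practice): hash every fixed-size chunk at ingest, then re-verify the whole manifest rather than eyeballing a few points.

```python
import hashlib

CHUNK = 1 << 20  # 1 MiB chunks (an assumed, illustrative size)

def chunk_digests(data: bytes, chunk: int = CHUNK):
    """Return a SHA-256 digest for each fixed-size chunk of the media."""
    return [hashlib.sha256(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]

def verify(data: bytes, manifest):
    """Report the indices of chunks that differ from the stored manifest."""
    return [i for i, d in enumerate(chunk_digests(data)) if d != manifest[i]]

original = bytes(3 * CHUNK)            # stand-in for an ingested program
manifest = chunk_digests(original)     # stored at ingest time

tampered = bytearray(original)
tampered[CHUNK + 100:CHUNK + 110] = b"X" * 10   # edit buried in the middle
print(verify(bytes(tampered), manifest))         # -> [1]
```

A spot check of the start and end would pass this file; the manifest flags the altered middle chunk immediately.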

Ellis cautions facilities designers not to rely on a single network. “You think back to the good old days of the SDI TV studio. You had four cameras in the studio. If one of the cameras failed, you would point another camera at the presenter and stay on air. Each of those cameras has a separate cable coming out of it.

“If all of your cameras and all your vision mixers connect to one IP network and that one network has a hiccup, you are off the air. You get no ifs, ands, buts, maybes. You are gone.

“Putting two networks in and maintaining them separately gives you the option of actually staying on air when otherwise it would have gone off air.”
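The dual-network idea Ellis describes is standardized in broadcast IP as hitless protection switching (SMPTE ST 2022-7): identical packets travel over two independent networks and the receiver takes the first copy of each sequence number, so a hiccup on one network never reaches air. The toy sketch below is an illustrative model, not a real RTP implementation; packets are simplified to (sequence, payload) pairs.

```python
def merge_streams(primary, secondary):
    """Reconstruct one stream from two redundant copies, keyed by
    sequence number; the first arrival of each sequence number wins."""
    seen = {}
    for seq, payload in sorted(primary + secondary):
        seen.setdefault(seq, payload)
    return [seen[s] for s in sorted(seen)]

# Network A drops packet 2; network B drops packet 4 — output is complete.
net_a = [(1, "p1"), (3, "p3"), (4, "p4")]
net_b = [(1, "p1"), (2, "p2"), (3, "p3")]
print(merge_streams(net_a, net_b))  # -> ['p1', 'p2', 'p3', 'p4']
```

As long as the two networks share no common failure point, either one can drop packets, or fail entirely, without the merged output losing a frame.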

Ellis is a stickler on metadata, identifying information (who, what, where, when) that rides along with the video so that it can be easily found by producers, and he hopes that IP will facilitate proper tagging.

“People who know me will know I hate the word metadata with a passion because as soon as you say metadata, someone says, I don’t care about that. I am not going to preserve it. I am going to throw it on the floor.”

Those people now figure they will be able to recover the metadata later using artificial intelligence, Ellis says. “Let’s not throw it on the floor in the first place. Keep it, transport it with the audio and the pictures, store it, use it. That’s the way you get personalization.”

Editor’s note: This story is based on an interview with Ellis conducted by TVNewsCheck Publisher Kathy Haley at TVNewsCheck’s NewsTECHFutures Retreat in Atlanta on May 31.