Did the Soundie make a mistake or did I?

So, I'm curious about the scenarios where the on-camera audio is being used versus the audio recorded on a mixer. I get the scenario where there's no separate audio recording, so the on-camera audio becomes the primary source. I also understand very fast turnaround scenarios where the extra step of syncing would be a problem. I'm just curious to hear more from those indicating that the on-camera audio is used preferentially, or treated as primary, even when a sound person is recording a separate and more complete set of tracks via mixer and recorder.

My experience with that latter scenario (and again, tell me if this isn't typical) is that any and all audio sources, whether wired or wireless, are recorded on separate tracks by the sound person, and a mix track is then sent via wireless hop to the camera as a reference. Using that on-camera audio as the primary source seems illogical, given the inability to separate out the tracks plus the degradation of having been transmitted wirelessly twice.

Does that scenario really exist? Oy vey.

I'll field this and give my .02, Charles. I do a lot of work that ends up on network TV. Some of it is (very) quick turn, like when we're at an NFL game and shoot an LTT post-game with a key player from the winning team that needs to run on SC shortly after being fed in. But even on things like sit-down interviews, a lot of the time separately recorded ISO tracks are unnecessary (we still record them just in case, though), especially when it's just the subject, or subject and reporter (one or two tracks).

Perfect example: about two weeks ago, I had to shoot some sit-down interviews on my Amira. It was just the individual subjects (one at a time), and we had them lav'd and boomed. Split tracks were recorded in camera, fed straight from the mixer, and when we were done, we fed them out of the camera on a bonded cellular pack system, kinda like it was news, but not. It was just a way to get the footage back quickly and easily, audio and video married together. They didn't have to do anything except cut the bites that they wanted/needed. I doubt anyone outside of the high-end audio post world could tell a difference between the ISO tracks recorded in the mixer and the split tracks recorded in camera, and even they might not be able to.

Even though everyone thinks it's trivial to sync separate audio and video files, sometimes it's not, because stuff is being fed in and they're not dealing with files with metadata. And some producers just prefer to work with camera files that have the audio already married, versus separate files, when they can. Honestly, if you have a good mixer, the camera audio is usually more than "good enough" (I dislike that term and philosophy, BTW). It wasn't that long ago that we only had two channels and everything was mixed down to that. I'm not suggesting that we go backwards, just that quality work can be, and still is, done like that.

And having a news/sports background and shooting a lot of high-end sports doc work, I never shoot without a quality nat mic on my camera (all of my VariCams have stereo nat mics, and my F55 and Amira both have Sanken CS-M1 mics). Even when I have an audio guy, the nat mic is always feeding the cam (most cameras have at least four accessible audio tracks now, and I have the nats going to at least two of them).

Also, most modern cameras, in addition to analog line and mic signals, can take AES digital audio in, so you can do end-to-end digital audio from the mixer to the cam. Some of the audio guys I work with who run Lectro digital cam hops send AES into our Amiras wirelessly. I've also had one of my guys send four split AES channels into an F55 on a big follow job a few years ago. Even the Alexa 35 has an optional audio module, built by Sonosax, so it can record quality multi-channel audio in-camera. If all that was ever needed was scratch or reference, they could have just left it at the multi-pin line input on the main body and the two built-in mics on the front. Yes, I ordered the Sonosax module... ; ) I think everyone I know who already got their A35 has the audio module.

So, yes, I get it in the narrative world, and we do plenty of second-system sound in my world too, but there is still a lot of production that wants and requires quality audio in-camera.
 
Thanks, Run&Gun. I think that does cover the expected scenarios (I've shot a lot of single-system sound stuff over the years also). I wasn't aware of the AES interface; that's interesting.

I'm willing to bet there are still plenty of instances where multiple lavs and a boom are mixed down to one or two tracks sent to camera, and the editor ends up working around clothing noise and bumps baked into those combined tracks that would have been easy to manage from the discrete audio files, simply because of the extra step of syncing (which is a blazingly fast and easy process today, whether from timecode or waveform, especially given good scratch audio). But that is the nature of our industry. Someone's concept of what is best or fastest ends up taking someone else more time and energy down the road, and all they can do is shrug because "that's how the boss wants it". Or as I call it, "perceived efficiency".
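For anyone who hasn't looked at the waveform half of that under the hood, the idea is just cross-correlating the mixer's ISO track against the camera's scratch audio and reading the offset off the correlation peak. Here's a minimal Python sketch, assuming mono WAV files at matching sample rates; the file names are hypothetical:

```python
# A rough sketch of waveform sync, assuming two overlapping WAV files:
# the camera's scratch mix and one mixer ISO track. File names and the
# matching-sample-rate assumption are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, correlation_lags

def load_mono(path):
    """Read a WAV file and collapse it to a float64 mono signal."""
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim > 1:                 # average the channels if stereo
        data = data.mean(axis=1)
    return rate, data

rate_cam, cam = load_mono("camera_scratch.wav")   # hypothetical file
rate_iso, iso = load_mono("mixer_iso.wav")        # hypothetical file
assert rate_cam == rate_iso, "resample first if the rates differ"

# Cross-correlate the two waveforms; the peak marks the best alignment.
corr = correlate(cam, iso, mode="full", method="fft")
lags = correlation_lags(len(cam), len(iso), mode="full")
offset = lags[np.argmax(corr)]

print(f"offset: {offset} samples ({offset / rate_cam:.3f} s)")
# A positive offset means the ISO track starts that many samples into
# the camera clip, so slide the ISO clip forward by that amount on the
# camera's timeline to line the two up.
```

This is essentially what an NLE's waveform-sync feature is doing for you, which is why decent scratch audio on the camera makes the process so fast and reliable.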

Years ago I directed a short with five characters: we had four lavs and a boom. The sound guy didn't have Comteks, but halfway through the day I checked in with him and he said it all sounded great. When I started editing, I discovered that he had mixed everything down to two channels. The boom was combined onto a track with two lavs, and one of those lavs was underneath a guy's sweater that was scratching against the mic the entire time. 80% of the production sound was useless. I had to have all five actors come in and ADR all of their lines, then spent a week and a half razor-blading them to perfect the lip sync and building complex ambient and foley tracks. When I asked the sound mixer about it he said "oh yeah, I heard that sweater, huh". It turned out he had come from the reality TV side and was used to just capturing sound as simply as possible; finesse wasn't his thing. Thanks, dude.
 
You hit the nail on the head: knowing the expectations and requirements. And when in doubt, ask for clarification.

I feel that's actually one of my strongest suits. I float between different worlds, so to speak, so I understand "it" and get "it" and know there are many different types of productions and times when things need to be done certain ways. Too many people today get hung up on only one way to do things (the way they do them) and think that's the only way and any variation is wrong (or beneath them). There are different production styles with different needs and requirements, and we'd all be better off acknowledging that and realizing there may be a legitimate reason why that person over there is doing it differently, and vice-versa.
 
There are a million billion different cameras, and more coming out every second.

This is why the camera settings are ultimately the camera crew's responsibility (and so is any hardware added to the camera, once the Sound Dept has put it there).

But it's normally expected that someone from the Sound Dept handles this setup at the start of the day. However, if you're using the latest Arrisonythingamajig mk3 camera body and they ask for your assistance, the ball is now in your court to handle it, as it is your camera.
 
Forgive me, but bearing in mind the chain of compression, expansion and limiting, plus the radio artefacts, your limiter probably won't have made any difference. I assume he was also recording a nice uncompressed, unprocessed stream, and the RF link was purely for convenience? With RF in the chain, quality is set by the old maxim: the best, most expensive wireless link is nearly as good as a $20 cable. He's worried about a little limiting that may or may not have cut in, and not the damn great phut when the metal shed got in the way?
 