Audio Drift

Btw, PluralEyes is no longer for sale and can't be downloaded; only existing customers can use it. Not the end of the world, nor a surprise, once all the major editing software added syncing.
 
For years I've been recording musicians live at small clubs and cafes. I let the camera (or cameras) run, and my main audio is recorded on a Tascam unit (e.g. DR-60, DR-70D) fed by my own microphones, placed to suit. The cameras get their audio from the Tascam, so when I punch in tone (using the automatic insert option) at the beginning AND at the end, it's embedded in the video files. Putting tone at the end seems to be ignored by many.
I just line up the beginning tone on the timeline, jump to the end, and reduce the audio track's duration (thus changing its speed) to line up the end tones. I'm not guessing percentages by trial and error. Yes, the sound used to drift and still does, but this correction gets it sorted out to my satisfaction. Typical run times are 30 to 45 minutes.
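If you want to see the arithmetic behind that head/tail tone trick, a rough sketch is below. The timestamps are made-up illustrations, not from a real session; all you need is where the two tones land in the camera file versus the recorder file.

```python
# Hypothetical tone positions, in seconds, read off the timeline.
cam_start, cam_end = 12.400, 2712.400    # tones as seen in the camera/video file
rec_start, rec_end = 3.150, 2703.525     # the same tones in the Tascam recording

cam_span = cam_end - cam_start           # duration between tones on the camera
rec_span = rec_end - rec_start           # duration between tones on the recorder

drift = rec_span - cam_span              # how far the tail tone has wandered
speed = rec_span / cam_span              # playback speed needed to close the gap

print(f"drift over the set: {drift:+.3f} s")
print(f"clip speed: {speed * 100:.4f}%  (duration scaled by {cam_span / rec_span:.6f})")
```

With those numbers the recorder runs about 0.375 s long over 45 minutes, so the audio clip's duration is scaled down by that same ratio and the end tones line up again.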
 
Clever.

Sorry, I'm a little confused about how the Tentacle Sync E works, especially with the Zoom F3. The Tentacle generates timecode, but how does that get into the F3? The F3 has no internal timecode or any way to input it.

I forgot to answer this. With the F3 having only two tracks, I doubt you want to use one of them for timecode. That is what's normally done for equipment like mirrorless cameras that don't have a timecode input: the timecode is recorded on one audio track and the other track carries scratch audio. So that's not a solution for you.
 
Even though I'm probably not adding timecode to my workflow, this did force me to become reacquainted with both my cameras and the process of timecode, whether via an audio track or via metadata. I don't like the audio track method, but I understand its purpose. Zoom uses Bluetooth for metadata, which requires the UltraSync BLUE, while the UltraSync ONE uses cables that connect to the camera. To me the biggest downside is the complexity and added setup time…
 
Good to know. I do have an A74 but I don't use it for theatre. I have a pair of JVC HM600s that, until this thread, I was unaware have timecode jacks. My other pair of camcorders don't have timecode jacks, so for those I'd have to use the audio track method, though it's strange that they have timecode in the menu. In my experience an extra layer of complexity isn't welcome: leave one cable or adapter at home and there's no timecode.

When I film for another person they often don't want to deal with drift. As a workaround I send the stage mics wirelessly back to the camera to eliminate the issue, but that can still cause problems: camcorders have only two audio tracks, so the shotgun mic and sound board go into one camera and the stage mics go to the second camcorder, which opens things up to drift again. Funny how just adding stage mics opens a can of worms.
 

That head-and-tail tone tip would have saved me about 8 frustrating hours the other week had I seen it then! Thanks - noted and remembered!
 
Thanks Chris. Elastic Wave seems like the answer for Resolve users. Is the quality high on the altered track?

Judge for yourself, BMan. This example is a 1959 mono film track that has been reworked to stereo and "Retro ReSynced". The mono dialogue track has been converted to stereo wherever there is no music. When a song turns up in the movie, in this case "A Voice In The Wilderness", it has been replaced with the 1987 studio stereo remastered release of that song, since a remaster was available. The remastered release varies from the timings the song has in the film, and if you look at the attached JPG you will see the numerous cuts made to the remaster to lip-sync it back to the movie's vision track. This is all done in Resolve using "Elastic WAV". The whole audio track is then completely reworked: widening the stereo field with iZotope Ozone and then applying some Haas effect.

https://producelikeapro.com/blog/haas-effect/
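If you've never played with it, the basic Haas trick is just a short delay on one channel to create a sense of width. A rough Python sketch (numpy/soundfile, placeholder file names, and not the Ozone/Resolve chain described above) would be:

```python
import numpy as np
import soundfile as sf

mono, sr = sf.read("dialogue_mono.wav")   # assumes a single-channel (mono) file
delay_ms = 15                             # Haas delays sit roughly in the 5-35 ms range
delay = int(sr * delay_ms / 1000)

left = np.pad(mono, (0, delay))           # pad the tail so both channels match in length
right = np.pad(mono, (delay, 0)) * 0.85   # delayed, slightly quieter copy on the right

sf.write("dialogue_haas.wav", np.column_stack([left, right]), sr)
```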

Then the whole sound field gets a bit of a sweetening tweak, primarily using plugins like Fresh Air and various other sweetener and levelling plugins. The final audio is then levelled and mastered to -14 LUFS for streaming delivery. The vision is also cleaned up, de-noised and tweaked to modern-day streaming standards.
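The -14 LUFS step itself is done with mastering plugins inside Resolve; purely as an illustration of what that target means, a scripted equivalent using the pyloudnorm library (my assumption, not part of the actual workflow) might look like this:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master_mix.wav")                # placeholder file name
meter = pyln.Meter(rate)                              # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)            # measured integrated loudness (LUFS)
normalized = pyln.normalize.loudness(data, loudness, -14.0)  # gain to hit -14 LUFS

sf.write("master_minus14LUFS.wav", normalized, rate)  # note: large gain-ups can clip
```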

Overall, Resolve's Elastic WAV does a pretty decent job, but I've been using elastic audio stretching and shrinking for years when making TV commercials. After many takes, sometimes the perfect read for a 30-second spot comes in under or over the time required, so I shrink or expand the audio to fit the 29 seconds we require for the spot; our 50 Hz specs require 12 frames of mute at the head and tail of a 30-second spot. The best NLE I've used for time-shifting while keeping the pitch correct is Vegas Pro, as it has a variety of different algorithms for music, speech etc. Vegas also allows you to apply vision stretching and shrinking if needed.
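For anyone wanting to experiment outside an NLE, here is a quick sketch of the same 30-to-29-second squeeze using librosa's phase-vocoder stretch. This is just an illustration with a placeholder file name; Vegas and Resolve use their own, better-sounding algorithms.

```python
import librosa
import soundfile as sf

y, sr = librosa.load("vo_read_30s.wav", sr=None, mono=True)   # keep native sample rate

target = 29.0                                                  # seconds the spot allows
rate = librosa.get_duration(y=y, sr=sr) / target               # >1 means faster/shorter
stretched = librosa.effects.time_stretch(y, rate=rate)         # pitch stays put

sf.write("vo_read_29s.wav", stretched, sr)
```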

Chris Young


Elastic Audio Resync.JPG
 
It does sound good, thanks. I am a bit confused about all of the blank areas in between the cuts. The elastic movement should mean you don't have to cut so often, right? But I can see how difficult tracks need everything you can throw at them.
 

What you will find when working with examples like this is the need to resync many times. For example, the first six words may need a six-frame stretch to maintain lip sync, the next eight words might need a seven-frame shrink, the next four words may need a four-frame shrink, and the next eight words may only sync with an eleven-frame stretch. This drifting in and out of sync occurs every few words.

The reason I used this example was just to show the flexibility of Elastic WAV in a complex resync. With a camera and a standalone recorder on long record sessions, I find you have to resync every few minutes as a rule of thumb.

I've often encountered scenarios where there can be anything up to two seconds of drift in an hour. Just syncing head and tail often leaves out-of-sync areas throughout the timeline, necessitating resyncs every few minutes throughout the hour.
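A quick bit of arithmetic shows why, assuming a 25 fps (50 Hz) project; the two-seconds-per-hour figure is from the scenario above, the rest is just illustration:

```python
# At 2 s of drift per hour, how long before the audio is a full frame out of sync?
drift_per_hour = 2.0                 # seconds of drift accumulated over one hour
fps = 25                             # assumed 50 Hz / PAL-style frame rate
frame = 1 / fps                      # one frame = 40 ms

drift_per_second = drift_per_hour / 3600
seconds_per_frame_out = frame / drift_per_second

print(f"one frame of drift every {seconds_per_frame_out:.0f} s "
      f"(about {seconds_per_frame_out / 60:.1f} minutes)")
```

At that rate you are a frame out in a little over a minute, so a head-and-tail sync alone can't hold the middle of an hour-long timeline.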

Chris Young

EDIT:

Here is a JPG showing how Elastic WAV in Resolve/Fairlight can correct out-of-sync audio over a matter of six words. The top of the JPG shows the Resolve timeline with the out-of-sync audio marked. The middle shows the cuts made on the timeline where syncing is required. The bottom shows the Fairlight timeline after the split clip segments have been slipped, stretched or shrunk to fit using Elastic WAV.

 