If you're a YouTuber or streamer, you surely know that ultra-frustrating moment when your video is perfect but the audio is out of sync, like the dubbing in a bad 1980s kung-fu film.
This is exactly the problem I had with my YouTube channel (by the way, if you haven't subscribed yet, what are you waiting for? Subscriiiibe!!!). For my video tutorials, I use a small Canon camera connected to an HDMI capture box, plus an XLR microphone connected to an external sound card. All of this on macOS.
The thing is, when I record a video, there's always a gap between the audio and the video. That's normal: the two don't travel through the same pipeline, and I also apply a lot of extra audio processing to remove parasitic noise, plus some home-made tweaks to get a sound that's a bit more "radio".
Until now I recorded with OBS, so I had configured a small 200 ms delay to reduce the gap. The problem is that this forces me to go through OBS. And above all, it's not exact: I set 200 ms, but maybe 198 ms or 205 ms would be more accurate… I don't know, it's a finger-in-the-wind value found by trial and error.
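By the way, instead of guessing the delay by trial and error, you can actually measure it: record a sharp clap on both the camera track and the mic track, then find the lag where the two waveforms line up best (the peak of their cross-correlation). Here's a minimal sketch of that idea; the function name and the toy signals are mine, not part of any tool mentioned here:

```python
import numpy as np

def measure_offset_ms(camera_audio, mic_audio, sample_rate):
    """Estimate how many ms the mic track lags behind the camera track
    by locating the peak of their cross-correlation (e.g. around a clap)."""
    corr = np.correlate(mic_audio, camera_audio, mode="full")
    # Lag (in samples) at which the two signals line up best.
    lag = int(np.argmax(corr)) - (len(camera_audio) - 1)
    return 1000.0 * lag / sample_rate

# Toy example: a short "clap" impulse, with the mic copy delayed by
# 2400 samples, i.e. 50 ms at 48 kHz.
sr = 48_000
clap = np.zeros(8000)
clap[1000] = 1.0
delayed = np.roll(clap, 2400)
print(measure_offset_ms(clap, delayed, sr))  # → 50.0
```

A positive result means the mic is that many milliseconds behind the camera; on real recordings you'd load the two WAV tracks instead of the synthetic impulses.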
And now I've changed my recording tool, and there's no way to configure a delay like in OBS. Sniiiif. As for resynchronizing the audio in post-production, I know of nothing more tedious. It's a bit like trying to eat a nice big kebab without getting sauce everywhere.
ANYWAY… I dug around a little and discovered a native macOS solution I had completely overlooked. It's a tool called "Audio MIDI Setup", found in macOS's Utilities folder, which lets you aggregate devices and, above all, synchronize them on a single clock. The kind of thing that can save your life.
And it works like a charm!!! No more approximate offsets; make way for perfect synchronization. So here's how to do it in a few simple steps:
First, you launch the Audio MIDI Setup utility, click the "+" at the bottom left [1] and create an Aggregate Device [2].
Then you check the devices you want to aggregate [1] and tick which device gets Drift Correction [2]. In my case, my Elgato box corresponds to my camera and will be the reference device; the one that gets corrected is BlackHole 2ch, which is my audio device (my microphone).
You can also see at the top that the Clock Source is the Elgato (my capture box), which confirms that it is the reference, and that the sub-device is indeed BlackHole 2ch. It's this configuration that lets macOS adjust the synchronization automatically, in real time.
There you go: name it whatever you like [1], and above all, via a right-click, set this new device as the default "Sound Input" [2].
And that's it… Now, in any capture software, you can select this new aggregate device as the input device, and you'll see: the audio is perfectly in sync! Amazing! No more hacking in manual offsets or spending hours in post-production.
This trick works so well because macOS uses the internal clock of the reference device (in my case, the capture box) to dynamically adjust the synchronization of the other devices. It's much more precise than a fixed delay, because the system adapts in real time to any variations.
There you go… it was that simple! Happy recording everyone, and see you soon!