My tutorial for AETuts+ is finally out!
It covers time remapping, waveform-to-keyframe conversion, expressions, … I got really inspired by watching all those reels from motion designers and filmmakers, most of the time set to music by Hecq. I was wondering how they did their editing and cuts, so I came up with this idea. I don't know if it's the way they did it, but this is my approach.
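For the waveform-to-keyframe part: After Effects has a "Convert Audio to Keyframes" assistant, and the core idea can be sketched in plain Python as one amplitude (RMS) keyframe per video frame. This is just my toy illustration (the function name and the sine test signal are made up, not from the tutorial):

```python
import math

def audio_to_keyframes(samples, sample_rate, fps):
    """Collapse a mono waveform into one amplitude value per video frame,
    roughly what AE's "Convert Audio to Keyframes" produces as a slider."""
    per_frame = int(sample_rate / fps)
    keys = []
    for start in range(0, len(samples) - per_frame + 1, per_frame):
        chunk = samples[start:start + per_frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        keys.append(rms)
    return keys

# toy signal: one second of a 440 Hz sine at 48 kHz, keyed at 24 fps
sr, fps = 48000, 24
samples = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
keys = audio_to_keyframes(samples, sr, fps)
print(len(keys))  # 24 keyframes, one per frame
```

Those values can then drive time remapping (or any property) so the cuts land on the beat.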
If you do matchmove, you have probably bumped into the lens workflow issue: you have to undistort the footage in your matchmove software, track it, and export new undistorted footage so your client can composite the 3D render on top of it and then distort it back.
I don't really like this workflow: After Effects, for instance, has no built-in cubic lens distortion effect, so it would be really hard for the client to match the distortion back.
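For reference, the cubic distortion in question is just a radial rescale of the image coordinates. A minimal Python sketch, assuming the common SynthEyes-style model `scale = 1 + k*r^2 + kcube*r^3` (the coefficient values below are placeholders, not real lens data):

```python
def distort(x, y, k, kcube=0.0):
    """Apply cubic radial lens distortion to normalized coordinates.
    Assumed model: scale = 1 + k*r^2 + kcube*r^3 (SynthEyes-style)."""
    r2 = x * x + y * y
    f = 1.0 + k * r2 + kcube * r2 ** 1.5
    return x * f, y * f

# negative k pulls points toward the center (barrel distortion)
print(distort(0.5, 0.0, -0.1))  # ~(0.4875, 0.0)
```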
Thanks to Jerzy Drozda Jr (aka Maltaannon) for his great tips about Pixel Bender.
So now you can: create a new comp with your distorted footage > pre-comp it > undistort it with the shader > track it in SynthEyes > export the camera to a 3D package > render the scene > import the render into your pre-comp > deactivate the shader. It should match perfectly 🙂
(yeah I know, a PFTrack grid with SynthEyes … not cool! :p)
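The "deactivate the shader" step at the end only works because the distortion is invertible in practice: re-applying it to the undistorted render puts everything back where the original plate had it. A rough Python sketch of that round trip, using a simple quadratic radial model and fixed-point iteration for the inverse (my own toy code, not the actual shader):

```python
def distort(x, y, k):
    """Quadratic radial distortion: scale = 1 + k*r^2."""
    f = 1.0 + k * (x * x + y * y)
    return x * f, y * f

def undistort(xd, yd, k, iterations=20):
    """Invert the distortion by fixed-point iteration: start from the
    distorted point and refine the guess until it maps back onto it."""
    x, y = xd, yd
    for _ in range(iterations):
        f = 1.0 + k * (x * x + y * y)
        x, y = xd / f, yd / f
    return x, y

k = -0.08
xu, yu = undistort(0.48, 0.3, k)   # what the shader shows while tracking
xr, yr = distort(xu, yu, k)        # what "shader off" effectively restores
print(round(xr, 6), round(yr, 6))  # back to ~0.48, ~0.3
```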
Since there are no masks in Blender's “Compositing Editor” yet, I found a simple trick that could work pretty well, especially for color grading.
Nothing really fancy here, since the mask can only be a square (or something pretty close to one). But if you check out Colorista, for instance, the only two shapes it allows are the ellipse and the rectangle anyway.
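The trick boils down to a feathered rectangle used as a grading matte. A small Python sketch of such a mask, independent of Blender (pure lists, no Blender API; the function name and parameters are made up for the demo):

```python
def rect_mask(w, h, x0, y0, x1, y1, feather):
    """Soft rectangular matte as a h x w grid of floats in [0, 1]:
    1.0 inside the rectangle, falling off linearly over `feather`
    pixels outside it — the kind of shape a grade gets limited to."""
    def soft_edge(v, lo, hi):
        if v < lo:
            d = lo - v
        elif v > hi:
            d = v - hi
        else:
            return 1.0
        return max(0.0, 1.0 - d / feather)

    return [[soft_edge(x, x0, x1) * soft_edge(y, y0, y1)
             for x in range(w)] for y in range(h)]

mask = rect_mask(8, 8, 2, 2, 5, 5, feather=2)
print(mask[3][3], mask[3][0])  # 1.0 inside, 0.0 far outside
```

Multiplying the graded image by this matte (and the original by its inverse) gives the localized correction.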
It seems they fixed the horizontal panning issue, but not the vertical one yet, which at this point looks impossible. I'm actually waiting for another paper on stabilization that should also show up at SIGGRAPH. Anyway, it's going to help a lot for matchmove!