This tutorial will demonstrate how you can use the 2D trackers to help your rotoscoping workflow in Blender. (see the previous post about Corner-Pin & Stabilization)
Here are the first two tutorials about tracking with Blender, showing you how to do Corner Pin tracking and Stabilization as well. ENJOY my friends !!
And stay tuned, since I’ll be releasing some scripts pretty soon which will do that “automatically” for you.
If you are like me, working on a laptop without a numpad, this application is for you. There are a lot of numpad apps for iPhone which basically connect to your laptop via wifi. But if you are a Blender user, this one is even better !
Besides being a numpad, it is also a toolbar with shortcuts for Blender features !
Check it out, it is free and it works on Mac & PC !
Here we are finally !!! It’s coming !
In case you missed it, Blender got a GSoC project this year to integrate Libmv into it, which means that by the end of the summer we should have 3D tracking available in Blender (at least in a branch, if not in trunk).
But the first step toward that is to implement 2D trackers, and Sergey has already done a great job integrating this feature. I just tested it today; it is of course very new and not so stable, but we are really close to being able to do nice FX. Actually, all I’m missing to really start playing is being able to use the tracking data as f-curves and re-use it in the compositor (which, according to Sergey, should be enabled soon).
Exciting days are coming my friends 🙂
You will need to get a version of Blender which is built from the Tomato branch in the SVN. You can build it yourself, or try to find one at Graphicall.
To build it yourself, I recommend you follow the following tutorials :
Man ! Do you remember 1.6 ? That was a long time ago 🙂
If you want to play with the old versions, you can find most of them here : http://download.blender.org/release/
Have fun 🙂
EDIT : the project started at http://ocl.atmind.nl/doku.php?id=design:proposal:compositor-redesign . Please donate and help the project
I was at the Blender Conference this year and had the chance to meet a lot of great guys, as well as talk with Ton about doing VFX with Blender. I attended the “VFX roundtable” talk… yes, I was the annoying guy who stole the microphone the entire time 🙂, and yet I didn’t have time to give my entire list there. So after talking about it with Ton, he asked me to send him an email with it, which I did.
I tried to make it as detailed as I could, and also not to make it too personal, but rather to keep it as a list of general, basic feature needs.
When opening this kind of topic, I know that a lot of Blender users like to throw stones at the guy writing it :), so here are some facts before starting. I work every day with After Effects, and sometimes I use Nuke. I’m a TD Artist for a video game company working on offline & realtime production, so my job is to design pipelines and workflows, or new tools to make a production more efficient and comfortable for the artists (which means I’m used to working with developers a lot, and I know how much of a pain in the ass I can be sometimes… trust me 🙂 ). On the side I also do matchmoving for a living, as well as VFX compositing & some color grading too. And these days I’m also developing scripts & filters for After Effects which are distributed over at AEScripts.com (mostly stuff to enhance workflow, sometimes inspired by Nuke I must say :p).
All this doesn’t mean that I know stuff better than anyone else at ALL !! I just try, or I’d like to try, to use Blender as much as I can in my freelance time, and sometimes I just can’t because basic features are missing. That’s what I’m going to point out here (or at least try to).
I do understand that we don’t want to make a copy of AE or Nuke in Blender, and I agree that this would be totally wrong. But since they are the leaders in this field and I know them pretty well, I will mention some of their features or workflows as illustrations & ideas. So you will have to take my words as “something like that…” rather than “it has to look like that!”. Note that most of the features I’m going to talk about are present in all compositing software, no matter whether it is nodal or layer based.
One thought Sebastian and I had at the Blender Conference, which I believe illustrates all those wishes pretty well, is: “Try to do a VFX tutorial for Nuke or AE using Blender, and if you can’t do it, it means you are missing the tool”.
A great website for this is VideoCopilot.net ! And I’m NOT talking about the tutorials that rely on third party plugins, but the ones that actually use the basic provided tools. (I’ll mention some of them later on.)
VideoCopilot’s tutorials are great in that they are very professional looking, accessible to anyone in the field, and cover what you would do on an everyday basis as a VFX artist. And usually you would be able to do those tutorials with any compositing software, as long as you understand the logic behind them. The tools sometimes look different, but they are all there in all the programs (except Blender for now, and that’s what I’d like to change).
I don’t need evidence that Blender could do the same with some tricks and workarounds. I’m always looking for an efficient, production-ready approach to the problem. For instance, recently the RotoBezier addon came out, and while this is a great and smart trick, to me it is just a workaround in the meantime, and not at all production ready, for several reasons that I could mention if you asked me :).
Ok, so now let’s dive into the list !
As of today the CM (color management) is not working very well, and it is too hidden IMO.
I talked about it with Matt Ebb a lot. Even if we have different opinions about it, I know he has good ideas and great libraries in mind to use, but not much time to work on it.
His philosophy is to have it completely hidden in the background, doing all the work for you, which I deeply disagree with (but again, just my opinion). CM is good for having a LW (linear workflow), and LW is important, but CM is also important for color profiles and color models, and you really have to know in which space you are currently viewing stuff (linear, sRGB, …).
Today I can never tell whether my viewer is showing me linear or sRGB, and the same goes for the color picker.
In Nuke you always have an option on inputs, outputs or scopes to choose which color space you want to look at (see screenshot), so you have complete control over it.
So having it hidden and automatic is ok, as long as I can override it anytime I want to !
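To make the point concrete, here is the standard sRGB transfer pair (just the math a viewer applies or doesn’t apply, not Blender code); this is exactly the conversion explicit color management should let you see and override:

```python
# Standard sRGB <-> linear transfer functions (IEC 61966-2-1).
# Whether your viewer sits before or after this conversion is exactly
# what explicit color management should tell you.

def srgb_to_linear(s: float) -> float:
    """Decode an sRGB-encoded value (0..1) to scene-linear."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    """Encode a scene-linear value (0..1) for an sRGB display."""
    if l <= 0.0031308:
        return l * 12.92
    return 1.055 * (l ** (1 / 2.4)) - 0.055

# A value that *looks* like 50% grey on screen is only ~21% in linear light:
print(round(srgb_to_linear(0.5), 3))  # ≈ 0.214
```

That gap between display values and linear values is why a color picker that doesn’t say which space it works in is so dangerous.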
And as I mentioned at the roundtable, you don’t always need a linear workflow (and that’s where Matt is going to disagree with me 🙂 ); for instance, it is usually more of a pain in the ass for color grading than anything else. I have been talking with some professional colorists who told me their big color grading workstations work in non-linear (e.g. DaVinci Resolve). Which, don’t get me wrong, can work with linear too, but if you take something like the Color Balance node which Matt and I developed, the formula I used works better in non-linear. It is the same formula used in Magic Bullet Colorista, and very close to what color grading suites use, by the way. So LW is great for compositing shots and playing with render passes; for video editing & final color grading, not so much.
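For illustration, here is the ASC CDL slope/offset/power form, which lift/gamma/gain-style controls in tools like Colorista resemble (an assumption for illustration; I’m not claiming this is the node’s exact formula):

```python
def cdl(x: float, slope: float = 1.0, offset: float = 0.0,
        power: float = 1.0) -> float:
    """ASC CDL per-channel transfer: out = (in * slope + offset) ** power.

    slope scales (gain), offset shifts (lift), power bends (gamma).
    Clamped at 0 before the power so negatives don't blow up.
    """
    v = x * slope + offset
    return max(v, 0.0) ** power

# Neutral settings leave the pixel untouched:
print(cdl(0.5))  # 0.5
# A mild grade: a bit more gain, lifted blacks, slight gamma bend:
print(cdl(0.5, slope=1.2, offset=0.05, power=0.9))
```

Applied to display-referred (non-linear) values, the power term behaves like the familiar gamma control, which is part of why graders often prefer working that way.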
As of today, Blender cannot read any passes that it doesn’t support or create itself. So when I’m working with external clients who use other pieces of software, I cannot read their passes (except the usual kind: diffuse, spec…).
In every node-based compositor I know, there is a special node called “Shuffle” (or whatever) which separates the passes. The reader (input) just gives the default pass (beauty ?), but the shuffle does the job of selecting a specific pass.
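To illustrate the idea, here is a toy sketch of what a shuffle does, treating a multi-layer EXR as a plain dict of named passes (pure Python with made-up pass names, not real EXR I/O):

```python
# A multi-layer EXR is conceptually a set of named image planes.
# The reader node hands you only the beauty; a shuffle node can route
# ANY named pass downstream, even one the host app doesn't recognise.

beauty = [[0.5, 0.6], [0.7, 0.8]]            # tiny 2x2 "image"
exr = {
    "beauty": beauty,
    "customAOV": [[0.1, 0.1], [0.2, 0.2]],   # a pass Blender doesn't create
    "P.world": [[0.0, 1.0], [2.0, 3.0]],     # a world-position pass
}

def read_default(layers):
    """The input node: returns only the default (beauty) pass."""
    return layers["beauty"]

def shuffle(layers, name):
    """The shuffle node: select a specific pass by name."""
    return layers[name]

print(shuffle(exr, "customAOV"))
```

The point is that the reader doesn’t need to understand every pass; the shuffle just exposes whatever channels the file contains.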
Right now I have to create a new RenderLayer with a material override and save it to another file, when actually what I’d like to do is create a new pass, name it, and assign a material override to that pass, all within my default render layer. (This part is not just EXR but also rendering related.)
You can read or write attributes in EXR: a kind of metadata which can be very useful, even to pass data to your compositor, for instance the actual position of the camera or the values of the distortion, anything you can imagine that could be useful (not as important as the two above, but again, full support is better 🙂 ).
I am so missing that !!!
Basically it means that for each parameter you have, you can set a script. In AE it is called an “Expression”, as in Nuke by the way.
The idea is to right click on a parameter and then select “set expression”. An expression can link to everything in Blender: other parameters, maths, … so in the end you can do some really crazy stuff, like constraints or operations or conditions…
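A rough sketch of the concept: a parameter’s value can be an expression string, evaluated in a namespace exposing the other parameters, the frame number, and math. All names here are hypothetical, and a real implementation would want a safer evaluator than `eval`:

```python
import math

class Params(dict):
    """Parameters whose values are plain numbers or expression strings."""

    def get_value(self, key, frame=1):
        value = self[key]
        if not isinstance(value, str):
            return value
        # Namespace an expression can see: the frame number, math
        # functions, and every non-expression parameter.
        ns = {"frame": frame, "math": math}
        ns.update({k: v for k, v in self.items() if not isinstance(v, str)})
        return eval(value, {"__builtins__": {}}, ns)

# "set expression" on rotation: it now follows scale and the frame.
p = Params(scale=2.0, rotation="scale * 10 + math.sin(frame / 10.0)")
print(p.get_value("rotation", frame=0))  # 20.0
```

This is what makes links, constraints, and conditional animation possible without any extra nodes or keyframes.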
We are using that ALL THE TIME
In Nuke you also have an expression node which can take several inputs, and then you can write expressions per channel. It becomes almost an image processing, kind of pixel processing, node. Very very powerful :).
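A sketch of that concept, with one expression per channel evaluated for every pixel (this is the idea only, not Nuke’s actual expression syntax):

```python
# Toy "expression node": one expression string per channel, evaluated
# per pixel with r/g/b bound to the current pixel's values.

def expression_node(pixels, exprs):
    out = []
    for r, g, b in pixels:
        env = {"r": r, "g": g, "b": b}
        out.append(tuple(eval(exprs[c], {"__builtins__": {}}, env)
                         for c in ("r", "g", "b")))
    return out

# Swap the red and blue channels and boost green by 10%:
img = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)]
print(expression_node(img, {"r": "b", "g": "g * 1.1", "b": "r"}))
```

Any expression you can write becomes a pixel-processing operation, with no compiling and no plugin boilerplate.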
Groups are actually not that bad in Blender, but a bit too rigid !
A nice recent approach on this topic : http://www.youtube.com/watch?v=OUidTgzy8zo
I developed the Color Balance node in the compositor with Matt. Actually, I must say Matt did most of the job, just because I couldn’t understand the way node UIs are made in C. First because I suck at C, and secondly because there were so many files to change that I was totally lost (register, type, processing, name, RNA, DNA… all those different files). So while I understood pretty well the C code where the pixels get processed, all the UI was a nightmare for me, and Matt handled it.
I can only imagine how many people get stuck by that, while it can be very easy to do some pixel processing just by typing a few lines of code. There are several solutions for that :
But more importantly, for somebody like me it means I could buy a plugin like Keylight (for green screen keying, because let’s be honest, Blender’s keying tools are a bit old) and use it directly with Blender, without needing to buy After Effects or Nuke to use it. PLEASE read my blog post, which clarifies the point on that : http://www.francois-tarlier.com/blog/index.php/2010/01/openfx-ofx-an-open-plug-in-api-for-2d-visual-effects/
-> I’m doing some filters for After Effects (http://aescripts.com/category/plugins/francois-tarlier-plugins/), mostly because I cannot do them with Blender; it’s way too complicated to achieve the same thing !!
Ok, I plead guilty on this one, it is more of a personal wish than anything VFX related. At least I get a point for honesty ?
Today every compositor has some simple 3D capability. Blender has both features, but it is impossible to use them together. Why would you need 3D in compositing ? (best reason here : http://www.youtube.com/watch?v=upG81s75UD4#t=7m13)
Let’s say I have this render of a street with 3D and matte painting stuff, and I want to put some guy shot against a green screen into this street. This is what I would have to do in Blender to get the shot done :
Do you see the problem here ? So many steps, so many in-between renders… when actually all I wanted was pretty much to place those keyed planes/billboards in 3D space with a bit of orientation, getting the camera data from the 3D scene, without in-between rendering !
Also, nowadays we can see more and more of what we call “deep image compositing”, which basically uses a point pass (or position pass) and generates a 3D point cloud based on this pass. It helps with all kinds of things (preserving edges, getting geometry information, occluding, relighting, …).
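To make the position-pass idea concrete, here is a minimal sketch with made-up 2x2 data: every pixel of a world-position (“P”) pass is already a 3D point, so the compositor can rebuild a point cloud and use it, for example, for crude depth occlusion:

```python
# 2x2 position pass: one (x, y, z) world-space position per pixel.
p_pass = [
    [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0)],
    [(0.0, 1.0, 7.0), (1.0, 1.0, 7.0)],
]

def point_cloud(position_pass):
    """Flatten the pass into a list of world-space points."""
    return [p for row in position_pass for p in row]

def occlude(points, z_max):
    """Keep only points nearer than z_max -- a crude depth test."""
    return [p for p in points if p[2] < z_max]

cloud = point_cloud(p_pass)
print(len(cloud), len(occlude(cloud, 6.0)))  # 4 points, 2 within depth 6
```

Real deep compositing is far richer (multiple samples per pixel, proper merging), but the core insight is just this: the pass carries geometry, not colors.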
Lenses, camera sensor size, distortion… all the good stuff 🙂 (Matt already started some work on that: http://mke3.net/weblog/blender-sensor-sizes)
As I mentioned earlier, being able to create custom passes to which we can assign a material, pretty much like the material override function but for passes. And also some additional passes by default, such as a point pass (world and object) and a normal pass (world, object & camera).
I still can’t even imagine how we can do compositing without masks. If there is one feature above all that I’m ALWAYS using in ALL my comps, it’s masks. And to be honest, not having it is one of the main reasons I’m not using Blender as a primary compositing tool today.
I know someone talked about some fancy/geeky/gadget mask tool he would love to have in Blender instead of simple/basic vector shapes with handles. My opinion on that ? Keep it basic at first ! Simple tools are always good to have first. And even if they might be tedious to use, at least we know they work in all cases.
In case you really want to go with the geeky automatic paint / rotobrush tool, take a look at the GraphCut & SnapCut algorithms. That’s a SIGGRAPH paper which is used in AE now, known as “Roto Brush”. It is a great tool by the way, but from my experience, not yet better than a good old vector shape 🙂
VERY IMPORTANT !!! You’ll never be able to do good VFX compositing without it.
DJV does that pretty well actually: you can render the sequence into RAM to play the preview in realtime. (Again, I still don’t understand how we can survive without it.)
Downgrading the viewer (full, half, quarter… resolution) to save some memory or fit more frames into RAM would be great too.
Press a button > render to RAM > play it !
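Conceptually the feature is tiny; here is a sketch with a stand-in for the expensive per-frame render (hypothetical class and names, not Blender code):

```python
class RamPreview:
    """Render each frame once into memory, then play from the cache."""

    def __init__(self, render_frame):
        self.render_frame = render_frame   # expensive per-frame function
        self.cache = {}

    def fill(self, start, end):
        """The 'render to RAM' button: bake frames start..end."""
        for f in range(start, end + 1):
            self.cache[f] = self.render_frame(f)

    def play(self, start, end):
        """Playback is now just cache lookups (realtime playback would
        display these at a fixed fps)."""
        return [self.cache[f] for f in range(start, end + 1)]

preview = RamPreview(lambda f: f * f)      # stand-in for a slow render
preview.fill(1, 5)
print(preview.play(1, 5))  # [1, 4, 9, 16, 25]
```

The half/quarter resolution option from above fits naturally here too: smaller cached frames mean more of the sequence fits in RAM.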
Being able to select one of the “layers/nodes” and rotate it, scale it, move it… directly in the viewer, like you would do in Photoshop for instance (yes, yes, it is possible with node-based compositors too, of course 🙂 ).
I’m a matchmover, so trust me, I know how cool it would be to have a camera tracker in Blender. I’m following the libmv project very closely and talk a lot with Pierre Moulon, one of the active developers on the project (since Keir is kind of busy these days).
But the fact is, I don’t really need a 3D camera solver. First, it would take a long time to have something which works great and is production ready. And as of today I have SynthEyes, which is not expensive, and which I can use.
BUT, having a simple 2D tracker in the compositor to do things like this, for instance http://videocopilot.net/tutorial/set_extensions/ , would be PERFECT !
I talked about it with the libmv team and they would be glad to help, but they would need somebody experienced with Blender’s C code to implement it quickly. They asked me for a proposal a year ago, which is here :
It’s not everything, but it is quite a lot for now. Some of those requests are more important than others, and I would be glad to discuss this topic with anybody.
One tutorial that sums up all those features is this one: http://www.videocopilot.net/tutorial/blast_wave. Bring in the missing tools which will let you do that, and you’ll be much closer to being a “production ready” VFX tool… IMO :).
I would also encourage you to take a look at the Nuke PLE (Personal Learning Edition); it’s free and complete (only renders are watermarked).
Also take a look at the work done on Ramen VFX; Estaban did some great things there.
And finally, make FxGuide.com your favorite website and watch a lot of breakdowns 🙂
This is not directly VFX related, but as I understand it, you had trouble with hair simulation on Sintel. I talked about it with Daniel Genrich at the time, because I think he is the dynamics guru who could nail this kind of thing. So I gave him a few papers to look at, which he thought were very interesting and not very hard to implement; but as I recall it was too late at that point, and you guys didn’t have enough time to do it, or even to take the risk of testing it. But I guess it is not too late for the next projects.
When I was working on face tracking/recognition at Ubisoft, I met someone (Basile Audoly) working on mathematically modeling the dynamics of hair. He gave us a demo which was very impressive. And if I’m not mistaken, he has been working for L’Oréal research since then 🙂
His program was able to model curly, straight, dry, wet, growing… hair, and also dynamic cutting, all of those features modeled in his research. I have personally seen his demo, and I have never seen anything so real. See for yourself with the video !
Here are the papers and videos :
Super-Helices for Predicting the Dynamics of Natural Hair
In this article by Reynante Martinez we’ll tackle a topic that is commonly misunderstood and often overlooked inside of Blender. But before we begin, I’d like to make a few remarks. First, I do not claim to be a professional color grading artist, nor do I imply that the following explanations are “rules” to be strictly followed. What I will try to share with you in this article are my thoughts and experiences on color grading using Blender.
Bring your 3D world to life with lighting, compositing, and rendering
You can get it on Amazon here : Blender 2.5 Lighting and Rendering