Tag Archives: tutorials

Blender VFX wish list features

Introduction

EDIT : the project has started at http://ocl.atmind.nl/doku.php?id=design:proposal:compositor-redesign . Please donate and help the project!

I was at the Blender Conference this year & had the chance to meet a lot of great guys, as well as talk with Ton about doing VFX with Blender. I attended the “VFX roundtable” talk… yes, I was the annoying guy who stole the microphone the entire time 🙂, and yet I didn’t have time to give my entire list there. So after talking about it with Ton, he asked me to send him an email with it, which I did.

I tried to make it as detailed as I could, and also not to make it too personal, keeping it to generally needed, basic features.

Andrew, Sebastian & Jonathan asked me to post that mail on my blog to share it with the community, so here it comes…

Disclaimer

When opening this kind of topic I know that a lot of Blender users like to throw stones at the guy writing it :), so here are some facts before starting. I work with After Effects on an everyday basis, and sometimes I use Nuke. I’m a TD Artist for a video game company working on offline & realtime production, so my job is to design pipelines and workflows, or new tools, to make a production more efficient and comfortable for the artists (meaning I’m used to working with developers a lot, and I know how much of a pain in the ass I can be sometimes… trust me 🙂 ). On the side I also do matchmoving for a living, as well as VFX compositing & some color grading too. And these days I’m also developing scripts & filters for After Effects which are distributed over at AEScripts.com (mostly stuff to enhance workflow, sometimes inspired by Nuke I must say :p).

All this doesn’t mean that I know stuff better than anyone else at ALL !! I just try, or I’d like to try, to use Blender as much as I can in my freelance time, and sometimes I just can’t because basic features are missing. That’s what I’m going to point out here (or at least try to).

I do understand that we don’t want to make a copy of AE or Nuke in Blender, and I agree that this would be totally wrong. But since they are the leaders in this field and I know them pretty well, I will mention some of their features or workflows as illustrations & ideas. So you will have to take my words as “something like that…” rather than “it has to look like that!”. Note that most of the features I’m going to talk about are present in all compositing software, no matter whether it is nodal or layer based.

So you wanna do VFX with Blender ?

One thought Sebastian and I had at the Blender Conference, which I believe illustrates all those wishes pretty well, is: “Try to do a VFX tutorial made for Nuke or AE using Blender, and if you can’t do it, it means you are missing the tool”.

A great website for this is VideoCopilot.net ! And I’m NOT talking about the tutorials that use third-party plugins, but the ones that actually use the basic provided tools. (I’ll mention some of them later on.)

VideoCopilot’s tutorials are great in that they are very professional looking, accessible to anyone in the field, and cover what you would do on an everyday basis as a VFX artist. And usually you can do those tutorials with any compositing software, as long as you understand the logic behind them. Tools sometimes look different, but they are all there in all the programs (except Blender for now, and that’s what I’d like to change).

I don’t need evidence that Blender could do the same with some tricks and workarounds. I’m always looking for an efficient, production-ready approach to the problem. For instance, recently the RotoBezier addon came out, and while it is a great and smart trick, to me it is just a workaround in the meantime, not at all production ready, for several reasons that I could mention if you asked me :).

Ok, so now let’s dive into the list !

VFX Feature list for Blender

Color Management

As of today the CM (color management) is not working very well and is too hidden, IMO.

I talked about it with Matt Ebb a lot. Even if we have different opinions about it, I know he has good ideas and great libraries in mind to use, but not so much time to work on it.

His philosophy is to have it completely hidden in the background, doing all the work for you, which I deeply disagree with (but again, just my opinion). CM is good for having LW (linear workflow), and LW is important, but CM is also important for color profiles and color models, and you really have to know in which space you are currently viewing stuff (linear, sRGB, …).

Today I can never tell if my viewer is showing me linear or sRGB, and the same goes for the color picker.

In Nuke you always have an option on inputs, outputs or scopes to choose which color space you want to look at (see screenshot), so you have complete control over it.

So having it hidden and automatic is OK, as long as I can override it anytime I want to !

color profile selection in Nuke's reader (input)
color profile selection for Nuke's viewer

And as I mentioned at the roundtable, you don’t always need a linear workflow (and that’s where Matt is going to disagree with me 🙂 ); for instance, it is usually more of a pain in the ass for color grading than anything else. I have been talking with some professional colorists who told me their big color grading workstations work in non-linear (e.g. DaVinci Resolve). Which, don’t get me wrong, can work with linear too, but if you take something like the Color Balance node which Matt and I developed, the formula I used works better in non-linear. It is the same formula used in Magic Bullet Colorista, and very close to what color grading suites use, by the way. So LW is great for compositing shots and playing with render passes; for video editing & final color grading, not so much.

EXR Full Support

Support for reading any custom passes :

At this point Blender cannot read any pass that it doesn’t support or create itself. So when I’m working with external clients who work with other pieces of software, I cannot read their passes (except the usual kind: diffuse, spec…).

In every node-based compositor I know, there is a special node called “Shuffle” (or whatever) which separates the passes. The reader (input) just gives the default pass (beauty ?), but the Shuffle does the job of selecting a specific pass.

Shuffle node in Nuke
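
To make the idea concrete, here is a minimal sketch with the OpenEXR Python bindings (the file name and the pass name are my assumptions): the reader sees every stored channel, and pulling one out is essentially what a Shuffle node does.

```python
import OpenEXR
import Imath

# list every pass/channel stored in a multi-layer EXR
exr = OpenEXR.InputFile("render.exr")  # hypothetical file name
channels = exr.header()["channels"]
print(list(channels.keys()))  # e.g. ['R', 'G', 'B', 'A', 'myPass.R', ...]

# "shuffle" one arbitrary pass out of the file as raw float data
pt = Imath.PixelType(Imath.PixelType.FLOAT)
raw = exr.channel("myPass.R", pt)  # hypothetical pass name
```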

Support for writing custom passes :

Right now I have to create a new RenderLayer with a material override and save it to another file, when actually what I’d like to do is create a new pass, name it, and assign a material override to this pass, all that in my default render layer. (This part is not just EXR but also rendering related.)

Support for reading/writing attributes :

You can read or write attributes in EXR, a kind of metadata which can be very useful, even to pass data to your compositor, for instance the actual position of the camera or the values of the lens distortion, anything you can imagine that could be useful (not as important as the two above, but again, full support is better 🙂 ).
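
Again with the same Python bindings, just to show that attributes are already part of the format; the custom attribute name below is purely illustrative:

```python
import OpenEXR

hdr = OpenEXR.InputFile("render.exr").header()
print(hdr["dataWindow"])           # built-in attributes are always there
print(hdr.get("cameraTransform"))  # a writer-defined attribute, if present
```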

Expression Py

I am so missing that !!!
Basically it means that for each parameter you have, you can set a script. In AE it is called an “Expression”, as in Nuke by the way.
The idea is to right-click on a parameter and then select “set expression”. An expression can link to everything in Blender: other parameters, maths, … so in the end you can do some really crazy stuff like constraints, operations or conditions…
We use that ALL THE TIME.
In Nuke you also have an Expression node which can take several inputs, and then you can write a Python expression per channel. It becomes almost an image-processing, kind of pixel-processing node. Very very powerful :).

Nuke's expression node looks like that, but being able to set an expression on any input, like in AE, sounds more powerful to me !
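
Blender's drivers already hint at what this could feel like. A minimal sketch with today's driver API, assuming a compositing node tree with a node named "Blur" (both the node name and the socket index are made up):

```python
import bpy

tree = bpy.context.scene.node_tree
sock = tree.nodes["Blur"].inputs[1]   # hypothetical node & socket

# attach a scripted expression to the socket's value
fcu = sock.driver_add("default_value")
fcu.driver.type = 'SCRIPTED'
fcu.driver.expression = "frame / 25.0"  # any one-line Python expression
```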

Enhanced Group

Groups are actually not that bad in Blender, but a bit too rigid !

  • One thing I’d like to be possible is to choose the inputs I will have in my final collapsed group UI. Today doing that is a bit hard and not very flexible. Plus you cannot add special UI inputs which might not be a node but just a value.
  • I’d like to keep some groups expanded while still being able to work with nodes which are not in those groups. Right now, when a group is open you can only work inside it.

A nice recent approach on this topic : http://www.youtube.com/watch?v=OUidTgzy8zo

SDK, Py, OpenFX

I developed the Color Balance node in the compositor with Matt. Actually, I must say Matt did most of the job, just because I couldn’t understand the way node UIs are made in C. First because I suck at C, secondly because there were so many files to change that I was totally lost (register, type, processing, name, RNA, DNA… all those different files). So while I understood pretty well the C part where the pixels get processed, all the UI was a nightmare for me, and Matt handled it.

I can only imagine how many people get stuck by that, when it could be very easy to do some pixel processing just by typing a few lines of code. Several solutions for that :

  1. A well documented SDK (or “BDK” ^^), which as a bad coder I usually hate, too C-related for me, but good for developers who want to make plugins
  2. Python scripts/expressions: as mentioned above, being able to control an input via a Python expression, or even better, an “Expression” as described above
  3. Being able to load/write GLSL code for processing ? I discovered Pixel Bender for After Effects; it's kind of a GLSL-like language for processing images in AE, VERY easy to code, VERY powerful, with the possibility to add user UI inputs in the shader which AE will turn into an interface ! This is perfect for a crappy coder like me and so great in a “production” environment: you can prototype things or solve problems in no time (see the little Python sketch at the end of this section).
  4. OpenFX, this is my personal favorite. I have a full thread about it on my blog ! Ramen did implement it, and it even ported some of Blender’s nodes to OFX already. From a developer's standpoint it means he can create a plugin for Blender and make sure it will also work in any other software that supports OFX (Nuke, Shake, …).

But more importantly, for somebody like me it means I could buy a plugin like Keylight (for green screen keying, because let’s be honest, Blender’s keying tools are a bit old) and use it directly with Blender, without the need to buy After Effects or Nuke to use it. PLEASE read my blog post about it, which clarifies this point : http://www.francois-tarlier.com/blog/index.php/2010/01/openfx-ofx-an-open-plug-in-api-for-2d-visual-effects/

-> I’m doing some filters for After Effects (http://aescripts.com/category/plugins/francois-tarlier-plugins/), mostly because I cannot do it with Blender; it’s way too complicated to achieve the same thing !!
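
Just to illustrate how little code the pixel-processing part actually needs, a toy numpy version of what an "expression"-style node body could be (the operation itself is arbitrary):

```python
import numpy as np

def pixel_expression(img, gain=1.2, lift=0.02):
    # img: float32 array, shape (height, width, 3), linear values
    return np.clip(img * gain + lift, 0.0, 1.0)
```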

UI Design

Ok, I plead guilty on this one, it is more of a personal wish than anything VFX related. At least I get a point for honesty ?

  • Straight lines !!! Pablo called them the “spaghetti lines”. They are rigid and messy at the same time. They make you work only horizontally and from left to right; you cannot redirect lines the way you want. I hate it ! But maybe it is just me ! Take a look at one of my comps here : http://www.vimeo.com/11279314 ! (Yeah I know, if I had “expressions” I wouldn’t have so many nodes, but… hey… you get my point 🙂 ) As I recall, Digital Fusion draws straight lines with angles by itself; I hate that too. Nuke (see screenshot) draws straight lines left to right, right to left, top to bottom, and so on, which you can break by pressing CTRL and adding a “dot” on the line. That is what I call flexibility !!!
  • Plugging lines into dot sockets is a bit old and rigid too. In the example below (same in Fusion), each node would have an arrow for each input it can take (with a name on the arrow). That makes the layout more flexible (it goes with the point above).
  • Do not put controls in the nodes; keep them in a panel (floating or dockable). Blender kind of has both now, but I’d like, for instance, an option so that if I double-click on a node its panel shows up.

Break the wall between compositor & 3D view

Today every compositor has some simple 3D capability. Blender has both features, but it is impossible to use them together. Why would you need 3D in compositing ? (Best reason here : http://www.youtube.com/watch?v=upG81s75UD4#t=7m13)

Let’s say I have a render of a street with 3D and matte painting stuff, and I want to put some guy shot against a green screen into this street. This is what I would have to do in Blender to get the shot done :

  • import green screen footage in compositor
  • keying it
  • render it
  • place 3d plane into my scene
  • add a material
  • create a texture with the rendered keyed greenscreen footage
  • apply with UV stuff
  • probably render those planes on separate passes/layers from the environment, in case I want to do some more fx/compositing on top of them
  • import that result back in compositor
  • do all the compositing magic

Do you see the problem here ? So many steps, so many in-between renders… when actually all I wanted was pretty much to place those keyed planes/billboards in 3D space with a bit of orientation, getting the camera data from my 3D scene, without any in-between rendering !

Also, nowadays we see more and more of what we call “deep image compositing”, which basically uses a point pass (or position pass) and generates a 3D point cloud based on this pass. It helps with all kinds of things (preserving edges, getting geometry information, occluding, relighting, …).

(see http://www.youtube.com/watch?v=upG81s75UD4 || http://en.wikipedia.org/wiki/Deep_image_compositing || http://www.fxguide.com/fxguidetv/fxguidetv_095/)
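
For the position-pass trick, the core idea fits in a few lines; a hedged numpy sketch (the array shapes are my assumptions) that turns the pass into the point cloud those videos demonstrate:

```python
import numpy as np

def position_pass_to_point_cloud(pos, beauty, alpha):
    # pos:    (H, W, 3) world-space position pass
    # beauty: (H, W, 3) color pass, alpha: (H, W) coverage
    mask = alpha.reshape(-1) > 0.0         # drop empty pixels
    points = pos.reshape(-1, 3)[mask]
    colors = beauty.reshape(-1, 3)[mask]
    return points, colors                  # ready for any 3D viewport
```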

Real Camera

Lenses, camera sensor size, distortion… all the good stuff 🙂 (Matt already started some work on that: http://mke3.net/weblog/blender-sensor-sizes)

Custom Passes

As I mentioned earlier, being able to create custom passes to which we can assign materials, pretty much like the material override function, but per pass. And also some additional passes by default, like a point pass (world and object) and a normal pass (world, object & camera).

Masks (roto)

I still can’t even imagine how we could do compositing without masks. If there is one feature above all that I ALWAYS use in ALL my comps, it’s masks. And to be honest, not having them is one of the main reasons I’m not using Blender as my primary compositing tool today.

I know someone talked about some fancy/geeky/gadget mask tool he would love to have in Blender instead of simple/basic vector shapes with handles. My opinion on that ? Keep it basic at first ! Simple tools are always good to have first. And even if they might be tedious to use, at least we know they work in all cases.
In case you really want to go with the geeky automatic paint / rotobrush tool, take a look at the GraphCut & SnapCut algorithms. That’s a SIGGRAPH paper which is used in AE now, known as the “Roto Brush”. Which is a great tool by the way, but from my experience, not yet better than a good old vector shape 🙂

Image Cache / Dynamic Proxy / Pre-cache preview

VERY IMPORTANT !!! You’ll never be able to do good VFX compositing without it.

DJV does that pretty well actually: you can render the sequence into RAM to play the preview in realtime. (Again, I still don’t understand how we survive without it.)

Downgrading the viewer (full, half, quarter… resolution) to save some memory or fit more frames into RAM would be great too.

Press a button > render to RAM > play it !
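
A toy sketch of that workflow, assuming a render(frame) function that does the heavy compositing work and returns a numpy-like image (the proxy factor trades resolution for more cached frames):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # each frame is rendered once, then replayed
def cached_frame(frame, proxy=2):
    img = render(frame)           # hypothetical full-resolution render call
    return img[::proxy, ::proxy]  # half/quarter res to fit more frames in RAM
```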

Interact with Elements in the viewer

Being able to select one of the “layers/nodes” and rotate it, scale it, move it… directly in the viewer (like you would do in Photoshop for instance; yes, yes, it is possible with node-based compositors too, of course 🙂 ).

2D Tracker

I’m a matchmover, so trust me, I know how cool it would be to have a camera tracker in Blender. I’m following the libmv project very closely and talk a lot with Pierre Moulon, one of the active developers on the project (since Keir is kind of busy these days).

But the fact is, I don’t really need a 3D camera solver. First, it would take a long time to have something which works great and is production ready. And as of today I have SynthEyes, which is not expensive, and which I can use.

BUT having a simple 2D tracker in the compositor, to do things like this for instance : http://videocopilot.net/tutorial/set_extensions/ , that would be PERFECT !

I talked about it with the libmv team and they would be glad to help, but they would need somebody experienced with Blender’s C code to implement it quickly. They asked me for a proposal a year ago, which is here :

http://wiki.blender.org/index.php/Dev:Source/Development/Projects/Motion_tracking/MotionTracker2D

http://www.vimeo.com/5542528

Conclusion

That’s not everything, but it is quite a lot for now. Some of those requests are more important than others, and I would be glad to discuss this topic with anybody.

One tutorial that sums up all those features is this one: http://www.videocopilot.net/tutorial/blast_wave. Bring in the missing tools which will let you do that, and you’ll be much closer to being a “production ready” VFX tool… IMO :).

I would also encourage you to take a look at the Nuke PLE (Personal Learning Edition); it’s free and complete (only renders are watermarked).
Also take a look at the work done on Ramen VFX, Esteban did some great things there.
And finally, make FxGuide.com your favorite website and watch a lot of breakdowns 🙂


[Bonus] Hair Simulation

This is not directly VFX related, but as I understand it, you had trouble with hair simulation on Sintel. I talked about it with Daniel Genrich at the time, because I think he is the dynamics guru who could nail this kind of thing. So I gave him a few papers to look at, which he thought were very interesting and not very hard to implement; but as I recall it was too late at that time, and you guys didn’t have enough time to do it or even take the risk to test it. But I guess it is not too late for the next projects.

When I was working on face tracking/recognition at Ubisoft, I met someone (Basile Audoly) working on mathematically modeling the dynamics of hair. He gave us a demo which was very impressive. And if I’m not mistaken, he has been working for L’Oréal research since then 🙂

His program was able to model curly, straight, dry, wet, growing… hair, and also dynamic cutting, all of it modeled in his research. I have personally seen his demo, and I have never seen something so real; see for yourself with the video !

Here are the papers and videos :

Super-Helices for Predicting the Dynamics of Natural Hair

How to convert a light probe into the 9 spherical harmonic coefficients

Here is a tutorial on how to convert your own light probe into the 9 coefficients needed by my ft-SSIBL plug-in (Screen Space Image Based Lighting).
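
If you prefer code to video, here is a minimal numpy sketch of the projection itself, assuming a float lat-long (equirectangular) probe image; the constants are the standard band 0-2 real spherical harmonics from Ramamoorthi & Hanrahan:

```python
import numpy as np

def sh9_coefficients(img):
    # img: (H, W, 3) float lat-long light probe
    h, w, _ = img.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi       # azimuth
    phi, theta = np.meshgrid(phi, theta)
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    dw = (np.pi / h) * (2.0 * np.pi / w) * np.sin(theta)  # pixel solid angle
    basis = [
        0.282095 * np.ones_like(x),                   # Y(0,0)
        0.488603 * y, 0.488603 * z, 0.488603 * x,     # Y(1,-1), Y(1,0), Y(1,1)
        1.092548 * x * y, 1.092548 * y * z,           # Y(2,-2), Y(2,-1)
        0.315392 * (3.0 * z * z - 1.0),               # Y(2,0)
        1.092548 * x * z,                             # Y(2,1)
        0.546274 * (x * x - y * y),                   # Y(2,2)
    ]
    # integrate probe radiance against each basis function -> 9 RGB triples
    return [np.sum(img * (b * dw)[..., None], axis=(0, 1)) for b in basis]
```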









I’m sorry for the poor sound, I hope you will still be able to understand !

Pixel Bender : ft-UVPass shader for After Effects


Introduction

This shader lets you re-texture your rendering directly in After Effects. It uses a UV pass as its default input and a texture as a second input. The UV pass can be rendered from pretty much any 3D package.
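
For the curious, the core of a UV-pass re-texture is tiny; a hedged numpy equivalent of what the shader does per pixel (nearest-neighbour only, and the V flip is an assumption that depends on your renderer):

```python
import numpy as np

def retexture(uv_pass, texture):
    # uv_pass: (H, W, >=2) float, U in the red channel, V in the green
    # texture: (th, tw, 3) image to map onto the render
    th, tw, _ = texture.shape
    u = np.clip(uv_pass[..., 0], 0.0, 1.0)
    v = np.clip(uv_pass[..., 1], 0.0, 1.0)
    xi = (u * (tw - 1)).astype(int)
    yi = ((1.0 - v) * (th - 1)).astype(int)  # V usually runs bottom-up
    return texture[yi, xi]
```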


Download & Sources



How to install it ?


Just copy the .pbk file into your “Support Files” folder in your AE install directory


How to use it ?






Color Grading with Blender 2.5


Introduction

Yeahh, Matt Ebb just committed my patch (SVN r27733) for the “Color Balance” node in Blender 2.5’s compositing nodes !!! Now it should be much easier to work with.

You can get a build of Blender at Graphicall.org (any version above revision 27733).

There is still a precision issue with the color wheels; I guess some day it will be possible to move the color picker more slowly.



How to use it ?

First, I would recommend you un-check the “color management” setting in Blender 2.5, or it will make the blacks really hard to control.

If you are not so familiar with color grading and push-pull techniques, I would really recommend you watch Stu Maschwitz’s (Prolost) video tutorial using Magic Bullet Colorista. The settings won’t be exactly the same, but the approach is quite the same !

Red Giant TV Episode 22: Creating a Summer Blockbuster Film Look from Stu Maschwitz on Vimeo.


How does it work ?

I described the Lift/Gamma/Gain formulas in a previous post, and this node is mostly based on the formulas specified there. We just slightly modified them so the 3 default parameter values are equal to 1.0, just like in Colorista, which makes it much easier to control the blacks.
Actually, the “Color Balance” node before this revision used the same formula, but with a lift default value equal to 0.0.
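
For reference, a small sketch of that shape, reconstructed from the description above rather than from the node's exact C code; with all three parameters at 1.0 the function is the identity:

```python
import numpy as np

def color_balance(px, lift=1.0, gamma=1.0, gain=1.0):
    # px: linear float pixel values; defaults of 1.0 leave the image untouched
    out = ((px - 1.0) * lift + 1.0) * gain   # lift pivots around white
    return np.clip(out, 0.0, None) ** (1.0 / gamma)
```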


Some presets !

While making some comparison tests between Colorista in After Effects and the “Color Balance” node in Blender, I tried to mimic some of Colorista’s presets.
You can download the “Blender Color Grading Presets” here : http://code.google.com/p/ft-projects/downloads/list

It includes the following presets :

Bleach preset w/ Color Balance node
Cold preset w/ Color Balance node
Cool to Warm preset w/ Color Balance node
Day to Night preset w/ Color Balance node
Punchy preset w/ Color Balance node

vignetting using Mike Pan's approach


Understanding Gamma and Linear workflow

Even if I’m aware of what gamma and linear workflow are, I’m not quite sure I always use them in the correct way. So I decided to dive into documentation and forums again to refresh my memory about it, and at the same time close a few gaps.
Since so many people, even in the industry, still don’t know what it is and how it works, I thought I would keep a kind of diary of what I found in my research over these couple of days.



Introduction


To get started, there is this great example from AEtuts+.com talking about linear workflow in AE. It is not the deepest explanation out there, but it will give you a nice overview, with simple words and explicit examples, of what linear workflow is and why it is so important !

So after that, here are 5 points you should keep in mind about gamma (from Gamma 101 on mymentalray.com):

  1. Most displays have a non-linear response to pixel values.
  2. Most graphics software is written for a linear color model, i.e. it makes the simple assumption that 255 is twice as bright as 128. But since the monitor is non-linear, this is not true. In fact, for most monitors (with a gamma of 2.2), you need to send the pixel value (0.5^(1/2.2))*255=186 if you want 50% of the brightness of 255. The commonly used value of 128 will only produce about (128/255)^2.2 = 22% brightness.
  3. Digital cameras have a (roughly) linear response to light intensity, but since they are intended for display on computer monitors, they embed the non-linearity (gamma) in the image. (True for .jpg, whereas RAW files are just that – RAW, i.e. linear data, that gets non-linear when converted to JPG, for example.)
  4. Therefore, if you input .jpg images taken with a camera into graphics software, you need to compensate for the image's gamma (by the inverse gamma: 1/2.2 = 0.455).
  5. And if you display the (linear) data generated by a graphics algorithm, you need to compensate for the display gamma (apply 2.2 gamma to the picture).
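
Points 2, 4 and 5 boil down to two tiny functions (using the simple power-law approximation of sRGB, not the exact piecewise curve):

```python
def display_to_linear(v, gamma=2.2):
    # undo the gamma embedded in e.g. a JPG (point 4)
    return v ** gamma

def linear_to_display(v, gamma=2.2):
    # compensate for the monitor before display (point 5)
    return v ** (1.0 / gamma)

# 50% linear brightness needs pixel value ~186, not 128 (point 2):
print(round(linear_to_display(0.5) * 255))  # -> 186
```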



A few facts :

When creating a texture in Photoshop, you see its colors with a 2.2 gamma applied (because screens are big liars :p). Meaning that when you think you got the right “brightness”, you actually made it twice (or more) as bright as it is supposed to be in the real world.
While for painting or montage it might not be important, for textures it really is !!! Because, as said above, your renderer/shader/… will assume the picture is linear and will apply its math accordingly.
So the only solution to bring this picture back to a “linear color space” is to apply the inverse gamma of what the monitor shows you. As we know, on PC the gamma is 2.2 (I think it’s 1.8 on Mac OS X). So the gamma value applied to your texture before saving it should be 0.455 (1/2.2).

Tip : In Photoshop, on top of your layers, add a “Levels Adjustment Layer” and set the gamma value (mid-tones) to 0.455.

With most of today's software I don’t think it is necessary to do that any more, but to be honest, this really depends on how the software you are using integrates the linear workflow. For instance, in 3ds Max you can enable gamma correction in the “Gamma and LUT” tab of the preferences panel.

Because renderers work in linear space, your rendering will seem to look darker on your screen. So in case you are saving it to an 8-bit file type (like JPG), you should set the output gamma parameter to 2.2. But in case you are saving it to a floating point file (HDR, RAW, EXR, …), this parameter should remain 1.0. Because all the dynamics of your picture are saved in those raw files, you apply the gamma only in post-production (compositing).

In the After Effects case above, by making sure to activate the linear workspace, it should take care of that for you, so you don’t have to change the gamma at all; just leave it.






Mental Ray Linear Workflow from chad smashley on Vimeo.






Blender Quick Tips : How to create masks in the compositing editor

Since there are no masks in Blender’s “Compositing Editor” yet, I found a simple trick which can work pretty well, especially for color grading.

You’ll see nothing really fancy here, since the mask can only be a square (or pretty close to a square shape). But if you check out Colorista for instance, the only two allowed shapes are the ellipse and the rectangle.

Blender Quick Tips : How to create masks in the compositing editor from François Tarlier on Vimeo.

Maya to Blender tutorials

I just found these 2 videos on Vimeo (hopefully there will be some more soon) aimed at helping Autodesk Maya artists willing to use Blender.
I like the idea behind this kind of video, and since Blender 2.5 will be more customizable with multi-window layouts, hotkeys, … it will be easier to switch from one software package to another.

These videos, especially the first one, are for Blender beginners.


Blender tutorial for Maya users

Blender tutorial for Maya users from Shananra on Vimeo.


Blender tutorial for Maya Animators

Blender tutorial for Maya Animators from Shananra on Vimeo.

Stuart Maschwitz’s (aka Stu) 5D Settings port to D90

I’m currently taking a class at fxphd called “DOP210 – DSLR Cinematography” with Stu Maschwitz (Prolost) & Mike Seymour as mentors!
This is an awesome class if you have a Canon 5D or a Nikon D90 and want to make movies with it, covering post-production/grading as well!
Even if the class talks about both cameras, the focus is mainly on the 5D!
Stu had a really nice post on Prolost about setting up the 5D called “Flatten Your 5D“. Those settings aim to get a neutral picture (low contrast, low sharpening, low saturation, …) which gives better control for grading in post-production. Settings which he’s using in his class, of course !
Since I own a D90, I thought I would try to port his 5D settings to the D90!
Here is what I have done :

D90-port-of-stu-settings-5D

I also turned off D-Lighting, but I’m not sure it is the right move. I’m not sure these are the best settings yet, but they look close.

Update : I also recommend you take a look at “Understanding and Optimizing the Nikon D90 D-Movie Mode Image” on the DVXUser forum; it gives nice tips to tweak your camera a bit.

Here is a test I have done with those settings and Rebel CC in After Effects.

Mobile Version on Vimeo

Adobe Flash on iPhone – Flash Pro CS5 can compile Flash movies for iPhone !

iphone flash

Finally, it was about time!!!! Now I might get an iPhone !

Adobe now makes it possible to create applications for the Apple iPhone using the Adobe Flash Platform. You heard right: We’re really excited to bring this new capability to Flash designers and developers—the ability to target the iPhone with ActionScript 3 projects. You will be able to test this functionality in the forthcoming beta release of Adobe Flash Professional CS5 on Adobe Labs.

via Developing for the iPhone with Adobe Flash

Flash Professional CS5 will enable you to build applications for iPhone and iPod touch using ActionScript 3. These applications can be delivered to iPhone and iPod touch users through the Apple App Store.*

  • Demonstration Video
  • Example Applications
  • Additional Resources

via Adobe Labs – Adobe Flash Professional CS5: Applications for iPhone.


Sign Up for Beta here : https://www.adobe.com/cfusion/mmform/index.cfm?name=fpcs5_notify

YouTube tips: how to send a YouTube link to a specified time

youtube-logo

It has happened to me several times: sharing a YouTube link with friends and telling them “hey, check out this video at 2min25, there is something really nice!”. It’s a bit of a pain in the ass though; most of the time they don’t even go straight to the right time.

Today I found a nice tip that lets you start the video at the second you want. All you have to do is add #t=numberOfSeconds at the end of the YouTube video URL!

For instance, let’s say I want to show you one of my shots on my reel, which is at 15 sec :

So the link should look like : http://www.youtube.com/watch?v=iouun1-D6NI#t=15