Blender VFX wish list features


EDIT: the project started at . Please donate and help the project.

I have been at the Blender Conference this year & had the chance to meet a lot of great guys, as well as talking with Ton about doing VFX with Blender. I attended the “VFX roundtable” talk… yes, I was the annoying guy who stole the microphone the entire time :) , and yet I didn’t have time to give my entire list there. So after talking about it with Ton, he asked me to send him an email of it, which I did.

I tried to make it as detailed as I could, and also to not make it too personal but rather keep it a list of general, basic needs.

Andrew, Sebastian & Jonathan asked me to post that mail on my blog to share it with the community, so here it comes…


When opening this kind of topic I know that a lot of Blender users like to throw stones at the guy writing it :) , so here are some facts before starting. I’m working on an everyday basis with After Effects and sometimes I use Nuke. I’m a TD Artist for a video game company working on offline & realtime production, so my job is to design pipelines and workflows, or new tools, to make a production more efficient and comfortable for the artists (which means I’m used to working with developers a lot, and I know how much of a pain in the ass I can be sometimes… trust me :) ). On the side I’m also doing matchmoving for a living, as well as VFX compositing & some color grading too. And these days I’m also developing scripts & filters for After Effects which are distributed over at (mostly stuff to enhance workflow -sometimes inspired by Nuke I must say :p-).

All this doesn’t mean that I know stuff better than anyone else at ALL !! I just try, or I’d like to try, to use Blender as much as I can in my freelance time, and sometimes I just can’t because basic features are missing, and that’s what I’m going to point out here (or at least try).

I do understand that we don’t want to make a copy of AE or Nuke in Blender, and I agree that this would be totally wrong. But since they are leaders in this field and I know them pretty well, I will mention some of their features or workflows as illustrations & ideas. So you will have to take my words as “something like that…” rather than “it has to look like that!”. Note that most of the features I’m going to talk about are present in all compositing software, no matter if it is node or layer based.

So you wanna do VFX with Blender?

One thought we had with Sebastian at the Blender Conference, which I believe illustrates all those wishes pretty well, is “Try to do a VFX tutorial for Nuke or AE using Blender, and if you can’t do it, it means you are missing the tool”.

A great website for this is ! And I’m NOT talking about the tutorials with third-party plugins but the ones that actually use the basic provided tools. (I’ll mention some of them later on.)

VideoCopilot’s tutorials are great in that they are very professional looking, accessible to anyone in the field, and cover what you would do on an everyday basis as a VFX artist. And usually you would be able to do those tutorials with any compositing software as long as you understand the logic behind it. Tools sometimes look different, but they are all there in all the programs (except Blender for now, and that’s what I’d like to change).

I don’t need evidence that Blender could do the same with some tricks and workarounds. I’m always looking at an efficient, production-ready approach to the problem. For instance, the RotoBezier addon came out recently, and while this is a great and smart trick, to me it is just a workaround in the meantime, not at all production ready, for several reasons that I could mention if you asked me :) .

Ok, so now let’s dive into the list!

VFX Feature list for Blender

Color Management

As of today the CM (color management) is not working very well and is too hidden, IMO.

I did talk about it with Matt Ebb a lot. Even if we have different opinions about it, I know he has good ideas and great libraries in mind to use, but not so much time to work on it.

His philosophy is to have it completely hidden in the background, doing all the work for you, which I deeply disagree with (but again, just my opinion). CM is good for having LW (linear workflow), and LW is important, but CM is also important for color profiles and color models, and you really have to know in which space you are currently viewing stuff (linear, sRGB, …).

Today I can never tell whether my viewer is showing me linear or sRGB, and the same goes for the color picker.

In Nuke you always have an option on inputs, outputs or scopes to choose which color space you want to look at (see screenshot), so you can have complete control over it.

So having it hidden and automatic is OK, as long as I can override it anytime I want to!
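To make the linear-vs-sRGB point concrete, here is a minimal sketch of the sRGB transfer functions (per the IEC 61966-2-1 definition). This is just an illustration of why the viewer should say which encoding it shows, not a claim about Blender’s internals:

```python
# Sketch of the sRGB <-> linear transfer functions (IEC 61966-2-1),
# illustrating why it matters which space the viewer is showing.
def srgb_to_linear(v):
    """Decode an sRGB-encoded value (0..1) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode a linear-light value (0..1) for an sRGB display."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# The same pixel value means a very different brightness in each space:
mid = 0.5
print(round(srgb_to_linear(mid), 4))   # ~0.214: as sRGB, 0.5 is a dark linear value
print(round(linear_to_srgb(mid), 4))   # ~0.7354: as linear, 0.5 displays much brighter
```

The same 0.5 pixel is a fairly dark tone if the buffer is sRGB-encoded but a bright one if it is linear, which is exactly why I want the viewer and color picker to tell me which one they are using.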

color profile selection in Nuke's reader (input)

color profile selection for Nuke's viewer

And as I mentioned at the roundtable, you don’t always need a linear workflow (and that’s where Matt is going to disagree with me :) ); for instance it is usually more of a pain in the ass for color grading than anything else. And I have been talking with some professional colorists who told me their big color grading workstations work in non-linear (e.g. DaVinci Resolve). Which, don’t get me wrong, can work with linear too, but if you take something like the Color Balance node which Matt and I developed, the formula I used works better in non-linear. It is the same formula used in Magic Bullet Colorista, and very close to what color grading suites use, by the way. So LW: great for compositing shots and playing with render passes; for video editing & final color grading, not so much.
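For illustration, here is one common lift/gamma/gain formulation sketched in Python. I’m not claiming this is the exact formula of the Color Balance node; it just shows the kind of math where feeding it gamma-encoded versus linear values changes the feel of the grade:

```python
# One common lift/gamma/gain ("three-way") formulation, as a sketch.
# Not necessarily the exact Color Balance node formula; whether it runs on
# gamma-encoded or linear values changes the result, which is the point above.
def lift_gamma_gain(v, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a three-way grade to a single channel value in 0..1."""
    v = gain * (v + lift * (1.0 - v))   # lift raises blacks, gain scales whites
    v = max(v, 0.0)
    return v ** (1.0 / gamma)           # gamma bends the midtones

# Neutral settings leave the value untouched:
print(round(lift_gamma_gain(0.25), 4))            # 0.25
# A small lift affects the shadows far more than the highlights:
print(round(lift_gamma_gain(0.0, lift=0.1), 4))   # 0.1
print(round(lift_gamma_gain(0.9, lift=0.1), 4))   # 0.91
```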

EXR Full Support

Support for reading any custom passes :

At this point Blender cannot read any passes that it doesn’t support or create itself. So when I’m working with external clients who work with other pieces of software, I cannot read their passes (except the standard kinds: diffuse, spec…).

In any node-based compositor I know, there is a special node called “Shuffle” (or whatever) which separates the passes. The reader (input) just gives the default pass (beauty), while the Shuffle does the job of selecting a specific pass.

Shuffle node in Nuke
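To illustrate what a Shuffle does conceptually, here is a toy sketch; the pass and channel names are made up, and a real implementation would read them from the EXR header:

```python
# A toy "Shuffle": pick which named channels of a multi-pass image
# end up on the R, G, B outputs. Pass names here are invented; a real
# EXR would carry layers like "diffuse.R" or "specular.G".
def shuffle(channels, mapping):
    """channels: dict of channel name -> pixel list; mapping: output -> channel name."""
    return {out: channels[src] for out, src in mapping.items()}

image = {
    "beauty.R": [0.9, 0.5], "beauty.G": [0.8, 0.4], "beauty.B": [0.7, 0.3],
    "specular.R": [0.2, 0.1], "specular.G": [0.2, 0.0], "specular.B": [0.1, 0.0],
}
# Route the specular pass onto the RGB outputs, like selecting it in a Shuffle node:
spec = shuffle(image, {"R": "specular.R", "G": "specular.G", "B": "specular.B"})
print(spec["R"])  # [0.2, 0.1]
```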

Support for writing custom passes :

Right now I have to create a new RenderLayer with a material override and save it to another file, when actually what I’d like to do is create a new pass, name it, and assign a material override to this pass, all within my default render layer. (This part is not just about EXR but also rendering related.)

Support for reading/writing attributes :

You can read or write attributes in EXR: a kind of metadata which can be very useful, for instance to pass data to your compositor such as the actual position of the camera or the distortion values, anything you can imagine that could be useful (not as important as the two above, but again, full support is better :) ).

Expression Py

I am missing that so much!!!
Basically it means that for each parameter you have, you can set a script. In AE it is called an “Expression”, as it is in Nuke by the way.
The idea is to right-click on a parameter and then select “set expression”. Expressions can link to everything in Blender: other parameters, maths, … so in the end you can do some really crazy stuff like constraints, operations or conditions…
We are using that ALL THE TIME.
In Nuke you also have an Expression node which can take several inputs, and then you can write an expression per channel. It becomes almost an image-processing, per-pixel kind of node. Very very powerful :) .

Nuke's Expression node looks like that, but being able to attach an expression to any input like in AE sounds more powerful to me!
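A rough sketch of what a per-parameter expression could look like. The parameter names and the eval-based approach here are purely hypothetical, not an actual Blender API; it just shows the idea of a value driven by other values and the current frame:

```python
# Minimal sketch of a per-parameter "expression": the parameter's value is
# computed by evaluating a Python snippet against other parameters.
# The names used here are hypothetical, not Blender's actual RNA paths.
import math

def eval_expression(expr, params, frame):
    """Evaluate expr with access to other params, math and the current frame."""
    namespace = {"math": math, "frame": frame, **params}
    return eval(expr, {"__builtins__": {}}, namespace)

params = {"scale": 2.0, "offset": 10.0}
# Drive a node's X position from another param plus an oscillation over time:
expr = "offset + scale * math.sin(frame / 10.0)"
for f in (0, 5):
    print(round(eval_expression(expr, params, f), 3))  # 10.0, then 10.959
```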

Enhanced Group

Groups are actually not that bad in Blender, but a bit too rigid !

  • One thing I’d like to be possible is to choose the inputs I will have in my final collapsed group UI. Today doing that is a bit hard and not very flexible. Plus you cannot add special UI inputs which might not be a node socket but just a value.
  • I’d like to keep some groups expanded while still being able to work with nodes which are not in the group. Right now if a group is open you can only work on that one.

A nice recent approach on this topic:

SDK, Py, OpenFX

I did develop the Color Balance node in the compositor with Matt. Actually I must say Matt did most of the job, just because I couldn’t understand the way node UIs are made in C. First because I suck at C, secondly because there were so many files to change that I was totally lost (register, type, processing, name, RNA, DNA… all those different files). So while I understood pretty well the C part where the pixels get processed, all the UI was a nightmare for me, and Matt handled it.

I can only imagine how many people get stuck by that, while it could be very easy to do some pixel processing just by typing a few lines of code. Several solutions for that:

  1. A well documented SDK (or “BDK” ^^), which as a bad coder I usually hate (too C-related for me), but good for developers who want to make plugins
  2. Python scripts/expressions: being able to control an input via a Python script, or even better a full “Expression” system as mentioned above
  3. Being able to load/write GLSL code for processing? I discovered Pixel Bender for After Effects; it’s a kind of GLSL-like language for processing images in AE. VERY easy to code, VERY powerful, with the possibility to declare user UI inputs in the shader which AE turns into an interface! This is perfect for a crappy coder like me and so great in a “production” environment: you can prototype things or solve a problem in no time.
  4. OpenFX, this is my personal favorite. I have a full thread about it on my blog! Ramen did implement it, and it even ported some of Blender’s nodes to OFX already. From a developer’s standpoint it means he can create a plugin for Blender and be sure it will also work in any other software that supports OFX (Nuke, Shake, …).

But more importantly, for somebody like me it means I can buy a plugin like Keylight (for green screen keying, because let’s be honest, Blender’s keying tools are a bit old) and use it directly with Blender, without needing to buy After Effects or Nuke to use it. PLEASE read my blog post about it, which clarifies the point:

-> I’m doing some filters for After Effects ( , and mostly because I cannot do it with Blender; it’s way too complicated to achieve the same thing!!
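To show what the GLSL / Pixel Bender option amounts to, here is the kind of per-pixel kernel such a plugin boils down to, sketched in Python (a real one would run on the GPU, and the `amount` parameter would be exposed as a UI slider; the saturation example itself is just an assumption for illustration):

```python
# What a GLSL / Pixel Bender-style kernel boils down to: a small function
# run on every pixel, with user parameters exposed as UI inputs.
# Here, a saturation control using Rec.709 luma weights.
def saturate(pixel, amount):
    """pixel: (r, g, b) floats; amount: 0 = grayscale, 1 = unchanged."""
    r, g, b = pixel
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(luma + amount * (c - luma) for c in (r, g, b))

def apply_kernel(image, kernel, **params):
    """Run the kernel over every pixel, like the host would."""
    return [kernel(px, **params) for px in image]

image = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
result = apply_kernel(image, saturate, amount=0.0)
print(result[0])  # pure red collapsed to its luma: ~(0.2126, 0.2126, 0.2126)
```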

UI Design

Ok, I plead guilty on this one, it is more of a personal wish than anything VFX related. At least I get a point for honesty?

  • Straight lines!!! As Pablo called them, the “spaghetti lines”. This is so rigid and messy at the same time. It makes you work only horizontally and from left to right; you cannot redirect lines like you want. I hate it! But maybe it is just me! Take a look at one of my comps here: ! (Yeah I know, if I had “expressions” I wouldn’t have so many nodes but… hey… you get my point :) ) As I recall, Digital Fusion draws straight lines with angles by itself, and I hate that too. Nuke (see screenshot) does straight lines left to right, right to left, top to bottom and so on, which you can break by pressing CTRL and adding a “dot” on the line. That is what I call flexibility!!!
  • Plugging lines into dot sockets is a bit old and rigid too. In the example below (same in Fusion), each node would have an arrow for each input it can take (with the name on the arrow). That makes the layout more flexible (it goes together with the point above).
  • Do not put controls in the nodes; keep them in the panel (floating or dockable). Blender kind of has both now, but I’d like the option that, for instance, double-clicking on a node makes the panel show up.

Break the wall between compositor & 3D view

Today every compositor has some simple 3D capability. Blender has both features, but it is impossible to use them together. Why would you need 3D in compositing? (best reason here:

Let’s say I have this render of a street with 3D and matte-painting stuff, and I want to put a guy shot against green screen into this street. This is what I would have to do in Blender to get the shot done:

  • import the green screen footage into the compositor
  • key it
  • render it
  • place a 3D plane into my scene
  • add a material
  • create a texture with the rendered keyed green screen footage
  • apply it with UV stuff
  • probably render those planes on separate passes/layers from the environment, in case I want to do some more fx/compositing on top of it
  • import that result back into the compositor
  • do all the compositing magic

Do you see the problem here? So many steps, so many in-between renders… when actually all I wanted was pretty much to place those keyed planes/billboards in 3D space with a bit of orientation, getting the camera data from the 3D scene!! Without in-between rendering!

Also, nowadays we see more and more of what we call “deep image compositing”, which basically uses a point pass (or position pass) to generate a 3D point cloud. It helps with all kinds of things (preserving edges, getting geometry information, occluding, relighting, …).
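A minimal sketch of the position-pass idea; the data below is invented, and in practice the pass would arrive from the renderer as EXR channels:

```python
# Sketch of the "position pass" idea behind deep-image-style tricks:
# each pixel stores the world-space XYZ of the surface it sees, so the
# compositor can rebuild a point cloud and, for example, cull by depth.
# The values here are made up for illustration.
position_pass = [
    [(0.0, 0.0, 5.0), (1.0, 0.0, 5.2)],
    [(0.0, 1.0, 9.8), (1.0, 1.0, 12.0)],
]

def point_cloud(pass_rows, max_depth=None):
    """Flatten the pass to points, optionally keeping only those near the camera."""
    pts = [p for row in pass_rows for p in row]
    if max_depth is not None:
        pts = [p for p in pts if p[2] <= max_depth]  # z as distance along the view axis
    return pts

print(len(point_cloud(position_pass)))             # 4 points
print(point_cloud(position_pass, max_depth=10.0))  # the far pixel is dropped
```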


Real Camera

Lenses, camera sensor size, distortion, … all the good stuff :) (Matt already started some work on that).

Custom Passes

As I mentioned earlier, being able to create custom passes to which we can assign materials. Pretty much like the material override function, but per pass. And also some additional passes by default, like a point pass (world and object) and a normal pass (world, object & camera).

Masks (roto)

I still can’t even imagine how we could do compositing without masks. If there is one feature above all that I’m ALWAYS using in ALL my comps, it’s masks. And to be honest, not having it is one of the main reasons I’m not using Blender as my primary compositing tool today.

I know someone talked about some fancy/geeky/gadget mask tool he would love to have in Blender instead of a simple/basic vector shape with handles. My opinion on that? Keep it basic at first! Simple tools are always good to have first. And even if it might be tedious to use, at least we know it works in all cases.
In case you really want to go with the geeky automatic paint / rotobrush tool, take a look at the GraphCut & SnapCut algorithms. That’s a paper from SIGGRAPH which is used in AE, now known as “Roto Brush”. Which is a great tool by the way, but from my experience, not yet better than a good old vector shape :)
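For the simple/basic vector shape case, the core is just rasterizing a closed shape into a matte. A bare-bones sketch with a polygon (real roto tools add bezier handles, feather and motion blur on top of this):

```python
# The "simple vector shape" mask reduces to rasterizing a closed polygon
# into a matte: for each pixel, is its center inside the shape?
def inside(x, y, poly):
    """Even-odd point-in-polygon test."""
    hit = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

def rasterize(poly, w, h):
    """Sample each pixel center into a 0/1 coverage matte."""
    return [[1.0 if inside(px + 0.5, py + 0.5, poly) else 0.0 for px in range(w)]
            for py in range(h)]

square = [(1.0, 1.0), (4.0, 1.0), (4.0, 4.0), (1.0, 4.0)]
matte = rasterize(square, 5, 5)
print(matte[2])  # middle row: [0.0, 1.0, 1.0, 1.0, 0.0]
```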

Image Cache / Dynamic Proxy / Pre-cache preview

VERY IMPORTANT!!! You’ll never be able to do good VFX compositing without it.

DJV does that pretty well actually. You can render the sequence into RAM to play the preview in realtime (again, I still don’t understand how we survive without it).

Downgrading the viewer (full, half, quarter… resolution) to save some memory or fit more frames into RAM would be great too.

Press a button > render to RAM > play it !
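The “render to RAM” idea can be sketched as a small frame cache with a memory budget; `render_frame` here is just a stand-in for the compositor, and the eviction policy is an assumption for illustration:

```python
# "Press a button > render to RAM > play it", sketched as a tiny frame
# cache with a memory budget. render_frame() stands in for the compositor.
from collections import OrderedDict

def render_frame(f):
    """Placeholder for an expensive per-frame composite."""
    return bytes([f % 256]) * 16  # 16-byte fake frame

class FrameCache:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.frames = OrderedDict()

    def get(self, f):
        if f not in self.frames:
            self.frames[f] = render_frame(f)
            while sum(len(v) for v in self.frames.values()) > self.budget:
                self.frames.popitem(last=False)  # evict the oldest frame
        return self.frames[f]

cache = FrameCache(budget_bytes=64)  # room for 4 fake frames
for f in range(6):                   # "render to RAM"
    cache.get(f)
print(sorted(cache.frames))          # oldest frames evicted: [2, 3, 4, 5]
```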

Interact with Elements in the viewer

Being able to select one of the “layers/nodes” and rotate it, scale it, move it… directly in the viewer (like you would do in Photoshop for instance; and yes, of course this is possible with node-based tools too :) ).

2D Tracker

I’m a matchmover, so trust me, I know how cool it would be to have a camera tracker in Blender. I’m following the libmv project very closely and talk a lot with Pierre Moulon, one of the active developers on the project (since Keir is kind of busy these days).

But the fact is, I don’t really need a 3D camera solver. First, it would take a long time to get something which works great and is production ready. As of today I have SynthEyes, which is not expensive, and which I can use.

BUT having a simple 2D tracker in the compositor, to do things like this for instance: , that would be PERFECT!

I did talk about it with the libmv team and they would be glad to help, but they would need somebody experienced with Blender’s C code to implement it quickly. They asked me for a proposal a year ago, which is here:
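The heart of such a 2D tracker is small: grab a reference patch around the feature and search the next frame for the best match. A brute-force sketch using sum of squared differences (real trackers like libmv add sub-pixel refinement and better matching on top):

```python
# Core of a basic 2D point tracker: take a small reference patch around
# the feature, then find the offset in the next frame whose pixels match
# it best, here by minimal sum of squared differences (SSD).
def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def patch(img, x, y, size):
    return [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]

def track(prev, curr, x, y, size=2, radius=2):
    """Return the feature's best (x, y) in curr, searched around (x, y)."""
    ref = patch(prev, x, y, size)
    candidates = ((patch(curr, x + dx, y + dy, size), x + dx, y + dy)
                  for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1))
    _, bx, by = min((ssd(ref, p), px, py) for p, px, py in candidates)
    return bx, by

# A bright 2x2 blob at (2, 2) moves to (3, 2) in the next frame:
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for dy in range(2):
    for dx in range(2):
        prev[2 + dy][2 + dx] = 9
        curr[2 + dy][3 + dx] = 9
print(track(prev, curr, 2, 2))  # (3, 2)
```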


It’s not everything, but it is quite a lot for now. Some of those requests are more important than others, and I would be glad to discuss this topic with anybody.

One tutorial that sums up all those features is this one: . Bring the missing tools which will let you do that, and you’ll be much closer to being a “production ready” VFX tool… IMO :) .

I would also encourage you to take a look at Nuke PLE (Personal Learning Edition); it’s free and complete (only renders are watermarked).
Also take a look at the work done on Ramen VFX; Esteban did some great things there.
And finally, make your favorite website and watch a lot of breakdowns :)

Nice VFX breakdowns to watch:

[Bonus] Hair Simulation

This is not directly VFX related, but as I understand it, you had trouble with hair simulation on Sintel. I did talk about it with Daniel Genrich at the time, because I think he is the dynamics guru who could nail this kind of thing. So I gave him a few papers to look at, which he thought were very interesting and not very hard to implement, but as I recall it was too late to do it for that project, and you guys didn’t have enough time or couldn’t take the risk to test it. But I guess it is not too late for the next projects.

When I was working on face tracking/recognition at Ubisoft, I met someone (Basile Audoly) working on mathematically modeling the dynamics of hair. He gave us a demo which was very impressive. And if I’m not mistaken, he has been working for L’Oréal research since then :)

His program was able to model curly, straight, dry, wet, growing… hair, but also dynamic cutting, all those features modeled in his research. I have personally seen his demo, and I have never seen something so real; see for yourself in the video!

Here are the papers and videos :

Super-Helices for Predicting the Dynamics of Natural Hair

50 thoughts on “Blender VFX wish list features”

  1. These are wonderful additions! I work in VFX and have heard about Blender for years.

    If these are implemented — I might actually try it!


  2. Really great ideas you have there!

    As for the straight lines you can do this by User Preferences> Themes> Node editor> Noodle curving: 0

    It's really exciting to see the direction Blender is being developed in!


  3. All true, I hope these wishes are heard. Interestingly I use Fusion 6 myself; it’s a bit higher-level software than Nuke/Shake (read: more functionality packed into each node; complex flows are less readable but faster to build), which is both a limit and an advantage. At least Fusion’s custom tool seems more readable than the Nuke expression example there (probably my lack of experience with Nuke; doesn’t it really allow scripting in all numerical fields?).

    Naturally proper basic functionality is more important than flashy stuff, but then I already have tools with basic (and quite advanced) functionality so I don’t mind using Blender only for flashy stuff (like smoke). I somehow doubt this holds true for most of Blender’s userbase so it would make sense to concentrate on most needed, most basic tools for greater good. But I also think that many users are not even aware that such tools are missing or lacking.

    If the next Blender movie is a VFX one, I hope they will hire someone with professional VFX credits for the lead position.

  4. Nuke is linearised 32bit RGBA internally.

    The Read node colorspaces are there to tell Nuke how to interpret the incoming source you load in the Read node, so it knows how to transform it to linear RGB for internal consumption.

    Once you have an 'ungammed', 'standardised', 'neutralised', i.e. linearised version of each source you input (and they may well vary, e.g. mixing ITU709, LOG space, RED, whatever), they all live in one common internal linearised space.

    You then view that internal space how you want through a selection of 'View LUT's', sRGB, REC709 whatever depending on project deliverables. You control final colourspace output in the same way via the Write Node. That is how Nuke works. Layers of abstraction.

    DaVinci Resolve is linearised 32bit YRGB internally. I think you misinterpret the 'non linear' term: Resolve is non-linear in the NLE sense, you can work on any source out of linear sequence. Nothing to do with a gamma-encoded workspace. Resolve uses a linearised colourspace behind the scenes.

    There's good reason why the common internal workspace is linearised.

    Transforming between colourspaces is far more accurate and easier going from linear to gamma to linear, rather than gamma-encoded to gamma-encoded, which will skew hues; the same goes for scaling, colour processing, whatever, all better handled in linear colour space, as well as blending, compositing etc.

    1. Ok, so I probably wasn't clear on that, my bad. I do know that most engines dealing with math work in linear space internally, but as you mentioned this is behind the scenes, and I'm always talking about the front-end user. As a user, I wouldn't give a sh*t how you do your math internally, as long as I know what I'm viewing in my output viewer, through a viewer LUT for instance, or at least something which tells me which output I'm currently viewing. There are really two different issues here, and sorry if I wasn't clear enough on that.

      As for Resolve, I never had the chance to check it myself; I only have Stu's word on that matter. Maybe we misunderstood each other, or maybe he didn't give me good information, but again, I think the reasons are plausible.

      04/08/10 08:18
      @francoisgfx : hi Stu, just wondering if LGG is not meant to work in linear, what does use Davinci ? CDL ? something else ? thx
      @5tu : None of those systems use linear.
      @francoisgfx : you mean the formulas or davinci ? I guess the formulas :p. Do you know where I could find good papers/materials on those math/color stuff?
      @5tu : The formulas and the davinci. If you want to see a linear 3-way, you have to look at Magic Bullet Looks.
      @francoisgfx : ok thx for the tips ! I thought colorists were always trying to work in linear space ! good to know then
      @5tu : Nope, just VFX artists.
      @francoisgfx : makes since after all! needed to recomp passes together, but for final grading as far as it looks as you want who cares. thx a lot Stu.

      1. Hi, I've been thinking about your comment on struggling with the blacks when grading in linear in Blender.

        Is it just Blender that presents the problem? And is that with images from a camera (i.e. an sRGB gamma curve) or HD video (i.e. an ITU 601/709 gamma curve)? They are different.

        Blender applies simple maths to try to flatten the curve, applying an inverse sRGB, but that's not what is recorded with video.

        Also, in a 32bit float environment, negative values below 0 on the abstract scale should be accommodated and not clipped, same for levels over 1; it should be a sliding scale allowing free movement of values in and out of the 0-1 range.

        What if Blender is putting your blacks so low that there is little effect on them, and they are getting clipped off as they go negative and back again?

        I'm no expert and probably got the whole thing mixed up :-) just shaking the tree to see what falls out :-)

        1. Yeah, I guess that could be one of the reasons. Still, there is a difference between 32 bits & linear though, so that's another issue if that is the case. If you ask me, values should never be clamped. Let the user clamp them if he wishes, and if he gets weird blending between inputs, then teach him to clamp before blending, but don't do it for him.
          I do hope that is the way it works now.

  5. I carefully read your post, and I agree with you in every part.
    You said that Shake strongly influenced the Blender compositor… maybe it's true, but IMHO it's 1000 light years away from that piece of software (just to reply to a couple of comments) :)

    I'd like to go further in a couple of your proposal:
    you've spoken about caching and… what about adding a sort of cache node which stores part of the compositing branch on disk (as a sequence of files)?
    I think it could be very useful in speeding up the compositing process, and it would also enable the use of render layers without them having to be substituted by a file input any more.

    My biggest concern with the Blender compositor is its usability:
    the old-style left-to-right flow should be changed to a more flexible one, and commands shouldn't be inside the node any more. I think the active node panel (in the properties) should become a full properties panel, complete with the in/out description and the ability to add extra data like in any other part of Blender (custom properties). I've seen I can add a driver, but it seems it's not working (at least until now), and node names become unmanageable when the project starts growing.

    Mine could be an insane idea, but I think the way to break the wall between the 2D and 3D compositor starts when you stop imagining compositing at the end of the pipeline and see it just as part of the possibilities you have in your 3D suite.
    Imagine this:
    you want to import 2D green-screen footage into your project where you have 3D elements and other stuff…
    well, the 2D layer should directly become a 3D shadeless plane placed in front of the camera in the 3D environment, and you should be able to use all the compositing nodes on the texture (the 2D green-screen clip), and then use the "camera output" as a new layer of a top composition… (it could be seen as a sort of nesting).
    In this way you could have infinite branches and composite in a complete 3D environment, mixing the best of each "workspace" (meaning 2D and 3D).
    And cache nodes would be a big plus to reduce processing time, with the ability to change everything at any time without moving between thousands of different project files.

    Hope I've been clear,

  6. I love the fact that the compositor is being looked at, as it is such an integral tool for many people (who cannot afford to purchase AE or others). I am with those who want the basics fixed first before adding features, although GPU compositing will be AWESOME.

    I would like to see some cosmetic fixes, such as grid snapping, or other snaps to make it look cleaner and organized. Also little things like a more defined "handle" for nodes so when you grab them you don't close settings or change something.

    I would like to see also a plug-in node for people to add in scripts for other processing, with a basic outline for a UI (input, output, settings/values), this would allow people to write their own or send an image to another program such as imagemagick for inline FX.

    One thing that would be a little more adventurous would be to have a scale/translate/rotate node integrated into the inputs or a separate node that would allow you to use the G, R, S commands on the backdrop image for more efficient placement of layers. (more like a photo editor).

    Cannot wait to see what transpires from all the hard work!

  7. In my opinion, one of the compositor's biggest weaknesses is the fact that there is no infinite workspace!
    If you use transformation nodes like scale, move or rotate, everything that moves out of the picture gets lost.

  8. One more: about color management, although it's not perfect yet and we don't have screen correction, it's a good compromise to have sRGB instead of nothing at all.
    Lighting is much more efficient with 2.5 and color management than before, and you can deactivate CM if you don't want to use it.
    Of course when it's on there are some precautions to take (I found out about grading and color management the hard way… :-p), but I think it's better to have it like it is now than not to have it.

  9. Good points. I think the most important thing to do ASAP is to blur the boundary between the compositor and 3D space. What we need is to be able to use realtime (or something near that) elements instead of relying on the render output.
    I'd love to see graphic elements and imported footage in 3D space as planes, and be able to use them as compositing inputs.
    ZanQdo has a great idea that I think is worth discussing: the possibility to assign a "no-render" flag to RLs, so that what the camera sees in 3D space goes to the compositor without needing to be rendered at final resolution, giving a fast compositor with the ability to navigate the timeline without re-rendering what is in 3D space.
    Of course, saying this from the place of a mere user makes it look like one thinks it's a trivial feature to implement. I know it's a deep change and it's not as easy as just saying it. :-)

    Other priority things are, in my opinion, making the premul/unpremul procedures as transparent as possible to the user (likewise the linear/log conversions) and multiplexing the sockets into a single line to simplify the composites.

    And speaking of premultiplied alpha and color management: Blender has an ugly bug right now: when you choose premultiplied alpha for your composite, the last node applies gamma correction to the image data with the premultiplied alpha together, which is clearly a bug. Unless you unpremultiply the output before the composite node, the resulting images will be wrong making it impossible to use them in any other software package (and even the same Blender's VSE)

    1. After Effects has premult/unpremult transparent, and in my last production it was a nightmare: when you want to play with some compositing and do some advanced stuff (I don't have an example in mind right now), you sometimes really need to premult or unpremult by yourself, especially with node-based tools when you have a whole pipe which needs to work premultiplied and then, at some output or mix, unpremultiply again. We had to find some crazy tricks to bypass or force it in AE. It was a real nightmare!
      I would prefer to teach people how to use a premult/unpremult node the right way rather than hiding it. But maybe it is just me.

  10. Hi Francois!
    Great and really interesting article. As you suggested, I tried to recreate a tutorial from VideoCopilot. Here you can see the final result -> .
    And here you can see a tutorial about it -> .
    For now I have not encountered great difficulties, maybe because I chose a very simple tutorial without a real video shot to edit. Maybe I'll try something more complicated.
    In this case, however, the great lack that I found was the RAM preview. Especially for the animation of the light rays and the blur effect, I had to do everything "by feeling", without a real preview in realtime.
    Thank you again for your detailed article!

  11. This isn't an area where I have much knowledge. Having said that I'll now go on to share my ignorance.

    If Blender is switching colour spaces automatically then I think it's reasonable to indicate this to the user. I don't see an issue with putting the colour space next to the vertex count or similar.

    It's funny that you bring up Magic Bullet Colorista when talking about grading in non-linear. Stu Maschwitz, the man behind it, is a strong proponent of working and grading in linear space.

    1. yeah… about that :

      04/08/10 08:18
      @francoisgfx : hi Stu, just wondering if LGG is not meant to work in linear, what does use Davinci ? CDL ? something else ? thx
      @5tu : None of those systems use linear.
      @francoisgfx : you mean the formulas or davinci ? I guess the formulas :p. Do you know where I could find good papers/materials on those math/color stuff?
      @5tu : The formulas and the davinci. If you want to see a linear 3-way, you have to look at Magic Bullet Looks.
      @francoisgfx : ok thx for the tips ! I thought colorists were always trying to work in linear space ! good to know then
      @5tu : Nope, just VFX artists.
      @francoisgfx : makes sense after all! needed to recomp passes together, but for final grading as far as it looks as you want who cares. thx a lot Stu.

      again, I might not have been clear in my intention in regard to linear. I do know the importance of linear! In my every day work, I try to work as much as possible in 32 bits w/ linear space in After Effects, and push people around me to do it. I'm not against it at all! I even developed a filter called ft-Clamp for After Effects to make my life easier and fix issues when working in those spaces (you cannot work linear w/ 8 bits :p).
      But you also need to understand what the use and need of linear really is, and yes it is better to always work w/ linear, but for some stuff it is not THAT important! That's all I'm saying. And until we get a full implementation and issues solved like the ones Matt mentioned, to me CM is not suitable for at least some of the tasks in Blender.
      As for Colorista, try to do a grading with it in linear in AE, you'll see ;) !
      Plus, my guess? Not so many people are using 32-bit comps in AE, so even fewer are using linear! You know why? Because if they were, I bet they would have asked for a Clamp filter like the one I developed a few months ago much, much sooner. But that is not an argument at all, it doesn't mean we shouldn't change things, and actually they should change! Plus one of the main reasons they probably don't use it in AE is that it's too hidden, or too complicated to deal with, and there is no good information about it. Or maybe not "Internal" enough, like Matt would like to do for Blender. But just to say, many people are talking about it these days and say they absolutely need it all the time, everywhere, without really knowing the benefits and also the pain in the ass it can sometimes be.
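      For readers wondering what the fuss is about: the difference between sRGB-encoded and linear values is just a transfer function, and mixing pixels in the wrong space skews the math. A minimal Python sketch (the piecewise constants come from the sRGB specification, not from any particular app):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded value (0..1) to scene-linear."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode a scene-linear value (0..1) back to sRGB."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Averaging a black and a white pixel in the wrong space gives a visibly
# different result than doing the same average on linear light:
mid_srgb   = (0.0 + 1.0) / 2                      # naive 8-bit style average
mid_linear = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
```

      This is why blurs, blends and exposure changes done on display-referred values look wrong: the math is fine, the space isn't.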


      1. I'm also a proponent of working in linear and good color management overall.
        I've been wanting to propose this:

        It is being used in a lot of big studios already and will be incorporated in Nuke in one of the next releases. It will make color management a lot easier.

        I fully agree on the rest of the notes, get the basics right!
        Fix the messy nodes.
        Get decent roto working. (including feather and motionblur!)
        Get a 2d tracker and means to link its values to other values!
        Fix the layering system for more flexibility. (I actually had to write a program to assign the first layer to RGBA as blender uses some weird naming convention.)

        I'm not usually someone to say look at xxxx and take it as an example.
        But I have to say that Nuke really is a great, great compositor and we should not be ashamed to take great features from Nuke and implement them in Blender in our own way.

        If I had more time (and knew C++ instead of Python) I'd help out, but I have to work and I use the tools that get the job done; sadly that isn't Blender at this time, at least not for compositing.

        btw I am thinking of implementing a plugin for Blender so it can deal with Alembic. If done right it will give Blender a much stronger footing as a VFX pipeline tool everywhere.

  12. * One important thing, which has been a failure so far in Blender's compositor, is to get rid of all the idiotic uncertainty regarding premultiplication. I can't tell you how much time I've wasted trying to figure out what needs to be pre-multiplied, what doesn't, re-rendering, and *still* getting edge artefacts. Some nodes seem to want premultiplied alpha, some don't, it's a complete mess and very frustrating to use – this should have been solved in the 90s.

    All channels in the compositor should be defined to be separated RGB/alpha channels (i.e. *not* premultiplied), and any inputs coming in should be converted (divided) to end up in that format. Having co-dependence between RGB and alpha channels contradicts the kind of flexibility you want: to be able to modify, swap, and process alpha channels individually at will.
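    To make the premultiplication point concrete, here is what the two conversions, and the 'over' merge they exist for, look like per pixel. A minimal Python sketch, not Blender code:

```python
def premultiply(r, g, b, a):
    """Multiply RGB by alpha: the form a Porter-Duff 'over' merge expects."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Divide RGB by alpha to get straight (separated) channels."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, a)   # nothing to recover where alpha is zero
    return (r / a, g / a, b / a, a)

def over(fg, bg):
    """Porter-Duff 'over' on premultiplied RGBA tuples: fg + bg * (1 - fg.alpha)."""
    k = 1.0 - fg[3]
    return tuple(f + b * k for f, b in zip(fg, bg))
```

    Color corrections want straight RGB (so transparent edges don't darken), while merges want premultiplied RGB; picking one canonical internal form and converting at the inputs removes the guesswork.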

    … and there are probably several other problems in Blender's compositor I can think of, but that'll do for now. In any case, before any fancy new stuff like tracking, *the basics need to be fixed first*. I can't emphasise this enough. Though given the recent history of Blender open project development, flashy new things take precedence over having usable basic day to day tools, so who knows…

    1. agreed!! I believe the things I mentioned are not "flashy" features, even if they are more user-facing, whereas you are pointing out low-level dev issues. Which at some point are connected too.

      Dude, seriously, get a job @ the Foundation :)

      1. Well, although I'm talking about low level stuff here I suppose because I'm familiar with it, the reason for bringing it up is as a user. I've been using Blender's compositor professionally full time for the majority of the period since it was developed in 2005, and these are the sorts of issues you run into in day to day production use.

        Anyway, IMO it's stupid to be adding in new stuff (like openCL, or brand new tracking features etc) when the basics are dysfunctional.

        1. I understand that and agree with it, but in my case and for what I'm doing every day, I would prefer to have a simple 2D tracker rather than an ocean sim. One is project sensitive, the other one is… pipeline/workflow/every-day-tool sensitive. At least to me :)
          All I'm saying is that there is shiny new stuff, and "new stuff".

          1. I totally agree, Blender has so many basic, logical, needed things to fix and implement, but people are asking for ocean sim, smoke and other advanced stuff. Ok, I have to admit I like that stuff too, but how often do I need those effects in comparison to bezier masks and simple animating of planes like in After Effects? There are of course more things I would want as an animator and designer.
            I think the problem is that Blender has so many amateur fans, who don't actually work with these things professionally and don't face real-life situations; they only want to see beautiful tests on YouTube: "wooou… what a great fire!… ooou, that looks so cool!"

  13. A few comments:

    I've been thinking about this for a while – many of these issues come down to structural limitations that have been in Blender's compositor since the beginning. It was designed during Elephants Dream in 2005 (influenced heavily by Shake), and hasn't changed much since then. The compositing landscape certainly has though.

    A lot could be improved by some changes internally (yes, to make it more like Nuke). Blender internally has a few kinds of buffer formats, from 1 to 4 channels (eg. RGBA), which are stored interleaved, eg. RGBARGBARGBA. These buffers are passed along individually as separate inputs and outputs. This is based on an assumption from a previous time in history where you were mostly concerned with rgba data, and not any extra channels, though nowadays those additional channels are far more useful.

    So this means:
    a) Any time you want to do things with more than simple RGBA (like a vector pass, normals, etc) it involves lots and lots of wires, tediously connecting them up. It makes simple things like vector blur etc quite annoying to work with.
    b) Because the channels are interleaved, there is no easy way to isolate/rearrange individual channels while you're working with them. Either the node itself has to have code specifically to handle such cases (eg. the invert node) or you have to create ridiculously complex node setups for basic tasks – eg. try blurring an alpha channel – it requires like 4 nodes.

    If the channels were stored side by side, not interleaved, it would be much easier to have 'shuffle' style functionality built in to every node, determining which channels the node will work on, and which channels it will output. It's just a matter of sending the right individual channels in for processing.
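    The interleaved-vs-planar distinction is easy to show in a few lines. A hypothetical Python/NumPy sketch, not Blender's actual buffer code:

```python
import numpy as np

h, w = 2, 3

# Interleaved: one (h, w, 4) block, laid out RGBARGBARGBA... in memory.
interleaved = np.zeros((h, w, 4), dtype=np.float32)

# Planar: one contiguous plane per channel, addressable by name.
planar = {c: np.zeros((h, w), dtype=np.float32) for c in "RGBA"}

def run_on_channels(buffers, channels, op):
    """Apply `op` only to the named planes: a 'shuffle' built into every node."""
    return {c: (op(p) if c in channels else p) for c, p in buffers.items()}

# Blurring just the alpha of an interleaved buffer means slicing the channel
# out, processing, and writing it back; with planar storage the node is simply
# handed the one plane it was asked to operate on:
result = run_on_channels(planar, {"A"}, lambda p: p + 1.0)  # stand-in for a blur
```

    With planar storage the "blur the alpha" case needs no special node code and no 4-node workaround, just a per-node channel mask.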

  14. * "Also know we can see more and more of what we call “Deep image compositing” which basically use a point pass (or position pass) and generate a 3d point cloud based on this pass."

    No, deep image compositing is a term used for compositing with deep image buffers – i.e. buffers that store more than one sample per pixel (like a deep shadow map). It's more than assigning a 3D coord/normal per pixel. I've seen demos from guys here in Sydney at AL and Dr. D doing very cool stuff with it, but it's a long way off anything useful in Blender.

    * About Blender's 'pass' system, it's pretty worthless IMO. It needs to be re-done to be much more dynamic, and without the awful hardcoded output variables in Blender's current crap materials.

    1. yeah I know, I summarized it very quickly, but somehow everybody understands it.
      A friend of mine showed me some stuff they were doing with it that I couldn't even understand, but that looked like the future of compositing. I don't even know exactly what kind of data was stored; the only one I recognized was this kind of point cloud stuff, but he also had some kind of full volumetric data, it was crazy. anyway…

  15. Additional to the channels, similar to Nuke, the buffers should be stored side by side too, so one input can contain RGB, alpha, normal, position (etc.) data all in one wire. Not only would this simplify the setups, but it would also help on a UI level. Currently, with all its little inputs and outputs, you really have to zoom in close to Blender's nodes to be able to work with them. It gets annoying, with lots of scrolling, and it's much harder to get a good overview of the network. A UI more like 99% of other node based apps, where the nodes are small and don't have lots of different inputs and outputs, is much more practical to work with.

    Metadata can come along with these singular inputs too, for nodes to interpret at will.

    * As for the compositor buffers, they need a few other changes too. They should store full 3d transformation matrices per buffer, which can be modified by translate/rotate/etc nodes, rather than actually processing and sampling the pixels themselves. The current transformation nodes in blender lose quality and speed quickly after multiple transformations, and are duplicating a lot of pixel sampling code. Really, rather than accessing buffers directly with pointers, all nodes should be using a consistent 'getpixel' interface, which only actually samples the image if necessary – i.e. if you have a blur node at the end of a chain of translate/rotate nodes, the blur node just samples the input image via the combined transformation matrix of the previous transformation nodes. Such sampling can use consistent code, with options for higher quality filtering than just re-implementations of the same old bilinear/bicubic interpolation.
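    The deferred-transform idea could be sketched like this: transform nodes only concatenate matrices, and the first node that actually needs pixels samples the source once through the combined matrix. A hypothetical Python sketch (nearest-neighbour sampling for brevity; a real node would filter here):

```python
import numpy as np

def translate(tx, ty):
    """3x3 homogeneous 2D translation matrix."""
    m = np.eye(3)
    m[0, 2], m[1, 2] = tx, ty
    return m

class Buffer:
    """An image plus the transform accumulated by upstream nodes."""
    def __init__(self, pixels):
        self.pixels = pixels
        self.xform = np.eye(3)        # identity until a transform node touches it

    def transformed(self, m):
        out = Buffer(self.pixels)     # no resampling here: just concatenate
        out.xform = m @ self.xform
        return out

    def getpixel(self, x, y):
        # One inverse-mapped sample through the combined matrix.
        sx, sy, _ = np.linalg.inv(self.xform) @ np.array([x, y, 1.0])
        h, w = self.pixels.shape[:2]
        ix, iy = int(round(sx)), int(round(sy))
        if 0 <= ix < w and 0 <= iy < h:
            return self.pixels[iy, ix]
        return 0.0                    # outside the source image
```

    Chaining two translations this way costs one matrix multiply and zero resampling passes, so the image is filtered exactly once however long the transform chain is.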

  16. Very good points. I don't share the one on UI design, though. I really like the current one, which is a very clean approach even though it is not as flexible as one could wish.

    Did you check Lukas Toenne's "all nodable" approach with particle nodes and stuff?

    Also even if you don't need a 3D tracker I'd really like one in Blender ;)

    1. thanks !!

      though I don't know why you would disagree with the UI design I mentioned. Especially for the lines. With my proposal, you would still be able to do the same layout as now if you want to. But it would also allow people who would like to work in another way to do so. Everybody would be happy!!! :)
      And if it's just about straight lines versus curved lines, well I guess an option should be possible for that :p

      But dude, seriously, there is nothing clean about the current one:
      - this is not clean :
      - this is (kind of) clean :

      and yet one of those is way more complicated than the other. Try to do it the other way around, and you'll see what I'm talking about :p

  17. Hi Francois, I think you are trying to force Blender to be software that it is not. I don't know what Ton has in mind, but Blender apparently attempts to be a 3D package (like Maya) with amazing compositing capabilities.
    I think what you are actually looking for is a compositor like Ramen.
    I think Blender is like After Effects: it can be used to do outstanding Motion Graphics, but not complex compositing (and with a compositor it is hard to do nice Motion Graphics).

    [I vote the Masks on Blender compositor!, physical Camera, Hair simulation, edition in compositor viewer. ;) ]
    When you talk about "create custom passes" I think you are talking about Renderman (RIBMosaic).

    My motivation to make this comment is the Unix Philosophy: Write programs that do one thing and do it well.

    I'm really glad to see efforts on the Blender VFX capabilities :) :)

    1. Unless I'm mistaken, the goal of the next movie is to make a VFX movie with Blender. So I don't think it's forcing it to be something it is not.
      Plus Blender stopped being just a 3D package a long time ago, when it started to do some editing with the sequencer and then the compositor. It just needs to be enhanced.
      I agree with you on the Ramen part, but as far as I'm concerned, the project is closed for now. There is an attempt to save it, but no big plans have been made on the development side, so I cannot rely on it for now.
      I believe Ton is looking to bring compositing and color grading capabilities to Blender.

      I guess you meant "Blender is like C4D" ? because After Effects is not a 3d package at all :)

    2. Blender does not follow the Unix Philosophy but is about *blending* everything you need for 3D animation into one package. And with the VFX project it will expand even more.

    3. I like that blender does more, but it does so through scripts and add-ons, so you can pick which features you want, and people can code features that "do one thing and do it well". It is nice to have a "blendvironment" to do everything in.

  18. hey,

    that's a really great article. I totally agree with all your wishes (but I think a good 3D tracker would also be really nice to have right inside of Blender, because I don't have SynthEyes at the moment). Unfortunately I'm not a developer. I've no idea how to code, so I can't blame anyone for not implementing these things.

    another nice feature would be tight integration with other software packages like Nuke, After Effects, Ramen HDR or even the newly released open source editing application Lightworks from EditShare.

    I know there are some scripts for AE but you always have to look around the web to find the newest one.

    best regards

    1. yeah I know :)
      Now The Foundry has integrated their new Camera Tracker plugin directly in Nuke (and it also works with AE), and I must say it is very nice to have it right there.
      But again, it is hard to put something like that together, and while the core algorithm will be implemented pretty fast I guess, all the workflow and pipeline of the tool needed to use it efficiently in production will take a long time.
      3D tracking is hard and it is not magic; it doesn't work out of the box, and it is not meant for everybody. Getting a handheld moving shot to solve properly is not an easy task, and can be very time consuming.
      2D tracking, on the other hand, is much easier, anybody can do it even for fun, and you will be able to make some very nice looking stuff with only 2D. Trust me!

      Look at those tutorials, all done with simple 2D trackers :
      - (this one uses mocha, but I assure you, it's possible to do the same with the built-in 2D tracker)

      Have fun :)
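      For the curious, the core of a basic 2D point tracker is little more than template matching inside a search window. A minimal NumPy sketch, nowhere near a production tracker:

```python
import numpy as np

def track(frame, template, prev_xy, search=8):
    """Find `template` in `frame` near prev_xy by minimising SSD.

    frame, template: 2D grayscale arrays; prev_xy: (x, y) top-left guess.
    Returns the new top-left position of the best match.
    """
    th, tw = template.shape
    px, py = prev_xy
    best, best_xy = np.inf, prev_xy
    for y in range(max(0, py - search), min(frame.shape[0] - th, py + search) + 1):
        for x in range(max(0, px - search), min(frame.shape[1] - tw, px + search) + 1):
            patch = frame[y:y + th, x:x + tw]
            ssd = np.sum((patch - template) ** 2)   # sum of squared differences
            if ssd < best:
                best, best_xy = ssd, (x, y)
    return best_xy
```

      Run per frame, feeding each result in as the next frame's guess, this is already enough to stabilise a shot or pin a corner; real trackers add sub-pixel refinement and template updating on top.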

  19. I love the suggestions. I think after 2.6 is released the key focus should be on getting the Google Summer of Code projects implemented, and after that all this stuff.

    * roto is already done by an addon
    * cache images to RAM! Really needed; it's the reason why you jam 8GB+ of RAM into a new computer.
    I also have no idea why this isn't there yet! It's really… weird. Could it be due to memory handling difficulties? It should be there. You need it for compositing, the VSE, and to have background image sequences in the 3D camera.
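    The RAM cache being asked for is, at its simplest, a bounded frame store with least-recently-used eviction. A hypothetical Python sketch (`load_frame` is a stand-in for whatever actually decodes an image, not a real Blender API):

```python
from collections import OrderedDict

class FrameCache:
    """Keep up to `capacity` decoded frames in RAM, evicting least-recently-used."""
    def __init__(self, load_frame, capacity=256):
        self.load_frame = load_frame       # callable: frame number -> image data
        self.capacity = capacity
        self._cache = OrderedDict()

    def get(self, frame):
        if frame in self._cache:
            self._cache.move_to_end(frame)   # mark as recently used
            return self._cache[frame]
        img = self.load_frame(frame)         # cache miss: decode for real
        self._cache[frame] = img
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # drop the oldest entry
        return img
```

    Scrubbing back and forth over a cached range then costs dictionary lookups instead of disk reads, which is exactly what a RAM preview needs.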

    * OpenFX
    I've downloaded the 1.2.1 SDK, with examples. OFX 2.0 seems to get even better! They're talking about using OpenCL to have GPU accelerated comp effects.

    Blender needs to be on the support list of that project, it's huge. Shame no one can really pick up the ball and run with this project.

    1. yeaaahhh roto, not even close. I'll give you 10 shots to roto with this addon and then composite them, for the end of the week… trust me, even if you get them in time with perfect quality, at the end you'll want a more built-in compositor feature.
      Again, as I said in my post, just because there is a workaround doesn't mean it is production ready to me :)
      So IMO, no, roto is far from done, and it is one of the most basic tools which really needs to be robust and flexible.

      1. I think the script is a very good start :) maybe it gets ported to C so it could be maintained closer to the source and become more robust.

        I'm quite happy to blur my roto animation w/ the compositor, and you can have several bezier roto shapes to roto out a person/object…

        but roto should be tied more closely to a tracker project; you kinda want a workflow where you simply 2D track footage and can attach a bezier roto shape's origin point to the 2D tracked data.

        hope to see something like that in the future with libmv, 2D and 3D tracking.

    2. I think OFX plugins are a must-have thing after masks and RAM cache.
      Masks and RAM cache are the main reasons I don't use Blender for compositing.
      Also an easy way to output passes into different folders and get them organized.
