This node setup allows you to offset your normal pass on the X, Y and Z axes. It can be used in several cases (relighting is one of them for sure), so I thought I would share this node. All you have to do is “Append” this node into your blend file and you are done. It takes a normal pass as input and outputs the new normal pass.
“An Efficient Representation For Irradiance Environment Maps”
This paper covers how to do environment lighting based on an angular map (or light probe, or HDRI map, whatever you want to call it), but without using the map itself for computation. Odd, you say? Sure!
They developed a technique using Spherical Harmonics to reduce an HDR angular map to only 9 coefficients!! That means after their pre-process, you only have to do a few multiplications with those 9 coefficients and you are done! Check out the comparison:
Left is a standard rendering; right uses the 9 coefficients. The Grace Cathedral light probe was used for this rendering: http://www.debevec.org/Probes/
If I’m not mistaken, there is less than 1% error between the two renderings, and of course you can imagine that the second one renders much, much faster.
Please visit their website and presentation for more information:
In his demo, Roy shows the use of only the irradiance of an angular map. This means he gets all the HDR luma intensity of the map (no color variation). Once you have the 9 coefficients, the operation is pretty simple and computes very fast. It looks something like this:
Where c1, c2, c3, c4 and c5 are 5 constant numbers given in the paper. x, y, z are floats from your normal pass (x2 means x*x; xy means x*y; and so on). The L22, L20, … variables are the 9 coefficients.
So as you can see, this is not a really complicated operation and it computes really fast. Running it on each pixel returns a kind of light map (irradiance), which you can ADD to your original color map.
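For reference, the 9-coefficient evaluation is only a few lines of Python. The c1–c5 constants are the ones published in the paper; the L coefficients are the 9 numbers you get from pre-filtering a specific probe (the values used below in the test are placeholders, not real probe data):

```python
# Sketch of the 9-coefficient irradiance evaluation from the paper.
# c1..c5 are the constants given in the paper.
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(n, L):
    """n = (x, y, z) unit normal; L = dict with the 9 SH coefficients
    for one channel: L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22."""
    x, y, z = n
    return (c1 * L['L22'] * (x*x - y*y)
            + c3 * L['L20'] * z*z
            + c4 * L['L00']
            - c5 * L['L20']
            + 2.0 * c1 * (L['L2m2'] * x*y + L['L21'] * x*z + L['L2m1'] * y*z)
            + 2.0 * c2 * (L['L11'] * x + L['L1m1'] * y + L['L10'] * z))
```

Run per pixel of the normal pass, this gives the greyscale irradiance map to ADD over the color pass.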
What did I do?
As I did the first time for the Lift/Gamma/Gain, I tried to reproduce this formula with Blender’s “Math” node. For that I used the EXR file (render passes) provided by Roy in his demo, keeping only the Normal and AO passes.
This is not as fast as it could be; the render time for a 1920×1080 frame is around 1.5 seconds (well, for HDR environment lighting we have seen worse ^^). There are several reasons for this being slow, but I’ll come to that later. Note that for this example I used the Grace Cathedral light probe values and not Roy’s light probe.
I was kind of happy with the rendering, though a bit disappointed to only get luma values when environment maps hold so much color information as well! (You thought the previous example was a mess of nodes? Wait for the following one.) UPDATE: I was totally wrong!!! The only reason I got luma (actually greyscale) is that I used the Math node. I thought it could operate on any kind of input, but it actually works on a single component, so the vector operation never happens :p. I figured this out by trying the same formula in another shader language (Pixel Bender) and seeing color appear ^^. So my bad, color works too, and I’m not sure what the difference is between the vector and matrix versions in this case, except that the vector formula is much faster! (I’ll update the blend file later.)
So I took a closer look at the paper, especially at the example they provide, and I found out that their filter doesn’t only generate coefficients, but matrices too!!! This means you can either do the operation with the 9 coefficients to get just the irradiance of the environment, or do a similar (but a bit more complex) operation with three 4×4 matrices (red, green, blue)! I guess the obvious reason Roy didn’t go for this solution is that the computation is slower, and he didn’t really need it since he is doing a kind of 3-point lighting in his example.
As I said, the math is a bit different! Here is the formula using the 3 matrices:
OK, so while this might look simpler on paper, remember the matrices are 4×4, and while a dot product is quite simple, it is not a free operation either. Here is what it looks like in Blender, again with “Math” nodes:
Blend File here
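A minimal sketch of this matrix version, assuming the matrix layout published in the paper (one 4×4 matrix per color channel, the normal extended to (x, y, z, 1)):

```python
import numpy as np

# Sketch of the matrix form E(n) = n^T M n, with n = (x, y, z, 1).
# M is built from the same c1..c5 constants as the 9-coefficient
# version; you build one matrix per color channel (R, G, B).
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def build_M(L):
    """L = dict with the 9 SH coefficients for one channel."""
    return np.array([
        [c1*L['L22'],  c1*L['L2m2'], c1*L['L21'],  c2*L['L11']],
        [c1*L['L2m2'], -c1*L['L22'], c1*L['L2m1'], c2*L['L1m1']],
        [c1*L['L21'],  c1*L['L2m1'], c3*L['L20'],  c2*L['L10']],
        [c2*L['L11'],  c2*L['L1m1'], c2*L['L10'],  c4*L['L00'] - c5*L['L20']],
    ])

def irradiance(n, M):
    v = np.array([n[0], n[1], n[2], 1.0])
    return v @ M @ v          # the n^T M n dot products
```

Doing this three times per pixel (once per channel) is exactly the pile of Math nodes in the screenshot above.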
Again, given the heavy use of Math nodes, I believe 3 seconds is not too bad, but I’ll come back to that later. Also, the node I use to rotate the normal pass involves some math that may slow the render a bit too, and it is not absolutely necessary.
This shows that the technique works pretty well, but it is probably not production-ready as-is in Blender, since we are missing a few things to make it work smoothly.
What would we need in Blender for this to be more efficient?
More “Input” nodes: this might be one of the reasons the rendering slows down a bit. Because the current “Value” input node only works between 0 and 1, while the matrix numbers range between -n and n, I had to find a trick. I used a “Math” node for each entry of a matrix: set it to “Add”, enter the value in the first input, and set the second input to 0.0, so the output equals the first input. I only did that because I couldn’t figure out any other way to feed negative values into the Blender compositor. But it also means the compositor performs an extra operation for every input, which you can hardly call optimized.
A Value input that accepts values from -n to n
A Vector4 input (right now you can only use Vector3, but you usually need to work with alpha)
A Matrix input (this could even be useful for some quick and dirty convolutions)
Expression node: OK, now I’m dreaming with this one! And this is IMO probably the main reason why this setup is so slow. I believe that each time I use a Math node, Blender treats it individually. Which makes sense, but it probably causes a lot of back-and-forth between inputs and outputs.
I would think that sending the entire operation to the CPU (or whatever) at once and getting the result back at once would make things much faster (but I might be wrong on this one!?)
Anyway, the other reason for this node would be… well, seriously, have you seen the mess with all those nodes?!? A simple expression field, even one just interpreted by Python, would be great!!!!
Math with vectors: maybe I did something wrong, but I couldn’t do a dot product between vectors, which is one of the reasons I have all those nodes. I’m doing the entire dot product by hand, and that is heavy.
I wish the Math node could be used with any kind of input. But again, maybe I’m doing something wrong here.
Passes: we need more passes! For this example we need an object-space normal pass rather than a world-space normal pass. Probably not too much work, though; the real problem I have with the passes system today is that the passes are all hardcoded in Blender, which makes it complicated to create a custom one like you would in Maya.
I’d like to be able to assign an override material to a defined pass, but then I would probably need to write shaders, which implies that the renderer needs to support them as well. I guess OSL will fix that in the future, if it gets implemented one day.
Better support for EXR: first, there is this really annoying Flip node you have to add when working with other packages (I know a Flip node is nothing, but at 2K or 4K it is a different deal; when the composite gets complex you want to save every operation you can), but I believe everyone is aware of that by now. The other EXR weakness in Blender is passes support: it doesn’t read custom passes coming from other packages. Roy provided his Nuke script along with all the EXR files so you could play with it, but when I tried to load them in Blender, the input node couldn’t find all the passes inside; beside the usual color and Z (and maybe another one, I can’t remember exactly), it couldn’t find any. So I had to open the EXR in Djv, select the pass I wanted, and save it to another file as RGB values. A really painful process.
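To illustrate the “Math with vectors” wish above: the per-pixel n^T M n that I wire up node by node is a single vectorized expression in, say, NumPy. The arrays below are random stand-ins for real render passes, and the identity matrix is a placeholder:

```python
import numpy as np

# What the wall of Math nodes computes by hand: a per-pixel dot
# product between the (padded) normal and a matrix-transformed normal.
h, w = 270, 480                                   # stand-in resolution
normals = np.random.rand(h, w, 3) * 2.0 - 1.0     # fake normal pass
v = np.concatenate([normals, np.ones((h, w, 1))], axis=-1)  # (x, y, z, 1)
M = np.eye(4)                                     # placeholder 4x4 matrix
irradiance = np.einsum('hwi,ij,hwj->hw', v, M, v) # n^T M n for every pixel
```

One expression per channel instead of dozens of scalar nodes; this is the kind of thing an expression node could hand to the CPU in one go.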
Yeah! Matt Ebb just committed my patch (SVN r27733) for the “Color Balance” node in the Blender 2.5 compositor!!! Now it should be much easier to work with.
You can get a build of Blender at Graphicall.org (any build above revision 27733).
There are still some precision issues with the color wheels; I guess some day it will be possible to move the color picker more slowly.
How to use it?
First, I recommend un-checking the “color management” setting in Blender 2.5, or it will make the blacks really hard to control.
If you are not so familiar with color grading and push-pull techniques, I really recommend watching Stu Maschwitz’s (Prolost) video tutorial using Magic Bullet Colorista. The settings won’t be exactly the same, but the approach is quite similar!
I described Lift/Gamma/Gain in a previous post, and this node is mostly based on the formulas specified there. We just slightly modified them so that the 3 default parameter values are equal to 1.0, just like in Colorista, which makes it much easier to control the blacks. Before this revision, the “Color Balance” node used the same formula but with a lift default value of 0.0.
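For the curious, here is one common lift/gamma/gain formulation with the lift control re-centered around 1.0 (a sketch of the idea, not necessarily Blender's exact internal code):

```python
# A common lift/gamma/gain formulation, re-parameterized so that
# lift = gamma = gain = 1.0 leaves the image untouched, as in
# Colorista and the updated "Color Balance" node. This is a sketch,
# not a copy of Blender's internal code.
def color_balance(x, lift=1.0, gamma=1.0, gain=1.0):
    v = x * gain + (lift - 1.0) * (1.0 - x)   # gain, then raise/lower blacks
    v = max(v, 0.0)                           # avoid pow() of negatives
    return v ** (1.0 / gamma)                 # gamma applied last
```

With the old parameterization, lift defaulted to 0.0 and the `(lift - 1.0)` term was just `lift`; the new defaults only shift where "neutral" sits on the wheel.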
Some presets!
While making comparison tests between Colorista in After Effects and the “Color Balance” node in Blender, I tried to mimic some of Colorista’s presets.
You can download the “Blender Color Grading Presets” here : http://code.google.com/p/ft-projects/downloads/list
Since there are no masks in Blender’s “Compositing Editor” yet, I found a simple trick which can work pretty well, especially for color grading.
You’ll see nothing really fancy here, since the mask can only be square (or pretty close to a square shape). But if you check out Colorista, for instance, the two available shapes are ellipse and rectangle.
I bumped into this website a few days ago. It looks pretty old, but it’s the first time I have heard about it, and I think it is worth a look for open source software development (Ramen has already implemented it).
From their website:
“OpenFX is an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on any application that supports the standard. This avoids the current per application fragmentation of plug-in development and support, which causes much heartache to everyone, plug-in developers, application developers and end users alike”
Who uses it?
Well, this is the interesting part! Some major plug-in development companies use it, as the following list shows:
So beside the bullshit talk about how “Blender should remain free and shouldn’t mix with closed-source or commercial third-party apps”, this could be really useful for those of us who still want to use Blender as our main platform while still being able to use great external (sometimes commercial) plug-ins!
Here, for instance, is The Foundry’s Keylight, which in my opinion is one of the best keying plug-ins ever (and to the first person who tells me Blender can do the same job: I’ll give you 12 shots to do in a week and expect them to be perfect).
If you want to use it today, you’ll need to buy the plug-in (175€) plus one of the compatible applications such as Nuke or Shake (which I believe cost around 2000-3000€), or even buy an After Effects licence, because it comes bundled with it now (about 700€). Pretty expensive just to do keying, don’t you think?
When you could actually spend only 175€ on the plug-in and use it with your favourite app (even your own, if you’d like to).
A well done article on how to improve Blender to make it fit the industry.
I would add another point to this post: an external SDK/API to make communication between applications easier, and to make it possible to develop plug-ins for Blender more easily (Blender team, if you read this… cheers).
For the OpenEXR / Nuke point … please please do something about it !
I’ve been fighting with so many people on the subject of how you should orient your cameras for a perfect 3D feeling. I was always against the idea that both cameras should be parallel, because that’s not how our eyes work at all!
I was pretty happy when I learned that James Cameron asked for the development of a new kind of 3D camera rig where both oculars converge on the focal point instead of looking to infinity. I couldn’t explain it better myself than this video does; all the facts are here:
I just found these 2 videos on Vimeo (hopefully there will be more soon) aimed at helping Autodesk Maya artists who want to use Blender.
I like the idea behind this kind of video, and since Blender 2.5 will be more customizable (multiple windows, hotkeys, …), it will make it easier to switch from one software to another.
These videos, especially the first one, are aimed at Blender beginners.
I received a mail the other day from Martins Upitis, who asked really nicely if he could use my Cubic Lens Distortion shader code in one of his GLSL shaders, copying and tweaking it a little bit. That was really funny to me, since Martins is probably the guy who made me want to learn shaders in the first place, after I saw one of the first shaders he posted on BlenderArtists; he also didn’t realize I was the same person who asked him thousands of noob questions on that same forum :p .
Of course I was so proud that I said yes, and he made it look so coooolll !!!
His shader creates vignetting, chromatic aberration, cubic distortion, depth of field, and a really smart texture animation based on 1-bit alphas stored in each RGBA channel (which gives a lot of frames, so smart)… it looks really great!
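The cubic distortion part of such a shader boils down to pushing each pixel outward along its radius. Assuming the usual cubic distortion formulation (f = 1 + r²·(k + kcube·r)), a Python sketch with made-up coefficient values:

```python
import math

# Sketch of cubic lens distortion: each pixel is moved along its
# radius by the factor f = 1 + r^2 * (k + kcube * r), where r is the
# distance from the image center in normalized coordinates.
# The k / kcube defaults here are arbitrary illustration values.
def distort(u, v, k=0.15, kcube=0.5):
    """(u, v) in [-1, 1]^2, center at (0, 0). Returns distorted coords."""
    r2 = u*u + v*v
    f = 1.0 + r2 * (k + kcube * math.sqrt(r2))
    return u * f, v * f
```

In a fragment shader the same factor is applied to the texture lookup coordinates, and sampling with three slightly different k values per channel gives the chromatic aberration.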
I’ve been waiting for this forever! Blender 2.5 now manages a linear workflow (gamma correction, …).
Matt Ebb just did a great update where the UI (color picker, …) gives you linear color feedback and embraces the linear workflow.
Here is the comment of the SVN update (revision 25065) :
Changes to Color Management
After testing and feedback, I’ve decided to slightly modify the way color
management works internally. While the previous method worked well for
rendering, was a smaller transition and had some advantages over this
new method, it was a bit more ambiguous, and was making things difficult
for other areas such as compositing.
This implementation now considers all color data (with only a couple of
exceptions such as brush colors) to be stored in linear RGB color space,
rather than sRGB as previously. This brings it in line with Nuke, which also
operates this way, quite successfully. Color swatches, pickers, color ramp
display are now gamma corrected to display gamma so you can see what
you’re doing, but the numbers themselves are considered linear. This
makes understanding blending modes more clear (a 0.5 value on overlay
will not change the result now) as well as making color swatches act more
predictably in the compositor; however, bringing over color values from
applications like Photoshop or GIMP, which operate in a gamma space,
will not give identical results.
This commit will convert over existing files saved by earlier 2.5 versions to
work generally the same, though there may be some slight differences with
things like textures. Now that we’re set on changing other areas of shading,
this won’t be too disruptive overall.
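For reference, the sRGB-to-linear conversions underlying this workflow are the standard IEC 61966-2-1 piecewise curves:

```python
# Standard sRGB <-> linear transfer functions (IEC 61966-2-1),
# the conversions the UI now applies for display while keeping
# the stored numbers linear.
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
```

This is why a mid-grey swatch showing 0.5 on screen stores a linear value of about 0.214, and why blending modes like Overlay now behave as the math says they should.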