ft-SSIBL stands for “Screen Space Image Based Lighting” and is based on a topic I covered in a previous post about Roy Stelzer’s “2.5D Relighting inside of Nuke”. In this shader I tried to reproduce a few approaches found in his Nuke script. So with a Normal pass (object or world), you will be able to do some relighting with an HDR map. The shader won’t compute the 9 coefficients (spherical harmonics) for you, as described in this paper: http://graphics.stanford.edu/papers/envmap.
This node setup allows you to offset your normal pass on the X, Y and Z axes. It could be used in several cases (relighting is one of them for sure), so I thought I would share this node. All you have to do is “Append” this node in your blend file and you are done. It takes a normal pass as input, and outputs the new normal pass.
“An Efficient Representation For Irradiance Environment Maps”
This paper covers how to do Environment Lighting based on an Angular Map (or light probe, or HDRI map, whatever you want to call it), but without using those maps for the per-pixel computation. Odd, you say? Yes indeed!
They developed a technique using Spherical Harmonics to reduce an HDR Angular Map to only 9 coefficients!! That means after their preprocessing, you only have to do a few multiplications with those 9 coefficients and you are done! Check out the comparison:
If I’m not mistaken, there is less than 1% error between the two renderings, and of course you can imagine that the second one renders much, much faster.
Please visit their website and presentation for more information:
In his demo, Roy only uses the irradiance of an Angular Map. This means he gets the HDR luma intensity of the map (no color variation). Once you have the 9 coefficients, the operation is pretty easy and so it computes very fast. The operation looks something like this:
Where c1, c2, c3, c4 and c5 are 5 constants given in the paper. x, y, z are floats from your normal pass (x2 means x*x; xy means x*y; and so on). All the L22, L20, … variables are the 9 coefficients.
So as you can see, this is not a really complicated operation and it computes really fast. Running this on each pixel returns a kind of light map (irradiance) which you can ADD to your original color map.
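To make the formula above concrete, here is a minimal Python sketch of the 9-coefficient irradiance evaluation. The c1–c5 constants are the ones published in the paper; the `L` dictionary of 9 coefficients is a placeholder, since in practice you compute those from your own environment map with the authors’ prefilter tool:

```python
# Irradiance from 9 spherical harmonic coefficients
# ("An Efficient Representation for Irradiance Environment Maps").
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(normal, L):
    """normal: (x, y, z) from the normal pass; L: dict of the 9 SH coefficients."""
    x, y, z = normal
    return (c1 * L["L22"] * (x * x - y * y)
            + c3 * L["L20"] * z * z
            + c4 * L["L00"]
            - c5 * L["L20"]
            + 2.0 * c1 * (L["L2-2"] * x * y + L["L21"] * x * z + L["L2-1"] * y * z)
            + 2.0 * c2 * (L["L11"] * x + L["L1-1"] * y + L["L10"] * z))
```

Run per pixel over the normal pass, this gives exactly the greyscale light map described above.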
What did I do?
As I did for the Lift/Gamma/Gain the first time, I tried to reproduce this formula with Blender’s “Math Node”. For that I used the EXR file (render passes) provided by Roy in his demo, and only kept the Normal and AO passes.
This is not as fast as it could be; the render time for a 1920×1080 frame is around 1.5 seconds (well, for HDR environment lighting we have seen worse ^^). There are several reasons for this being slow, but I’ll come to that later. Note that for this example I used the Grace Cathedral light probe values and not Roy’s light probe.
I was kind of happy with the rendering, though, but a bit disappointed to only get luma values when environment maps contain so much color information as well! (You thought the previous example was a mess of nodes? Wait for the following one.) UPDATE: I was totally wrong!!! The only reason I got luma (or actually greyscale) is that I used the Math node. I thought it was able to do the operation on any kind of input, but it actually operates on only one component, so the vector operation never happens :p. I figured this out by trying the same formula in another shader language (Pixel Bender) and seeing color appear ^^. So my bad, color works too, and I’m not sure I know the difference between the vectors and the matrices in this case, except that using the formula with vectors is much faster! (I’ll change the blend file later.)
So I took a closer look at the paper, and especially at the example they provide, and found out that their filter wasn’t only generating coefficients, but also matrices!!! This means you can do the operation with 9 coefficients to just get the irradiance of the environment, or do a similar (but a bit more complex) operation with three 4×4 matrices (red, green, blue)! I guess the obvious reason Roy didn’t go for this solution is that the computation is slower, and he didn’t really need it since he is doing a kind of 3-point lighting in his example.
As I said, the math is a bit different! Here is the formula using the 3 matrices:
OK, so while this might look simpler on paper, remember the matrices are 4×4, and while a dot product is quite simple, it is not a free operation either :). Here is what it looks like in Blender with “Math nodes” as well:
Blend File here
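For reference, the matrix form from the paper can be sketched in a few lines of Python (using NumPy, which Blender ships with). The matrix layout is the one given in the paper; as before, the 9 coefficients per channel are placeholders you would get from prefiltering your own environment map:

```python
import numpy as np

# Constants from "An Efficient Representation for Irradiance Environment Maps"
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_matrix(L):
    """Build the 4x4 matrix M for one color channel from its 9 SH coefficients."""
    return np.array([
        [c1 * L["L22"],  c1 * L["L2-2"], c1 * L["L21"],  c2 * L["L11"]],
        [c1 * L["L2-2"], -c1 * L["L22"], c1 * L["L2-1"], c2 * L["L1-1"]],
        [c1 * L["L21"],  c1 * L["L2-1"], c3 * L["L20"],  c2 * L["L10"]],
        [c2 * L["L11"],  c2 * L["L1-1"], c2 * L["L10"],  c4 * L["L00"] - c5 * L["L20"]],
    ])

def irradiance_rgb(normal, M_r, M_g, M_b):
    """E = n^T M n per channel, with the homogeneous normal n = (x, y, z, 1)."""
    n = np.array([*normal, 1.0])
    return tuple(float(n @ M @ n) for M in (M_r, M_g, M_b))
```

This is exactly the quadratic form the node tree above evaluates by hand, once per color channel.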
Again, due to the heavy use of Math nodes, I believe that 3 seconds is not too bad, but I’ll come back to that later. Also, the node I use to rotate the normal pass relies on some math that might slow the render a bit too, and it is not absolutely necessary.
This shows that the technique works pretty well, but it is probably not production-ready as it stands in Blender, since we are missing a few things to make it work smoothly.
What would we need in Blender for this to be more efficient?
More “Input” nodes: this might be one of the reasons the rendering is slowed down a bit. Because the current “Input node” only works between 0 and 1, and the matrix numbers range between -n and n, I had to find a trick. I used a “Math node” for each number of a matrix: set it to “Add”, enter the value in the first input, and set the second input to 0.0, so the output equals the first input. I only did that because I couldn’t figure out another way to get negative values into Blender’s compositor. But this also means the compositor tells Blender to do an operation for each input; I can’t say it’s very optimized.
Input values that go between -n and n
Vector4 input (right now you can only use Vector3, but you usually need to work with alpha)
Matrix input (could even be useful for quick-and-dirty convolutions)
Expression node: OK, now I’m dreaming with this one! And this is IMO probably the main reason why this is so slow. I believe that each time I use a Math node, Blender treats it individually. Which makes sense, but it probably causes a lot of back-and-forth between inputs and outputs.
I would imagine that sending the entire operation at once to the CPU (or whatever) and getting the result back at once would make things much faster (but I might be wrong on this one!?).
Anyway, the other reason for this node would be… well seriously, have you seen the mess with all those nodes?!? So a simple expression field, even just interpreted by Python, would be great!!!!
Math with Vectors: Maybe I did something wrong, but I couldn’t do a dot product between vectors, which is one of the reasons why I have all those nodes. I’m doing the entire dot product by hand, and this is heavy.
I wish the Math node could be used with any kind of input. But again, maybe I’m doing something wrong here.
Passes: we need more passes! For this example we need an Object Normal pass rather than a World Normal pass. Probably not too much work, though; the only problem I have with the pass system today is that passes are all hardcoded in Blender, which makes it complicated to create a custom one like you would in Maya.
I’d like to be able to assign an override material to a defined pass, but then I would probably need to write shaders, which implies writing shaders for the renderer as well. I guess OSL will fix that in the future, if it gets implemented one day.
Better support for EXR: besides the really annoying flip node you have to add when working with other packages (I know a flip node is nothing, but when working at 2K or 4K it is not the same deal; especially when the composite gets complex, you want to save every operation you can), which I believe everyone is aware of by now, the other EXR shortcoming in Blender is pass support: it doesn’t support custom passes coming from other packages. Roy provided his Nuke script with all the EXR files so you could play with it. But when I tried to load them in Blender, the input node couldn’t find all the passes inside; besides the usual color, Z (and maybe another one, I can’t remember exactly), it couldn’t find any. So I had to open the EXR in Djv, select the pass I wanted, and save it to another file as RGB values. A really painful process.
If you do matchmove, you have probably bumped into the lens workflow issue, where you have to undistort the footage in your matchmove software, track it, and export new undistorted footage, so your client can composite the 3D rendering on top of it and then distort it back.
I don’t really like this workflow, since, for instance, After Effects does not have a Cubic Lens Distortion effect and it would be really hard for the client to match the distortion back.
Thanks to Jerzy Drozda Jr (aka Maltaannon) for his great tips about Pixel Bender.
So now you can create a new comp with your distorted footage > pre-comp it > undistort it with the shader > track it in SynthEyes > export the camera to a 3D package > render the scene > import the render into your pre-comp > deactivate the shader. It should match perfectly.
(yeah I know, PFTrack grid with Syntheyes … not cool ! :p )
Even if I’m aware of what gamma and linear workflow are, I’m not quite sure I always use them in the correct way. So I decided to dive into documentation and forums again to refresh my memory, and at the same time close a few gaps.
Since so many people, even in the industry, still don’t know what it is and how it works, I thought I would make a kind of diary of what I found during my research these past couple of days.
To get started, there is this great example from AEtuts+.com about linear workflow in AE. It is not the deepest explanation out there, but it will give you a nice overview, with simple words and explicit examples, of what linear workflow is and why it is so important!
Most displays have a non-linear response to pixel values.
Most graphics software is written for a linear color model, i.e. it makes the simple assumption that 255 is twice as bright as 128. But since the monitor is non-linear, this is not true. In fact, for most monitors (with gamma = 2.2), you need to send the pixel value (0.5^(1/2.2))*255 ≈ 186 if you want 50% of the brightness of 255. The commonly used value of 128 will only produce about (128/255)^2.2 ≈ 22% brightness.
Digital cameras have a (roughly) linear response to light intensity, but since their images are intended for display on computer monitors, they embed the non-linearity (gamma) in the image. (True for .jpg, whereas RAW files are just that: raw, i.e. linear data that becomes non-linear when converted to JPG, for example.)
Therefore, if you input .jpg images taken with a camera into graphics software, you need to compensate for the image’s gamma (by applying the inverse gamma, 1/2.2 ≈ 0.455).
And if you display the (linear) data generated by a graphics algorithm, you need to compensate for the display gamma (apply 2.2 gamma to the picture).
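The two conversions above boil down to a pair of power functions; here is a tiny Python sketch, assuming the simple gamma-2.2 model used throughout this post (real sRGB adds a small linear toe, ignored here):

```python
GAMMA = 2.2  # typical PC display gamma

def to_linear(v):
    """Display-referred value in [0, 1] -> linear light."""
    return v ** GAMMA

def to_display(v):
    """Linear light in [0, 1] -> display-referred value in [0, 1]."""
    return v ** (1.0 / GAMMA)
```

Plugging in the numbers from the text: `to_display(0.5) * 255` gives about 186, and `to_linear(128/255)` gives about 0.22, i.e. 22% brightness.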
A few facts:
When creating a texture in Photoshop, you’ll see its colors with 2.2 gamma applied (because screens are big liars :p). Meaning when you think you got the right “brightness”, you actually made it twice (or more) brighter than it is supposed to be in the real world.
While for painting or montage this might not be important, for textures it really is!!! Because, as said above, your renderer/shaders will assume the picture is linear and will do their math accordingly.
So the only solution to bring this picture back to a linear color space is to set the gamma to the inverse of what the monitor shows you. On PC, gamma is 2.2 (I think it’s 1.8 on Mac OS X). So the gamma value of your texture before saving it should be 0.455 (1/2.2).
Tip: in Photoshop, on top of your layers, add a “Levels Adjustment Layer” and set the gamma value (mid-tones) to 0.455.
With most of today’s software I don’t think it is necessary to do that any more, but to be honest, this really depends on how the software you are using integrates linear workflow. For instance, in 3ds Max you can enable gamma correction in the “Gamma and LUT” tab of the preferences panel.
Because renderers work in linear space, your render will seem to look darker on your screen. So if you are saving it to an 8-bit file type (such as JPG), you should set the output gamma parameter to 2.2. But if you are saving it to a floating-point file (HDR, RAW, EXR, …), this parameter should remain 1.0: because the full dynamic range of your picture is saved in those files, you apply the gamma only in post-process (compositing).
In the above case with After Effects, if you make sure to activate the linear workflow, it will take care of that for you, so you don’t have to change the gamma at all; just leave it.
I bumped into this website a few days ago. It looks pretty old, but it is the first time I have heard of it, and I think it is worth a look for open-source software development (Ramen has already implemented it).
From their website:
“OpenFX is an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on any application that supports the standard. This avoids the current per application fragmentation of plug-in development and support, which causes much heartache to everyone, plug-in developers, application developers and end users alike”
Who uses it?
Well, this is the interesting part! Major plug-in development companies use it, as you can see in the following list:
So besides the bullshit talk about how “Blender should remain free and shouldn’t mix with closed-source or commercial third-party apps”, this could be really useful for those of us who still want to use Blender as our main platform, but also want to be able to use great external (sometimes commercial) plug-ins!
Here, for instance, is The Foundry’s Keylight, which in my opinion is one of the best keying plug-ins ever (and to the first one who tells me Blender can do the same job, I’ll give 12 shots to do in a week and expect them to be perfect).
If you want to use it today, you’ll need to buy the plug-in (175€) plus one of the compatible applications such as Nuke or Shake (which I believe run around 2000-3000€), or even buy an After Effects licence because it comes bundled with it now (about 700€). Pretty expensive just to do keying, don’t you think?
Whereas you could actually spend only 175€ on the plug-in and use it with your favourite app (even your own, if you’d like).
I received a mail the other day from Martins Upitis, who asked me really nicely if he could use my Cubic Lens Distortion shader code in one of his GLSL shaders. He was asking if he could copy it and tweak it a little. That was really funny to me, since Martins is probably the guy who made me want to learn shaders in the first place, after seeing one of the first shaders he posted on BlenderArtists, and also because he didn’t realize I was the same person asking him thousands of noob questions on that same forum :p.
Of course I was so proud that I said yes, and he made it look so coooolll !!!
His shader creates vignetting, chromatic aberration, cubic distortion, depth of field, and a really smart texture animation based on a 1-bit alpha stored in each RGBA channel (which gives a lot of frames; so smart)… it looks really great!