Tag Archives: shaders

Pixel Bender : ft-SSIBL (Screen Space Image Based Lighting) shader for After Effects


Introduction

ft-SSIBL stands for “Screen Space Image Based Lighting” and is based on a topic I covered in a previous post about Roy Stelzer’s “2.5D Relighting inside of Nuke”. In this shader I tried to reproduce a few of the approaches found in his Nuke script. So with a normal pass (object or world), you will be able to do some relighting with an HDR map. The shader won’t compute the 9 coefficients (spherical harmonics) for you, as described in this paper: http://graphics.stanford.edu/papers/envmap.

The default values are from the Grace Cathedral, San Francisco light probe: http://www.debevec.org/Probes/


Download & Sources

I will definitely add more features to this one in the future, so come back and check it out.


How to install it?


Just copy the .pbk file into your “Support Files” folder in your AE install directory


How to use it?


How to convert a light probe into the 9 spherical harmonics coefficients?



Here is the tutorial


Donate


Pixel Bender : ft-UVPass shader for After Effects


Introduction

This shader lets you re-texture your rendering directly in After Effects. It uses a UV pass as the default input and a texture as the second input. The UV pass can be rendered from pretty much any 3D package.
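In case you are wondering what the shader does under the hood: it simply reads the red and green channels of the UV pass as lookup coordinates into the texture. Here is a minimal Python/numpy sketch of that idea (not the shader’s actual code; the names and the nearest-neighbour lookup are just for illustration):

import numpy as np

def retexture(uv_pass, texture):
    # uv_pass: float array (h, w, >=2), U in the red channel, V in the green channel, both in 0..1
    # texture: float array (th, tw, 3)
    th, tw = texture.shape[:2]
    u = np.clip(uv_pass[..., 0], 0.0, 1.0)
    v = np.clip(uv_pass[..., 1], 0.0, 1.0)
    x = (u * (tw - 1)).astype(int)            # nearest-neighbour lookup, for simplicity
    y = ((1.0 - v) * (th - 1)).astype(int)    # V is usually flipped relative to image rows
    return texture[y, x]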


Download & Sources



How to install it?


Just copy the .pbk file into your “Support Files” folder in your AE install directory


How to use it?




Donate


Pixel Bender : ft-2-strip technicolor & ft-3-strip technicolor shader for After Effects


Introduction

These two shaders try to mimic the look of Martin Scorsese’s The Aviator. The code is based on snippets found on VFXTalk.com, the Blender source code & the Aviator VFX behind-the-scenes website.

UPDATE: ft-2-strip technicolor & ft-3-strip technicolor have now been merged into a single filter called “ft-Technicolor”.

Download & Sources


How to install it?


Just copy the .pbk file into your “Support Files” folder in your AE install directory


Donate


Rotating Normals from a Normal pass

RotateNormal is a group node inspired by Roy Stelzer’s (2D TD at Double Negative) “2.5D Relighting inside of Nuke” presentation.

This node setup allows you to offset your normal pass on the X, Y and Z axes. It can be used in several cases (relighting is one of them for sure), so I thought I would share this node. All you have to do is “Append” this node into your blend file and you are done. It takes a normal pass as input and outputs the new normal pass.
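The node group itself is only made of math nodes, but for reference, this is roughly the operation it performs; a minimal Python/numpy sketch, assuming a normal pass with components in -1..1 and offset angles in degrees (the exact rotation order inside the node group may differ):

import numpy as np

def rotate_normals(normal_pass, rx, ry, rz):
    # normal_pass: float array (h, w, 3); rx, ry, rz: offsets in degrees around X, Y, Z
    ax, ay, az = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                 # combined rotation
    return normal_pass @ R.T         # rotate every normal in the pass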

Feel free to use it as much as you want :)

Download RotateNormal Blend File

2.5D relighting in compositing with Blender (using spherical harmonics)

Introduction

About a year ago, I watched a great presentation from the Nuke Master Class by Roy Stelzer (2D TD at Double Negative) about “2.5D Relighting inside of Nuke”. There are 3 videos which you can download from The Foundry, along with the files that go with them. I recommend watching those videos, especially if you are not familiar with relighting in post-production.

What Roy did is use a paper from Siggraph 2001 called “An Efficient Representation For Irradiance Environment Maps”.

“An Efficient Representation For Irradiance Environment Maps”

This paper covers how to do environment lighting based on an angular map (or light probe, or HDRI map, whatever you want to call it) but without using those maps for the computation. Sounds odd? It sure is!
They developed a technique using spherical harmonics to reduce an HDR angular map to only 9 coefficients!! That means after their pre-process, you only have to do a few multiplications with those 9 coefficients and then you are done! Check out the comparison:

Left is a standard rendering, right is using the 9 coefficients. The Grace Cathedral light probe was used for this rendering: http://www.debevec.org/Probes/

If I’m not mistaken, there is less than 1% error between the two renderings, and of course you can imagine that the second one renders much, much faster :)

Please visit their website and presentation for more information about it:

What did Roy do?

In his demo, Roy only uses the irradiance of an angular map. This means he gets the HDR luma intensity of the map (no color variation). Once you have the 9 coefficients, the operation is pretty simple and so it computes very fast. The operation looks something like this:

color = c1*L22*(x2 - y2) + c3*L20*z2 + c4*L00 - c5*L20 + 2*c1*(L2_2*xy + L21*xz + L2_1*yz) + 2*c2*(L11*x + L1_1*y + L10*z)

Where c1, c2, c3, c4 and c5 are 5 constants given in the paper. x, y, z are float values from your normal pass (x2 means x*x; xy means x*y; and so on). All the L22, L20, … variables are the 9 coefficients.

So as you can see, this is not a really complicated operation and it computes really fast. Running it on each pixel returns a kind of light map (irradiance) which you can ADD to your original color map.
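To make this concrete, here is a minimal Python/numpy sketch of that exact formula applied to a whole normal pass. The c1…c5 constants come from the paper; the 9 L values are the coefficients computed for your light probe (pass in scalars for a luma-only result, or run it once per channel with that channel’s coefficients for color):

import numpy as np

# constants from “An Efficient Representation For Irradiance Environment Maps”
c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(normal_pass, L00, L1_1, L10, L11, L2_2, L2_1, L20, L21, L22):
    # normal_pass: float array (h, w, 3), components in -1..1
    x, y, z = normal_pass[..., 0], normal_pass[..., 1], normal_pass[..., 2]
    return (c1 * L22 * (x * x - y * y)
            + c3 * L20 * z * z
            + c4 * L00
            - c5 * L20
            + 2 * c1 * (L2_2 * x * y + L21 * x * z + L2_1 * y * z)
            + 2 * c2 * (L11 * x + L1_1 * y + L10 * z))   # the light map to ADD to the color pass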

What did I do?

As I did the first time for the Lift/Gamma/Gain, I tried to reproduce this formula with Blender’s “Math” node. For that I used the EXR file (render passes) provided by Roy in his demo, and only kept the Normal and AO passes.

Blend File here

This is not as fast as it could be; the render time for a 1920×1080 frame is around 1.5 seconds (well, for HDR environment lighting we have seen worse ^^). There are several reasons for this being slow, but I’ll come to that later.
Note that for this example, I used the Grace Cathedral light probe values and not Roy’s light probe.

I was kind of happy with the rendering though, but a bit disappointed to only get luma values when environment maps carry so much color information as well! (If you thought the previous example was a mess of nodes, wait for the following one ;) )
UPDATE: I was totally wrong :)!!! The only reason I only got luma (or actually greyscale) is that I used the Math node. I thought it was able to do the operation on any kind of input, but actually it works on only one component, so the vector operation never happens :p. I figured this out by trying the same formula in another shader language (Pixel Bender) and seeing color appear ^^. So my bad, color works too, and I’m not sure what the difference is between the vector and the matrix versions in this case, except that the formula with vectors is much faster! (I’ll change the blend file later.)

So I took a closer look at the paper, and especially at the example they provide, and I found out that their filter doesn’t only generate coefficients, but also matrices!!! This means you can do the operation with 9 coefficients to just get the irradiance of the environment, or do a similar (but a bit more complex) operation with 3 4×4 matrices (red, green, blue)!
I guess the obvious reason Roy didn’t go for this solution is that the computation is slower, and he didn’t really need it since he is doing a kind of 3-point lighting in his example.

As I said, the math is a bit different! Here is the formula using the 3 matrices:

n = vec4(worldNormal, 1.0);              // n = (x, y, z, 1)
color.red   = dot(n, matrixRed * n);     // the paper’s quadratic form n^T M n
color.green = dot(n, matrixGreen * n);
color.blue  = dot(n, matrixBlue * n);
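(In Python/numpy terms, the same operation applied to a whole normal pass looks roughly like this; a sketch only, with illustrative names, where M_red, M_green and M_blue are the three 4×4 matrices produced for the light probe:)

import numpy as np

def matrix_irradiance(normal_pass, M_red, M_green, M_blue):
    h, w = normal_pass.shape[:2]
    # homogeneous normal n = (x, y, z, 1) for every pixel
    n = np.concatenate([normal_pass, np.ones((h, w, 1))], axis=-1)
    out = np.empty((h, w, 3))
    for i, M in enumerate((M_red, M_green, M_blue)):
        out[..., i] = np.einsum('hwi,ij,hwj->hw', n, M, n)   # n^T M n per pixel
    return out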

OK, so while this might look simpler on paper, remember the matrices are 4×4, and a matrix-vector multiply plus a dot product per channel is not free either :) . Here is what it looks like in Blender with “Math” nodes as well:


Blend File here
Again, due to the heavy use of Math nodes, I believe that 3 seconds is not too bad, but I’ll come back to that later. Also, the node I use to rotate the normal pass uses some math that might slow the render down a bit too, and it is not absolutely necessary.

This shows that the technique works pretty well, but it is probably not production-ready as it stands in Blender, since we are missing a few things to make it work smoothly.

What would we need in Blender for this to be more efficient?

  • More “Input” nodes: this might be one of the reasons the rendering is slowed down a bit. Because the current “Input” node only works between 0 and 1, and the matrix numbers were between -n and n, I had to find a trick. For this I used a “Math” node for each number of a matrix: set it to “Add”, enter the value in the first input, and set the second input to 0.0, so the output is equal to the first input. I only did that because I couldn’t figure out any other way to have negative and positive values in Blender’s compositor. But this also means that the compositor tells Blender to do an operation for each input, so you can’t say it’s very optimized :)
    • Input Value that goes between -n and n
    • Vector4 Input (right now you can only use Vector3, but you usually need to work with alpha)
    • Matrix Input (could even be useful to do some quick and dirty convolution)
  • Expression node: OK now, I’m dreaming for this one! And this is, IMO, probably the main reason why this is so slow. I believe that each time I use a Math node, Blender treats it individually. Which makes sense, but it probably causes a lot of back and forth with inputs, outputs, inputs again, outputs again, …
    I would think that sending the entire operation to the CPU (or whatever) at once and getting it back at once would make things much faster (but I might be wrong on this one!?).
    Anyway, the other reason for this node would be … well seriously, have you seen the mess with all those nodes?!?
    So a simple expression field, even just interpreted by Python, would be great!!!!
  • Math with vectors: maybe I did something wrong, but I couldn’t do a “dot product” between vectors, which is one of the reasons I have all those nodes. I’m doing the entire “dot product” by hand and this is heavy.
    I wish the Math node could be used with any kind of input. But again, maybe I’m doing something wrong here.
  • Passes: we need more passes! For this example we need an object normal pass rather than a world normal pass. Probably not too much to do though; the only problem I have with the pass system today is that the passes are all hardcoded in Blender, which makes it complicated to create a custom one like you would in Maya.
    I’d like to be able to assign an override material to a defined pass, but then I would probably need to write shaders, which implies the need to write shaders for the renderer as well. I guess OSL will fix that in the future if it gets implemented one day.
  • Better support for EXR: besides the really annoying flip node you have to add when working with other packages (I know a flip node is nothing, but when working at 2K or 4K it is not the same deal; especially when the composite gets complex, you want to save every operation you can), and I believe everyone is aware of this by now, the other EXR shortcoming in Blender is pass support: it doesn’t support custom passes coming from other packages. Roy provided his Nuke script with all the EXR files so you could play with them. But when I tried to load them in Blender, the input node couldn’t find all the passes inside; besides the usual color and Z (and maybe another one, I can’t remember exactly), it couldn’t find any. So I had to open the EXR in Djv, select the pass I wanted, and save it to another file as RGB values. A really painful process.

Hopefully someone out there hears me ^^

UPDATE – Cubic Lens Distortion Pixel Bender shader for AE (with scale & chromatic aberration)

cubic lens distortion & chromatic aberration



If you haven’t seen my previous post yet, this is the Syntheyes Cubic Lens Distortion algorithm ported to Pixel Bender.


New Features

  1. Scale factor: works exactly like the Syntheyes scale lens workflow (v2)
  2. Chromatic aberration: based on Martins Upitis‘s GLSL posted here (v2) (see the sketch after this list)
  3. Blue/yellow chromatic aberration, based on Dorian’s modification (v3)
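If you are curious how this kind of dispersion works, the idea is simply to apply the cubic distortion with a slightly different strength per channel, so red, green and blue end up sampled at slightly different radii. A minimal Python/numpy sketch (not the plugin’s actual code), reusing the cubic_distort(channel, k, kcube) helper sketched in the original Cubic Lens Distortion post further down this page:

import numpy as np

def with_chromatic_aberration(img, k, kcube, dispersion=0.01):
    # img: float array (h, w, 3); dispersion: user-controlled aberration amount
    red   = cubic_distort(img[..., 0], k + dispersion, kcube)   # red bent slightly more
    green = cubic_distort(img[..., 1], k, kcube)
    blue  = cubic_distort(img[..., 2], k - dispersion, kcube)   # blue bent slightly less
    return np.dstack([red, green, blue])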



Download

  1. Download ft-CubicDistortion.pbk here : http://aescripts.com/ft-cubic-lens-distortion/
  2. Place it in your “Support Files” folder (which is in your AE install folder)
  3. Launch AE
  4. Look for the effect called Cubic Distortion

Source Code

Just download the file at http://aescripts.com/ft-cubic-lens-distortion and open it with a text editor.

Donate

still if you wish ^^




PixelBender Cubic Lens Distortion for After Effects


Introduction


If you are doing matchmove, you have probably bumped into the lens workflow issue, where you have to undistort the footage in your matchmove software, track it, and export new undistorted footage, so your client can composite the 3D rendering on top of it and then distort it back.
I don’t really like this workflow since, for instance, After Effects does not have a cubic lens distortion effect, and it would be really hard for the client to match the distortion back.

After watching Victor Wolansky’s FXPHD class SYN202 (Syntheyes) about lens workflow, I thought: “hey, why not port the lens distortion algorithm?”. Pretty easy to do, since I had already done it in HLSL & Martins Upitis had ported my shader to GLSL!
Thanks to SSonTech for sharing their algorithm.


Pixel Bender Cubic Lens Distortion :

Not much to say, it does what it is supposed to do! You can copy & paste values from Syntheyes and it will match perfectly (or it should, at least). See the screenshots below.
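For reference, this is roughly what the math looks like; a minimal Python/numpy sketch of the Syntheyes-style cubic distortion (the actual plugin is written in Pixel Bender, so the names and the bilinear resampling here are just illustrative):

import numpy as np
from scipy.ndimage import map_coordinates

def cubic_distort(channel, k, kcube):
    # channel: 2D float array; k: distortion coefficient; kcube: cubic distortion amount
    h, w = channel.shape
    y, x = np.mgrid[0:h, 0:w]
    xn = (x - w / 2.0) / (w / 2.0)                 # normalized coords, centered at 0
    yn = (y - h / 2.0) / (h / 2.0)
    r2 = xn * xn + yn * yn
    f = 1.0 + r2 * (k + kcube * np.sqrt(r2))       # r' = r * (1 + k*r^2 + kcube*r^3)
    xs = (xn * f) * (w / 2.0) + w / 2.0            # position to sample the source at
    ys = (yn * f) * (h / 2.0) + h / 2.0
    return map_coordinates(channel, [ys, xs], order=1, mode='nearest')

# e.g. applied to each channel of an RGB image:
# out = np.dstack([cubic_distort(img[..., c], k=-0.18, kcube=0.0) for c in range(3)])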

  1. Download CubicDistortion.pbk here : http://aescripts.com/ft-cubic-lens-distortion/
  2. Place it in your “Support Files” folder (which is in your AE install folder)
  3. Launch AE
  4. Look for the effect called Cubic Distortion

UPDATE : now with scale factor & chromatic aberration, see the post here

Thanks to Jerzy Drozda Jr (aka Maltaannon) for his great tips about Pixel Bender.

So now you can create a new comp with your distorted footage > pre-comp it > undistort it with the shader > track it in Syntheyes > export the camera to a 3D package > render the scene > import the render into your pre-comp > deactivate the shader. It should match perfectly :)



Screenshots

(yeah I know, PFTrack grid with Syntheyes … not cool ! :p )

Distorted Grid

Syntheyes Cubic Undistortion

AE with Cubic Lens Distortion shader

compare AE & Syntheyes Lens distortion


Donation

If you wish




Understanding Gamma and Linear workflow

Even if I’m aware of what gamma and linear workflow are, I’m not quite sure I’m always using them in the correct way. So I decided to dive into documentation and forums again to refresh my memory and, at the same time, close a few gaps.
Since so many people, even in the industry, still don’t know what it is and how it works, I thought I would keep a kind of diary of what I found during my research these past couple of days.



Introduction


To get started, there is this great example from AEtuts+.com talking about linear workflow in AE. It is not the deepest explanation out there, but it will give you a nice overview, with simple words and explicit examples, of what linear workflow is and why it is so important!

After that, here are 5 points you should keep in mind about gamma (from Gamma 101 on mymentalray.com); a quick numeric check of point 2 follows the list:

  1. Most displays have a non-linear response to pixel values.
  2. Most graphics software is written for a linear color model, i.e. it makes the simple assumption that 255 is twice as bright as 128. But since the monitor is non-linear, this is not true. In fact, for most monitors (with a gamma of 2.2), you need to send the pixel value (0.5^(1/2.2))*255 = 186 if you want 50% of the brightness of 255. The commonly used value of 128 will only produce about (128/255)^2.2 = 22% brightness.
  3. Digital cameras have a (roughly) linear response to light intensity, but since they are intended for display on computer monitors, they embed the non-linearity (gamma) in the image. (True for .jpg, whereas RAW files are just that – RAW, i.e. linear data, that gets non-linear when converted to JPG, for example)
  4. Therefore, if you input .jpg images taken with a camera into graphics software, you need to compensate for the image’s gamma (by applying the inverse gamma: 1/2.2 ≈ 0.455).
  5. And if you display the (linear) data generated by a graphics algorithm, you need to compensate for the display gamma (apply a 2.2 gamma to the picture).
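Those figures are easy to verify; here is a quick Python check (a sketch using the usual 2.2 approximation of a display gamma rather than the exact sRGB curve):

gamma = 2.2
print(0.5 ** (1 / gamma) * 255)   # ~186: pixel value needed to get 50% brightness
print((128 / 255) ** gamma)       # ~0.22: pixel value 128 only gives ~22% brightness

def to_linear(v):                 # remove the ~2.2 encoding gamma from a value in 0..1 (point 4)
    return v ** gamma

def to_display(v):                # re-apply the display gamma before viewing / saving to 8-bit (point 5)
    return v ** (1.0 / gamma)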



A few facts :

When creating a texture in Photoshop, you see its colors with a 2.2 gamma applied (because screens are big liars :p). Meaning that when you think you got the right “brightness”, you actually made it twice (or more) as bright as it is supposed to be in the real world.
While for painting or montage it might not be important, for textures it really is!!! Because, as said above, your renderer/shader/… will assume the picture is linear and will apply its math accordingly.
So the only solution to bring this picture back to a “linear color space” is to apply a gamma that is the inverse of what the monitor shows you. As we know, on PC the gamma is 2.2 (I think it’s 1.8 on Mac OS X). So the gamma value to apply to your texture before saving it should be 0.455 (1/2.2).

Tip: in Photoshop, on top of your layers, add a “Levels” adjustment layer and set the gamma value (mid-tone) to 0.455.

With most of today’s software I don’t think it is necessary to do that any more, but to be honest, this really depends on how the software you are using integrates linear workflow. For instance, in 3ds Max you can enable gamma correction in the “Gamma and LUT” tab of the preferences panel.

Because renderers work in linear space, your rendering will seem to look darker on your screen. So if you are saving it to an 8-bit file type (such as JPG), you should set the output gamma parameter to 2.2. But if you are saving it to a floating-point file (HDR, RAW, EXR, …), this parameter should remain 1.0. Because all the dynamic range of your picture is kept in those raw files, you apply the gamma only in post-process (compositing).

In the above case with After Effects, as long as you make sure to activate the linear workflow, it should take care of that for you, so you don’t have to change the gamma at all, just leave it.



Links :



Mental Ray Linear Workflow from chad smashley on Vimeo.





Here is some nice reading:

OpenFX (OFX) – An Open Plug-in API for 2D Visual Effects

Open FX


I bumped into this website a few days ago; it looks pretty old, but it’s the first time I had heard about it, and I think it is worth a look for open source software development (Ramen has already implemented it).



Introduction

From their website:

“OpenFX is an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on any application that supports the standard. This avoids the current per application fragmentation of plug-in development and support, which causes much heartache to everyone, plug-in developers, application developers and end users alike”



Who uses it?

Well, this is the interesting part! Major plug-in development companies use it, including a few in the following list:



Why is it interesting for open source software?

I know many people, especially from the Blender community, would disagree with me (even more since the big “fight” fans had on 3dsoul’s blog “5 things Blender should do to be successful in the industry”), but I actually think this could be part of a solution to the point I added to his list asking for an external SDK/API, at least for the special effects part.


So, beside the bullshit talk about “Blender should remain free and shouldn’t mix with closed source or commercial third-party apps”, this could be really useful for those of us who still want to use Blender as the main framework but also want to be able to use great external (sometimes commercial) plug-ins!


Take, for instance, The Foundry’s Keylight, which in my opinion is one of the best keying plug-ins ever (and to the first one who tells me Blender’s can do the same job, I’ll give him 12 shots to do in a week and expect them to be perfect :) ).
If you want to use it today, you’ll need to buy the plug-in (175€) plus one of the compatible applications such as Nuke, Shake or so (which I believe is around 2000-3000€), or even buy an After Effects licence because it comes bundled with it now (about 700€). Pretty expensive just to do keying, don’t you think?

When you could actually spend only 175€ on the plug-in and use it with your favourite apps (even your own, if you’d like to).



Conclusion

RamenHD’s developer (Ramen is an open source nodal compositing application in development) is already working on the subject (check out his blog for OFX). He is already starting to get good results with The Foundry’s Keylight and the Sapphire suite plug-ins.

I believe it is a really smart move, and I hope Blender will do it in the future as well. It might even make developing new filters for it simpler.



A few links :

GLSL Cubic Lens Distortion with chromatic aberration in Blender Game Engine

Introduction

I received an email the other day from Martins Upitis, who asked me really nicely if he could use my Cubic Lens Distortion shader code in one of his GLSL shaders. He was asking if he could copy & tweak it a little bit. That was really funny to me, since Martins is probably the guy who made me want to learn shaders in the first place, after seeing one of the first shaders he posted on BlenderArtists, and also because he didn’t figure out I was the same person asking him thousands of noob questions on that same forum :p .
Of course I was so proud that I said yes, and he made it look so coooolll !!!

Source

Anyway, I encourage you to go check out his post on BlenderArtists.org here: http://blenderartists.org/forum/showthread.php?t=175155&highlight=lens+distortion

His shader creates vignetting, chromatic aberration, cubic distortion, depth of field, and a really smart animated texture based on 1-bit alpha stored in each RGBA channel (which gives a lot of frames, so smart)… it looks really great!

Blender File : http://www.pasteall.org/blend/1425 or mirror on this blog

Controls:
Buttons 1 and 2 enable post-process effects (vignetting, noise, edge blur, and a new and awesome lens distortion filter, made by Francois Tarlier and slightly modified by me).

Up/Down – shrink/grow the snowflake
mouse wheel up/down – chromatic dispersion amount

numpad:
7/4 – lens distortion coefficient
8/5 – cubic distortion amount
9/6 – image scaling amount