
Blender VFX wish list features

Introduction

EDIT : the project has started at http://ocl.atmind.nl/doku.php?id=design:proposal:compositor-redesign . Please donate and help the project.

I was at the Blender Conference this year and had the chance to meet a lot of great guys, as well as to talk with Ton about doing VFX with Blender. I attended the “VFX roundtable” talk… yes, I was the annoying guy who stole the microphone the entire time :) , and yet I didn’t have time to give my entire list there. So after talking about it with Ton, he asked me to send him an email with it, which I did.

I tried to make it as detailed as I could, and also not to make it too personal, but rather to keep it to generally needed basic features.

Andrew, Sebastian & Jonathan asked me to post that mail on my blog to share it with the community, so here it is…

Disclaimer

When opening this kind of topic I know that a lot of Blender users like to throw stones at the guy writing it :), so here are some facts before starting. I work on an everyday basis with After Effects and sometimes I use Nuke. I’m a TD artist for a video game company working on offline & realtime production, so my job is to design pipelines and workflows, or new tools, to make a production more efficient and comfortable for the artists (which means I’m used to working with developers a lot, and I know how much of a pain in the ass I can be sometimes… trust me :) ). On the side I’m also doing matchmoving for a living, as well as VFX compositing & some color grading too. And these days I’m also developing scripts & filters for After Effects which are distributed over at AEScripts.com (mostly stuff to enhance workflow, sometimes inspired by Nuke, I must say :p).

All this doesn’t mean that I know stuff better than anyone else at ALL!! I just try, or I’d like to try, to use Blender as much as I can in my freelance time, and sometimes I just can’t because basic features are missing, and that’s what I’m going to point out here (or at least try to).

I do understand that we don’t want to make a copy of AE or Nuke in Blender, and I agree that this would be totally wrong. But since they are the leaders in this field and I know them pretty well, I will mention some of their features or workflows as illustrations & ideas. So you will have to take my words as “something like that…” rather than “it has to look like that!”. Note that most of the features I’m going to talk about are present in all compositing software, no matter whether it is node or layer based.

So you wanna do VFX with Blender?

One thought Sebastian and I had at the Blender Conference, which I believe illustrates all these wishes pretty well, is: “Try to do a VFX tutorial made for Nuke or AE using Blender, and if you can’t do it, it means you are missing the tool”.

A great website for this is VideoCopilot.net! And I’m NOT talking about the tutorials that use third-party plugins, but about the ones that only use the basic provided tools. (I’ll mention some of them later on.)

VideoCopilot’s tutorials are great in that they look very professional, are accessible to anyone in the field, and cover what you would do on an everyday basis as a VFX artist. And usually you can do those tutorials with any compositing software, as long as you understand the logic behind them. The tools sometimes look different, but they are all there in every program (except Blender for now, and that’s what I’d like to change).

I don’t need proof that Blender could do the same with some tricks and workarounds; I’m always looking for an efficient, production-ready approach to the problem. For instance, the RotoBezier addon came out recently, and while it is a great and smart trick, to me it is just a workaround in the meantime, not at all production ready, for several reasons that I could mention if you asked me :).

OK, so now let’s dive into the list!

VFX Feature list for Blender

Color Management

As of today, CM (color management) is not working very well and is too hidden, IMO.

I talked about it with Matt Ebb a lot. Even if we have different opinions about it, I know he has good ideas and great libraries in mind to use, but not so much time to work on it.

His philosophy is to have it completely hidden in the background, doing all the work for you, which I deeply disagree with (but again, just my opinion). CM is good for having a LW (linear workflow), and LW is important, but CM is also important for color profiles and color models, and you really have to know which space you are currently viewing things in (linear, sRGB, …).

Today I can never tell if my viewer is showing me linear or sRGB, and the same goes for the color picker.

In Nuke you always have an option on inputs, outputs or scopes to choose which color space you want to look at (see screenshots), so you have complete control over it.

So having it hidden and automatic is OK, as long as I can override it any time I want to!

Color profile selection in Nuke's reader (input)

Color profile selection for Nuke's viewer
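Just to be clear about what “knowing which space you are in” means in practice, here is a minimal Python sketch of the standard sRGB transfer function (this is the textbook curve, not Blender’s or Nuke’s actual viewer code):

```python
# Minimal sketch: converting a single channel value between linear and sRGB.
# This is the standard sRGB transfer function, not Blender's internal implementation.

def linear_to_srgb(c):
    """Encode a linear [0,1] value for display on an sRGB monitor."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055

def srgb_to_linear(c):
    """Decode an sRGB-encoded [0,1] value back to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# A mid-grey of 0.5 in linear light displays around 0.735 in sRGB,
# which is exactly why you need to know which space your viewer shows.
print(linear_to_srgb(0.5))    # ~0.735
print(srgb_to_linear(0.735))  # ~0.5
```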

And as I mentioned at the roundtable, you don’t always need a linear workflow (and that’s where Matt is going to disagree with me :) ); for instance, it is usually more of a pain in the ass for color grading than anything else. I have been talking with some professional colorists who told me their big color grading workstations work in non-linear (e.g. DaVinci Resolve), which, don’t get me wrong, can work with linear too. But if you take something like the Color Balance node which Matt and I developed, the formula I used works better in non-linear. It is the same formula used in Magic Bullet Colorista, and very close to what color grading suites use, by the way. So LW is great for compositing shots and playing with render passes; for video editing & final color grading, not so much.
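And to be clear about the kind of formula I mean, here is a minimal sketch of a classic lift/gamma/gain balance on one channel; it is my rough illustration of the idea, not the actual C code of the Color Balance node:

```python
# Minimal sketch of a lift/gamma/gain color balance on one channel value x in [0,1].
# This is the classic grading formula; the actual Color Balance node code may differ.

def color_balance(x, lift=0.0, gamma=1.0, gain=1.0):
    # lift raises the blacks, gain scales the whites, gamma bends the mid-tones
    out = (x + lift * (1.0 - x)) * gain
    out = max(out, 0.0)
    return out ** (1.0 / gamma)

# Lifting the shadows of a dark pixel:
print(color_balance(0.1, lift=0.2))  # 0.1 + 0.2 * 0.9 = 0.28
```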

EXR Full Support

Support for reading any custom passes :

At this point Blender cannot read any passes that it doesn’t support or create itself. So when I’m working with external clients who work with other pieces of software, I cannot read their passes (except the standard ones: normal, diffuse, spec…).

In any node-based compositor I know, there is a special node called “Shuffle” (or whatever) which separates the passes. The reader (input) just gives the default pass (beauty?), but the shuffle does the job of selecting a specific pass.

Shuffle node in Nuke

Support for writing custom passes :

Right now I have to create a new RenderLayer with a material override and save it to another file, when actually what I’d like to do is create a new pass, name it and assign a material override to that pass, all within my default render layer. (This part is not just about EXR but also rendering related.)

Support for reading/writing attributes :

You can read or write attributes in EXR: a kind of metadata which can be very useful to pass data to your compositor, for instance the actual position of the camera or the distortion values, anything useful you can imagine (not as important as the two above, but again, full support is better :) ).
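To show how simple reading an arbitrary pass or attribute can be on the scripting side, here is a small sketch using the OpenEXR Python bindings (the file name, the “AO” layer and the “cameraPosition” attribute are just made-up examples):

```python
# Sketch: reading an arbitrary pass and a metadata attribute from a multi-layer EXR,
# using the OpenEXR/Imath Python bindings. Channel and attribute names are made up.
import OpenEXR
import Imath

exr = OpenEXR.InputFile("shot_010_comp.exr")
header = exr.header()

# Every pass/channel the renderer wrote, whatever software produced it:
print(list(header["channels"].keys()))   # e.g. ['R', 'G', 'B', 'A', 'AO.R', 'AO.G', ...]

# Pull one custom pass as raw 32-bit floats (a "Shuffle"-style selection):
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
ao_red = exr.channel("AO.R", pixel_type)  # raw bytes, one float per pixel

# Arbitrary attributes (metadata) travel in the header as well,
# e.g. a camera position written by the 3D package:
cam_pos = header.get("cameraPosition")
print(cam_pos)
```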

Expression Py

I am so missing that !!!
Basically it means that for each parameter you have, you can set a script. In AE it is called an “Expression”, as in Nuke by the way.
The idea is to right-click on a parameter and then select “set expression”. Expressions can link to everything in Blender: other parameters, maths, … so in the end you can do some really crazy stuff such as constraints, operations or conditions…
We are using that ALL THE TIME.
In Nuke you also have an expression node which can get several inputs, and then you can write expressions per channel. It almost becomes an image-processing or pixel-processing kind of node. Very, very powerful :).

Nuke's expression node looks like that, but being able to add an expression to any input, like in AE, sounds more powerful to me!
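Blender’s animation drivers already give a small taste of this; here is a minimal bpy sketch of hooking a Python expression to a property, which is exactly what I’d love to be able to do on any compositor node input too (the object name is just the default Cube):

```python
# Minimal sketch: driving an object's Z location with a Python expression,
# using Blender's existing driver system (run inside Blender's Python console).
import bpy

obj = bpy.data.objects["Cube"]          # assumes a default scene with a Cube

# Add a driver on location[2] (Z) and give it an expression evaluated every frame.
fcurve = obj.driver_add("location", 2)
fcurve.driver.type = 'SCRIPTED'
fcurve.driver.expression = "sin(frame / 10.0)"   # 'frame' is provided in the driver namespace
```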

Enhanced Group

Groups are actually not that bad in Blender, but a bit too rigid !

  • One thing I’d like to be possible is to choose which inputs I will have in the UI of my final collapsed group. Today doing that is a bit hard and not very flexible. Plus you cannot add special UI inputs which might not be a node socket but just a value.
  • I’d like to keep a group open (expanded) but still be able to work with nodes which are not in this group. Right now, when you are inside a group, you can only work on that one.

A nice recent approach on this topic : http://www.youtube.com/watch?v=OUidTgzy8zo

SDK, Py, OpenFX

I did develop the Color Balance node in the compositor with Matt. Actually, I must say Matt did most of the job, just because I couldn’t understand the way node UIs are made in C. First because I suck at C, secondly because there were so many files to change that I was totally lost (register, type, processing, name, RNA, DNA, … all those different files). So while I understood pretty well the C part where the pixels get processed, all the UI was a nightmare for me, and Matt handled it.

I can only imagine how many people get stuck on that, while it could be very easy to do some pixel processing just by typing a few lines of code (see the small sketch after the list below). Several possible solutions:

  1. A well-documented SDK (or “BDK” ^^) which, as a bad coder, I usually hate (too C-related for me), but which is good for developers who want to make plugins
  2. Python scripts/expressions. As mentioned above, being able to control an input via a Python expression, or even better, a full “Expression” system as mentioned above as well
  3. Being able to load/write GLSL code for processing? I discovered Pixel Bender for After Effects; it’s kind of a GLSL-like language for processing images in AE, VERY easy to code, VERY powerful, with the possibility to add user UI inputs in the shader which AE will turn into an interface! This is perfect for a crappy coder like me and so great in a “production” environment: you can prototype things or solve problems in no time.
  4. OpenFX, this is my personal favorite. I have a full post about it on my blog! Ramen did implement it, and even ported some of Blender’s nodes to OFX already. From a developer’s standpoint it means you can create a plugin for Blender and be sure it will also work in any other software that supports OFX (Nuke, Shake, …).
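Here is the small sketch I promised above: the whole “pixel processing” part of a trivial gain/offset operation fits in a few lines of Python/NumPy, and it is everything around it (UI, sockets, registration) that is the hard part today. Function and parameter names are of course hypothetical:

```python
# Sketch: the whole "pixel processing" part of a trivial gain/offset node,
# written with NumPy. The node UI, sockets and registration are the hard part, not this.
import numpy as np

def gain_offset(image, gain=1.2, offset=0.05):
    """image: float32 array of shape (height, width, channels), values in linear light."""
    out = image * gain + offset
    return np.clip(out, 0.0, None)   # keep it positive, allow >1 for HDR

# Fake 4x4 RGB buffer just to show the call:
img = np.random.rand(4, 4, 3).astype(np.float32)
print(gain_offset(img).shape)   # (4, 4, 3)
```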

But more importantly, for somebody like me it means I could buy a plugin like Keylight (for green screen keying, because let’s be honest, Blender’s keying tools are a bit old) and use it directly in Blender, without needing to buy After Effects or Nuke to use it. PLEASE read my blog post which clarifies this point: http://www.francois-tarlier.com/blog/index.php/2010/01/openfx-ofx-an-open-plug-in-api-for-2d-visual-effects/

-> I’m doing some filters for After Effects (http://aescripts.com/category/plugins/francois-tarlier-plugins/), mostly because I cannot do them with Blender; it’s way too complicated to achieve the same thing!!

UI Design

OK, I plead guilty on this one; it is more of a personal wish than anything VFX related. At least I get a point for honesty?

  • Straight lines !!! Like Pablo called them, the “spaghetti lines”. This is so rigid and messy at the same time. It makes you work only horizontally and from left to right; you cannot redirect lines the way you want. I hate it! But maybe it is just me! Take a look at one of my comps here: http://www.vimeo.com/11279314 ! (Yeah I know, if I had “expressions” I wouldn’t have so many nodes but… hey… you get my point :) ) As I recall, Digital Fusion does straight lines with angles by itself, which I hate too. Nuke (see screenshot) does straight lines left to right, right to left, top to bottom, and so on, which you can break by pressing CTRL and adding a “dot” on the line. That is what I call flexibility!!!
  • Plugging lines into dot sockets is a bit old and rigid too. In the example below (same in Fusion), each node would have an arrow for each input it can get (with a name on the arrow). It makes the layout more flexible (and it goes with the above point).
  • Do not put controls in the nodes; keep them in a panel (floating or dockable). Blender kind of has both now, but I’d like the option that, for instance, double-clicking on a node makes its panel show up.

Break the wall between compositor & 3D view

Today every compositor has some simple 3D capability. Blender has both features, but it is impossible to use them together. Why would you need 3D in compositing? (best reason here: http://www.youtube.com/watch?v=upG81s75UD4#t=7m13)

Let’s say I have a render of a street with 3D and matte painting stuff, and I want to put some guy shot against a green screen into this street. This is what I would have to do in Blender to get the shot done:

  • import the green screen footage into the compositor
  • key it
  • render it
  • place a 3d plane into my scene
  • add a material
  • create a texture with the rendered keyed green screen footage
  • apply it with UV stuff
  • probably render those planes on separate passes/layers from the environment, in case I want to do some more fx/compositing on top of it
  • import that result back into the compositor
  • do all the compositing magic

Do you see the problem here? So many steps, so many in-between renders… when actually all I wanted was pretty much to place those keyed planes/billboards in 3D space with a bit of orientation, reusing the camera data from the 3D scene!! Without in-between rendering!

Also, nowadays we see more and more of what we call “deep image compositing”, which basically uses a point pass (or position pass) and generates a 3D point cloud from it. It helps with all kinds of things (preserving edges, getting geometry information, occluding, relighting, …).

(ie http://www.youtube.com/watch?v=upG81s75UD4 ||  http://en.wikipedia.org/wiki/Deep_image_compositing || http://www.fxguide.com/fxguidetv/fxguidetv_095/)
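To make the idea a bit more concrete, here is a small NumPy sketch of what a compositor can do with a position pass: every pixel already carries a world-space XYZ, so turning a frame into a point cloud is basically a reshape plus a mask (the arrays here are stand-ins for real render passes):

```python
# Sketch: building a 3D point cloud from a world-space position pass.
# 'position_pass' is assumed to be a float32 (H, W, 3) array, 'alpha' a (H, W) coverage mask.
import numpy as np

H, W = 540, 960
position_pass = np.random.rand(H, W, 3).astype(np.float32)   # stand-in for a real render pass
alpha = np.ones((H, W), dtype=np.float32)

points = position_pass.reshape(-1, 3)          # one XYZ point per pixel
covered = alpha.reshape(-1) > 0.0              # ignore empty pixels
point_cloud = points[covered]

print(point_cloud.shape)   # (n_points, 3) -- ready for occlusion tests, relighting, etc.
```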

Real Camera

lenses, camera sensor size, distortion, … all the good stuff :) (Matt already started some work on that http://mke3.net/weblog/blender-sensor-sizes)
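The math itself is tiny; here is a quick sketch of the relation between focal length, sensor width and horizontal field of view that a “real camera” model has to expose:

```python
# Sketch: horizontal field of view from focal length and sensor width (both in mm).
import math

def horizontal_fov(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

print(horizontal_fov(35.0))          # ~54.4 degrees on a full-frame 36 mm sensor
print(horizontal_fov(35.0, 23.6))    # ~37.3 degrees on an APS-C sensor: same lens, different framing
```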

Custom Passes

As I mentioned earlier, being able to create custom passes to which we can assign a material. Pretty much like the material override function, but per pass. And also some additional passes by default, such as a point pass (world and object) and a normal pass (world, object & camera).

Masks (roto)

I still can’t even imagine how we could do compositing without masks. If there is one feature above all that I’m ALWAYS using in ALL my comps, it’s masks. And to be honest, not having them is one of the main reasons I’m not using Blender as my primary compositing tool today.

I know someone talked about some fancy/geeky/gadget mask tool he would love to have in Blender instead of a simple/basic vector shape with handles. My opinion on that? Keep it basic at first! Simple tools are always good to have first. And even if it might be tedious to do, at least we know it works in all cases.
In case you really want to go with a geeky automatic paint / rotobrush tool, take a look at the GraphCut & SnapCut algorithms. That’s a SIGGRAPH paper whose technique is now used in AE as the “Roto Brush”. Which is a great tool by the way, but from my experience, not yet better than a good old vector shape :)
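Just to show how basic the “basic” version can be, here is a small sketch that rasterizes a simple closed shape into an 8-bit matte with Pillow; a real roto tool would sample Bezier segments into that point list and animate it over time, but the core is no more than this:

```python
# Sketch: rasterizing a closed 2D shape into a grayscale matte using Pillow.
# A real roto tool would sample Bezier segments into this point list and animate it over time.
from PIL import Image, ImageDraw, ImageFilter

width, height = 640, 480
shape = [(120, 80), (480, 120), (520, 380), (160, 420)]   # hypothetical control points

matte = Image.new("L", (width, height), 0)                # black = fully transparent
ImageDraw.Draw(matte).polygon(shape, fill=255)            # white = fully opaque
matte = matte.filter(ImageFilter.GaussianBlur(4))         # feather the edge a little

matte.save("roto_matte.png")
```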

Image Cache / Dynamic Proxy / Pre-cache preview

VERY IMPORTANT!!! You’ll never be able to do good VFX compositing without it.

DJV does that pretty well actually: you can render the sequence into RAM to play the preview in realtime. (Again, I still don’t understand how we survive without it.)

Being able to downgrade the viewer (full, half, quarter, … resolution) to save some memory, or to fit more frames into RAM, would be great too.

Press a button > render to RAM > play it !
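A rough sketch of the idea, nothing more: read an image sequence into RAM, optionally at a lower resolution, then play it back from memory (paths and the half-res factor are just examples):

```python
# Sketch: pre-caching an image sequence into RAM at reduced resolution for realtime playback.
# Paths and the 'half resolution' proxy factor are just examples.
import glob
from PIL import Image

frames = []
for path in sorted(glob.glob("render/shot_010.????.png")):
    img = Image.open(path)
    img = img.resize((img.width // 2, img.height // 2))   # half-res proxy to fit more frames in RAM
    frames.append(img.tobytes())                          # raw pixels, ready to blit

megabytes = sum(len(f) for f in frames) / (1024 * 1024)
print(f"{len(frames)} frames cached, ~{megabytes:.1f} MB in RAM")
# Playback is then just looping over 'frames' and pushing them to the viewer.
```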

Interact with Elements in the viewer

Being able to select one of the “layers/nodes” and rotate it, scale it, move it, … directly in the viewer (like you would do in Photoshop for instance; yes, yes, it is possible with a node-based compositor too, of course :) ).

2D Tracker

I’m a matchmover, so trust me, I know how cool it would be to have a camera tracker in Blender. I’m following the libmv project very closely and talk a lot with Pierre Moulon, one of the active developers on the project (since Keir is kind of busy these days).

But the fact is I don’t really need a 3D camera solver. First, it would take a long time to have something which works great and is production ready. As of today I have SynthEyes, which is not expensive, and which I can use.

BUT, having a simple 2D tracker in the compositor to do things like this, for instance: http://videocopilot.net/tutorial/set_extensions/ , that would be PERFECT!

I talked about it with the libmv team and they would be glad to help, but they would need somebody experienced with Blender’s C code to implement it quickly. They asked me for a proposal a year ago, which is here:

http://wiki.blender.org/index.php/Dev:Source/Development/Projects/Motion_tracking/MotionTracker2D

http://www.vimeo.com/5542528
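For the curious, the core of such a 2D point tracker is really small; here is a hedged sketch using OpenCV template matching (libmv’s tracker is more robust than this, and the file names are hypothetical, it is only to show the principle):

```python
# Sketch: tracking one 2D pattern from frame to frame with OpenCV template matching.
# libmv's tracker is more sophisticated; this only illustrates the principle.
import cv2

def track(prev_frame, next_frame, x, y, size=15, search=40):
    """Return the pattern position in next_frame, given its (x, y) position in prev_frame."""
    pattern = prev_frame[y - size:y + size, x - size:x + size]
    region = next_frame[y - search:y + search, x - search:x + search]
    result = cv2.matchTemplate(region, pattern, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(result)                 # best match location inside the region
    return (x - search + best[0] + size, y - search + best[1] + size)

# Usage: load two consecutive frames as grayscale and follow a point picked in the first one.
f0 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)
print(track(f0, f1, x=320, y=240))
```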

Conclusion

It’s not everything, but it is quite a lot for now. Some of these requests are more important than others, and I would be glad to discuss this topic with anybody.

One tutorial that sums up all those features is this one: http://www.videocopilot.net/tutorial/blast_wave. Bring in the missing tools which will let you do that, and you’ll be much closer to being a “production ready” VFX tool… IMO :).

I would also encourage you to take a look at Nuke PLE (Personal Learning Edition); it’s free and complete (only the renders are watermarked).
Also take a look at the work done on Ramen VFX, Esteban did some great things there.
And finally, make FxGuide.com your favorite website and watch a lot of breakdowns :)

Nice VFX Breakdown to watch :

[Bonus] Hair Simulation

This is not directly VFX related, but as I understand it, you had trouble with hair simulation on Sintel. I talked about it with Daniel Genrich at the time, because I think he is the dynamics guru who could nail this kind of thing. So I gave him a few papers to look at, which he thought were very interesting and not very hard to implement, but as I recall it was too late to implement at that time, and you guys didn’t have enough time to do it or even take the risk to test it. But I guess it is not too late for the next projects.

When I was working on face tracking/recognition at Ubisoft, I met someone (Basile Audoly) working on mathematically modeling the dynamics of hair. He gave us a demo which was very impressive. And if I’m not mistaken, he has been working for L’Oréal research since then :)

His program was able to model curly, straight, dry, wet, growing, … hair, but also dynamic cutting; all those features are modeled in his research. I have personally seen his demo, and I have never seen something so real. See for yourself in the video!

Here are the papers and videos :

Super-Helices for Predicting the Dynamics of Natural Hair

How to convert a Lightprobe into the 9 spherical harmonics coefficients

Here is a tutorial on how to convert your own light probe into the 9 coefficients needed by my ft-SSIBL plug-in (Screen Space Image Based Lighting).





I’m sorry for the poor sound, I hope you will still be able to understand !
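In case the sound really is too bad, here is a minimal Python/NumPy sketch of the underlying math: projecting a latitude-longitude light probe onto the first 9 spherical harmonic basis functions, using the standard Ramamoorthi & Hanrahan constants. Treat it as an illustration; the exact probe format and scaling ft-SSIBL expects may differ:

```python
# Sketch: projecting a lat-long environment map onto the first 9 SH basis functions,
# one set of 9 coefficients per color channel (standard Ramamoorthi & Hanrahan constants).
import numpy as np

def sh9_coefficients(env):                     # env: float array of shape (H, W, 3)
    H, W, _ = env.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    theta = (v + 0.5) / H * np.pi              # 0..pi from top to bottom of the image
    phi = (u + 0.5) / W * 2.0 * np.pi          # 0..2pi across the image
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)

    basis = [0.282095 * np.ones_like(x),                        # Y00
             0.488603 * y, 0.488603 * z, 0.488603 * x,          # Y1-1, Y10, Y11
             1.092548 * x * y, 1.092548 * y * z,                # Y2-2, Y2-1
             0.315392 * (3.0 * z * z - 1.0),                    # Y20
             1.092548 * x * z, 0.546274 * (x * x - y * y)]      # Y21, Y22

    # Solid angle of each pixel in a lat-long map:
    d_omega = np.sin(theta) * (np.pi / H) * (2.0 * np.pi / W)

    coeffs = np.array([[np.sum(env[:, :, c] * b * d_omega) for c in range(3)] for b in basis])
    return coeffs                              # shape (9, 3): one RGB triplet per coefficient

probe = np.ones((64, 128, 3), dtype=np.float32)   # a constant white probe as a sanity check
print(sh9_coefficients(probe)[0])                 # ~[3.54, 3.54, 3.54] = 0.282095 * 4*pi
```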

UPDATE – Cubic Lens Distortion Pixel Bender shader for AE (with scale & chromatic aberration)

cubic lens distortion & chromatic aberration



If you haven’t seen my previous post yet, here is SynthEyes’ cubic lens distortion algorithm ported to Pixel Bender.


New Features

  1. Scale factor : works exactly like the SynthEyes Scale Lens Workflow (v2)
  2. Chromatic Aberration : based on Martins Upitis‘s GLSL posted here, see the sketch below (v2)
  3. Blue/Yellow Chromatic Aberration : based on Dorian’s modification (v3)
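For those wondering how the chromatic aberration part works, here is a hedged Python sketch of the idea: the red, green and blue channels are distorted with slightly different coefficients, so they drift apart towards the edges of the frame (the exact offsets used in the shader may differ):

```python
# Sketch: chromatic aberration as per-channel lens distortion.
# Each channel gets a slightly different distortion coefficient, so they separate near the edges.

def cubic_distort(x, y, k, kcube):
    """Cubic lens distortion of a point in normalized coordinates (0,0 = image center)."""
    r2 = x * x + y * y
    f = 1.0 + r2 * (k + kcube * (r2 ** 0.5))
    return f * x, f * y

def distort_rgb(x, y, k=-0.15, kcube=0.0, dispersion=0.01):
    # red and blue use shifted coefficients relative to green -> color fringing
    return {"r": cubic_distort(x, y, k + dispersion, kcube),
            "g": cubic_distort(x, y, k, kcube),
            "b": cubic_distort(x, y, k - dispersion, kcube)}

print(distort_rgb(0.8, 0.0))   # near the edge, the three channels land at slightly different x
```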



Download

  1. Download ft-CubicDistortion.pbk here : http://aescripts.com/ft-cubic-lens-distortion/
  2. Place it in your “Support Files” folder (which is in your AE install folder)
  3. Launch AE
  4. Look for the effect called Cubic Distortion

Source Code

Just download the file at http://aescripts.com/ft-cubic-lens-distortion and open it with your text editor.

Donate

still if you wish ^^




PixelBender Cubic Lens Distortion for After Effects


Introduction


If you do matchmoving, you have probably bumped into the lens workflow issue, where you have to undistort the footage in your matchmove software, track it, and export new undistorted footage so your client can composite the 3D rendering on top of it and then distort it back.
I don’t really like this workflow since, for instance, After Effects does not have a cubic lens distortion effect, and it would be really hard for the client to match the distortion back.

After watching Victor Wolansky’s fxphd class SYN202 (SynthEyes) about lens workflow, I thought: “hey, why not port the lens distortion algorithm?”. Pretty easy to do, since I had already done it in HLSL & Martins Upitis had ported my shader to GLSL!
Thanks to SSontech for sharing their algorithm.
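For reference, here is a hedged Python sketch of the cubic distortion formula the shader is built around; k is the same distortion coefficient you type in SynthEyes, kcube the optional cubic term, and the real shader of course evaluates this per pixel on the GPU:

```python
# Sketch: the cubic lens distortion applied to a normalized image coordinate.
# (x, y) = (0, 0) is the image center; k is the distortion coefficient,
# kcube the optional cubic term. The Pixel Bender shader evaluates this per pixel.

def cubic_lens_distortion(x, y, k, kcube=0.0):
    r2 = x * x + y * y
    f = 1.0 + r2 * (k + kcube * (r2 ** 0.5))
    return f * x, f * y

# A point near the corner of the frame with a barrel-type coefficient:
print(cubic_lens_distortion(0.9, 0.5, k=-0.18))
# Undistorting is the same idea in reverse: for each output pixel you look up
# the distorted position in the source image and sample it there.
```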


Pixel Bender Cubic Lens Distortion :

Not much to say, it does what it is supposed to do! You can copy & paste values from SynthEyes and it will match perfectly (or it should, at least). See the screenshots below.

  1. Download CubicDistortion.pbk here : http://aescripts.com/ft-cubic-lens-distortion/
  2. Place it in your “Support Files” folder (which is in your AE install folder)
  3. Launch AE
  4. Look for the effect called Cubic Distortion

UPDATE : now with scale factor & chromatic aberration, see the post here

Thanks to Jerzy Drozda Jr (aka Maltaannon) for his great tips about Pixel Bender.

So now, you can create a new comp with your distorted footage > pre-comp it > undistort it with the shader > track it in SynthEyes > export the camera to a 3D package > render the scene > import the render into your pre-comp > deactivate the shader. It should match perfectly :)



Screenshots

(yeah I know, PFTrack grid with Syntheyes … not cool ! :p )

Distorted Grid

Syntheyes Cubic Undistortion

AE with Cubic Lens Distortion shader

compare AE & Syntheyes Lens distortion


Donation

If you wish




GLSL Cubic Lens Distortion with chromatic aberration in Blender Game Engine

Introduction

I received a mail the other day from Martins Upitis, who asked me really nicely if he could use my cubic lens distortion shader code in one of his GLSL shaders. He was asking me if he could copy & tweak it a little bit. That was really funny to me, since Martins is probably the guy who made me want to learn shaders in the first place, after seeing one of the first shaders he posted on BlenderArtists, and also because he didn’t figure out I was the same person asking him thousands of noob questions on that same forum :p .
Of course I was so proud that I said yes, and he made it look so coooolll !!!

Source

Anyway, I encourage you to go check out his post on BlenderArtists.org here : http://blenderartists.org/forum/showthread.php?t=175155&highlight=lens+distortion

His shader creates vignetting, chromatic aberration, cubic distortion, depth of field, and a really smart animation texture based on a 1-bit alpha stored in each of the RGBA channels (which gives a lot of frames – so smart)… it looks really great!

Blender File : http://www.pasteall.org/blend/1425 or mirror on this blog

Controls:
Buttons 1 and 2 enable post-process effects (vignetting, noise, edge-blur, and a new and awesome lens distortion filter, made by Francois Tarlier, and slightly modified by me).

Up/Down – shrink/grow the snowflake
mouse wheel up/down – chromatic dispersion amount

numpad:
7/4 – lens distortion coefficient
8/5 – cubic distortion amount
9/6 – image scaling amount

RGBto3D Space application made with Processing

Introduction

I wanted to play with Processing the other night. I thought about representing the RGB pixel values of a video in a 3D space based on their RGB values, where RGB stands for XYZ.


Mobile Version on Vimeo

00:00 : using “Avatar” Trailer
01:07 : using webcam feed


How, why, cool ?

So it is really basic programming, but I thought it would look cool. Actually, what was going to be a cool-looking animation turned out to become a cool visualisation tool!
I found out that just by showing those pixels in a 3D space based on their RGB values, you can see several dimensions at once (a minimal sketch of this mapping follows the list):

  • Red value : X axis
  • Green value : Y axis
  • Blue value : Z axis
  • Luma value : the vector from black (0,0,0) to white (255,255,255). It means that the closer the point cloud is to the white corner, the brighter the picture is (… no kidding :) )
  • Saturation value : the distance perpendicular to the luma vector. It means that the more saturated the picture is, the wider the point cloud will be, and of course the more desaturated it is, the thinner the point cloud will be. A black & white picture would only show particles along the luma vector.
    This one was the least obvious to me (but I’m not really smart :p)
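Here is the minimal sketch of that mapping mentioned above, written in Python/NumPy rather than Processing just to show the idea: every pixel becomes one particle whose position is simply its RGB triplet, and the luma/saturation reading falls out of it:

```python
# Sketch: mapping every pixel of a frame to a 3D point where (X, Y, Z) = (R, G, B).
# The actual applet is written in Processing; this just shows the mapping itself.
import numpy as np

frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)   # stand-in for a video frame

points = frame.reshape(-1, 3).astype(np.float32)    # one (R, G, B) = (X, Y, Z) point per pixel

# The luma axis is the black->white diagonal, so the average position along (1,1,1)
# tells you how bright the frame is, and the spread around it how saturated it is.
diag = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
luma = points @ diag
spread = np.linalg.norm(points - np.outer(luma, diag), axis=1)
print(f"brightness ~ {luma.mean():.1f}/441.7, saturation spread ~ {spread.mean():.1f}")
```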



Examples

Saturated picture = wider point cloud

Desaturated picture = finer point cloud

Black & white = point cloud only shows as a straight line along the luma vector

Brighter = point cloud closer to the white corner

Darker = point cloud closer to the black corner



Source Files :

You will have to add a quicktime movie (.mov) in the “data” folder called “vid.mov”


Conclusion :

For sure all this sounds pretty obvious, and I’m pretty sure I’ve seen people doing this kind of stuff before, but I’m surprised I haven’t seen it in any “video editing” software before (or maybe I missed it).

I think it could be a really helpful tool to get a quick overview of your picture and, in a snap, be able to tell if it’s too saturated, too red, too blue, too bright or too dark…

Feel free to leave any comments about that, or if you know of something similar, just drop a line in the comments. By the way, this is my very first complete project with Processing, so I probably did things the wrong way; you are welcome to correct me :)

HalloweenPic : Facebook app based on Marilena (an OpenCV port to Flash) face detection in Flash


Juan Bermudez made a cool Facebook app called HalloweenPic, based on the small Flash application I did a while ago.

I’m really happy (and proud :) ) that sharing the sources could help someone create some cool stuff.

That said, just go and test it here : http://apps.facebook.com/halloweenpic


Blender 2.5 Tour (via blenderlabrat) – UPDATED


You can now find 3 new videos on blenderlabrat’s blog about his Blender 2.5 Tour.

There have been a lot of changes since his first tour of 2.5, and I think it’s worth taking a look at this new tour now, because the new version of Blender is getting closer and closer to its final look, and I think that’s a good starting point for people to get a real and nice sneak peek of what it’s going to be.

The 1st part covers (user interface) :

  • Context! what it is and how to understand it
  • UI manipulation, split, join, rip, swap areas
  • 3dview tools pane and repeat last operator
  • threading, constant responsive UI
  • new button types
  • misc stuff, like quickly changing theme colors and such

http://blenderlabrat.blogspot.com/2009/10/25-tour-9-part-1.html

The 2nd part covers (animation) :

  • Basic Keyframing
  • Keying sets
  • F-curve basics
  • Modifiers
  • Drivers (didn’t want to work, but I showed the procedure)
  • Quaternions vs Eulers
  • Non-Linear Animation Editor

http://blenderlabrat.blogspot.com/2009/10/25-tour-9-part-2.html

The 3rd part covers (materials/shading) :

  • Tour of new buttons layout and how to use it
  • Multiple material
  • Textures
  • New bump mapping and colour management
  • Volumetrics
  • Volumetric Textures

http://blenderlabrat.blogspot.com/2009/10/25-tour-9-part-3-materialsshading.html

The 4th part covers (particles & simulations) :

  • where things are now
  • new cache system
  • effectors
  • Smoke
  • point density texture

http://blenderlabrat.blogspot.com/2009/10/25-tour-9-part-4-particles-and.html

The 5th part covers (Cache Editing) :

  • how simulations are cached on the hard drive for realtime playback

http://blenderlabrat.blogspot.com/2009/10/25-tour-9-part5-cache-editing.html

Computer Vision : High-Speed Robot Hand Demonstrates Dexterity and Skillful Manipulation | Hizook

Before you say anything or think that this is not interesting, check out this part of the video: http://www.youtube.com/watch?v=-KxjVlaLBmk#t=158 !!! Somebody said spooky o_O?


If you want to know more about this research check out :

High-Speed Robot Hand Demonstrates Dexterity and Skillful Manipulation | Hizook.

A closer look at Project Natal on Pixelsumo

Interesting article about Microsoft’s Project Natal via Pixelsumo.


“Microsoft bought 3DV Systems. I instantly thought they were going to use this camera and their own sdk after seeing the demos, however Eurogamer reports:
“Aaron Greenberg was even more direct. Asked whether Natal was derived from 3DV technology, he told Eurogamer: ‘No, we built this in house.’”

“Kim admitted ‘it’s a combination of partners and our own software’,…’ ”

There are rumors that Microsoft is working with & licensing patents from PrimeSense, another Israeli company like 3DV…

Full Article here : http://www.pixelsumo.com/post/project-natal