Wednesday
Sep 14, 2011

Starfucker Music Video - Bury Us Alive

I helped my friend Josh Cox with a shot or two on his debut music video.  The song is called "Bury Us Alive" by STRFKR.  Thiago Costa did the massive particle simulations at the end.

Monday
Aug 22, 2011

Sellwood Bridge Collapse

I took some RED camera footage of the Sellwood Bridge from the east side, and I removed most of the bridge so I could have it collapse, triggered by an initial explosion at the concrete pylon.  Here are a couple of the tests.

First test where everything just falls all at once.

Here I've now got things timed out, but I need to remove the bounciness of the bridge and make it fall apart more.

I worked on the sim more and started working with FumeFX to see what I'd get.  A good start, but much more work is needed.

Thursday
Aug 18, 2011

The Ant Attack!

Something I was messing around with.  Have to thank my friend Austin for the animation.

Wednesday
Aug 10, 2011

SIGGRAPH 2011

SIGGRAPH was in Vancouver, BC this year, so fellow ex-Autodesk employee Jon Bell and I decided to drive up and check out the exhibition hall.

I think the highlight of the show for me was seeing the guys at Thinkbox Software.  They weren't on the show floor, but they had a suite at a hotel next door and they were giving demos of Krakatoa, Frost and more.

A few snapshots from the show floor at Siggraph.

 

Friday
Jul 22, 2011

Stretchy Bones

Someone asked me the other day how to make a stretchy bone in 3dsmax.  I tried to explain it quickly and made it overly confusing.  I then stopped and made a simple scene to explain the concept, so I thought I'd share that with all of you.

Starting with a fresh scene, make a bone and finish it with an end bone. The End Bone: The end bone confuses some people.  I was told by the Autodesk engineering team that to have a bone, you need a start AND an end.  Otherwise, you don't know where that bone ends.  Even Maya's joints work the same way: each bone is made up of two joints.  This works great for making a bone stretchy, since the distance between the 2 joints will be used to make the bone stretch and squash.

Add 2 point helpers. Using your Align tool, align one to each of the bones. One will simply act as a root for the whole system, the other as the animated position for the stretching.  It's good practice to make a root for everything you do.  Link the first bone to this point helper.  It's always good to do this; otherwise your system is parented to the world, which cannot be moved.  Turn on Auto Key and animate the other helper so that we can test our stretchy bone as we go.

Now, Position Constrain the end bone to the moving helper. At this point the end bone should be following the point helper.  The root bone is not doing anything though.  To fix this, add a LookAt Constraint to the root bone, and have it look at the moving point helper. This gives it an up axis that you can control, also making it act like a simple IK system.

Finally, open up the Bone Tools dialog, scroll all the way down to the Object Properties rollout, and uncheck Freeze Length. If you want, you can set the squash option so the bone appears to deform with the stretch.  Squashy bones work in conjunction with the Skin modifier and will help simulate effects like flexing muscles.
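If the mechanics are still fuzzy, here's the underlying math in a short Python sketch.  The function name and the volume-preserving squash exponent are my own illustration of the concept, not max's exact internals:

```python
import math

def stretch_and_squash(rest_length, joint_a, joint_b):
    # Stretch is driven by the distance between the two joints,
    # exactly the two-joint concept above.  Squash keeps rough volume
    # by thinning the bone as it stretches (what the squash option does).
    d = math.dist(joint_a, joint_b)
    stretch = d / rest_length          # >1 stretches, <1 compresses
    squash = 1.0 / math.sqrt(stretch)  # scale for the two side axes
    return stretch, squash

# A bone built 10 units long, with the end helper animated out to 15:
print(stretch_and_squash(10.0, (0, 0, 0), (15, 0, 0)))  # (1.5, ~0.82)
```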

 

Thursday
Jul 14, 2011

Search Engine Queries Summary

I looked at what people are searching for when they end up at my blog, and I found one of today's top search terms to be insightful.

I think it was this guy I work with.  I could see him typing that in his browser.

Thursday
Jul 7, 2011

PhysX Dynamics in 3dsmax

So, Autodesk throws another plugin into the 3dsmax package (as of 3dsmax 2011 Subscription).  Let's understand it, rant about it and move on...

This is what they do.  Don't be mad at Autodesk.  They just buy things now.  Then again, how could you not be a little upset?  I used to be there, working and talking with programmers to see how we could make a great toolset. But making original tools is expensive. Design, specs, testing, documentation; it tallies up quick. If we just keep buying these other people's plugins, it'll make for a better tool in the end? Right? No.  Wait, what they really mean is, "if we keep buying these plugins, people will think they are real features and buy this product," all so the shareholders make money.  Wait. People don't really buy it because of those transparent attempts at features, do they?  No, they buy it to stay current.  I want to be current with things like OpenEXR, HDR files, FBX compatibility, and maybe, if we're lucky, some breakthrough in texture mapping.  We mainly upgrade just not to be left behind.

I'm all for them buying plugins and calling them features.  Max would still be cloth-less if it weren't for buying plugins, and I have to say, that cloth is still very usable in production. Caching cloth to point caches helps too, and it has a pretty good basic toolset.  Hair and Fur, however (which originated as Shave and a Haircut), was a great idea, but the lack of a max-like user interface just killed it.  That, along with a separate undo system and a hit-or-miss renderer, left the market open to newer tools like Hair Farm, which is integrated very well.

Getting back on point, PhysX is a bit different since it was developed by NVIDIA.  I can only speculate why they integrated it; probably because a developer over there thought it might be cool.  After that, Autodesk thought it only natural to push this dynamics technology on max users, just like they did with Havok many years back. PhysX is everywhere, and I'm sure it's part of a plan to strong-arm the video card/game market.  Don't get me wrong, I'm OK with all this. Matter of fact, this little physics demonstration can be quite useful. (And I still own ADSK stock.)

From the results I've seen of PhysX already in Particle Flow Box#2 and Rayfire, it looks pretty usable.  (After buying a copy of Rayfire, we were told to download this plugin from NVIDIA's developer site to use it with Rayfire!)  Many of you are probably skeptical about using game physics for anything other than realtime, but it does have its place.  I'm only writing this post because I recently needed to drop a shipping container and have all 4 walls slam down, and PhysX worked for me like a charm.

The 3dsmax PhysX Workflow

Let's just spell out the intended workflow of this plugin.  We have a scene we want to simulate dynamics for.  We take a set of objects, set up some dynamics properties and constraints, and run a simulation. Once we have what we like, we're done. Right?  The part many developers forget to think about is working with clients.  The client may like 2 different simulations many file versions apart.  You might be asked to "split the difference" and other tricky requests. I use point cache modifiers for this with cloth, but for objects? (Point Cache Space Warp modifier maybe?)  Maybe the animation layer system would work for creating different "takes" of the shot, but I ain't got time to test out that crap. Someone let me know if you can save dynamics animation to animation layers.

The Toolbar

PhysX is pretty straightforward.  You use the toolbar buttons to assist you in adding constraints and property modifiers to your objects. As far as I can see, dynamics can only be turned on at a certain frame.  There doesn't appear to be a way to have an object stand still until it gets hit, which was my first instinct of how it should work.

Constraints

Constraints are powerful.  Make 2 boxes with a constraint and start messing with the parameters.  You will quickly find out that the coordinate space of your boxes is important.  And with max, that coordinate system is based on which viewport you created the object in. (Make a dummy in each of the 4 viewports and compare the local Z directions.)  I always try to make "rig type" objects in the perspective view since it matches the world coordinate system.  Objects built in the Left and Top viewports will be rotated 90 degrees relative to each other. (The first 2 mouse inputs give you X and Y, the last gives you the Z.)  All I can say is to take care and think through the constraints and their coordinate systems.  The limits also work from the coordinate system's point of view, so don't expect the transform gizmo arrow to stay within the constraint's pie graph in the viewport.  It takes a little fiddling to get the right axis limited.  Watch out for mirrored objects too.  The axes might appear flipped.

The Timeline Issue

This is where a little design intervention could go a long way.  The timeline in PhysX is separate from the 3dsmax animation timeline.  I'm sure there is a reason, but it throws off the common max user.  It makes the tool feel "broken" even if it's acting as intended.  I always hit the "back to start" button on the max timeline, and then have to do it again on the PhysX toolbar.

Dropping a Shipping Container

So I had an opportunity to use this just last month on a Canadian lottery spot.  It needed a shipping container to be dropped on a guy's lawn.  With its limited set of features, PhysX was still a good choice for this type of effect.   It makes keyframes. You can slide them around, re-time them and even save them off with 3dsmax's Save Animation tool.   Getting the scale right was the first thing to do.  A small box dropped on the floor moves faster than a gigantic box landing on a lawn.  I linked all my objects to a dummy and scaled it until I got a good fall within the frame count I was given.  Dropping a box that's flat with the ground is boring and very symmetrical.  I tilted the shipping container in all three axes to let its corner hit first and got a great result.  The sim I used in the end wasn't very different from the first one I did. And of course I did this on simple boxes and linked the real mesh to them.
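The scale trick works because fall time only grows with the square root of the drop height, so a big object falling a big distance reads as slow and heavy.  A quick sanity check in Python (the heights and frame rate are example numbers of mine, not from the job):

```python
import math

G = 9.81  # gravity, m/s^2

def fall_time(height_m):
    # Free fall from rest: h = 0.5 * g * t^2  ->  t = sqrt(2h / g)
    return math.sqrt(2.0 * height_m / G)

# A small prop dropped 1 m vs. a shipping container dropped 10 m:
for label, h in [("small box, 1 m", 1.0), ("container, 10 m", 10.0)]:
    t = fall_time(h)
    print(f"{label}: {t:.2f} s = {t * 24:.0f} frames at 24 fps")
```

The container needs roughly three times the frames, which is why scaling the dummy until the fall fits the frame count works.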

I didn't get a chance to finish the shipping container drop for the commercial.  It was yet another job where not enough time was given to finish the effects the way I really wanted them to look. When I started writing this post, I made another version of the dropping effect, but this time I tried to finish the job by denting the shipping container and adding a "flex" effect to make it look like real metal hitting and bending.  That version will see the light of day only through this post.

Monday
Jun 27, 2011

www.visualart.be - Particle Flow Water Examples

Just stumbled across a cool blog with some great Particle Flow water examples.  Check out his blog here:

http://www.visualart.be/?p=324

Tuesday
Jun 14, 2011

What the LUT?

Since we're all now working in a linear workflow with RAW data (right?), you might start hearing more about something called a LUT.

A LUT is a color profile that can be applied to footage so you see it as it should appear in your final result.  Think of a LUT as a "lens" which you look at your work through.  LUT stands for Look Up Table. There are 2 types of LUTs: 1D LUTs and 3D LUTs.  Since we don't want the delivery color space baked into the file, we use a LUT to see the image as we will deliver it.  It's important for 3D rendering and compositing to see the final look of the footage as it is intended to be seen, with the LUT on top.

A 1D LUT can be thought of like the Curves tool in Photoshop.  It can affect the brightness and contrast, but doesn't really shift colors around; it bends the luminance information of an image.  sRGB, or gamma 2.2, is a 1D LUT.   Rec. 709 is another 1D LUT and is very similar to sRGB, but the blacks are lifted slightly. Cineon files also use a 1D LUT to get the logarithmic data to something viewable.
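To make that concrete, a 1D LUT really is just a sampled curve applied per channel.  A minimal Python sketch of the gamma 2.2 case (a real .lut file is the same idea with its own sample count):

```python
import numpy as np

# A 1D LUT: one curve, sampled at 256 points, applied to each channel.
lut_1d = np.linspace(0.0, 1.0, 256) ** (1.0 / 2.2)   # gamma 2.2 viewing curve

def apply_1d_lut(image, lut):
    # Each channel looks up its own value independently; colors never
    # mix, which is why a 1D LUT can't shift hues around.
    idx = (np.clip(image, 0.0, 1.0) * (len(lut) - 1)).astype(int)
    return lut[idx]

linear = np.array([0.18, 0.18, 0.18])   # 18% gray in linear light
print(apply_1d_lut(linear, lut_1d))     # -> ~0.45, what the monitor shows
```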

Now for the 3D LUT.  Imagine you have an image, and you plot the colors of that image in 3D space using red for X, green for Y and blue for Z. Now imagine that you apply corrections to shift those 3D points in space, affecting red, green and blue differently.  If you took the original colors, compared them to the shifted colors, and got the difference, that would be a 3D LUT. Of course it doesn't capture every color, since that would be impossible.  Instead, it captures a "look up table" of those values so it can be applied to any other image and give similar results. The data in between is interpolated.
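And a sketch of the 3D case: a small grid of replacement colors, indexed by all three channels at once.  The "grade" here is made up just to show the mechanics, and real applications interpolate between the eight surrounding grid points rather than snapping to the nearest one:

```python
import numpy as np

N = 17  # a common 3D LUT grid size, e.g. a 17x17x17 .cube file

# Build a toy "look": slight desaturation plus a warm shift.
r, g, b = np.meshgrid(*[np.linspace(0, 1, N)] * 3, indexing="ij")
lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
lut_3d = np.stack([0.8 * r + 0.2 * lum + 0.02,   # push red up a touch
                   0.8 * g + 0.2 * lum,
                   0.8 * b + 0.2 * lum - 0.02],  # pull blue down a touch
                  axis=-1)

def apply_3d_lut(rgb, lut):
    # All three channels index the table together, so reds, greens and
    # blues can be corrected differently -- unlike a 1D LUT.
    i, j, k = np.round(np.clip(rgb, 0, 1) * (N - 1)).astype(int)
    return lut[i, j, k]

print(apply_3d_lut(np.array([0.5, 0.3, 0.2]), lut_3d))
```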

 

3dsmax can load a 1D LUT into the viewer so that renders and the material editor can be viewed properly.  The file format is a .lut file, an Autodesk format that comes from their high-end finishing systems like Smoke and Flame.  Vray can load a 3D LUT into its VFB (Vray frame buffer), and it supports .cube files.  Bring up the color correction controls by pressing the leftmost button in the Vray VFB.  From there you can load a .cube file.  Make sure to turn off the sRGB button and turn on the LUT button to see the correct results.

LUT Examples

Here are a few examples of LUTs and what each does to the image.  The left is from Nuke and the right is from Vray, just to show that the results are the same.

RAW R3D

First let's start with a RAW linear image to remind everyone what it looks like before any adjustment. This image is from the RED camera in its original R3D format.

sRGB

This image has the typical sRGB 2.2 gamma applied.

AlexaV3_EI0800_LogC2Video_Rec709_LL

This LUT can be downloaded from the Arri Digital web site and is designed for use with the Arri Alexa camera.

http://www.arridigital.com/technical/luts

HD Vid to Print

I downloaded this LUT from the Light Illusion web site.

http://www.lightillusion.com/freeluts.htm

Log to Print

Another LUT I grabbed from the Light Illusion web site.

http://www.lightillusion.com/freeluts.htm

The hard thing about some of these is knowing what the input color space was when the LUT was made.  The last one from Light Illusion might not be using the proper input color space, since the RED camera doesn't give you a Cineon source.  I changed the input source in Nuke until I got something I liked.

When I write a blog post, I usually know a lot about the topic.  This topic is different.  Since I started researching LUTs I've gained a lot of knowledge, but I still have more to learn.  First of all, finding LUTs is pretty hard.  There are also a lot of LUT formats out there, and many of them don't work in 3dsmax or Nuke even though the file extension is supported.  I feel like I'm writing this prematurely, but I've learned enough to pass on a working knowledge of LUTs, and that's what this blog is here for.

Friday
Jun 3, 2011

Respect the Process

Every once in a while, I end up on a job where we sort of "re-invent" the wheel. People ask that you move forward without approved character designs and try to "figure out" designs in 3D space. Then they want animation tests before the rig is even developed? Fuck that. I say Respect the Process. There is a reason for doing things in an organized fashion and if you follow the process, your job will run smoother and your work will look better.

Design and Plan

The most important stage. Not that everything needs to be figured out ahead of time, but the animatic and edit should be 95% approved before moving forward. You have to know what your characters are gonna do. Also, characters should be fully explored and agreed on by director, agency and client.

Model and Rig

Modeling, texturing and rigging should come next. Animators should be brought in for discussions, testing and rig development.

Animate and simulate

Animators should be working on animation while the TD is either dealing with simulations (water, fire, particles) or working on the lighting setup. A TD can get a lot done while the client and director are working on animation. Get your lighting done in this stage.

Render and Composite

Beg for time to render and use it to render again and again. Flush out all the glitches and make sure your lighting looks great. Start your render settings low, and turn them up slowly to get final results. Work closely with the compositors to make sure they are treating your images correctly, and give them what they need to finesse the end result in post.

Final Color Correct and Delivery
Don't bail on it now. Make sure you see your work before it's posted. In the end, it's your responsibility to make sure it looks good as it goes out the door. At this point you might even want to create more passes for someone to use during the final color correct.

We go through these stages for a reason. Walking through these stages makes the whole process smoother, which in turn allows you to make your work look better. Isn't great looking CG what everyone is looking for?

Friday
May 13, 2011

Realistic Fruity Pebbles Treats

What does it take to make a computer-generated Fruity Pebbles Treats bar?  I'll show you.  Last month I was tasked with creating two computer-generated snack bars that would be pulled apart by a stop-motion animated puppet.

The first thing I did on this job was to find out exactly what I was going for.  I was told it has to look like the real bar, but even better, and it has to look chewy and delicious.  Next, I decided where it was going to break.  From there, I built a bar that had two parts that matched up nicely but unevenly.  I then skinned the bar to two helpers so that I could animate it pulling apart.  This worked fine, but it broke like concrete, not like something chewy and delicious. To fix this, I added a couple of morph shapes so that when it broke, I could have it pull back slowly.  Here's an animation test of that.

The Goo

To do the goo that pulls apart between the two halves, I modeled some stretched marshmallow and skinned it up between 3 point helpers: one on each end, and a middle one that was constrained between the outer two.  This middle helper had a list controller, allowing me to animate it on top of the constraint.
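The layered setup is easier to see in numbers.  A sketch of the idea in Python (the names are mine; in max, the "offset" is the extra position controller sitting in the list on top of the constraint):

```python
import numpy as np

def goo_mid_helper(end_a, end_b, offset=(0, 0, 0), weight=0.5):
    # The position constraint holds the helper between the two ends;
    # the list controller lets you animate an offset on top of that.
    constrained = (1 - weight) * np.asarray(end_a, float) \
                  + weight * np.asarray(end_b, float)
    return constrained + np.asarray(offset, float)

# Ends pulling apart, with a hand-animated sag in the middle:
print(goo_mid_helper((0, 0, 0), (10, 0, 0), offset=(0, 0, -1.5)))
# -> [ 5.   0.  -1.5]
```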

 

The Cream on Top

Another important part of this product is the cream on top.  To do this, I used a trick I came up with many years ago.  Follow closely.  Create a box and convert it to an editable poly object.  Delete all the faces.  Turn on snapping and make sure face snapping is on.  Now, under the vertex sub-object level, start placing vertices on top of the bar.  This will give you verts exactly on top of the bar.  Now, before going any farther, create a BlobMesh object and pick your faceless object. You can dial in the blobby size to something that works for you.  Go back to the faceless mesh and build more verts in real time, and see the cream generate as you go.  I originally used this technique for splattering mud on vehicles, but it works for this too.

Rendering

For rendering, I set up a few passes with RPManager. I used an SSS shader for most of the edible parts of the bar. The wrapper was mapped with a normal map to give it a crinkled look.  I also had an extra layer of marshmallow on top that was added to the final look.

I think the trickiest part of this that I don't cover is getting the look of the product.  Making things look realistic is one thing, but making realistic things look tasty is another.

Monday
May 9, 2011

Fred Ruff 2011 Showreel

I just updated my showreel and posted it to Vimeo.  Check it out.

Monday
Mar 28, 2011

"Grimm" Pilot Shoots in Portland

I can't say much more than that they are shooting Grimm in Portland, and a couple of Portland studios are doing the effects for it.  Here are a couple of articles on the pilot getting greenlit and some of the cast.

http://www.nwcn.com/news/oregon?fId=118546069&fPath=%2Fhome&fDomain=10202

http://www.tvline.com/2011/02/grimm-news-pilot-adds-prison-break-alum/

http://scifi.about.com/b/2011/03/04/cast-added-to-grimm-pilot.htm

Wednesday
Mar 16, 2011

Working with Mirrored Ball HDR Images - Part 2

I already talked about creating chrome balls and panoramic HDR images; now let's talk about how to use them in max. This article covers just the mirrored ball technique.  We'll talk about panoramic images in a follow-up article. If you're looking to buy a gazing ball, click here.

Step 1 - Environment Map - Background

Let's set up the background image so we can see our render, pre-composited, on the background.  I do this with a background environment map (press 8), and then set the viewport background (Alt-B) to use the environment for the viewport display.  Now when you render, you'll see the same image in the render as in the viewport background.

Throw a sphere into the scene and apply a fresh Vray material to it.  Add a camera and set up Vray as the renderer.  Don't turn GI on yet.  Turn off default lighting in the renderer so you can do a test render.  With no lights and default lights off, your render should be black.   Make sure it is.  If you now turn on GI and render, your sphere will blend right into the background like some Predator effect.  This is because Vray assumes the environment/background slot is what you are lighting with.  Since the background is in screen space, it applies a very artificial effect to the scene: your background image is acting as the GI light, but it's also projected in screen space, so the results are strange. If you do use GI, you'll have to turn on the GI environment override and use the color swatch or the texture slot to add skylight GI to the scene.  For now, leave it off.  It's less confusing. (I always end up using the override anyway, so you might as well turn it on and set the multiplier to 0 for now.)

Step 2 - Setup the HDR Light

Now that we have a background with no light coming into the scene, let's set up your light.  Make a Vray light and set it to be a dome light.  Set the multiplier to 1.0 and turn on spherical (full dome).  Drop a VrayHDRI map into the image slot. Set the resolution to whatever your chrome ball is set to; in our case, 1024.  Drag that map over to the material editor as an instance so you can work with it some more.  Browse for your HDR chrome ball and choose "Mirrored Ball" in the mapping type radio buttons.  Do a test render and you'll start getting something. But is it right? Probably not. It might look something like this.

Step 3 - Deal with the Grain and Gamma

So if your image looks like the one above, the first thing you'll notice is the grain.  Some of this is because my gamma is set to 2.2.  When using 2.2 gamma, your blacks will be boosted and appear grainy. Vray calculates some things by contrast, and when you gamma correct, you change the overall contrast. You could try turning up all your Vray settings to fix this, but instead, set the color mapping gamma to 2.2 and turn on "Don't Affect Colors (Adaptation Only)". This "adapts" Vray and tells it that all the anti-aliasing and DMC noise settings are now working in a gamma 2.2 space. More on this in this article.  If you render again, a lot of the grain will go away.  But not all of it.  The rest is due to the HDR dome light we created.  By default, Vray lights are set to only 8 samples.  This works for smaller area lights, but when you spread all that over an entire dome around the scene, you'll need more samples.  You'll probably have to go up to 32 or even 64 samples, depending on your scene. I leave it at 8 for speed and tune that later for final rendering.
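If you want to see why the grain lives in the blacks, here's a toy Python demonstration (the noise level is arbitrary): the same linear-space render noise gets stretched much more by the 2.2 gamma near black than in the midtones, which is exactly what the adaptation option compensates for.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.005, 100_000)  # identical render noise, linear space

for base in (0.02, 0.5):                 # a near-black pixel vs. a midtone
    seen = np.clip(base + noise, 0, 1) ** (1 / 2.2)  # after display gamma
    print(f"linear {base}: visible noise stddev = {seen.std():.4f}")
# The dark pixel shows roughly 6x the noise of the midtone.
```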

Step 4 - Make a Virtual Mirrored Ball

The only way to really know if your HDRI is working right is to make your own mirrored ball.  Throw a fully reflective material on that sphere and do a test render.  Depending on where your camera is located, your mirrored ball will look different, but if you're anything like me, your camera is more often placed to the south, pointing north, like in this image.  I think I do this because I like my camera pointing from the "Front" direction.

Hey, who's that handsome fella? Wait, shouldn't I be in the ball, not the background?  Wait, we're looking at the BACK of the mirrored ball!  For some odd reason, the default orientation for a mirrored ball is facing north; it assumes the camera is pointing from the back view. (Seems backwards to my logic, but maybe there is a reason?) So... to fix this, we can just set the Horiz. Rotation to 180 degrees.


That's the right way, with the streaks in the background, and the chrome ball looks like the original chrome ball with me reflected in it.  We have now digitally captured that lighting scenario and re-created it virtually. (Sorta.)  I don't think that's all, though.  If you can make it look better, do it.  Go farther. Although we captured the basic lighting, we don't get the dappled light through the trees.  We don't get a darkening below the ball, as if the ground were there.  We're only halfway there. This is your starting point for your ambient light.

Let's set up a plane for catching shadows.  Make a plane, set its Vray properties to matte with an alpha contribution of -1, and check the shadow options.  Put that material back to basic gray; you'll never see light on a chrome ball anyway. Now you should get a render with a soft shadow under the ball.


Let's make a key light, and let's make it based on our HDRI!  If you've read up to this point, good for you.  If you feel you knew all this, great!  Here's the little tidbit you might want to take away from all this.   Now that we have our environment, let's break it up and work with it.  I take the mirrored ball image and clip out a section to use as my sun.  I then put that on a light as a projector map and turn it up to make it look like the sun poking through the trees.  The result looks something like this.


Tuesday
Mar 15, 2011

A Visit to Pixar


I visited San Francisco this weekend and got a chance to meet up with an old friend at Pixar.

Pixar is in Emeryville, on the east side of the San Francisco Bay Area.  I used to live in the Bay Area before I moved to Portland back in 2005.  Although I'm originally from New York, this was my old stomping ground for many years while working for Autodesk.  It was good to take a long weekend and go back to visit the area.  I got to eat in some of my favorite restaurants and visit some old friends.

It's been about 6 years since I visited the Pixar campus.  It's really funny pulling up to the campus, since it's tucked away in Emeryville, right on the edge of Oakland.  It's not exactly the best part of town.  I saw lots of new construction going on and confirmed that they have a second building going up that might be populated very soon.  Apparently, the first building filled up pretty quickly and they had many people working off the main campus.

As you walk in, you'll notice a large amphitheater in front of the building.  They will often have larger company meetings and parties out here in the nice weather. They also have a new Luxo Jr statue out front.  That was a nice touch too.  (New to me, they could have put it up any time since I was there last.)



The lobby has life-size replicas of Sulley and Mike from Monsters Inc., life-size models of Luigi and Guido from Cars, and a life-size Lego statue of Buzz and Woody. (I bet you could use some voxel 3D model generator for making a Lego blueprint.  Knowing that, I'm not as impressed by large-scale Lego sculptures.  I just feel sorry for whoever has to put them together.)

The main area consists of the cafeteria, the gift shop and a game room.  The doors at the other end of the main room lead to the main screening room.  I didn't go in there this time, but I did get to see Spirited Away there, introduced by John Lasseter after he got back from visiting Hayao Miyazaki.  He said he hand-carried the print on the plane himself, but I dunno, maybe he's just a good storyteller.  ;)

Lunch at Pixar is awesome. Back when I lived in SF I jumped at any chance to go have lunch over there. The food is great and it's real inexpensive. They always have daily specials and they make pasta and pizzas to order.  

They also had a great display of the art of Toy Story 3 on the upper east side of the building: everything from concept art to model sheets, sculptures and lighting designs.  I couldn't take any photos, rightly so, but it was very inspiring to see.

Pixar has a really great culture.  You can feel it when you're there.  They pay a reasonable wage and provide healthcare and retirement assistance for their employees.  They provide training through their Pixar University program, and encourage people to learn much more than just their position in the company.  I hope that the day I run my own studio, I remember the things that make a great culture, and do my best to create my own.  John Lasseter and Ed Catmull are still the leading force at Pixar and it shows.  They inspire their employees to do great work.  I hope I can do the same with the people that work with me, now and in the future.

Monday
Mar 14, 2011

Making Mirrored Ball and Panoramic HDR Images - Part 1

If you're looking for a cheap mirrored ball, you can buy an 8" Stainless Steel Gazing Ball here at Amazon.com for about $22. They also have a smaller 6" Silver Gazing Globe for about $14. It can easily be drilled and mounted to a pipe or dowel so that it can be clamped to a C-stand or a tripod.

First, I'm no Paul Debevec.  I'm not even smart enough to be his intern.  But I thought I'd share my technique for making HDRIs.  My point being, I might not be getting this all 100% correct, but I like my results.  Please don't get all over my shit if I say something a little off.  Matter of fact, if you find a better way to do something, please let me know.  I write this to share with the CG community, and I would hope that people will share back with constructive criticism.

First, let's start by clearing something up.  HDRIs are like ATMs.   You don't have an ATM machine.  That would be an Automatic Teller Machine Machine (ATMM?).  See, there's two machines in that sentence now.  The same is true of an HDRI.  You can't have an HDRI image.  That would be a High Dynamic Range Image Image.   But you can have an HDR image.  Or many HDRIs. If you're gonna talk like a geek, at least respect the acronym.

I use the mirrored ball technique and a panoramic technique for capturing HDRIs. Which one really depends on the situation and the equipment you have. Mirrored balls are a great HDR tool, but a panoramic HDR is that much better since it captures all 360 degrees of the lighting.  However, panoramic lenses and mounts aren't cheap, and mirrored balls are very cheap.

Shooting Mirrored Ball Images

I've been shooting mirrored balls for many years now.  Mirrored balls work pretty damn well for capturing much of a scene's lighting.  Many times on a set, all the lights are coming from behind the camera, and the mirrored ball will capture these lights nicely. Where a mirrored ball starts to break down is when lighting is coming from behind the subject.  It will capture some lights behind the ball, but since that light only shows up in the very deformed edges of the mirrored ball, it's not gonna be that accurate for doing good rim lighting.

However, mirrored balls are cheap and easily available.  You can get a garden gazing ball, or just a chrome Christmas ornament, and start taking HDR images today.  Our smallest mirrored ball is one of those Chinese meditation balls with the bells in it. (You can get all zen playing with your HDRI balls.)

  • The Size of Your Balls

My balls are different sizes (isn't everyone's?), and I have 3.  I use different sizes depending on the situation.  I have a 12" ball for large-set live action shoots, and a 1.5" ball for very small miniatures. With small stop motion sets, there isn't a lot of room to work; the small 1.5" ball works great for that reason.  I'll usually clamp it to a C-stand and hang it out over the set where the character will be.  I also have a 4" ball that I can use for larger miniatures, or smaller live sets.

  • Taking the Photos

With everything set up like we've mentioned, I like to find the darkest exposure I can and expose it all the way up to being blown out.  I often skip 2-3 exposure brackets in between shots to keep the number of files down.  You can make an HDRI from only 3 exposures, but I like to get anywhere from 5-8 different exposures.
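For the curious, the merge itself boils down to a weighted average of each bracket's radiance estimate.  A stripped-down sketch in Python (it assumes the inputs are already linearized; real tools like Photoshop's Merge to HDR also recover the camera response curve):

```python
import numpy as np

def merge_exposures(images, shutter_times):
    acc = np.zeros_like(images[0])
    total = np.zeros_like(images[0])
    for img, t in zip(images, shutter_times):
        # Trust well-exposed pixels; distrust the clipped ends.
        w = np.clip(1.0 - np.abs(img - 0.5) * 2.0, 0.01, 1.0)
        acc += w * (img / t)    # each bracket's estimate of scene radiance
        total += w
    return acc / total          # the HDR radiance map

# A pixel seeing the same radiance across three brackets two stops apart:
shots = [np.full((1,), v) for v in (0.53, 0.13, 0.03)]
print(merge_exposures(shots, [1 / 15, 1 / 60, 1 / 250]))
# -> ~7.9: all three brackets agree on the recovered radiance
```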

  • Chrome Ball Location

When shooting a mirrored ball, shoot the ball as close to where the CG subject will be.  Don't freak out if you can't get exactly where the CG will be, but try to get it as close as you can. If the CG subject will move around a lot in the shot, then place the ball in an average position.

  • Camera Placement

Shoot the ball from the movie plate camera angle, meaning set up your mirrored ball camera where the original plate camera was.  This way, you'll always know that the HDRI will align to the back plate.  Usually, on set, they will start moving the main camera out of the way as soon as the shot is done.  I've learned to ask for the lights to be left on for 3 minutes. (5 minutes sounds too long on a live set; I love asking for 3 minutes since it sounds like less.) Take your mirrored ball photos right after the shot is done.  Make nice with the director of photography on set.  Tell him how great it looks and that you really hope to capture all the lighting that's been done.

  • Don't Worry About...

Don't get caught up with little scratches on your ball. They won't show up in your final image.  Also don't worry about your own reflection being in the ball. You give off so little bounce light that you won't even register in the final scene.  (Unless you're blocking a major light on the set from the ball.)

  • File Format

We use a Nikon D90 as our HDR camera. It saves raw NEF files and JPG files simultaneously, and I use the JPGs sort of as thumbnail images of the raw files.  I'm on the fence about using raw NEF files over the JPGs, since you end up blending 6-8 of them together.  I wonder if it really matters to use the raw files, but I always use them just in case it does.

  • Processing

To process your mirrored ball HDR image, you can use a bunch of different programs, but I just stick with any recent version of Photoshop.  I'm not holding your hand on this step.  Photoshop and Bridge have an automated tool for processing files to make an HDR.  Follow those procedures and you'll be fine. You could also use HDR Shop 1.0 to make your HDR images.  It's still out there for free and is a very useful tool.  I talk about it later when making the panoramic HDRIs.

Shooting Panoramic Images

The other technique is the panoramic HDRI. This is a little more involved and requires some equipment.  With this method, I shoot 360 degrees from the CG subject's location with a fish-eye lens, and use that to get a cylindrical panoramic view of the scene.  With this setup you get a more complete picture of the lighting, since you can now see 360 degrees without major distortions.   However, it's not practical to put a big panoramic swivel head on a miniature set; I usually use the small meditation balls for that.  Panoramic HDRIs are better for live action locations where you have the room for the tripod and spherical mount.  To make a full panoramic image you'll need two things: a fisheye lens and a swivel mount.

  • The Lens

First, you'll need something that can take a very wide angle image. For this I use the Sigma 4.5mm f/2.8 EX DC HSM Circular Fisheye Lens for Nikon Digital SLR Cameras ($899).  Images taken with this lens will be small and round and capture about 180 degrees. (A cheaper option might be something like a converter fish-eye lens, but you'll have to do your own research on those before buying one.)

 

  • The Tripod Mount

You'll need a way to take several of these images in a circle, pivoted about the lens.  We want to pivot around the lens so that there will be minimal parallax distortion.  With a very wide lens, moving the slightest bit can make the images very different and keep them from aligning for our HDRI later.  To do this, I bought a Manfrotto 303SPH QTVR Spherical Panoramic Pro Head (Black) that we can mount to any tripod. This head can swivel almost 360 degrees.  A step down from this is the Manfrotto 303PLUS QTVR Precision Panoramic Pro Head (Black), which doesn't allow 360 degrees of swivel, but with the 4.5mm fisheye lens I found you don't really need to tilt up or down to get the sky and ground; you'll get them by just panning the head around.

Once you've got all that, it's time to shoot your panoramic location.  You'll want to set up the head so that the center of the lens is floating right over the center of the mount.  Now, in theory this lens can take a 180 degree image, so you only need front and back, right? Wrong. You'll want some overlap, so take 3 sets of images for our panorama, each 120 degrees apart: 0, 120, and 240. That will give us the coverage we need to stitch up the image later.

  • Alignment

Just like with the mirrored ball, I like to shoot the image back at the direction of the plate camera. Set up the tripod so that the 120-degree position is pointing towards the original camera position.  Then rotate back to 0 and start taking your multiple exposures.  Once 0 is taken, rotate to 120, and again to 240 degrees.  When we stitch this all together, the 120 position will be in the center of the image and the seam will be at the back, where 0 and 240 blend.

  •  Don't Worry About...

People walking through your images.  Especially on a live action set, there is no time to wait for perfect conditions.  By the time you blend all your exposures together, that person will disappear. Check out the Forest_Ball.hdr image: you can see me taking the photos, and a ghost in a yellow shirt on the right side.

Processing The Panoramic Images

To build the panorama from your images, you'll need to go through three steps. 1. Make the HDR images (just like for the mirrored ball).  2. Transform the round fish-eye images to square latitude/longitude images.  3. Stitch it all back together into a cylindrical panoramic image.

  • Merge to HDR

Like we talked about before, Adobe Bridge can easily take a set of different exposures and make an HDR out of them. Grab a set, and go to the menu under Tools/Photoshop/Merge to HDR.  Do this for each of your 0, 120 and 240 degree images and save them out.

  • Transform to Lat/Long

Photoshop doesn't have any tool for distorting a fish-eye image to a Lat/Long image.  There are some programs that I investigated, but they all cost money.  I like free.  So to do this, grab a copy of HDR Shop 1.0.  Open up each image inside HDR Shop and go to the menu Image/Panorama/Panoramic Transformations.  Set the source image to Mirrored Ball Closeup and the destination image to Latitude/Longitude, then set the resolution height to something close to the original height.
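Under the hood, that panoramic transformation just maps every output latitude/longitude pixel back to a spot on the lens's image disk.  An idealized Python sketch for a perfect 180-degree equidistant fisheye (HDR Shop's own mapping and real lenses differ in the details):

```python
import numpy as np

def latlong_to_fisheye(width, height, fov=np.pi):
    # Direction for every lat/long pixel, then back to a disk coordinate.
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)        # +y is the lens axis here
    y = np.cos(lat) * np.cos(lon)
    z = np.sin(lat)
    theta = np.arccos(np.clip(y, -1, 1))  # angle off the lens axis
    r = theta / (fov / 2)                 # equidistant fisheye: r is linear in theta
    phi = np.arctan2(z, x)
    # (u, v) in the unit disk; r > 1 means "behind the lens", i.e. the
    # pixel has to come from one of the other two bracketed directions.
    return r * np.cos(phi), r * np.sin(phi)

u, v = latlong_to_fisheye(360, 180)  # a remap grid to sample the fisheye with
```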

  •  Stitch It Together Using Photomerge

OK. You now have three square images that you have to stitch back together.  Go back and open the three Lat/Long images in Photoshop.  From here, you can stitch them together with File/Automate/Photomerge using the "Interactive Layout" option. The next window will place all three images into an area where you can re-arrange them how you want. Once you have something that looks OK, press OK and it will make a layered Photoshop file, with each layer having an automatically created mask.  Next, I adjusted the exposure of one of the images, and you can make changes to the masks also.  As you can see with my image on the right, when they stitched up, each one was a little lower than the next.  This tells me that my tripod was not totally level when I took my pictures.  I finalized my image by collapsing it all to one layer and rotating it a few degrees so the horizon was back to being level.  For the seam in the back, you can do a quick offset and a clone stamp, or just leave it alone.

This topic is huge and I can only cover so much in this first post.  Next week, I'll finish this off by talking about how I use HDR images within my Vray workflow and how to customize your HDR images so that you can tweak the final render to exactly what you want.  Keep in mind that HDR images are just a tool for you to make great looking CG. For now, here are two HDRIs and a background plate that I've posted in my download section.

Park_Panorama.hdr          Forest_Ball.hdr          Forest_Plate.jpg

If you're looking for a cheap mirrored ball you can get one here at Amazon.com. It's only like $7!

Tuesday
Mar 8, 2011

Thanks to Max Plugins.de for Keeping Old Plugins Alive

I have to throw a shout out to David over at MaxPlugins.de.  If you didn't know about the site already, he has an extensive list of older 3dsmax plugins that he keeps re-compiling for the latest versions.

If you've ever tried to make a talking animal, you would want to use an old plugin from Peter Watje called "Camera Map Animated".  You can use it to re-project footage onto a model, then use skin and deformers to pose the face, and render the distorted camera-mapped object out. If you try to go to Peter's old site for plugins, you'll be out of luck.  Go visit MaxPlugins.de and make a small PayPal donation for this selfless contribution to the community.

Fred

Monday
Feb 21, 2011

Video - Spring Simulations with the Flex Modifier

Hey everybody, I decided that I would try doing some video blog entries on some of the topics I've been going over.  In this first one, I'm going over the stuff I talked about with spring simulations.  I was tired when I recorded this, so cut me some slack.  I'm keeping these quick and dirty, designed to just get you going with a concept or feature, not hold your hand the whole way through.

The original article on spring simulations with Flex is posted here.

Friday
Feb 18, 2011

Pitfalls of Linear Workflow

I thought I'd follow up the Linear Workflow article with some of the pitfalls that I run into often with this workflow.  All of them can easily be avoided.

Double Gamma

If you've ever saved an EXR that was washed out, this is what I would call a double gamma'd image.  (I love the use of the word "gamma" as a verb.) You've applied gamma to the image twice. To fix this, set the output gamma to 1.0 in max's preferences.

 

Ever Heard of a Gamma .45 Workflow?

No. No such thing. That's why you shouldn't be reversing a double gamma'd image. But you've probably tried to reverse it by setting the gamma to .45, or something close to that, in Photoshop or After Effects. Actually, if you just rendered a big sequence, I'd be fine with turning the double gamma back down.  The EXR is 16-bit, so you have plenty of data to work with to fix that mistake.  But try to get it right the next time you render.  Check your output gamma in max's preferences; it should be set to 1.0 (linear).
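The arithmetic behind the washout is easy to check in Python.  The "gamma .45" move works because a gamma value of .45 is roughly 1/2.2, so the adjustment applies the 2.2 exponent and strips exactly one of the two encodes:

```python
linear = 0.18                # mid-gray in linear light

once  = linear ** (1 / 2.2)  # correct display encode       -> ~0.46
twice = once ** (1 / 2.2)    # accidental second gamma      -> ~0.70, washed out
fixed = twice ** 2.2         # what a "gamma .45" adjustment does
print(once, twice, fixed)    # fixed lands right back on ~0.46
```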

Linear Textures

All your textures applied to surfaces will get gamma applied on the other side.  With no lighting or computer graphics really involved, you can see that a texture will move as a texture through the system, get gamma'd on the other end, and look washed out. The input textures need to be linear, without gamma.  In theory, an 8-bit image that is crushed down to linear would not "un-crush" very well, since it's in 8-bit format. However, I believe max can handle this since it works in 32-bit: I'm pretty sure all images that come into 3dsmax are converted into an internal 32-bit format.  (Hence the Bitmap texture loads a texture while Noise makes a noise function texture; all of these are floating point "texture types".)  So when max loads images, it can easily handle the incoming images as sRGB inputs, turning the gamma back down before rendering them. So, set your input gamma to 2.2.  Meaning: deal with all my 2.2 textures.

Normal Maps

So all your textures are coming in now and getting crushed to be darker, and then the added final gamma pushes it all back up to what we see.  Now let's talk about that normal map you're using.  It's a map of vector info.  So with this map, it's getting pushed darker by our 2.2 input gamma, and it looks jacked the fuck up.  So, when loading that particular texture, use the bitmap load dialog to override the gamma to 1.0.  It will remember it for just this map.
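A quick sketch of why the override matters: linearizing a color texture is right, but running vector data through the same curve bends the normals.  The function and names here are my illustration, not max's actual loader:

```python
import numpy as np

INPUT_GAMMA = 2.2  # the "incoming images are sRGB" preference

def load_texture(pixels, data_map=False):
    # Color maps get linearized on load; data maps (normals,
    # displacement) must pass through untouched -- gamma 1.0.
    px = np.asarray(pixels, dtype=float)
    return px if data_map else px ** INPUT_GAMMA

flat_normal = [0.5, 0.5, 1.0]                    # encodes the vector (0, 0, 1)
print(load_texture(flat_normal))                 # ~[0.22, 0.22, 1.0] -> bent normals
print(load_texture(flat_normal, data_map=True))  # unchanged, as it should be
```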

 Displacement and Specular Maps

OK, so if my theory about data maps is correct, then all data maps will technically be wrong.  However, I'm usually cranking maps to make my spec maps, so does it really matter?  To be honest, I'd love to hear thoughts on that one. (Post to the forum.) I just know normal maps seem to show the problem the most.  Maybe it's because their RGB is XYZ vectors and the un-gamma curve just kills the effect?

Saving a JPG while Your Working

I'm often saving off versions of my render work for producers and clients, and I don't always want a linear EXR.  I work with the Vray frame buffer, and I usually remind myself by turning off the sRGB button.  If it's dark, I know that I will need to brighten it when saving.  So when saving, just use the bitmap save dialog and set the gamma to 2.2, and then you can save a JPG or whatever you want.

Tuesday
Feb 15, 2011

Save your Valentines Day with a Romantic French Film

OK, Valentine's Day was yesterday and maybe you blew it.  Maybe you bought her chocolates and ate all of them, or maybe you just picked the neighbor's flowers before walking in the door.  Either way, this will save you.  Get the movie Priceless and watch it with your girlfriend. She'll love you for it.

  1. She'll be impressed because it's a French film.
  2. Chicks get hot when reading subtitles.  I swear it's true.  I read it somewhere.
  3. It's really a good movie.  Funny and charming.
  4. The actress in it (Audrey Tautou) is hot as hell!  She played Amélie in Jean-Pierre Jeunet's Amelie (which is one of the best films of all time), so you'll enjoy it too.

Good luck, hope you score one for me.