CaseStudy: Hairy Stuff (4 Snow Bride)

Snow Bride is a commercially available set of clothes for the well-known Vicky4 character. The clothes are fur, and look like this.

Let’s analyze the parameters, and learn from them.

Structure

The hat as well as the muff have both an Underfur and a Top-fur hair growth group, that’s four in total. The other object parts have no hair groups. I found this out by selecting each part of each object and looking in panel 1, which lists the available hair groups.

Clicking [Edit Growth Group] tells me that the inner bands and the medallion on the hat are neatly excluded from the groups.

Guide hairs

Switching the preview to Flat Lined or Lit Wireframe (Ctrl+5) reveals the 3D mesh geometry, and clearly shows 1 guide hair per vertex. Or better: two guide hairs in this case as the Underfur and Top-fur groups overlap, so each vertex is in two groups.

For each hair growth group selected in panel 1, I looked at panels 2 and 3 and noted the numbers. The three Pull parameters were 0.0 in all cases, and indeed all hairs grow straight up from the surface.

                    Hat top-fur     Hat underfur    Muff top-fur    Muff underfur
Polygon size        2.5 × 1.6 ≈ 4 cm²               1.4 × 1.4 ≈ 2 cm²
Nr of polygons      ca. 250                         ca. 500
Hair length (m)     0.030           0.030           0.030           0.023
Length variance     0.000           0.000           0.015           0.011
Density (per cm²)   22,000 (2.2)    88,000 (8.8)    33,000 (3.3)    88,000 (8.8)
Tip width           0.4 mm
Root width          2.0 mm
Clumpiness          0.25            0.10            0.25            0.10
Kink Strength       3.0             7.0             3.0             7.0
Kink Scale          10.0            10.0            10.0            10.0
Kink Delay          0.1 (10%)       0.0             0.1 (10%)       0.0
Verts per hair      4               4               4               4

Analysis
Looking at the hat first: the hairs from both groups have equal length (3 cm) and no variance, and they are rather thick at the root and thin at the tip – typical for animal hair (thick roots for better protection, thin tips from natural wear, since animals don’t get haircuts). The underfur is much denser: 88 versus 22 hairs per cm² (the hat measures 250 polys of 4 cm² each, so the 88,000 resp. 22,000 hairs find their place on roughly 1000 cm² of hat). But even the combined density of 88 + 22 = 110 hairs per cm² is only about half the coverage of a human head (about 200 hairs per cm²), which by itself is about half the coverage of real-life animals.
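A quick back-of-the-envelope check of that arithmetic, as a small Python sketch (my own illustration, simply re-doing the sums above with the values read from the panels):

```python
# Re-doing the hat fur sums from the analysis above (illustration only).
polys = 250                        # approximate polygon count of the hat growth groups
poly_area_cm2 = 2.5 * 1.6          # about 4 cm^2 per polygon
hat_area_cm2 = polys * poly_area_cm2        # about 1000 cm^2 of hat

topfur_hairs   = 22_000            # hairs in the top-fur group, as read above
underfur_hairs = 88_000            # hairs in the underfur group

topfur_per_cm2   = topfur_hairs / hat_area_cm2        # ~22 hairs/cm^2
underfur_per_cm2 = underfur_hairs / hat_area_cm2      # ~88 hairs/cm^2
total_per_cm2    = topfur_per_cm2 + underfur_per_cm2  # ~110 hairs/cm^2

human_head_per_cm2 = 200           # the rough figure used in the text
print(total_per_cm2 / human_head_per_cm2)    # ~0.55: about half the coverage of a human head
```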

So the objects give a furry impression – and will need appropriate coloring of the cloth as well – but they don’t resemble a full, real animal coat. The underfur is denser, more even (the clumpiness of the top-fur increases the furry impression) and curlier, which makes it look denser still. The top-fur is not only less curly, its curls also start somewhat (10%) away from the object surface.

The muff measures 500 polys × 2 cm² = 1000 cm² as well, so I see that its top-fur is somewhat denser than the hat’s. I also see that the hair lengths differ somewhat from the hat’s, but above all they vary by about 50% (1.5 cm on 3.0, and 1.1 cm on 2.3), which will give it a more natural look. Again, the dense underfur is curlier, to enhance a fuller look.

Since the hairs are short and we’re not expected to run dynamics on them, only a few verts per hair are required, and all dynamic parameters (panel 4) are kept at their default values.

Materials and more
The hair material is the same in all cases: white at the root, bluish grey at the tips to get the icy feeling, and the specular color is a darker variant of that.

The cloth of the hat and muff is more complicated, but essentially has the bluish grey tone as well, which is why the hair root material has quite a high Root Softness setting (0.5 instead of the default 0.1). The softness makes the hair roots transparent so they blend in well with the skull or surface – in this case with the cloth. If there is shadow on the cloth we might like the hair to stand out more, so Opaque in Shadow is checked as well.

In the Material Room it becomes clear that the 3D mesh (the cloth) of hat and muff is used to deliver the polys and vertices for the hair roots, but the hairs themselves form a Prop that the materials are assigned to.

Note that this prop cannot be saved on its own; we are expected to save the hair growth surface together with the (guide) hair definitions. This is what the hair collection in the Library expects.

CaseStudy: Hairy Stuff (5 Toon Animal)

Next to clothing, animals are a popular target for fur as well. To avoid debates on realism – which is mainly a matter of endless tuning of the grouping details, the hair parameters and the material properties – and since I’m a dog person, I’ll go for the Toon Puppy. For the cat persons amongst you: just take the Toon Kitten instead, or anything else.
The collar came as an additional prop, and I hide it for the moment.

There are two different ways to approach this:

  • Make a lot of small hair growth groups with different length and pull settings, and apply minor adjustments to them later
  • Make a few large growth groups without pull settings, and apply a lot of editing to the hairs themselves.

Making a choice upfront matters, because any attempt to alter a length or pull setting (Hair Room panel 2) erases all hair editing in that group. Since the Puppy is rather low-poly and well curved – which implies not that many guide hairs, and a lot of detailed work required anyway – I take the second route. You might choose differently for a high-res Horse mesh.

Analysis

All areas worth furrifying are textured mainly with one (“Fur”) material, driven by a single image map. The Head consists of Fur, Muzzle and Nose material groups; I picked the first two and left the last one out.

The puppy mesh offers the regular Head \ Neck \ Chest \ Abdomen \ Hip structure, and the four-part leg and arm structure. The fourth part is the foot or hand; the puppy has no separate toes or fingers. The three-part ear structure, however, is an addition compared to a normal (human) mesh, as is the ten-part tail. Especially this last element might do well with straightforward Hair Length and Pull settings, without additional editing.

Preparation
All this makes a 5 + 4×4 + 2×3 + 10 = 37 part animal, and each part gets its own hair growth group and hair material setting. Each hair growth group is a routine: select the part, [New Hair Group], [Edit Hair Group] using [Add All], [Grow Guide Hairs], set all values in panels 2 and 3, and [Grow Guide Hairs] again to make the preview properly show what I just did. Then on to the next part.

The material is just a matter of setting up one correctly, and then copying/pasting it as described in the Jacket chapter.


The skin image map is multiplied with the usual noise node, and fed into the Root color at a slightly darker tone than into the Tip color.

So after a tedious sequence of pasting and [Remove Detached Nodes] in the Material Room, I’ve got my Toon Puppy with guide hairs standing straight out everywhere that should have hair. I excluded the bottoms of the feet and hands, excluded the nose on the head, and ensured that the inner parts of the ears were spared from hair by using the Fur material in my selections. And all hairs are colored properly. Time to save.

Styling
Now – apart from hair density – our puppy or kitten only needs his/her hair styled in the proper direction. As all surfaces are rather curved, the Pull values don’t serve that well, so I have to click [Style Guide Hairs] to use the Hair Editor. It’s just a matter of practice, and this animal offers almost 40 areas to practice on. The easiest parts are at the back (the straight tail); the most varied and curved part is at the front (the head). So, to gain experience along the way, I’ll work from the tail forward.

My main tools in the editor are the three at the upper left – select, deselect and move, which just pulls the hairs in a specific direction – plus the one at the upper right: scale, which tapers the hair ends. There is nothing to rotate, transfer or curl here. I keep all hairs at constant length while editing, all edits start at the hair roots (otherwise part of the hair keeps standing straight out), and I won’t need to alter lengths, apart perhaps from exceptions at the ears or nose. Each time I’m dissatisfied (or just screw up), I can click [Grow Guide Hairs] in panel 2 and start all over again for that patch of hair.

Intermezzo
Since the puppy is composed of a lot of elements, it’s important that they all get appropriate and matching parameters.

  • Length and variance are in user units. That is, if your general preferences (Interface tab) are set to inches, then 1.0 means: 1 inch. When you save your file and send it to me (I have my units set to meters), the same hair will show, but the setting will read 0.0254. Internally, Poser saves all lengths and distances in Poser Native Units (PNU); the translation is done in the user interface.
  • Pull is essentially an angle between adjacent edges that make up a hair. The first edge always stands right up from the surface, then the bending starts. The more Verts per hair, the more edges, and the more the hair will eventually bend – and the smoother it will look.
  • Hair density: the number next to the dial represents the number of hairs per square user unit. If you (using inches) set it to 10 (hairs per square inch) and I open that file (using meters), I will see about 16,000 in that field, as 1 meter is about 40 inches (and 10 × 40 × 40 = 16,000). Note that human and animal coats run at about 100 hairs/cm² = 645 hairs/inch² = 1,000,000 hairs/m². The number between brackets tells the number of hairs to be generated in the population. (See the sketch after this list.)
    The hair objects generated by the Hair Room can become really huge for realistic settings at a high Verts-per-hair count. A gorilla with a body surface of, say, 2 m², at 1 million hairs per m² and 8 verts per hair, will generate 16 million vertices – about 200 times as many as a high-res Vicky character. You and your PC need the resources (memory, CPU power, render time) to handle that.
    Note that the default reads 32,000 hairs per square PNU = about 4,300 per m² = about 3 per square inch, which is way below the natural values.
    Anyway, since all elements of the puppy need matching settings, the Hair Density needs to be similar for all of them too.
  • Width, at the tip and the root, can be taken to be in mm. For natural hair, 0.1 is a decent (root) value.
    (Thought experiment: take 1 cm² of surface, give it 100 hairs in a grid at equal distances, and look straight at it (Front camera or similar). Then 10% of the area looks filled up: 100 = 10 × 10, and 10 hairs of 0.1 mm cover 1 mm = 10% of 1 cm. So a setting of 1 (mm) is really thick.)
    But when you opt – for performance reasons – for a lower Density setting, you can take higher Width values to compensate, until things start looking unnatural.
  • Kink makes curls in a hair. Scale sets the number of them over the full length, so 4.0 makes 3 full waves, as 1.0 makes no wave at all. A Delay of 0.2 leaves the first 20% from the root up without curl; only the remaining 80% towards the tip is affected. More curls per hair require a sufficient Verts-per-hair setting: 4 verts per kink is the minimum for effects of some quality (so Scale = 4 needs about 16 verts minimum); below that the hair gets fuzzy. Strength just makes stronger curls.
  • Verts per hair was discussed already: the more you use, the more PC power you need to render the result, the smoother and more natural it looks (always that same trade-off), the more effect you’ll get from the same Pull values, and the better the quality of the Kinks.
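To tie these unit remarks together, here is a small Python sketch of the conversions involved (my own illustration, not Poser code; the 1 PNU = 8.6 ft ≈ 2.62 m figure is my assumption of the commonly quoted value):

```python
# Unit arithmetic behind the remarks above (illustration only).
PNU_M  = 2.62128   # assumed: 1 Poser Native Unit = 8.6 ft = 2.62128 m
INCH_M = 0.0254    # 1 inch in meters

def density_per_m2(dial_value, user_unit_in_m):
    """The density dial shows hairs per square *user unit*; convert to hairs per m^2."""
    return dial_value / (user_unit_in_m ** 2)

# 10 hairs per square inch, reopened with meters as user unit:
print(round(density_per_m2(10, INCH_M)))      # ~15,500; the text rounds 1 m to 40 inch -> 16,000

# Poser's default of 32,000 hairs per square PNU:
print(round(density_per_m2(32_000, PNU_M)))   # ~4,650/m^2 with this PNU figure (the text quotes ~4,300);
                                              # either way roughly 3 per square inch

# Gorilla-style population estimate: 2 m^2 of skin, 1,000,000 hairs/m^2, 8 verts per hair:
hairs = 2 * 1_000_000
print(hairs, hairs * 8)                       # 2,000,000 hairs -> 16,000,000 vertices
```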

Getting Results
Back to the Toon Puppy. When I set all hair groups to 1,000,000 hairs per m² and added up all hairs generated from that, I found that my Toon Puppy was going to end up with about 1,370,000 hairs @ 16 verts/hair = some 22,000,000 vertices. Pressing Render revealed that the memory requirement for even a shadow map went towards 20 GB of RAM, causing my PC to start disk-swapping. Not good. Reducing the density tenfold, to 100,000/m², reduced the RAM requirement to roughly 2 GB – easy on my machine, but in my opinion a bit too low on quality at close-up camera distances.
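The same estimate as a small Python sketch (the implied fur surface of about 1.37 m² follows from the counts above; the RAM figures simply scale my own observation linearly, so treat them as rough indications only):

```python
# Rough resource estimate for the puppy fur, using the figures above (illustration only).
fur_area_m2    = 1_370_000 / 1_000_000   # ~1.37 m^2 implied by 1.37M hairs at 1M hairs/m^2
verts_per_hair = 16
ram_gb_at_1M   = 20                      # the RAM my render attempt headed towards at 1M hairs/m^2

for density in (1_000_000, 500_000, 100_000):
    hairs = density * fur_area_m2
    verts = hairs * verts_per_hair
    ram_gb = ram_gb_at_1M * density / 1_000_000   # naive linear scaling of the observed figure
    print(f"{density:>9} hairs/m2: {hairs/1e6:.2f}M hairs, {verts/1e6:.1f}M verts, ~{ram_gb:.0f} GB")
```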

So you can make a compromise depending on your machine specs. See the images (500,000 hairs/m², head unstyled, tail styled). Watch the (Windows) Task Manager to flag issues and opportunities.
By the way: after the tenfold reduction, the Toon Puppy had about as many hairs as a human blonde (150,000), so Poser is well equipped to handle nice portraits and the like at a quality representative of nature.

And of course, I can bring down the Verts per hair from 16 to 12 (or 8?), but that is something best done before creating styles with the Hair Editor. So, mind the numbers (or just watch the Task Manager when things tend to go wrong), and make some test renders to find the quality balance. High-end (large size, high quality) results require appropriate machines, sorry for that.

Styling tip: in many cases there is no One Style that fits all hairs in the group. Take this tail part: the hairs towards the tip end are a bit more styled than the hairs towards the body end, which are more upright. I handle this by first selecting all hairs, doing some styling, deselecting the hairs at the body side, styling a bit more, deselecting more, styling a bit again, and so on. Especially the Hip (where tail and both legs meet) and the head (lots of curves) need a treatment like that.

Toon Fish
I know, most fish don’t have hair, but a hairy toon fish helps me to introduce some dynamics (panel 4).

I simply select the tail, assign it a Hair Group, grow guide hairs, and turn them orange to fit the fish. At 40 cm length, some Pull back and down, some density (20,000/m²), and no Kinks, this is it:

In panel 4, I reduce the Bend Resistance, zero the Root Stiffness and reduce the Falloff.
I click [Calculate Dynamics], gravity takes the hairs down, but the default 30 frames are not enough.

So I double them, to 60:

And I [Calculate Dynamics] again.

Nice, but a bit boring:

So I added a ForceField (menu Object > Add Wind Force). I put it roughly in front of the fish and adjusted its parameters:
And I unchecked Visible in Camera, as the force field should appear to be generated by the swimming itself.

And I [Calculate Dynamics] again. Better, but let’s give it another try: Turbulence up to the maximum (1.0), and after the calculation I use the time slider to seek an interesting frame. Halfway through, I’ve got one to render. To spice it up even more, you can try the Wave generator (menu Object > Add Wave). Next time you make dynamic hair for your swimming mermaid, think about the hairy goldfish. Golden hairfish. Whatever.

CaseStudy: Hairy Stuff (6 Horse)

As with clothes, it’s good to learn from available implementations on animals too. So I pick the Horse, available from the Library (Figures > (Poser standard runtime) > Animals > Pets). Not to be confused with the one in Poser Originals, which has the manes and tail modeled as part of the object. The new one, in Pets, is dark reddish brown and comes loaded with Dynamic Hair:

The HorseTail prop is a separate set of polys, parented to the horse, that holds the manes. Weird. I’ll do the tail later.
The hair actually refers to the prop Mane, with a black material that needs some fixing: as the hair node already feeds into Alt_Diffuse, there is no need for the other links into Diffuse, Specular and Highlight Size. Just break them, and nullify the values. After entering the Hair Room and selecting the Mane prop, it becomes clear that the Length and Pull values are set – and quite well – but all values in panel 3 (and 4) are left at their defaults. In my opinion – see the render above as well – the horse can do with far more, thicker and less curly hairs than the defaults give.

So with these settings:
I get:
The Tail, on the other hand, appears to be a 10-element chain of body parts, and I want to replace it with dynamic hair. So I make parts 2–10 invisible (to preview, camera and raytracing) and assign a Hair Group to body part Tail1, with long hairs pulled back and slightly up, using the Styling / Hair Editor. Setting the frame count to, say, 90 and clicking [Calculate Dynamics] brings the tail down. But I’m not satisfied: it’s too full at the tail root.

To split the tail into three parts, I make tail parts 2 and 3 visible again and assign each of them a Hair Group. I decide to use more Pull Back on all three and, to make the tails point in the same direction at the start, give the 2nd some Pull Down (-0.05) and double that (-0.10) for the 3rd tail element.
To improve the result even more, I give the 1st element 5,000 hairs, the same for the 2nd (so the tail starts thin at the horse), and say 25,000 hairs for the 3rd. I [Calculate Dynamics] again to let gravity bring the hairs down for all three elements (or I use menu Animation > Recalculate Dynamics, which will do the manes as well).

Nice result, and it will dance along with every move and gust of wind as well.

CaseStudy: Hairy Stuff (7 Facial and Body Hair)

I’m going to work on Ryan (the Poser 8 figure), but anyone else is fine too. The reason I’m picking Ryan instead of Android Andy is that Ryan comes with image maps for his textures, while Andy has color settings only. I’m going to use those maps.

Preparation
Although Ryan himself comes with Hair Props (Skull Cap, Beard, Brows etc.) in the Library, I just make them myself for the sake of this tutorial. As I’m going to do Ryan’s head and front, I make the Growth Groups first by selecting the body parts in the preview (switch to Flat Lined mode for easy poly selection) and by using [New Growth Group] and [Edit Growth Group]. The front groups are simple selections from the Chest and Abdomen body parts. To the head I assign multiple groups: Beard, Mustache (Left and Right, so I can use opposite Pull Side settings), Brow Left and Right (same reason), and Head, which I do not subdivide as I’m going to do short outstanding hair only, without styling. Otherwise a Left, Right, Back, Upper Right and Upper Left would have been handy. For the fun of it, I did make a second Beard area around the chin (in the Hair Editor: [Add Group], select Beard and delete the unneeded polys).

Note that on the mustache I deselected every other poly just below the nose, since these polys are very small. Populating hairs on all polys will make the mustache too dense in this area.

Head hairs are easy: trimmed, not shaven, so 5 mm (0.005 in meters, 0.2 inch) will do. No length variation, no pull as they all stand out, 1 mm (thickness 1) at the tip as well as the root, no clump or kink, and 4 verts per hair will do fine as I’m not going to style anything. So the real issues are the color and the density.

From the Material Room I learn that RyanHead.png is the texturing file. I make a copy in my project folder and measure the head color in Photoshop (or GIMP or anything alike). It reads about RGB = (220, 150, 115), but I like a bit more red, so I set the root_color to brown HSV (10, 180, 80), the tip color to orange HSV (10, 240, 120) and the specular_color to HSV (10, 40, 60) – darker, to make it less shiny than usual, but all colors picked with the same Hue (10 in this case); I tend to work that way. Root_softness should go up, to say 0.25.

Filling In
Now I need some test renders, to improve on the hair density and the various parameters.

  • Length= 0.005 (5mm), variance = 0
  • Pull = 0.00050 (back) 0.00010 (down) 0.00000 (side)
  • Density = 400,000 (test renders) or 1,600,000 (final render)
  • Width = 2 (tip as well as root)
  • Clumpiness = 0.0
  • Kink = 1.0 (strength) 4.0 (scale) 0.0 (delay)
  • Verts = 4 (test renders) or 16 (final render, takes time)

Not too bad, except for the jagged edges at the sides, but I’m going to deal with that. The main trick is to remove hairs by making them transparent, using an image (a transparency map).

Note that this differs completely from the classical “hair transparency map”, in which streaks of hair (props, cloth-like flaps) were assigned a hair texture and transparency. In the approach presented here, hairs are made totally transparent, as if shaved away. This way I can make fun shaving patterns too. For this I need a (head) seam guide or template, which tells me how the head texture is mapped.

The Ryan template can be found in (main Poser Runtime) \ textures \ Poser 8 \ Ryan \ Templates, but a load of templates for other figures can be found at www.snowsultan.com.

I open the template in Photoshop (or whatever you like) and make a working copy on a second layer. The template offers a 2D representation of the 3D polygon mesh, although very fine meshes are usually represented by a coarser structure to keep things workable.

As a first step, I start filling polys in the Photoshop image with a color, until I have a neat match with the Hair Growth Group on Ryan’s head. And I whiten out all elements in the template that I won’t need anyway, but I do not alter the size of the image itself.

Next, I’ve got to answer the question: did I really pick the right portions of the image? The template can be made at a lower resolution than the 3D mesh, so one row of blocks in the 2D image might represent multiple rows of polys in 3D. And I’m a bit worried about the ear region. So I save the template under a new name, and in the Poser Material Room I swap Ryan’s neat texture for the saved template. Don’t overwrite anything, and ensure that you know where to find the original texture, to get back where you came from; I made a copy of it in my project folder when I looked up the colors, as described above. From the result I find that I did quite well, except that at Ryan’s right (my left) the head hair (red) meets the beard (blue marker) two rows lower than at Ryan’s left. To be corrected in the Growth Group. At both left and right there is something off at the ends of the front hairline, as you can see at the top right of the image: a white square does contain hair. To be corrected in Photoshop. The area around the ear can be improved too, to be corrected in both.

Smoothing Up
When the 2D coloring is done, I select the (in my case red) color, grow the selection by 1 or 2 pixels (to get rid of the non-selected grey template lines), make a new layer and paint it white within the selection, and black outside:

The white area will mark the places on the head where I do allow hair (to be visible), and the black areas will mark places that either will have no hair or will have transparent, invisible hair. Which means: now I can remove some hair by painting some black over white, and personally I prefer to do so on a separate layer – to keep the original info intact. This way I can remove the jags at the side.
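For those who prefer to script such masks, here is a rough Python sketch of the steps just described (Pillow plus NumPy; the file names are made-up examples, and the red threshold and the crude two-pixel dilation are my own choices, not part of the original workflow):

```python
import numpy as np
from PIL import Image

# Made-up file names: the colored working copy of the template, and the mask to produce.
tpl = np.array(Image.open("ryan_head_template_colored.png").convert("RGB"))

# "Select the red color": pixels painted (roughly) pure red mark the hair growth group.
red = (tpl[..., 0] > 200) & (tpl[..., 1] < 80) & (tpl[..., 2] < 80)

# "Grow the selection by 1 or 2 pixels" to swallow the grey template lines:
# a crude dilation, OR-ing the mask with one-pixel shifts of itself, twice.
grown = red.copy()
for _ in range(2):
    g = grown
    grown = g | np.roll(g, 1, 0) | np.roll(g, -1, 0) | np.roll(g, 1, 1) | np.roll(g, -1, 1)

# "White within the selection, black outside": the map that will later drive Transparency.
mask = np.where(grown, 255, 0).astype(np.uint8)
Image.fromarray(mask).save("Head_Hair_mask.png")
```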

But I can also use this technique to create shaving patterns, as long as I take into account that a straight line on the head, horizontal or vertical, becomes a curved one in the template, split in halves and upside down (the higher up in the image, the lower towards the neck).

There are special programs that can help me with this, which also help with mapping tattoos, texts and logos (decals), or with painting in 3D straight onto the head itself. There are various Photoshop tools and techniques to help me out (Liquify etc.), and I can deploy some photo-morphing program – the kind that turns a lady into a man or a jaguar, but which of course can also map a subdivided rectangle onto a curved template.

And of course, I can make a simple drawing on a pattern with some squares, and use the template to draw it in by hand.

So I did. Note that I compensate for the polys on the head and in the template not being equal in size: the ones in the neck are more square, and the ones higher up are about twice as long.

To help myself I created an extra layer for the pattern on top of the anti-jags layer (I love separate layers; this way I always keep previous steps intact), plus a top layer. The latter is a straight copy of the original template, but with all the white masked out so I only see its grey lines. Using those grey lines and my sketch, I could whiten out the pattern.

I saved the pattern in a separate Head_Hair_pat (PNG) file, without the grey template lines of course.
Now I have to bring this pattern into Poser.
So in Material Room I give Ryan his head texture back, and I select the head hair from the Props.
Right-clicking in the node area gives me New Node; I pick Image_Map from the 2D Textures menu, and I pick my saved image as the Image Source in the node. For those working in Poser Pro: I set the Custom Gamma value to 1.0 (although it does not matter much, as the image contains black and white only). Now I connect the image map node to Transparency and set that value to 1.0, so everywhere the image is black I get full transparency. That is, provided I suppress the edge effects that Poser offers for making neat glasses and bottles: Transparency_Edge and Falloff are set to 1.0 as well, otherwise I get some leftovers in the shaved area (try it for yourself; it’s not too bad for the pattern, but it’s awful for the removed jags).

This is not enough, though, because absent hair should not reflect light either. So I also connect the image map to the Specular Color entry of the Hair node. Now the hair has no specularity any more in the places where the image is black.

Having done this, it’s only a small extra step to the other parts: growing hair; making, checking and adjusting the template; satisfying those who want a shaving pattern in the beard; playing with the colors; etcetera. Note that in the template I did not make a Left/Right difference for the brows or mustache: where there is no hair, a white area in the image won’t create any, and only where there is hair can a black area in the image hide it. The main differences between the various areas, on the face as well as on the chest (same principle to get the jags out), are the settings for hair length, and the styling.

              Head        Beard       Chin        Brow L/R            Must L/R            Body
Length        0.0050      0.0050      0.1000      0.0060              0.0200              0.0400
Variance      0.0000      0.0010      0.0200      0.0010              0.0020              0.0040
Pull Back     0.00050     0.00015     0.0005      0.0003              0.0000              0.0000
Pull Down     0.0001      0.00005     0.0012      -0.0002             0.00005             0.00000
Pull Side     0.0000      0.00000     0.0000      -0.0005 / +0.0005   -0.0001 / +0.0001   0.0000
Styling       No          1)          2)          3)                  4)                  5)
Density       1,000,000   1,000,000   500,000     600,000             800,000             300,000
Tip W         2.0         1.0         0.4         2.0                 1.0                 0.8
Root W        2.0         1.0         2.0         2.0                 1.0                 2.0
Clumpiness    0.0         0.0         0.2         0.010               0.010               0.0
K Strength    1.0         1.0         3.0         0.0                 2.0                 0.0
K Scale       4.0         10.0        100.0       1.0                 10.0                1.0
K Delay       0.0         0.0         0.30        0.0                 0.30                0.0
Verts         4           20          40          16                  20                  20

(The Brow and Mustache columns apply to both Left and Right; only Pull Side differs in sign between the two, as shown.)

1) just a bit next to the ears, to correct some Pull effects on specific hairs
2) brought the hairs together, and shortened the hairs at the sides, to make it taper
3) not done but needs some for a detailed finish
4) ditto
5) see notes below

For the body hair, I assigned hair groups to the Left and Right Collar, the Chest, the Abdomen and a piece of the Waist. The latter two parts got shorter hairs, 3 resp. 2 cm, instead of the 4 cm for the other parts. Then I used Ryan’s Body template to create a neat, smooth mask, just to get rid of the jagged edges of the hair group polys. If anyone wants to shave figures in there: be my guest. I used thick hair (2 mm root, 0.8 mm tip) and found that I got a neat coverage when I distributed about 300,000 hairs over the total body area (three large areas of 85,000 each, plus the rest).

I want a full-length twist. So I zeroed all Pull values, opened the [Style Hairs…] editor, selected all hairs, and set the Falloff to the right (root) side so all styling occurs from the root up, instead of leaving an unstyled portion. Then I selected the Twist option (and sometimes the Rotate option) and moved the cursor left/right, or vice versa, to give each body part about the same amount of twist. There is no indicator and no recipe; it’s trial, error, and clicking [Grow Guide Hairs] in panel 2 to reset all styling on that group.

I switched off Cast Shadows in the hair object properties (real body hair is too thin for that). In general: more hair takes longer renders, and making hair a bit reflective under Indirect Lighting conditions can be a real time killer. More on that in the next hair tutorial.

Poser Render Passes (1 Intro)

Don’t do things in 3D when they can be done in 2D.
Render passes save a zillion trial and error test-renders, and make results interactively adjustable.


Introduction

While working in Poser, I can build a scene, and start tweaking the lights, shadows, atmospheres, specular and reflection strength, and more. And I might like to add special effects, like depth of field (focal blur), volumetric lights, etcetera.
Especially with multiple objects, multiple lights, and the use of raytracing, IDL (Indirect Lighting) and the like, I have to face renders, re-renders and re-re-renders, all of them with serious waiting times, even on my – pretty fast – machine.

For professionals facing a deadline, with a limit on the hours they can spend on the job, or with a customer whose requirements keep changing, this is not very desirable. But even hobbyists might face deadlines (the image must be done before Christmas or some other event), might find themselves quite flexible in the result they want to achieve, and certainly don’t have the time to spend all their forthcoming hours on perfecting that single image.

This is why Render Passes were invented. Each pass (image) addresses a specific facet of the result; all passes are then combined as layers in a single file in Photoshop (or GIMP, or another image handling kit), and the layer parameters are manipulated instead of re-rendering the scene over and over again. In practice this is not only much faster, it’s also far more interactive.

In this article, making and re-combining render passes is presented as far as Poser and Photoshop are concerned. Vue will be added at a later stage, or in a separate article. I presume almost all Photoshop steps can be performed in other, similar image handling kits as well; just the screens might look different.

Step 1 will deal with separating the darks: handling shadows. Step 2 will deal with separating the lights. Both steps can be done in just Poser, and they can be combined: separating the darks as well as the lights.

Step 3 will deal with the tools available in Poser Pro, and step 4 deals with the Advanced Renderer toolkit.

Poser Render Passes (2 Shadows)

Separating the darks

In this first elementary step I’ll present some basics: just dealing with the shadows in the image. To do so, I need one render result without shadows, and one with shadows only. This can be done in the Render Settings, in the Manual section.

First: I uncheck the Cast Shadows option, render, and export the result.
Second: I check both the Cast Shadows and the Shadows Only options, render, and export the result.

This gives me two images:

Without shadows Shadows only

Now I’ve got to combine those two in Photoshop. The color pass is the base layer, but since I exported it with a transparent background (PNG, or in Poser Pro: EXR), I add an extra white layer underneath. On top of that I add the shadow layer and set its blend mode to Multiply (“Vermenigvuldigen” in my Dutch version). Now I can adjust the shadow strength by reducing the opacity of this layer, to 70% for instance.


This gives me the same result as reducing the shadow-intensity of the light in Poser, but without the re-render.
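For those who like to see the arithmetic behind this: a minimal NumPy sketch of what a Multiply layer at reduced opacity does per pixel (standard blend math, my own illustration, not a Photoshop or Poser API):

```python
import numpy as np

def multiply_with_opacity(base, shadow, opacity):
    """Multiply blend (base * shadow), mixed back with the base according to layer opacity.
    base and shadow are float arrays with values in 0..1."""
    blended = base * shadow
    return base * (1.0 - opacity) + blended * opacity

base   = np.array([0.8])   # a fairly bright pixel of the shadow-less render
shadow = np.array([0.4])   # the same pixel in the shadows-only pass
print(multiply_with_opacity(base, shadow, 0.7))   # ~0.46: the shadow lands at 70% strength
```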

In reality, whatever reduces the depth (darkness) of shadows also tends to blur their edges: deep shadows have hard edges, while shallower shadowing comes with blurred edges. However, it’s a bad idea to use a Photoshop blur on the shadow layer, because that 2D approach does not take the geometry of the scene into account.

So I re-render the scene, for the shadow pass only, with a very blurred shadow setting. Note that just increasing the blur radius tenfold (from the default 2 to 20) may give very blotchy shadows, so I increase the Samples tenfold (from the default 19 to 200) as well. Render and export as a new “soft shadows” image.

In the meantime, I’ve created a layer group for the shadow layers, put the already present Shadows only layer into it, and assigned the Multiply blending mode and the 70% opacity to the group while resetting the layer itself to Normal blending, and 100% opacity.

Now I can add the new Soft Shadows layer to the file (and to the shadows-group), Normal blending, 100% opacity too. Then I can start experimenting with the opacity of the top shadow layer. It’s a matter of taste whether to use the hard or the soft shadows layer on top. I prefer the hard ones, as using low opacity values for that layer gives me better control and nuances over the final result.

0% hard shadows 50% hard shadows 100% hard shadows

As a result, I’ve got one slider (group opacity) to set the shadow intensity and one (top shadow layer opacity) to set the shadow edge blur. Completely interactive, without the need for a re-render. I can stop right here, or I can use this method to establish the best settings for my final render. For instance, I might conclude that I’ll have to set shadow intensity to 70%, and shadow blur halfway (say radius 10, samples 100) for my final Poser render.

Note: if you look closely at the car, you’ll notice that there is also some shadowing under the tires which is not dealt with here. That is the result of the Ambient Occlusion setting of the light; I’ll discuss it in the next chapter.

Poser Render Passes (3 Lights)

Separating the lights

In a scene with multiple lights, I’ll have to balance those lights for color and intensity. This too can be done the Render-Pass way, to either achieve the desired result or to achieve the preferred settings for the final render.

The first thing I’ve got to do is set up my Render Settings. For the first run, I uncheck Cast Shadows and Shadows Only to get my shadow-less results. I don’t render, but just Save Settings instead.
Then help comes from the menu: Scripts \ Rendercontrol \ Renderpasses:

And all I have to do is click the presented folder path to set a better path for the results. It is well advised to make an empty folder for each run, in order to avoid overwriting files. Then I click [OK], which gives me – for a two-light scene, with both the extra Ambient and Occlusion passes checked – four (PNG) files.

Now I alter the Render Settings, check Cast Shadows and Shadows only, Save Settings, go for the RenderPasses script, select a new “Shadows” folder (which I’ve created first), and click [OK]. Same files, but:

  • The files called Light 1 and Light 2 now contain the shadows of those lights, so I rename them to Shadow 1 and Shadow 2 respectively and add them to the previous collection, next to the Light files.
  • The Ambient pass and Occlusion pass files can be discarded. Even better, they can be checked OFF in the dialog box: with shadows only there will be no meaningful Ambient result, while the Occlusion result will be the same as in the previous run. I will not use Ambient in this chapter anyway.

Notes:

  • For all lights, switch OFF their Occlusion settings, and set all of them to 100% intensity, full white, 100% shadow.
    I prefer raytraced shadows for better quality and I have raytracing switched ON in the Render Settings. But that’s a personal preference only, shadow maps will work as well.
  • Following the routine from the chapter on shadows, I perform the shadow-run twice.
    First, all my lights have hard shadow setting (radius 2, samples 20).
    Second, all my lights have soft shadow setting (radius 20, samples 200).
    Next to the Shadow 1 and Shadow 2 files, this gives me Soft Shadows 1 and Soft Shadows 2 as well.
  • When you’re using Poser Pro, be sure to switch OFF Gamma Correction for the first, shadowless render pass; otherwise you’ll end up with a far too bleached result. For the shadow passes, it’s better to leave GC ON.

Okay, now I have to combine the intermediate files into Photoshop layers. As in the previous chapter, I make a white background layer to handle the transparency of the PNG files, and I bring in both Light layers. The first one is set to Normal blending (against the background) and the second one to Screen blending. This blending mode acts as if you’re projecting a slide onto a wall, which is pretty much what happens when we’re adding lights. To reduce the intensity of the second light relative to the first, I can reduce the opacity of that light layer. If I want the intensities the other way around, I swap the Light layers (and their blending modes).
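And the arithmetic behind the Screen mode, again as a small NumPy sketch (standard blend math, my own illustration): it behaves like adding light, but saturates instead of blowing past white.

```python
import numpy as np

def screen(a, b):
    """Screen blend: 1 - (1-a)*(1-b). Close to a + b for dim pixels, never exceeding 1.0."""
    return 1.0 - (1.0 - a) * (1.0 - b)

light1 = np.array([0.30])
light2 = np.array([0.50])
print(screen(light1, light2))         # 0.65: nearly 0.30 + 0.50, saturating towards white
# Lowering the Screen layer's opacity to 70% works out the same as screening a dimmed light:
print(screen(light1, 0.7 * light2))   # 0.545
```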

Then I add the shadow layers. Occlusion – or self-shadowing – gets its own blending mode and opacity; the hard and soft shadow layers per light can be grouped. Each group has Multiply blending and its own opacity, and within each group the hard and soft shadows can be balanced separately.

Now it becomes so easy to play.

Softer shadows (group opacity to 50%, hard shadows opacity to 15%), a brightness boost using a curves adjustment on the light-layers, and a color balance adjustment to both lights: more green to Light 1 (right/back side), more red to Light 2 (left/front side).

Poser Render Passes (4 PoserPro)

Featuring Poser Pro

Poser Pro, from version 2010 up, presents some additional features for generating render passes, and high quality results.

In Render Settings, in the bottom right corner, you’ll find the Auxiliary Render Data options. Just check a few, render, and export the image (as, say, Multipass.png).

In this example I’ve checked six options, and hence I get seven files:

  • My Multipass.png final render result
  • Multipass Custom 1, 2 and 3.png results
  • Multipass P, Z and objectID.png results

It does so for each image format: JPG, TIF, or whatever I select. If I select the PSD format, however, I get one single Photoshop file with seven layers. Neither blending modes nor opacities are set yet, the extra layers are switched off, and the final render result is presented as the background layer, with a transparency mask.

For the high-definition (16 or 32 bits per color) file formats HDR and EXR, things work exactly the same: you’ll get all files in one export. Please remember to check the “HDRI optimized output” box before rendering when you want to go the HDR route. EXR is Poser’s internal format; use it when you can afford it.

Let me show you what can be done now.

Custom 3
Assuming that nothing else is arranged for (*), this image or layer stands for: Shadow. So the blending mode should be set to Multiply. And as a bonus we can forget about that previous Separating the darks chapter, because this is catered for already.

Custom 2

Again, assuming that nothing else is arranged for (*), this image or layer stands for: Specular. You might be aware that each direct light is represented by a Specular and a Diffuse channel, as can be seen in the Material Room. Now the Light layer, which was a single item in the previous chapter, gets split into its two components. As this is a light layer, the blending mode should be Screen.

To make the maximum use of this, I push the specularity in materials and lights to their limits while keeping the proportions between the various materials. Then I can reduce the shine in the scene by controlling the opacity of this Specularity / Custom 2 layer.

Custom 1
Again, assuming that nothing else is arranged for (*), this image or layer stands for: Diffuse. This is the other – and effectively most relevant – portion of the lit scene. This layer should be in Screen mode too.

(*)

Each PoserSurface in the Material Room has three extra slots where you can plug in anything you like. The result of that will not appear in the render itself, but will end up in the corresponding output file or PSD layer, to help you establish specific masks for special effects. When these slots are not filled, Diffuse, Specular and Shadow take the place of Custom 1, 2 and 3 respectively.

ToonID or ObjectID
Each material gets its own ID (or you can assign the same ID to multiple materials, but why would you?), and in this layer each area with the same ID is represented by its own unique color. So, especially when I set the sample size of the Photoshop color picker to 1×1 (point sample), I can make selections in the image by just picking a color. This way I can build masks and working selections very efficiently. Hence, this layer is a helper, not part of any result as such.

Z-Depth
The greyscale in this image or layer represents the distance from the camera into the scene. Nearby is bright, far away is dark. This helps me to place some fog in the scene.

  • I make a working copy of the layer, and replace all transparencies by black: these areas are really far out
  • Then I invert this layer, so near becomes black and far away becomes white
  • With Ctrl+A, Ctrl+C I put a copy on the clipboard
  • I add a new layer, and fill it with some cloud structure (Filter, Render, Clouds)
  • I give this layer a mask, open the mask (Alt-click), and with Ctrl V I put the inverted Z-Depth into the mask
  • Voila, fog increasing with distance
  • And now I can alter the brightness and contrast of the mask to change the fog density and fall-off rate.

Or, use a dark teal color instead of the clouds, and you’re in a deep sea environment. Take your pick, instant atmospherics.
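The same layer-and-mask arithmetic, condensed into a NumPy sketch (my own illustration of the compositing math, not the Photoshop steps themselves; the arrays stand in for the exported passes):

```python
import numpy as np

render = np.random.rand(480, 640, 3)   # stand-in for the exported (shadow-combined) render
zdepth = np.random.rand(480, 640)      # stand-in for the Z-Depth pass: bright = near, dark = far

# As in the list above: far away (dark or transparent) should get the most fog, so invert.
fog_mask = 1.0 - zdepth

# The fog "layer" shown through that mask (a flat color here instead of rendered clouds).
fog_color = np.array([0.75, 0.78, 0.80])     # pale haze; use a dark teal for the deep-sea look
fogged = render * (1.0 - fog_mask[..., None]) + fog_color * fog_mask[..., None]

# Brightness/contrast on the mask = density and fall-off rate of the fog.
denser_mask = np.clip(fog_mask * 1.5, 0.0, 1.0)
```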

Another popular thing to do with Z-Depth is creating a depth-of-field effect, or focal blur. I start with a single-layer copy of everything else in the image; Ctrl+Alt+Shift+E does the job in Photoshop, merging all (visible) layers into a new one. I put it on top and give it a serious amount of blur. Then I give it an empty mask, and copy-paste and invert the Z-Depth into it, exactly as I did when making fog. Or alternatively: Ctrl+click the mask of the Fog layer, which turns that mask into a selection, and then assign a mask to the blurred layer, which turns the selection into a mask again. Whatever suits you best.

Now the blur is strong at the back (mask is white), intermediate in the middle where I want the sharp focus (mask is grey), and absent at the front (mask is black). But I need it black in the middle (no blur) and white up front as well; that is what creates the focal blur effect.

So I Alt-click the mask and start applying a Curves adjustment. By clicking the driver’s seat, a dot on the curve tells me how grey that area is (105 out of 255 in the example below); then I raise the extremes and lower that 105 midpoint. That’s it.
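In formula form, the blur mask should grow with the distance from the focus plane in either direction, which is exactly what that Curves manipulation achieves. A hedged NumPy sketch (the 105/255 focus value is the one measured above; the fall-off factor is an arbitrary choice of mine):

```python
import numpy as np

zdepth = np.random.rand(480, 640)   # stand-in for the (non-inverted) Z-Depth pass, 0..1

focus   = 105 / 255.0               # grey value measured at the in-focus spot (the driver's seat)
falloff = 0.5                       # how fast blur builds up away from the focus plane (arbitrary)

# Black (0, sharp) at the focus distance, white (1, fully blurred) far in front of and behind it:
blur_mask = np.clip(np.abs(zdepth - focus) / falloff, 0.0, 1.0)
```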


Position
This is a bit like Z-Depth, but now I can find the X, Y, Z positions in the respective R, G, B channels of this Photoshop layer. You can dream up your own applications for this; one I use often is taking the green (Y, height) channel to make height-dependent ground fog.

I gave the P(osition) layer a black-to-white gradient background (layer 1), because I had to get rid of the transparency, and created a new combined layer (Alt+Ctrl+Shift+E) – layer 3. I switched to Channels, selected and copied the Green channel to the clipboard, pasted it into the mask of the cloudy Ground Fog layer, and inverted it, as ground fog is dense (white) at the ground. With an extra blue background, this is a first result.

For a better result, the ground fog density should be distance dependent too, shouldn’t it? So the final cloud mask has to be the multiplication of this height-dependent mask, and the previous depth-dependent mask.

So I create two layers, each containing one of the masks, set the bottom one to Normal blending and the top one to Multiply, combine the layers (Ctrl+E), and select – copy – paste the result into the cloud mask.
See the result; a brownish cloud hue puts the car in a nice sand storm.

Do note that I just demonstrated the use of the height and depth information over a simple background. Instead of this background, the Diffuse, Specularity and Shadow layers should be combined in proper proportions. Then the fog layer should be on top of them, in Normal mode.

So, with this method I don’t need the first Separating the Darks method any more, but I can still use the second, Separating the Lights, approach. Since this method produces seven files per render (as I ticked six layers plus the total render), and makes a render for each light (I’ve got two in this case) plus the two extra options (Ambient and Occlusion) checked, I end up with (6+1) × (2+2) = 28 intermediate PNG results.

A lot of those are duplicates, or of no use, and can be discarded. The ones I need are:

  • For Light 1..N: the Custom 1, 2, 3 (Diffuse, Specular, Shadow) layers, and perhaps the final results as well
  • Only one set of ID, P(osition) and Z(-Depth) layers, as they are the same for all lights
  • The ambient pass and the occlusion pass.

And when I want both a hard-shadow and a soft-shadow version, I have to alter the settings per light and rerun, without the need for an extra Ambient or Occlusion pass. I also make sure to rename the first Custom 3 / Shadow files first, otherwise I won’t get anything new. From then on, I can put all the required layers into one Photoshop file and start coloring and dimming the diffuse, specular and shadows on a per-light basis, and so on.

Some notes:

  • Existing files are NOT overwritten; the new outputs are simply not saved. So do save into empty folders, or rename existing files, if you want to make any real progress.
  • Although the Diffuse / Specular / etc. layering aims at exporting the image in PSD format, the split is available when saving to other image formats as well, but limited to the 8-bits-per-color formats only. PSD, JPG, TIF etc. are fine, but the split does not work when exporting to the HDR or EXR (32 bits per color) formats.
    Of course, switching lights on/off and blackening the Diffuse and Specular channels per light can be done manually, or in a new script. That’s tedious though.
  • The script that does the per-light split is written for PNG only. Perhaps someone will build a file-type selector, but in the meantime I’ve simply created a few duplicates of the script: the “png” keyword appears three times in it as a SaveImage parameter, and can be replaced by other keywords. “tif”, “jpg” and others are fine according to the PoserPython manual, and the (undocumented) “psd”, “hdr” and “exr” work fine too (see the sketch after this list).
    Again, in combination with the layer setting, choosing any 8-bit-per-color format gives the fine split into Diffuse etc. on a per-light basis, while the 32-bit formats do not give the layered results but still produce results per light.
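To illustrate that edit (a hedged sketch only: the surrounding code belongs to the RenderPasses script and is not reproduced here; the output path is a made-up example, and the format keywords are the ones the PoserPython manual lists for SaveImage):

```python
import poser

scene = poser.Scene()
scene.Render()   # render the current pass, as the script does for each light

# The script's save call is roughly of this form:
#   scene.SaveImage("png", outputPath)
# Duplicating the script and swapping the keyword retargets the output format:
outputPath = r"D:\renders\passes\light_01.tif"   # made-up example path
scene.SaveImage("tif", outputPath)               # "jpg", "psd", "hdr" or "exr" work as well
```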

Poser Render Passes (5 Advanced RS 2)

Advanced Render Settings 2

Yes it does exist. The script that gives me instant access to all relevant Firefly parameters AND the ability to make render passes with even more options than before AND on a per-light basis AND with time-stamped filenames so I cannot overwrite previous files accidentally AND in full combination with all file types AND without the superfluous, redundant output files.

So now I can make EXR files for Diffuse, Specular and Shadow. But even for Z-Depth and Altitude.

A few notes:

  • PSD files come out as single-layer results, so each pass gets its own PSD file.
  • Some obscure passes (like Wireframe) don’t combine well with the HDR or EXR output formats. But who needs that combination anyway, and the author is looking into it.
  • Options that are not supported by your Poser version itself might not show up – like HDR and EXR file export, Render in Background, the render Queue, or Gamma Correction, which are all Poser Pro 2010+ only.

Advanced Render Settings 2 (*) is one of those great tools created by Basil Gass (Semidieu), and can be purchased for less than $20 at www.RuntimeDNA.com; or go directly to http://www.runtimedna.com/Advanced-Render-Settings-2.html .

(*) ARS (see http://www.runtimedna.com/Advanced-Render-Settings.html ) is for Poser 7 / Poser Pro, while ARS 2 is for Poser 8 / Poser 2010 and up, even supporting the new SubSurfaceScattering options in Poser 9 / Poser Pro 2012.

And it comes with some extra tools (which nowadays can be found in Poser 9 and Poser Pro, at least partially, but not everyone has those already), and a good manual that finally tells you what all those render options are for, plus some good tutorials on how to use the tools in combination with Photoshop.
Highly recommended when you want to deploy render passes to the fullest, especially in combination with high-resolution imaging, or with (non-Pro) Poser versions that don’t have the PSD layering built in.

But beware…

Up till now I’ve discussed relatively straightforward scenes, with direct lights, and results concerning Diffuse, Specular, direct Shadow, indirect Shadow (or: Occlusion), and Ambient for some lighting elements. Various tools and procedures may produce similar results, differing mostly in the amount of effort needed to get the details out.
Full-fledged Poser scenes, however, can be more complicated. Indirect Lighting (IDL), objects – like SkyDomes – explicitly used as lighting elements, texture maps assigned to reflection, and more, can all support or hamper the way each tool produces the required results, effectively or not at all. This is where we get differences at the content level, in the produced passes themselves.

That is what the next chapters will be about.

Poser Render Passes (6 Material Details)

Considering Material Details

This Render Passes article is not about the details of the Poser materials, but it is worthwhile contemplating them a bit. As you have noticed in the previous chapters, aspects like Diffuse, Specular and Shadow can be diverted to separate layers or images. This gives me the possibility to handle color, highlights and darks in detail, even on a per-light basis.

But Poser supports other aspects, like Ambient and Reflection, and also supports Indirect Lighting (IDL). How do these relate to the split (in Poser) and glue (in Photoshop) approach? And what are then the best ways to get the required results out efficiently?

My suggestion when doing Render Passes, which applies to all cases where Poser Pro is used, is to switch off Gamma Correction. Because:

  • Rebalancing highlights and lowlights can be done in post, based on the available layers
  • When required, the fine details in shadows can be secured by making a Shadows Only render and then using Gamma Correction, or – even better – by exporting that result in high-definition (EXR) format. But since the nuances in the dark usually appear where dark colors are combined with shadows, and render passes present those separately and put you in charge of the combination, it becomes questionable whether separate techniques are needed to get those details into the individual layers themselves.

Just FireFly

When using just FireFly, I can make a shadows-only and a shadow-less result as described in chapter 2, Separating the Darks. If I’ve got Poser Pro 2010+, I can additionally select sub-passes in the Auxiliary Render Data list in Render Settings, as described in chapter 4, Featuring Poser Pro.

  • Custom 1
    presents Diffuse, Ambient, Refraction and – given the chance – Shadows. And in the case of IDL: plus color bleeding and all the consequences of using too much light (like loss of shape and depth), especially when a SkyDome is introduced as an additional omnipresent light source. All in one result. To lose the shadows, I’ve got to uncheck Cast Shadows.
  • Custom 2
    presents Specular and Reflection, both in one result. Whether or not Cast Shadows is checked does not matter.
  • Custom 3
    presents Shadows, if Cast Shadows is checked; otherwise it’s blank. In the case of IDL, shadows get reduced by the surplus of available light, which becomes especially noticeable with a SkyDome around the scene.

The other sub-passes behave as described earlier. So I just make one run with Cast Shadows checked and Custom 1 not selected, and another run with Cast Shadows unchecked and only Custom 1 selected. And I do have to save to an 8-bit-per-color format (PNG), as those sub-passes don’t apply to the high-definition (HDR/EXR) formats.

The Render Pass Script

As described in chapter 3, Separating the Lights, this script generates separate passes per direct light in the scene. And when Poser Pro is used, it also saves these light-passes with the sub-passes described above. Now the case of using IDL and SkyDomes becomes interesting: in each light-pass, the SkyDome lighting is present too, but somewhat reduced compared to the final, total render. I don’t know how they do it, but when the light-passes are combined in Photoshop using the Screen blending mode, the IDL lighting appears properly. I imagined there could be a problem with IDL being present in each Light layer, but there isn’t.

The script also offers Occlusion and Ambient passes. Both come out as rendered with all lights off. Without IDL, the Ambient pass presents a combination of Diffuse and Ambient – though I don’t know what to do with that – while the Occlusion Custom 1 pass offers the Ambient alone and the Custom 3 pass offers the neat self-shadowing. With IDL, however, the Ambient pass presents a neat ambient result, while Occlusion still presents the self-shadowing but goes wild in the Custom 1 sub-pass. That is: since all direct lights are off, a SkyDome with its color in Diffuse will not contribute either, and what remains is the Ambient channel of the ball, plus some color bleeding. But when the color of the dome is in Ambient, the Occlusion Custom 1 sub-pass presents us with an IDL-only-like version of the scene.

Are you still with me? Or are you feeling a bit like: let’s switch IDL on and off, switch shadow catching on and off, switch on all auxiliary sub-passes, render, and then pick everything that might contribute to the result? Can’t blame you, sometimes that’s my approach too.

So, where are we?

  • I can get Occlusion (self shadow) separately.
    I just use the RenderPass script and check Occlusion.
  • I can get Shadows, separately, even per light.
    Either by rendering with Shadows Only or by selecting the Custom 3 sub-pass (PoserPro, 8-bit per color output (PNG, not EXR), and preferably IDL off), and by using the RenderPass script for a per-light result.
  • I can get Reflection and Specular, but only together – and per light, separately.
    That is: by selecting the Custom 2 sub-pass, which requires Poser Pro and 8-bit-per-color output (PNG). IDL does not matter, and I can use the RenderPass script for a per-light result.
    Not bad, as both are two sides of the same coin: reflection deals with the surrounding objects, while specular deals with the direct lights. In real life these always come as one. The downside is that I have to find the right balance in Poser itself. But as long as raytrace nodes are used instead of reflection maps, I can render with raytracing switched off to get Specular only. And I can render with all specular channels of the lights switched off, manually (or with a script); this gives me Reflection only, but without the reflection of the specularity in the surrounding objects.
    Does it matter whether I use reflection by raytracing or a reflection (texture) map instead? No.
  • I can get Ambient, separately
    I just use the RenderPass script, check Ambient and set IDL on. Or I leave IDL off, check Occlusion and use the Custom 1 subpass which requires PoserPro and 8-bit per color output (PNG).
  • I can get Diffuse, but only in combination with Ambient and Refraction.
    It requires the Custom 1 sub-pass, which itself requires Poser Pro and 8-bit-per-color output (PNG), preferably with Cast Shadows OFF. When using the RenderPass script, it even comes per light. When the final result requires IDL, leave it on here as well, but do note that a strong IDL SkyDome may wash out most of the object shape and scene depth.

Advanced Render Settings 2

Do I need an additional tool, when most of the passes can be made without it? I think I do, since it not only does more and does it better, but it also saves me most of the hassle.

In the first place, I can get Z-Depth, Altitude (the Y part of Position) and Alpha MAT (the equivalent of Toon/Object ID) without the sub-passes, and hence without the need for Poser Pro; and with Poser Pro, it works for the high-definition output formats as well.

Second, I can get Shadows, and everything else shadow-less, in one go: I just check the Shadows option and uncheck Cast Shadows for the render, as the shadow pass is generated even when shadows are off in the rendered result (tip: save the settings before rendering). I can ask for separate light-passes as well (and even separate figure passes). I must render without IDL, though. Again this works without the need for Poser Pro, and with Poser Pro it works for the high-definition output as well. Note that I can get separate shadows in an IDL render too, but that requires the Custom 3 sub-pass, and why take the hard way?

Third, I can separate Reflection and Specular, provided that I render without IDL and use the Custom 2 pass. So that requires Poser Pro and 8 bits per color. The reflection can then be found in the Color \ Custom 2 pass.
Specular comes on a per-light basis when requested, by ticking that option. Note that if I don’t crank out the reflection as described, I can still get the Specular + Reflection combination in the Custom 2 sub-passes of each light. I can’t get Reflection out separately when using just Poser, or when exporting to EXR.

Fourth, I can get Diffuse and Ambient out separately; I just tick the boxes. Diffuse can come per light, as requested. Meanwhile, the (per-light) Custom 1 sub-passes give me Diffuse + Ambient + Refraction, and those sub-passes require Poser Pro and 8-bit-per-color (PNG) output. I’ve found no way yet to get the Refraction out separately, but at least I can alter the effect by adding or subtracting Ambient and/or Diffuse from this layer, which is nice anyway.