Realtime Fur   Last update: 2008-08-01 08:31:20 by Rim van Wersch




This tutorial shows a basic implementation of realtime fur on arbitrary objects, based on Hoppe et al. The implementation cuts a few corners, but the result looks nice and should suffice for some basic fur.

How does it work?
The original approach uses both so-called shells and fins to generate the geometry used to render the fur. This project only uses shells, because those are by far the easiest to implement and don't actually require any geometry processing. These shells, as shown in the image below, represent the volume of the fur. By incrementally offsetting vertices a short distance along their normal, these shells can be rendered in multiple passes by a simple vertex shader.



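To make the vertex work concrete, here's a minimal sketch of such a shell vertex shader. The parameter and struct names (CurrentShell, NumShells, FurLength and so on) are illustrative assumptions, not necessarily the names used in the demo code.

    float4x4 WorldViewProjection;
    float FurLength;     // total length of the fur, e.g. 0.1 units (assumed name)
    float CurrentShell;  // index of the shell currently being rendered
    float NumShells;     // 16 in this demo

    struct VS_INPUT
    {
        float4 Position : POSITION0;
        float3 Normal   : NORMAL0;
        float2 TexCoord : TEXCOORD0;
    };

    struct VS_OUTPUT
    {
        float4 Position      : POSITION0;
        float2 TexCoord      : TEXCOORD0;
        float  ShellFraction : TEXCOORD1;
    };

    VS_OUTPUT ShellVS(VS_INPUT input)
    {
        VS_OUTPUT output;

        // How far along the fur this shell sits: 0 at the skin, 1 at the tips
        float shellFraction = CurrentShell / NumShells;

        // Push the vertex outwards along its normal to form the shell
        float3 offset = input.Position.xyz + input.Normal * FurLength * shellFraction;

        output.Position      = mul(float4(offset, 1), WorldViewProjection);
        output.TexCoord      = input.TexCoord;
        output.ShellFraction = shellFraction;
        return output;
    }

The mesh is simply drawn once per shell with CurrentShell incremented each pass, so no extra geometry needs to be generated on the CPU.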
At full opacity, these shells only make for a 'fat' looking object though, with only the outer shell visible. So we essentially need to apply a mask for shell opacity. Since we have multiple shells (16 in this demo), it makes sense to forgo separate 2D masks and store the volumetric fur mask in a 3D texture. This 3D texture can be generated by rendering some hair geometry and sticking slices of it into a 3D texture, as shown in the image below (courtesy of Hoppe et al).



This tutorial already comes with a fur volume, so the above mainly serves to give you an idea of how this 3D texture would look. Generating the volume is a bit outside the scope of this tutorial, but in case anyone's wondering: I wrote a simple app which rendered slices of a bunch of lines and stuck these slices in a Texture3D using my trusty Volume Texture Tool. That's rather messy, so if there should be an overwhelming demand, I might be persuaded to write a decent tool for this.

So now we have our shells and our 3D texture containing the per-shell masks in the volume slices. Obviously we need to sample the 3D texture to get the mask, but with what texture coordinates? The depth coordinate (the 3rd one, z if you like, w in sampler terms) can be passed to the shader based on which shell we're rendering, basically CurrentShell / NumShells. For the UV coordinates, it turns out a multiple of the original mesh UV coordinates works out rather well, so we sample at 5 * OriginalTexCoords (a simple multiply, easily done in the pixel shader) and set the volume sampler to mirror addressing for continuity.
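In HLSL terms the lookup boils down to the sketch below, which continues the vertex shader sketch from earlier; the texture and sampler names are again made up for illustration.

    texture FurVolume;  // the 3D texture holding the fur mask slices
    sampler3D FurSampler = sampler_state
    {
        Texture   = <FurVolume>;
        MinFilter = Linear;
        MagFilter = Linear;
        MipFilter = Linear;
        AddressU  = Mirror;  // mirror the tiled UVs for continuity
        AddressV  = Mirror;
        AddressW  = Clamp;
    };

    float4 ShellPS(float2 texCoord      : TEXCOORD0,
                   float  shellFraction : TEXCOORD1) : COLOR0
    {
        // Tile the mesh UVs and use the shell fraction as the depth coordinate
        float3 furCoord = float3(5 * texCoord, shellFraction);
        float  mask     = tex3D(FurSampler, furCoord).r;

        // Putting the mask in the alpha channel lets alpha blending hide
        // the parts of this shell where there is no hair
        float3 furColor = float3(0.6, 0.4, 0.2);  // flat color, for the sketch
        return float4(furColor, mask);
    }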

And there we have it, rendering some basic fur using shells.


Performance, optimizations and notes
The performance is acceptable on my mobile 8600GT, but it does seem to consume masses of fillrate when the mesh is viewed from up close. This is probably due to the use of alpha blending, whereas the original paper proposes alpha testing, which should reduce overdraw thanks to z testing. I tinkered a bit with alpha testing, but the results didn't look as good as alpha blending in my tests.
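For reference, the difference between the two approaches is mostly a matter of render state. In an effect file the pass could be set up roughly like this (a sketch building on the shaders above, not the demo's actual technique):

    technique FurShells
    {
        pass ShellPass
        {
            // Alpha blending: smooth results, but every shell pixel gets blended
            AlphaBlendEnable = true;
            SrcBlend         = SrcAlpha;
            DestBlend        = InvSrcAlpha;

            // Alpha testing alternative: rejected pixels write no color and,
            // combined with z testing, can cut down overdraw on later shells
            // AlphaBlendEnable = false;
            // AlphaTestEnable  = true;
            // AlphaFunc        = Greater;
            // AlphaRef         = 128;

            VertexShader = compile vs_2_0 ShellVS();
            PixelShader  = compile ps_2_0 ShellPS();
        }
    }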

A quite crucial optimization (which isn't in the demo!) would be to reduce the number of shells rendered based on the distance of the mesh to the camera. From far off, the 'depth' of the fur isn't all that visible after all, so 4 or perhaps even 2 shells would be sufficient. Another optimization might be to switch to alpha testing for distant meshes.

A final note on the fur lighting. Instead of the lookup table described in the original paper, this sample uses some simple diffuse lighting with a bit of trickery to make it work for the fur. These tricks are documented in the shader, but you might want to tinker with the fake self-shadowing on line 101 in particular to give the fur more depth. Perhaps a more suitable/complex lighting algorithm would look better, so if anyone has a go at that, let us know.
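As a rough illustration of what such a trick might look like (a guess at the idea, not the actual shader code), darkening the shells near the skin fakes the hairs shadowing each other:

    // Hypothetical take on the fake self-shadowing: hairs near the skin
    // (shellFraction close to 0) receive less light than the tips, which
    // gives the fur a sense of depth
    float3 ApplyFurLighting(float3 furColor, float3 normal, float3 lightDir, float shellFraction)
    {
        // Basic diffuse term
        float diffuse = saturate(dot(normal, -lightDir));

        // Darken the inner shells; tweak the lower bound to taste
        float selfShadow = lerp(0.3, 1.0, shellFraction);

        return furColor * diffuse * selfShadow;
    }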



Files for this tutorial

Filename         Size
RealtimeFur.zip  2.7 MB


Further reading

Real-Time Fur over Arbitrary Surfaces - Lengyel, Praun, Finkelstein and Hoppe (I3D 2001)