HDR Rendering Sample   Last update: 2008-09-29 13:12:58 by Rim van Wersch



About this sample

One of the “must-have” effects for modern video games is HDR rendering. It allows for the rendering of realistic lighting effects that utilize luminance values with a very high dynamic range. This makes for beautiful scenes, and it can also simplify the creation of artwork and assets, as extremely bright or dark areas are no longer special-case scenarios.

This sample presents a basic implementation of a typical HDR pipeline using the XNA Framework. The sample also implements a PostProcessor class, which can be extended for other post-processing tasks (such as depth of field, motion blur, etc.). The graphics code is all compatible with both the PC and the 360; however, the sample itself only includes a Windows PC project. Update: the sample has now been tweaked for the 360 as well.

All details about the implementation, along with further background information on HDR, can be found in a write-up included in the zip below, or downloaded separately as a Word document.



Basic HDR Theory

When we create a GraphicsDevice, we typically have it create a backbuffer with SurfaceFormat.Color (INT8) as the surface format. This format specifies 8 bits per component per pixel, giving us 256 discrete values for each component. In our pixel shaders this [0,255] range is mapped to [0.0,1.0], and if our shaders output any value greater than 1.0, it is simply clamped when written to the backbuffer. This gives us a wide range of colors to work with in terms of what we display on the screen; however, a problem arises when areas of the screen need to be significantly brighter than others.

To solve this problem, we can render to a format whose precision extends past the [0,1] range. This allows us to render a wide range of luminance (brightness) values to a surface, which in turn allows us to do some pretty neat effects with the data. The most convenient way for us to store this extended data is in a floating-point format, such as SurfaceFormat.HalfVector4 (FP16). This format uses 16 bits per component, which can comfortably store a wide range of color values. However, it also uses twice as many bits per pixel as SurfaceFormat.Color, which means double the memory usage and double the bandwidth. Other penalties can also be incurred depending on the hardware, for instance if the ROPs or the texture units can’t handle FP16 data at the same speed they can handle INT8. To alleviate these problems, an alternate encoding format known as LogLuv, which packs HDR data into a standard INT8 surface, is demonstrated in the sample code.
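As a quick illustration of the difference (a contrived snippet written for this article, not code from the sample): a pixel shader that outputs a value greater than 1.0 loses that information on an INT8 target, but keeps it on an FP16 target.


// Contrived example: output a "very bright" color from a pixel shader.
// Rendered to SurfaceFormat.Color (INT8), this is clamped and stored as
// (1, 1, 1): the fact that the pixel was 5x brighter than white is lost.
// Rendered to SurfaceFormat.HalfVector4 (FP16), the value 5.0 survives,
// so later passes (tone mapping, bloom) can still tell it apart from 1.0.
float4 VeryBrightPS() : COLOR0
{
    return float4(5.0f, 5.0f, 5.0f, 1.0f);
}

That preserved magnitude is exactly what the tone mapping and bloom passes later in this article rely on.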



LogLuv Encoding

LogLuv is an encoding format described in Greg Ward’s paper The LogLuv Encoding for Full Gamut, High Dynamic Range Images. Originally designed for storing static images, it was adapted for use in real-time games by former Ninja Theory programmer Marco Salvi. By dedicating 16 bits of a 32bpp pixel to storing luminance information, it is capable of storing a very wide dynamic range of luminance values suitable for HDR rendering. Encoding and decoding can be achieved via simple pixel shader functions, found below:


// M matrix, for encoding
const static float3x3 M = float3x3(
    0.2209, 0.3390, 0.4184,
    0.1138, 0.6780, 0.7319,
    0.0102, 0.1130, 0.2969);

// Inverse M matrix, for decoding
const static float3x3 InverseM = float3x3(
     6.0013, -2.700,  -1.7995,
    -1.332,   3.1029, -5.7720,
     0.3007, -1.088,   5.6268);

float4 LogLuvEncode(in float3 vRGB)
{
    float4 vResult;

    // Transform to a modified CIE XYZ-like space
    float3 Xp_Y_XYZp = mul(vRGB, M);
    Xp_Y_XYZp = max(Xp_Y_XYZp, float3(1e-6, 1e-6, 1e-6));

    // Store the two chrominance coordinates in x and y
    vResult.xy = Xp_Y_XYZp.xy / Xp_Y_XYZp.z;

    // Encode log2 of the luminance as an 8.8 fixed-point value:
    // z holds the integer part, w holds the fractional part
    float Le = 2 * log2(Xp_Y_XYZp.y) + 127;
    vResult.w = frac(Le);
    vResult.z = (Le - (floor(vResult.w * 255.0f)) / 255.0f) / 255.0f;
    return vResult;
}

float3 LogLuvDecode(in float4 vLogLuv)
{
    // Reassemble the logarithmic luminance and undo the encoding
    float Le = vLogLuv.z * 255 + vLogLuv.w;
    float3 Xp_Y_XYZp;
    Xp_Y_XYZp.y = exp2((Le - 127) / 2);
    Xp_Y_XYZp.z = Xp_Y_XYZp.y / vLogLuv.y;
    Xp_Y_XYZp.x = vLogLuv.x * Xp_Y_XYZp.z;

    // Transform back to RGB
    float3 vRGB = mul(Xp_Y_XYZp, InverseM);
    return max(vRGB, 0);
}

NOTE: credit for the optimized encoding function goes to Christer Ericson, who posted it on his blog.
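As a rough aside (a back-of-the-envelope calculation for this article, not taken from the sample's write-up), the dynamic range this encoding covers follows directly from the functions above. With $L_e = 2\log_2 Y + 127$ stored as an 8.8 fixed-point value across the z and w channels, $L_e$ spans $[0, 256)$, so

$$\log_2 Y = \frac{L_e - 127}{2} \in [-63.5, 64.5) \quad\Longrightarrow\quad Y \in \left[2^{-63.5}, 2^{64.5}\right)$$

a dynamic range of roughly $2^{128} \approx 3.4 \times 10^{38}$, with a smallest relative luminance step of $2^{1/512} - 1 \approx 0.14\%$, well below the level at which banding typically becomes visible.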



Getting Set Up

The sample initializes itself by loading models, effects, and textures in the LoadContent method. Our two custom classes, FirstPersonCamera and PostProcessor, are also initialized here.

Before we start rendering, we also need to create the render target to which we’ll render our HDR color information. We do this in the MakeRenderTarget method, which creates a single RenderTarget2D. The parameters we supply to the RenderTarget2D constructor depend on the current settings of the app: if we’re using LogLuv encoding we’ll use SurfaceFormat.Color, otherwise we’ll use SurfaceFormat.HalfVector4. We’ll also specify MultiSampleType.FourSamples if multisampling is enabled. To support multisampling, we also create a separate DepthStencilBuffer. Note that we don’t request a multisampled backbuffer, since we’ll already have rendered to a multisampled RenderTarget2D by the time we present.


private void MakeRenderTarget()
{
    MultiSampleType msType = MultiSampleType.None;
    if (useMultiSampling)
        msType = MultiSampleType.FourSamples;

    if (renderTarget != null)
        renderTarget.Dispose();
    if (dsBuffer != null)
        dsBuffer.Dispose();
    postProcessor.FlushCache();

    if (useLogLuvEncoding)
    {
        // LogLuv is encoded in a standard R8G8B8A8 surface
        renderTarget = new RenderTarget2D(GraphicsDevice,
                                          GraphicsDevice.PresentationParameters.BackBufferWidth,
                                          GraphicsDevice.PresentationParameters.BackBufferHeight,
                                          1,
                                          SurfaceFormat.Color,
                                          msType,
                                          0,
                                          RenderTargetUsage.DiscardContents);
    }
    else
    {
        // Use regular fp16, falling back to no multisampling
        // if the hardware can't multisample FP16 surfaces
        if (!canMultiSampleFP16)
            msType = MultiSampleType.None;
        renderTarget = new RenderTarget2D(GraphicsDevice,
                                          GraphicsDevice.PresentationParameters.BackBufferWidth,
                                          GraphicsDevice.PresentationParameters.BackBufferHeight,
                                          1,
                                          SurfaceFormat.HalfVector4,
                                          msType,
                                          0,
                                          RenderTargetUsage.DiscardContents);
    }

    // We'll use a separate DS buffer in case we're using multisampling
    dsBuffer = new DepthStencilBuffer(GraphicsDevice,
                                      GraphicsDevice.PresentationParameters.BackBufferWidth,
                                      GraphicsDevice.PresentationParameters.BackBufferHeight,
                                      DepthFormat.Depth24Stencil8,
                                      msType,
                                      0);
}



Rendering the scene

For our scene, we’re going to render an HDR skybox and a single model. The HDR skybox contains texture data in FP16 format, which allows it to have areas that are significantly brighter than others. When we’re rendering to FP16, we don’t really need to do anything special in our shaders: we simply output our values as normal, and the only difference is that they won’t be clamped to [0,1]. For LogLuv, however, we need to encode our final color value before outputting it. The same goes for the model Effect.

To accomplish this, we include a header file called LogLuv.fxh in both the Skybox and Model .fx files. This file contains the encoding and decoding functions shown at the beginning of the tutorial. By placing the functions in a header file, we can simply include them in any Effect that requires these routines. To enable easy switching between output in linear RGB and encoded LogLuv, we pass a uniform bool parameter to the pixel shader. When we use a parameter like this, whose value is specified in the technique definition, the effect compiler generates two versions of the pixel shader: one with the encoding and one without. This lets us easily generate different permutations of our shader, with each permutation conveniently referenced by technique name (a sketch of such technique definitions follows the shader below).


float4 ModelPS(in float3 in_vNormalWS   : TEXCOORD0,
               in float3 in_vPositionWS : TEXCOORD1,
               uniform bool bEncodeLogLuv) : COLOR0
{
    // Calculate the reflected view direction
    float3 vNormalWS = normalize(in_vNormalWS);
    float3 vViewDirWS = normalize(g_vCameraPositionWS - in_vPositionWS);
    float3 vReflectedDirWS = reflect(-vViewDirWS, vNormalWS);

    // Get the sunlight term
    float3 vColor = CalcLighting(g_vDiffuseAlbedo,
                                 g_vSpecularAlbedo,
                                 g_fSpecularPower,
                                 g_vSunlightColor,
                                 vNormalWS,
                                 normalize(-g_vSunlightDirectionWS),
                                 vViewDirWS);

    // Add in the reflection
    float3 vReflection = texCUBE(ReflectionSampler, vReflectedDirWS).rgb;
    vColor += vReflection * g_fReflectivity;

    // Encode to LogLuv?
    float4 vOutput;
    if (bEncodeLogLuv)
        vOutput = LogLuvEncode(vColor);
    else
        vOutput = float4(vColor, 1.0f);

    // Return the color
    return vOutput;
}
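
To make the permutations concrete, the technique definitions would look roughly like this (a sketch with hypothetical technique names; "ModelVS" stands in for the sample's model vertex shader, and the sample's actual names and shader models may differ):


// Two techniques compiled from the same ModelPS source. The uniform
// bEncodeLogLuv parameter is fixed at compile time, so the effect
// compiler produces one pixel shader with the encoding and one without.
technique ModelFP16
{
    pass p0
    {
        VertexShader = compile vs_2_0 ModelVS();
        PixelShader = compile ps_2_0 ModelPS(false);
    }
}

technique ModelLogLuv
{
    pass p0
    {
        VertexShader = compile vs_2_0 ModelVS();
        PixelShader = compile ps_2_0 ModelPS(true);
    }
}

On the application side, the matching permutation can then be selected simply by looking up the technique by name.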



Applying Bloom and Tone mapping

Once we have HDR color data rendered to a RenderTarget2D, we can send it off to our PostProcessor to do some neat things with it. In the sample, we mainly accomplish two things:

  • Apply tone mapping to the scene, compressing the color values to the visible range
  • Add an HDR bloom effect

The tone mapping process implemented by the sample uses the operator described in Equation 4 of Reinhard's paper “Photographic Tone Reproduction for Digital Images”. This operator allows colors above a specified luminance value (Lwhite) to “burn out” by remaining above the [0,1] range after compression. This is highly desirable for bloom, since it allows the effect to be applied only to the very brightest areas of the screen. The sample’s implementation is in pp_Tonemap.fxh, as seen below:


float g_fMiddleGrey = 0.6f;
float g_fMaxLuminance = 16.0f;

static const float3 LUM_CONVERT = float3(0.299f, 0.587f, 0.114f);

float3 ToneMap(float3 vColor)
{
    // Get the calculated average luminance
    float fLumAvg = tex2D(PointSampler1, float2(0.5f, 0.5f)).r;

    // Calculate the luminance of the current pixel
    float fLumPixel = dot(vColor, LUM_CONVERT);

    // Apply the modified operator (Eq. 4)
    float fLumScaled = (fLumPixel * g_fMiddleGrey) / fLumAvg;
    float fLumCompressed = (fLumScaled * (1 + (fLumScaled / (g_fMaxLuminance * g_fMaxLuminance)))) / (1 + fLumScaled);
    return fLumCompressed * vColor;
}
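
In Reinhard's notation, the code above first computes a scaled luminance $L$ from the pixel luminance $L_w$ (fLumPixel), the key value $a$ (g_fMiddleGrey) and the scene's average luminance $\bar{L}_w$ (fLumAvg), then applies Equation 4 with $L_{white}$ = g_fMaxLuminance:

$$L = \frac{a}{\bar{L}_w} L_w, \qquad L_d = \frac{L \left(1 + \frac{L}{L_{white}^2}\right)}{1 + L}$$

Here $L_{white}$ is the smallest scaled luminance that maps to pure white; the shader then scales the pixel's RGB color by the compressed luminance $L_d$ (fLumCompressed).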

This tone mapping operator requires the average luminance of the entire scene. To calculate it, we convert the HDR render target to luminance values and then repeatedly downscale until we have a 1x1 texture (a sketch of such a conversion pass appears after the adaptation shader below). To simulate the gradual adaptation of the human eye to different lighting conditions (or the auto-exposure feature of a camera), we can gradually adapt the current luminance value rather than directly using the value calculated through downscaling. The sample implements this feature in pp_HDR.fx, using a technique described in a presentation by Wolfgang Engel:


float4 CalcAdaptedLumPS(in float2 in_vTexCoord : TEXCOORD0) : COLOR0
{
    float fLastLum = tex2D(PointSampler1, float2(0.5f, 0.5f)).r;
    float fCurrentLum = tex2D(PointSampler0, float2(0.5f, 0.5f)).r;

    // Adapt the luminance using Pattanaik's technique
    const float fTau = 0.5f;
    float fAdaptedLum = fLastLum + (fCurrentLum - fLastLum) * (1 - exp(-g_fDT * fTau));

    return float4(fAdaptedLum, 1.0f, 1.0f, 1.0f);
}

In the code, fTau is a constant that controls the rate of adaptation and g_fDT is the amount of time elapsed since the last frame; fLastLum is the adapted luminance from the previous frame, and fCurrentLum is the luminance calculated for the current frame. As a concrete example, at 60 frames per second (g_fDT ≈ 0.0167) with fTau = 0.5, the blend factor 1 - exp(-g_fDT * fTau) works out to roughly 0.008, so the adapted luminance moves about 0.8% of the way toward the measured value each frame.
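The initial luminance conversion mentioned above isn't reproduced in this article; a minimal sketch of such a pass might look like the following (the sampler name is a placeholder, and note that some implementations store a log-average luminance instead of a plain average):


// Hypothetical sketch of the first pass in the downscale chain:
// convert the HDR scene color to a single luminance value per pixel.
// Subsequent passes repeatedly downscale this target until it is 1x1.
float4 CalcLuminancePS(in float2 in_vTexCoord : TEXCOORD0) : COLOR0
{
    float3 vColor = tex2D(LinearSampler0, in_vTexCoord).rgb;
    float fLum = dot(vColor, LUM_CONVERT);
    return float4(fLum, 1.0f, 1.0f, 1.0f);
}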

To add the bloom effect, we first downscale our initial HDR texture to 1/16th size. We then apply our tone mapping operator to determine what the color of each pixel will be once tone mapped, and apply a simple threshold:


float4 ThresholdPS(in float2 in_vTexCoord : TEXCOORD0,
                   uniform bool bEncodeLogLuv) : COLOR0
{
    float4 vSample = tex2D(PointSampler0, in_vTexCoord);

    if (bEncodeLogLuv)
        vSample = float4(LogLuvDecode(vSample), 1.0f);

    vSample = float4(ToneMap(vSample.rgb), 1.0f);

    // Subtract the threshold, clamping to black
    vSample -= g_fThreshold;
    vSample = max(vSample, 0.0f);

    if (bEncodeLogLuv)
        vSample = LogLuvEncode(vSample.rgb);

    return vSample;
}
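
The separable Gaussian blur that runs on this thresholded result isn't reproduced in this article either; a minimal sketch of one direction of such a blur follows (the names are placeholders, not the sample's actual post-processing shader, and the vertical pass would offset along y instead of x):


// One direction (horizontal) of a separable Gaussian blur. The app would
// fill g_fWeights with normalized Gaussian weights and set g_vTexelSize
// to (1/width, 1/height) of the source texture. Compiles as written
// under ps_3_0; older targets would need the loop unrolled.
static const int RADIUS = 6;
float g_fWeights[RADIUS * 2 + 1];
float2 g_vTexelSize;

float4 GaussianBlurH_PS(in float2 in_vTexCoord : TEXCOORD0) : COLOR0
{
    float4 vSum = 0;
    for (int i = -RADIUS; i <= RADIUS; i++)
    {
        float2 vOffset = float2(i * g_vTexelSize.x, 0.0f);
        vSum += tex2D(PointSampler0, in_vTexCoord + vOffset) * g_fWeights[i + RADIUS];
    }
    return vSum;
}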

Once the threshold and the separable Gaussian blur have been applied, we upscale back to full size. The bloom is then combined with the tone mapped scene image in a final pass:


float4 ToneMapPS(in float2 in_vTexCoord : TEXCOORD0,
                 uniform bool bEncodeLogLuv) : COLOR0
{
    // Sample the original HDR image
    float4 vSample = tex2D(PointSampler0, in_vTexCoord);
    float3 vHDRColor;
    if (bEncodeLogLuv)
        vHDRColor = LogLuvDecode(vSample);
    else
        vHDRColor = vSample.rgb;

    // Do the tone mapping
    float3 vToneMapped = ToneMap(vHDRColor);

    // Add in the bloom component
    float3 vColor = vToneMapped + tex2D(LinearSampler2, in_vTexCoord).rgb * g_fBloomMultiplier;

    return float4(vColor, 1.0f);
}

This final pass produces output suitable for display, so we simply render directly to the backbuffer. Now we can sit back and enjoy the pretty bloom effects!

-MJP





Files for this tutorial

Filename Size
  HDRSample.zip 10.0 MB
  HDRSample.doc 51.5 KB
 