haedri edited this page Mar 13, 2015 · 15 revisions

Shadow Mapping with Libgdx

Introduction

In this tutorial we will see how to do shadow mapping with Libgdx. It will cover some specific points:

  • Directional Lights

  • Point Lights

  • Multiple lights, multi-pass / forward rendering

  • Shadows only, not lighting techniques (colors, reflection, ...)

It is best if you already have some knowledge about:

  • 3D with Libgdx

  • Shaders

I will try to write the tutorial the way I like them: small "exercises" with steps and hints so that you can find the answer by yourself, followed by my solution. There will be lots of "useless" steps that are just there to help you understand how things work.

For the best viewing experience this tutorial should be viewed in PDF: http://www.microbasic.net/resources/tutorials/shadow-mapping-with-libgdx/

Disclaimer

There are hundreds of different techniques that can be used for shadow mapping (especially when dealing with multiple lights). In this tutorial we will just see one that works with Libgdx 1.5 & OpenGL ES 2.0. It is not the most optimized one; it's just meant to cover the basics and get something that works.

I am not a native English speaker, so there may be some typos or badly written sentences. Don't hesitate to contact me if you want to correct this tutorial (I can give you temporary write permission on the Google Doc), no worries!

Part 1 : the scene

The project

This project is a simple LibGdx project with PC, Android & iOS support. If you are not familiar with LibGdx you can explore the code, it should be easy enough and well documented.

There is just one big project for the whole tutorial, but each part is separated in its own package, you can choose which part should be launched by modifying the ShadowMapping class (com.microbasic.sm in the core project).

Everything is customized because we will have to personalize things later: for example we don't use the included shaders but our own passthrough shader, which might seem useless for now but will be completed later on.

If you need help importing the project please refer to the LibGdx documentation: http://libgdx.badlogicgames.com/documentation.html

If you already have an environment ready just import the gradle project like any other LibGdx project and you are done.

The project zip can be downloaded here: http://www.microbasic.net/tutorials/shadow-mapping/shadow-mapping.zip

The scene

The scene we will use is just a complex enough test case:

  • One directional light (spot on the wall)

  • Two point lights (fire & torch)

  • Multiple objects

  • Floating objects

  • High vertex count

It's just a single big model that contains everything; it uses some elements created for one of my games that I randomly put together to get this scene.

This is a voxel scene with around 36 000 voxels (in the original file, before export), slightly optimized (19 060 vertices, 10 123 faces). This vertex count is high enough to notice when bad optimization happens, and low enough not to disturb development.

By default it uses a FirstPersonCameraController from LibGdx to move around freely.

You can press the F2 key to take screenshots; they will be saved in the desktop/bin folder (on PC).

The shader

The current shader is a simple pass-through that adds some internal shading on faces depending on their normal (just to suggest the volume of objects).

The shader program (SimpleTextureShader) is a simplified version of the default LibGdx shader program that removes everything we don’t need.

Tool classes

  • **IntFirstPersonCameraController:** just a copy/paste of the LibGdx FirstPersonCameraController, adding support for AZERTY keyboards & sprinting (using left shift or left ctrl)

  • **FrameBufferCubeMap:** a copy of the LibGdx FrameBuffer but with CubeMap support; this class might not be disposed correctly when exiting the program

  • **ScreenshotFactory:** a copy of the sample screenshot factory from the LibGdx documentation, adding support for a specific width/height (to take a screenshot of a frame buffer) and a filename prefix

Part 2 : one directional light

Introduction

In this part we will see how to render one directional light. A directional light illuminates the scene in just one direction, like a flashlight.

If you already know a bit about shadow mapping you can go through this part quickly as we will try to go slowly here for beginners to understand.

Moving the camera

Exercise

  • Create a new camera on the square light (33,10,3), looking at the center of the room (negative x)

  • Render the scene with this camera

Notes

  • Just create the camera based on existing one and use it to render the scene.

Solution

First we have to create a new PerspectiveCamera:

cameraLight = new PerspectiveCamera(120f, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cameraLight.near = 1f;
cameraLight.far = 100;
cameraLight.position.set(33, 10, 3);
cameraLight.lookAt(-1, 0, 0);
cameraLight.update();

And temporarily replace the camera at render time:

modelBatch.begin(cameraLight);
modelBatch.render(modelInstance);
modelBatch.end();

What we have done here is just create a regular camera, but placed on the light object itself: what is visible with this camera is also visible from the light. We've set a high fov (120°) in order to enlarge what the light will "see".

The far plane is set to 100, meaning the camera can see the whole scene, but we could set a different value depending on the light parameters.
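To get a feel for what that 120° fov buys us, here is a small Python sketch (not from the tutorial project, just basic trigonometry) of how wide a slice of the scene a perspective camera covers at a given distance:

```python
import math

def visible_width(fov_degrees, distance):
    """Width of the slice a perspective camera covers at a given
    distance from it: 2 * d * tan(fov / 2)."""
    return 2.0 * distance * math.tan(math.radians(fov_degrees) / 2.0)

# At 10 units from the light, the 120-degree camera covers three times
# the width of a default 60-degree one (tan 60 = 3 * tan 30).
print(round(visible_width(120, 10), 1))  # → 34.6
print(round(visible_width(60, 10), 1))   # → 11.5
```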

The idea here is that everything we see with this camera should be bright, and everything else should be in shadow later, as it can't be seen by the light. So we need a way to check, from the scene camera, what is really visible from the light camera's point of view.

Rendering to frame buffer

Exercise

  • Render the scene of the previous exercise to a frame buffer, then revert the main scene to its normal state/camera

  • Create a screenshot of this frame buffer to make sure everything is correct

Notes

  • Use the screenshot factory before ending the frame buffer, it will save the frame buffer content and not the screen

Solution

The main scene is reverted to its original state (using the previous camera) but the scene is now rendered twice:

  • Original view, "player" view, on screen

  • Light view, in a frame buffer

The new render function is:

public FrameBuffer frameBuffer;
public static final int DEPTHMAPSIZE = 1024;

public void renderLight()
{
    if (frameBuffer == null)
    {
        frameBuffer = new FrameBuffer(Format.RGBA8888, DEPTHMAPSIZE, DEPTHMAPSIZE, true);
    }
    frameBuffer.begin();
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(cameraLight);
    modelBatch.render(modelInstance);
    modelBatch.end();
    if (takeScreenshots)
    {
        ScreenshotFactory.saveScreenshot(frameBuffer.getWidth(), frameBuffer.getHeight(), "depthmap");
    }
    frameBuffer.end();
}

And it should be called from the render function:

public void render(final float delta)
{
    act(delta);
    renderLight();
    renderScene();
}

We now have two screenshots when pressing F2, the main scene

And the frame buffer

This last view is highly distorted (high fov, and it no longer depends on the window size), but we don't care as the player will never see it.

The size of the frame buffer (DEPTHMAPSIZE) can be adjusted for higher/lower shadow quality.

Creating a depth shader

Exercise

  • Instead of rendering the actual scene render a depth map for the light

Notes

  • This will need a new shader for the depth map

  • Depth is just the length of the vector (position,light position)

  • To get a value between [0,1] just divide by the far value of the camera
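As a sanity check of these two notes, a tiny Python sketch (hypothetical numbers, not from the project):

```python
import math

def normalized_depth(point, light_pos, camera_far):
    """Distance from the light to the point, scaled into [0, 1]
    by dividing by the camera's far plane."""
    dx, dy, dz = (p - l for p, l in zip(point, light_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / camera_far

# A point 50 units away from a light whose camera sees up to far = 100
# lands exactly in the middle of the [0, 1] depth range.
print(normalized_depth((50.0, 0.0, 0.0), (0.0, 0.0, 0.0), 100.0))  # → 0.5
```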

Solution

Duplicating the shaders

The idea now is to create a depth map: an image that contains the depth of each point (its distance to the camera/light) instead of the actual image data (texture and such).

This image will be used later to decide on what is in shadow or not.

What we need first is to copy/paste the scene ShaderProgram and ModelBatch:

shaderProgramDepthMap = setupShader("depthmap");
modelBatchDepthMap = new ModelBatch(new DefaultShaderProvider()
{
    @Override
    protected Shader createShader(final Renderable renderable)
    {
        return new DepthMapShader(renderable, shaderProgramDepthMap);
    }
});

You can also copy/paste the SimpleTextureShader and create a new one called DepthMapShader; it's basically the same except it doesn't need some uniforms/attributes (texture information, normal matrix, ...). This is purely optional (just in case you need to do some specific work on the shader later).

And of course use this new ModelBatch in the frame buffer:

modelBatchDepthMap.begin(cameraLight);
modelBatchDepthMap.render(modelInstance);
modelBatchDepthMap.end();

Also create the depthmap fragment and vertex shaders by copy/pasting the scene shaders (in the android project, assets).

If you take a screenshot now you should just get the exact same images as before (or a black screen if you have already removed the diffuse texture information).

Creating depth map

Now what we want in the frame buffer is the depth map: the distance from each point to the camera/light.

So in the vertex shader we now need to save the vertex coordinates in a varying for the fragment shader:

varying vec4 v_position;

void main()
{
    v_position = u_worldTrans * vec4(a_position, 1.0);
    gl_Position = u_projViewTrans * v_position;
}

What is done here is that the position is saved after applying the object transformation (the scene is scaled in this project) but before applying the camera projection.

Then in the fragment shader we need to calculate the length of the vector (v_position, light position). Problem is, this value is in [0, cameraFar] and we can only write values in [0, 1] as colors, so we also need to send the camera far value to the shader before rendering the light:

shaderProgramDepthMap.begin();
shaderProgramDepthMap.setUniformf("u_cameraFar", camera.far);
shaderProgramDepthMap.setUniformf("u_lightPosition", cameraLight.position);
shaderProgramDepthMap.end();
modelBatchDepthMap.begin(cameraLight);
modelBatchDepthMap.render(modelInstance);
modelBatchDepthMap.end();

And retrieve it in the fragment shader:

uniform float u_cameraFar;
uniform vec3 u_lightPosition;
varying vec4 v_position;

void main()
{
    gl_FragColor = vec4(length(v_position.xyz - u_lightPosition) / u_cameraFar);
}

The framebuffer content now becomes:

A brighter pixel means the point is farther away from the camera (the shader writes the normalized distance directly as the color).

At this point we could theoretically use an alpha-only texture for the frame buffer (Format.Alpha instead of Format.RGBA8888) as we don't need the 4 color components, but this format is not well supported on mobile devices (it should be OK on PC though). We could also use some pack/unpack functions to store the depth across all 4 components (32 bits) of the texture, but that makes the screenshots hard to read (see the end of the tutorial for some pack/unpack functions).
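As an illustration of that pack/unpack idea (the GLSL helpers themselves are deferred to the end of the tutorial), here is the general scheme in Python; the function names are placeholders, not from the project:

```python
def pack(depth):
    """Split a float in [0, 1) into four 8-bit channels (r, g, b, a):
    a 32-bit fixed-point representation of the depth."""
    fixed = int(depth * 256 ** 4)
    return ((fixed >> 24) & 255, (fixed >> 16) & 255,
            (fixed >> 8) & 255, fixed & 255)

def unpack(rgba):
    """Rebuild the float from the four channels."""
    r, g, b, a = rgba
    return (r * 256 ** 3 + g * 256 ** 2 + b * 256 + a) / 256 ** 4

# 32 bits of precision instead of 8: the round trip loses almost nothing.
depth = 0.62
assert abs(unpack(pack(depth)) - depth) < 1e-7
```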

Rendering "depth map" on main render

Exercise

  • We want to see the light depth map, but from the scene camera. For that we will need 2 renders:

  • First, on the main scene, recalculate the depth map viewed from the light and display it

  • Second, on the main scene, retrieve the light depth map (from the frame buffer) and display it

Notes

  • You need to send the light camera's camera.combined matrix to transform scene positions into the light camera's space

  • To get the position inside the texture from the coordinates in the camera projection, the formula is (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5

Solution

Depth map - light to scene

We are now on the main scene, but we need to do the transformation from the light camera's perspective, so we will need to send that camera's transformation matrix to the shader.

In the scene vertex shader, add the new uniform

uniform mat4 u_lightTrans;

And in the MainScreen class, set the corresponding uniforms before rendering the scene:

shaderProgram.begin();
shaderProgram.setUniformMatrix("u_lightTrans", cameraLight.combined);
shaderProgram.setUniformf("u_cameraFar", cameraLight.far);
shaderProgram.setUniformf("u_lightPosition", cameraLight.position);
shaderProgram.end();
modelBatch.begin(camera);
modelBatch.render(modelInstance);
modelBatch.end();

Back in the vertex shader, we can now calculate the vertex coordinates in the light camera space:

varying vec4 v_positionLightTrans;
varying vec4 v_position;

void main()
{
    v_position = u_worldTrans * vec4(a_position, 1.0);
    v_positionLightTrans = u_lightTrans * v_position;
    gl_Position = u_projViewTrans * v_position;
    ...

In the fragment shader we just have to override the previous color value to display the depth map:

...
uniform float u_cameraFar;
uniform vec3 u_lightPosition;
varying vec4 v_positionLightTrans;
varying vec4 v_position;

void main()
{
    ...
    float len = length(v_position.xyz - u_lightPosition) / u_cameraFar;
    finalColor.rgb = vec3(1.0 - len);
    gl_FragColor = finalColor;
}

And the resulting image:

What we see here is the distance of each point to the light source.

Depth map - from frame buffer

We will now need the texture produced by the previous frame buffer, so we have to bind it and add it to the uniforms:

shaderProgram.begin();
final int textureNum = 2;
frameBuffer.getColorBufferTexture().bind(textureNum);
shaderProgram.setUniformi("u_depthMap", textureNum);
shaderProgram.setUniformMatrix("u_lightTrans", cameraLight.combined);
shaderProgram.setUniformf("u_cameraFar", cameraLight.far);
shaderProgram.setUniformf("u_lightPosition", cameraLight.position);
shaderProgram.end();

And in the scene fragment shader:

vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w) * 0.5 + 0.5;
float len = texture2D(u_depthMap, depth.xy).a;
finalColor.rgb = vec3(1.0 - len);
gl_FragColor = finalColor;

First we calculate the depth vector, which contains the coordinates to look up in the light depth map; then we extract the pixel color and display it.
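The *0.5+0.5 mapping can be checked outside the shader; a small Python sketch with made-up numbers:

```python
def light_tex_coords(pos_light_trans):
    """Map clip-space coordinates (x, y, z, w) from the light camera to
    [0, 1] texture coordinates: perspective divide by w, then the
    [-1, 1] range is remapped with * 0.5 + 0.5."""
    x, y, z, w = pos_light_trans
    return tuple(c / w * 0.5 + 0.5 for c in (x, y, z))

# A point dead center in the light's view (x = y = 0 after the divide)
# lands in the middle of the depth map texture.
print(light_tex_coords((0.0, 0.0, 2.0, 4.0)))  # → (0.5, 0.5, 0.75)
```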

Now we can see the same depth map, but this time retrieved from the light's frame buffer; we can already guess where the shadows should be.

Applying shadows

Introduction

From the previous exercise we now have an idea of where the shadows should be; if we apply a differential filter to both images we get something like this:

When values are equal we get pure white, and when values are different we get a gray color. The idea now is to do this programmatically.

Exercise

  • Compare both depth maps and render shadows appropriately

  • Make the light dimmer farther away from the source

  • Clean up the code (create a light class, ...)

Notes

  • You must compare the depth maps calculated in the previous exercises: the one extracted from the frame buffer texture and the one calculated with the camera projection matrix

Solution

Rendering shadows

A first approach can be to just check whether the values are nearly equal:

vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w) * 0.5 + 0.5;
float lenFB = texture2D(u_depthMap, depth.xy).a;
float lenLight = length(v_position.xyz - u_lightPosition) / u_cameraFar;
float diff = lenFB - lenLight;
if (!(diff < 0.01 && diff > -0.01))
    finalColor.rgb *= 0.4;
gl_FragColor = finalColor;

The result is not perfect, but we are getting there.

Looking the other way, we can see it's not so good:

Basically, everything that is not in the light frame buffer texture will not render nicely, which makes sense as it is not visible from the light camera. So we need to handle those cases:

void main()
{
    vec4 finalColor = texture2D(u_diffuseTexture, v_texCoords0);
    finalColor.rgb = finalColor.rgb * v_intensity;
    vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w) * 0.5 + 0.5;
    // Make sure the point is in the field of view of the light
    // and also that it is not behind it
    if (v_positionLightTrans.z >= 0.0 &&
        (depth.x >= 0.0) && (depth.x <= 1.0) &&
        (depth.y >= 0.0) && (depth.y <= 1.0)) {
        float lenToLight = length(v_position.xyz - u_lightPosition) / u_cameraFar;
        float lenDepthMap = texture2D(u_depthMap, depth.xy).a;
        // If it can not be viewed by the light -> shadow
        if (lenDepthMap < lenToLight - 0.005) {
            finalColor.rgb *= 0.4;
        }
    } else {
        finalColor.rgb *= 0.4;
    }
    gl_FragColor = finalColor;
}

In practice we don't check for equality; we check whether the current point is behind the recorded depth (lenDepthMap < lenToLight), because there could be objects in front of the light that don't cast shadows but are still affected by lights. For example, let's say there are some particles between the light and the monster (vapor, a bullet trail, anything): we don't want them to cast shadows, as it would be too costly (if we were using a cache it would have to be updated every time a particle moves), but we still want them to be darker or lighter depending on whether they are in a shadow or not. This simple comparison acts as if the object were transparent to light, while still darkening it when it stands in a shadow.

We also add an error margin (lenToLight - 0.005) because the precision is never high enough to get a perfect 1:1 pixel match between the scene and the light frame buffer. There are many ways to manage this error margin; this is just the easiest one to implement.
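The comparison with its error margin boils down to a one-liner; sketched in Python with made-up depth values:

```python
def in_shadow(len_depth_map, len_to_light, bias=0.005):
    """A fragment is in shadow when the depth map recorded something
    nearer to the light than the fragment itself; the bias absorbs the
    precision errors ("shadow acne") between the two renders."""
    return len_depth_map < len_to_light - bias

# Same surface seen by both cameras, with tiny numeric noise: still lit.
print(in_shadow(0.500, 0.501))  # → False
# An occluder recorded at depth 0.3 shadows a fragment at depth 0.5.
print(in_shadow(0.3, 0.5))  # → True
```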

This is a lot better now.

Another explanation

Another way to view this is to go back to the depth map viewed from the scene: we draw a line from the light and check the depth/brightness of the points along it:

From the light's point of view, only the point with a brightness/depth of 0.75 is visible, as the other one is behind the monster; we need to check that programmatically using the real depth map.

In the light view only the point with a brightness of 0.75 exists; it means the other one (0.45) cannot be seen by the light, hence it should be in shadow.

Dimmer light

Now we just add a little calculation so the light gets dimmer as the distance to the source increases:

if (lenDepthMap < lenToLight - 0.005) {
    finalColor.rgb *= 0.4;
} else {
    finalColor.rgb *= 0.4 + 0.6 * (1.0 - lenToLight);
}

It's not a huge difference when far is set to a high value, but it still looks a bit better.
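The resulting intensity formula can be sketched in Python (same constants as the shader above):

```python
def light_intensity(len_to_light, shadowed):
    """0.4 ambient floor everywhere; lit fragments gain up to 0.6 more,
    fading linearly with the normalized distance to the light."""
    return 0.4 if shadowed else 0.4 + 0.6 * (1.0 - len_to_light)

intensity_near = light_intensity(0.0, False)  # right at the light: fully lit
intensity_far = light_intensity(1.0, False)   # at the far plane: ambient only
intensity_dark = light_intensity(0.2, True)   # in shadow: ambient only
```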

Cleaning

I won't detail the code cleanup; just have a look at the project. The idea is to have an abstract class called Light that can represent any light, and a child class called DirectionalLight that just applies to directional lights.

At this point if you have something that works you can just copy/paste the part2 package of the tutorial project to start the next parts with a clean code.

Part 3 : one point light

Introduction

Point lights are like 6 directional lights with a FOV of 90°, looking in all 6 directions. You could achieve a point light by using 6 directional lights and merging them, but there is something easier: cube maps.

Basically, a cube map is an object composed of 6 textures, one for each face of a cube. There is a specific frame buffer used to write to a cube map: instead of rendering the scene once and closing the frame buffer, the scene is now rendered 6 times, each time to a specific side of the frame buffer cube map, changing the camera orientation between renders.
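Under the hood, a cube map lookup picks the face along the dominant axis of the direction vector; here is a rough Python sketch of that selection (the real GPU lookup also computes the coordinates inside the chosen face, which is skipped here):

```python
def cubemap_side(direction):
    """Return the cube map face a lookup direction falls on: the face
    along the direction's dominant (largest absolute) axis."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "PositiveX" if x > 0 else "NegativeX"
    if ay >= az:
        return "PositiveY" if y > 0 else "NegativeY"
    return "PositiveZ" if z > 0 else "NegativeZ"

# A fragment mostly to the light's +x side is looked up on the +x face.
print(cubemap_side((10.0, 2.0, -3.0)))  # → PositiveX
print(cubemap_side((0.0, -5.0, 1.0)))   # → NegativeY
```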

Creating frame buffer (cube map)

Exercise

  • Create a new Light class (based on DirectionalLight)

  • Place the light on the scene (-25.5, 12.0, -26) replacing the previous one

  • Use FrameBufferCubeMap instead of FrameBuffer, render the scene to the 6 sides

Notes

  • Don't forget to set a higher near plane for the light camera: the light ends up inside its own sprite, so it needs to ignore geometry near itself

  • The light must now render the scene 6 times: the frame buffer's begin(side, camera) function must be called once per side, while the end() function should be called only once, at the end

Solution

First copy/paste the DirectionalLight class and call it PointLight; it will be a good start. Replace the light in the scene with this one, placing it at the specified coordinates.

Remove all references to the direction variable in this class; it won't be used, as a point light emits in all directions.

If you render the scene at this point you should have just a small part of the scene lit: for now this is still just a directional light pointing in the default direction.

Change the FrameBuffer to FrameBufferCubeMap. Instantiation is the same (except there is no height, as the faces are square), but the render function changes, as we need to render the scene 6 times (once per side of the cube that will hold the final texture).

public FrameBufferCubeMap frameBuffer;
public Cubemap depthMap;

@Override
public void render(final ModelInstance modelInstance)
{
    if (frameBuffer == null)
    {
        frameBuffer = new FrameBufferCubeMap(Format.RGBA8888, MainScreen.DEPTHMAPSIZE, true);
    }
    shaderProgram.begin();
    shaderProgram.setUniformf("u_cameraFar", camera.far);
    shaderProgram.setUniformf("u_lightPosition", camera.position);
    shaderProgram.end();
    for (int s = 0; s <= 5; s++)
    {
        Cubemap.CubemapSide side = Cubemap.CubemapSide.values()[s];
        frameBuffer.begin(side, camera);
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        modelBatch.begin(camera);
        modelBatch.render(modelInstance);
        modelBatch.end();
        if (mainScreen.takeScreenshots)
        {
            ScreenshotFactory.saveScreenshot(frameBuffer.getWidth(), frameBuffer.getHeight(), "depthmapcube-" + side);
        }
    }
    frameBuffer.end();
    depthMap = frameBuffer.getColorBufferTexture();
}

We just go through the possible sides from Cubemap.CubemapSide, beginning the frame buffer each time and rendering the scene. The begin() function takes care of positioning the camera accordingly. We need to call the ScreenshotFactory inside the loop; we can't call it only once, as that would just capture the last side being written.

If you render the scene now, nothing special should happen; the render could even crash, as the texture type in the shader is no longer correct. In that case just comment out the renderScene() call, as we only need screenshots of the frame buffer content.

One last thing to do is to adapt the camera field of view: it needs to be set to 90° in order to get a perfect cube.

camera = new PerspectiveCamera(90f, MainScreen.DEPTHMAPSIZE, MainScreen.DEPTHMAPSIZE);

If you take some screenshots you should get 6 of them, each displaying the depth map from the light's point of view in one direction. Here they are combined into a single image (and rotated):

Just imagine applying this texture to a cube and placing yourself inside that cube: you would see the whole scene.

You can see some of the light model's geometry, as the light is placed inside the light model itself; we will adjust the camera near plane to ignore this geometry while rendering the light depth map.

@Override
public void init()
{
    super.init();
    camera = new PerspectiveCamera(90f, MainScreen.DEPTHMAPSIZE, MainScreen.DEPTHMAPSIZE);
    camera.near = 4f;
    camera.far = 70;
    camera.position.set(position);
    camera.update();
}

This is better; we can still see some of the flame particles above, but it's not important.

We now have a correct depth map to work with!

Applying frame buffer

Exercise

  • Replace the previous texture in the scene shader by the cube map

Notes

  • Use samplerCube instead of sampler2D as the texture type in the shader

  • To get a pixel from the cube map use textureCube(u_depthMap, lightDirection)

Solution

We just have to work on the scene fragment shader here. The depthMap type must be changed from sampler2D to samplerCube, as it is now a cube map texture, and to retrieve a pixel from this texture we need to use the textureCube function. This function takes a direction as parameter: the vector (current position, light position), the same one we were using previously to create the depth map (how convenient).

uniform samplerCube u_depthMap;

void main()
{
    vec4 finalColor = texture2D(u_diffuseTexture, v_texCoords0);
    finalColor.rgb = finalColor.rgb * v_intensity;
    vec3 lightDirection = v_position.xyz - u_lightPosition;
    float lenToLight = length(lightDirection) / u_cameraFar;
    float lenDepthMap = textureCube(u_depthMap, lightDirection).a;
    // If it can not be viewed by the light -> shadow
    if (lenDepthMap < lenToLight - 0.005) {
        finalColor.rgb *= 0.4;
    } else {
        finalColor.rgb *= 0.4 + 0.6 * (1.0 - lenToLight);
    }
    gl_FragColor = finalColor;
}

We don’t need to check if the point is in the camera’s field of view as the light now emits in all directions, so the code is actually easier to understand.

We now have our second light type, time to combine them now!

Part 4 : multiple lights / forward rendering

Introduction

There are lots of techniques to render multiple lights; you will usually hear about deferred shading, which seems to be the most common approach nowadays. But we will focus here on forward rendering: it's easier to work with and it will work on desktop & mobile (even old devices).

A simple approach that could work in our case (few lights) is to use an array of depth map textures and combine them when rendering the scene. The problem is that while it might work here, in a real test case you would quickly run out of available texture units, and the limit is hardware dependent, so we will not do that.

Instead we will take a forward rendering approach. Again, there are lots of ways to implement this, but basically what we will do is:

  1. Render all depth maps to their own texture (as we do currently)

  2. Create all shadow maps for the scene and blend them in a single texture (new step)

  3. Render the scene using the texture from previous point to display shadows (scene shader highly modified)

In step 2 the scene geometry will be rendered once per light, which of course is the downside of this approach. We could also combine steps 2 & 3 (adding a final pass to step 2), but it is easier to understand this way.
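Steps 2 and 3 can be sketched numerically in Python (toy one-dimensional "textures" standing in for actual rendering, not project code):

```python
def forward_render(scene_colors, per_light_maps):
    """Steps 2 and 3 condensed: blend the per-light shadow maps
    additively into a single light map (clamped like an 8-bit render
    target would be), then multiply the scene colors by it."""
    light_map = [0.0] * len(scene_colors)
    for shadow_map in per_light_maps:  # one pass per light (step 2)
        light_map = [min(1.0, a + b) for a, b in zip(light_map, shadow_map)]
    # final pass (step 3): scene color modulated by accumulated light
    return [c * l for c, l in zip(scene_colors, light_map)]

# Two lights over three pixels: a pixel reached by both saturates,
# a pixel reached by neither stays black.
out = forward_render([1.0, 1.0, 1.0], [[0.8, 0.4, 0.0], [0.6, 0.0, 0.0]])
print(out)  # → [1.0, 0.4, 0.0]
```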

New scene frame buffer

Exercise

  • Create an intermediate frame buffer that will just be an exact replica of what is actually displayed on screen

  • It should have its own shader (based on SimpleTextureShader), and copied/pasted shaders from the scene

Notes

  • The resulting screenshot must be exactly the same as the screen screenshot (size,…) but using its own (copied) shaders.

Solution

First we have to duplicate some elements:

  • SimpleTextureShader (from tools package) to ShadowMapShader

  • scene_v.glsl to shadows_v.glsl

  • scene_f.glsl to shadows_f.glsl

And adapt the code (MainScreen):

private ShaderProgram shaderProgramShadows;
private ModelBatch modelBatchShadows;
private ModelInstance modelInstanceShadows;

public void initShaders()
{
    ...
    shaderProgramShadows = setupShader("shadows");
    modelBatchShadows = new ModelBatch(new DefaultShaderProvider()
    {
        @Override
        protected Shader createShader(final Renderable renderable)
        {
            return new ShadowMapShader(renderable, shaderProgramShadows);
        }
    });
}

/**
 * Render the scene shadow map
 */
public void renderShadows()
{
    if (frameBufferShadows == null)
    {
        frameBufferShadows = new FrameBuffer(Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), true);
    }
    frameBufferShadows.begin();
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    currentLight.applyToShader(shaderProgramShadows);
    modelBatchShadows.begin(camera);
    modelBatchShadows.render(modelInstance);
    modelBatchShadows.end();
    if (takeScreenshots)
    {
        ScreenshotFactory.saveScreenshot(frameBufferShadows.getWidth(), frameBufferShadows.getHeight(), "shadows");
    }
    frameBufferShadows.end();
}

/**
 * Render a frame
 */
@Override
public void render(final float delta)
{
    act(delta);
    currentLight.render(modelInstance);
    renderShadows();
    renderScene();
}

This is fairly easy: just copy/paste/adapt the existing rendering code, make sure the frame buffer is the same size as the screen, and don't mix up the renamed variables. To make sure the new shader is really used, you can slightly modify it and check that the modifications appear in the screenshots.

When taking screenshots the scene and shadow screenshots must be the same.

Shadows frame buffer (1 light)

Exercise

  • Instead of rendering the whole scene, just render the light map (lighten areas that are not in shadows)

Notes

  • We start here with a black screen and lighten the pixels (in the shader) that are not in shadows

Solution

Just adapt the shadows fragment shader: we don't need the actual texture color anymore, we just have to write the shadow/light part that will later be applied to the final image.

void main()
{
    float intensity = 0.0;
    vec3 lightDirection = v_position.xyz - u_lightPosition;
    float lenToLight = length(lightDirection) / u_cameraFar;
    float lenDepthMap = textureCube(u_depthMap, lightDirection).a;
    // If it can not be viewed by the light -> shadow
    if (lenDepthMap < lenToLight - 0.005) {
        intensity = 0.4;
    } else {
        intensity = 0.4 + 0.6 * (1.0 - lenToLight);
    }
    gl_FragColor = vec4(intensity);
}

And now the frame buffer should just contain lighting information

Applying shadows frame buffer

Exercise

  • Use the texture from previous frame buffer to render shadows on the final render

Notes

  • When drawing a pixel you will need to get the pixel color (light information) from the previous frame buffer

  • The gl_FragCoord fragment shader variable contains the coordinates of the pixel being drawn (in pixels, so you need the screen width/height to get a [0, 1] value)

Solution

The idea now is to remove our current light calculation from the scene shader; instead we will use the lighting information from the shadow framebuffer.

So for every pixel in the scene shader we will need to retrieve the pixel at the same position in the shadow framebuffer and multiply by its alpha value. It's fairly easy to know which pixel is being drawn from within the fragment shader; the problem is that this position is in screen coordinates (pixels) while the framebuffer texture is addressed in [0, 1] coordinates, so we need to know the screen size to convert between the two.
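The pixel-to-texture-coordinate conversion is just a division; a Python sketch with a made-up screen size:

```python
def frag_coord_to_uv(frag_x, frag_y, screen_w, screen_h):
    """gl_FragCoord is in pixels; the shadow frame buffer texture is
    addressed in [0, 1], so divide by the screen size."""
    return frag_x / screen_w, frag_y / screen_h

# The pixel at (640, 360) on a 1280x720 screen samples the center
# of the shadow texture.
print(frag_coord_to_uv(640, 360, 1280, 720))  # → (0.5, 0.5)
```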

Let's send all this information to the scene shader:

/**
 * Render the main scene, final render
 */
public void renderScene()
{
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    shaderProgram.begin();
    final int textureNum = 4;
    frameBufferShadows.getColorBufferTexture().bind(textureNum);
    shaderProgram.setUniformi("u_shadows", textureNum);
    shaderProgram.setUniformf("u_screenWidth", Gdx.graphics.getWidth());
    shaderProgram.setUniformf("u_screenHeight", Gdx.graphics.getHeight());
    shaderProgram.end();

    modelBatch.begin(camera);
    ...
}

The previous call to currentLight.applyToShader(shaderProgram); is removed, as we won't use it anymore; instead we bind the framebuffer texture and send the screen size to the shader.

In the scene vertex shader we just remove unnecessary code; everything related to light calculation is no longer needed:

uniform mat4 u_projViewTrans;
uniform mat4 u_worldTrans;
uniform mat3 u_normalMatrix;
//uniform mat4 u_lightTrans;

varying vec2 v_texCoords0;
varying float v_intensity;
//varying vec4 v_positionLightTrans;
//varying vec4 v_position;

void main()
{
    // Vertex position after transformation
    vec4 pos = u_worldTrans * vec4(a_position, 1.0);
    gl_Position = u_projViewTrans * pos;

    v_texCoords0 = a_texCoord0;
    ...

And in the fragment shader we now apply the shadow map from the framebuffer:

uniform sampler2D u_diffuseTexture;
uniform sampler2D u_shadows;
uniform float u_screenWidth;
uniform float u_screenHeight;

varying vec2 v_texCoords0;
varying float v_intensity;

void main()
{
    vec4 finalColor = texture2D(u_diffuseTexture, v_texCoords0);
    finalColor.rgb = finalColor.rgb * v_intensity;

    // Retrieve the shadow color from the shadow map
    vec2 c = gl_FragCoord.xy;
    c.x /= u_screenWidth;
    c.y /= u_screenHeight;
    vec4 color = texture2D(u_shadows, c);

    // Apply shadow
    finalColor.rgb *= color.a;

    gl_FragColor = finalColor;
}

The code is quite easy to understand: we retrieve the pixel from the frame buffer and apply it to the current pixel we are writing to the screen.

The final result should be the same as before, since we haven’t changed anything about the lights yet; the only difference is that shadows are now rendered separately.

Multi pass shader

Exercise

  • Add a new point light on the torch (0, 13.8, 32)

  • Render shadows/lights from those 2 point lights on the shadow framebuffer

  • Reinsert the old directional light

Notes

  • The easiest solution is to change the render function in the shader, rendering the geometry multiple times (one per light)

  • Default color is pitch black, we just need to add light in each pass

  • Shadows/lights from each light are rendered on top of each other using additive blending, for example context.setBlending(true, GL20.GL_ONE, GL20.GL_ONE);

Solution

This is probably the hardest part to understand if you are not very familiar with 3D, the depth buffer and so on. And again, there are lots of different techniques that can achieve the same result.

Multiple lights

First let’s add some code to handle multiple lights and remove the references to the currentLight variable:

```java
public ArrayList<Light> lights = new ArrayList<Light>();

public void init()
{
    ...
    lights.add(new PointLight(this, new Vector3(-25.5f, 12.0f, -26)));
    lights.add(new PointLight(this, new Vector3(0f, 13.8f, 32)));
}
```

```java
/**
 * Render the scene shadow map
 */
public void renderShadows()
{
    ...
    //currentLight.applyToShader(shaderProgramShadows);
    ...
}
```

```java
public void render(final float delta)
{
    act(delta);
    for (Light light : lights)
    {
        light.render(modelInstance);
    }
    renderShadows();
    renderScene();
}
```

If you try to render the scene at this point there should be no light at all, but if you take a screenshot you will get the depth maps for the two lights.

Combining shadow maps

What we will try to do now is change the shadow framebuffer so that it combines all shadow maps into a single image.

We will work on the ShadowMapShader. First, add a reference to MainScreen (it must also be passed in the initShaders function):

```java
public MainScreen mainScreen;

public ShadowMapShader(final Renderable renderable, final ShaderProgram shaderProgramModelBorder, final MainScreen mainScreen)
{
    this.mainScreen = mainScreen;
    ...
}
```

And then let’s work on the render function:

```java
public void render(final Renderable renderable, final Attributes combinedAttributes)
{
    super.render(renderable, combinedAttributes);
}
```

As you can see, by default it renders the geometry once, as intended; we will now render it once per light:

```java
public void render(final Renderable renderable, final Attributes combinedAttributes)
{
    boolean firstCall = true;
    for (final Light light : mainScreen.lights)
    {
        light.applyToShader(program);
        if (firstCall)
        {
            // Classic depth test
            context.setDepthTest(GL20.GL_LEQUAL);
            // Deactivate blending on first pass
            context.setBlending(false, GL20.GL_ONE, GL20.GL_ONE);
            super.render(renderable, combinedAttributes);
            firstCall = false;
        }
        else
        {
            // We could use the classic depth test (less or equal),
            // but strict equality works fine on the next passes as
            // the depth buffer already contains our scene
            context.setDepthTest(GL20.GL_EQUAL);
            // Activate additive blending
            context.setBlending(true, GL20.GL_ONE, GL20.GL_ONE);
            // Render the mesh again
            renderable.mesh.render(program, renderable.primitiveType, renderable.meshPartOffset, renderable.meshPartSize, false);
        }
    }
}
```

You can refer to the comments for details, but the key points are:

  • The first pass is special: at this point the depth buffer is empty, so vertices that should be hidden could still be drawn, and with blending enabled we would get weird results. For example, a vertex on the wall behind the monster might be drawn first even though it will not be visible later, and the monster’s vertex is then drawn on top. We only want the second vertex to be visible, so blending must be disabled so that the one behind is overwritten.

  • On the next passes the depth buffer is already filled, and we draw the exact same geometry, so we know hidden vertices won’t be drawn again. We can therefore re-enable blending, knowing it will simply blend the whole shadow maps together.

Simple additive blending is used (GL20.GL_ONE, GL20.GL_ONE); other blending methods could give better results, but we will stick with this one for this tutorial.

For example, if a pixel has a brightness of 0.3 for one light and 0.4 for another, then the shadow map will contain 0.3 + 0.4 = 0.7.
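The per-channel arithmetic of GL_ONE, GL_ONE blending can be sketched in plain Java (a hypothetical illustration, not part of the project); note that the framebuffer clamps the sum to [0,1]:

```java
// Sketch of what additive (GL_ONE, GL_ONE) blending computes per channel;
// the framebuffer clamps the result to [0,1].
public class AdditiveBlend {
    static float blend(float src, float dst) {
        return Math.min(1.0f, src + dst);
    }

    public static void main(String[] args) {
        System.out.println(blend(0.3f, 0.4f)); // 0.3 + 0.4 -> 0.7
        System.out.println(blend(0.8f, 0.9f)); // saturates at 1.0
    }
}
```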

The applyToShader functions in the Light classes should also be adapted; there’s no need to begin() / end() the program anymore:

```java
@Override
public void applyToShader(final ShaderProgram sceneShaderProgram)
{
    //sceneShaderProgram.begin();
    ...
    //sceneShaderProgram.end();
}
```

If we render the scene we should get something like this:

If you look closely there should be some shadows, but very few. This is because we add a default minimum brightness (0.4) to the shadow map that gets blended additively (so at least 0.8 with two lights) and makes the scene much lighter than expected. We need to adapt the shaders; first the shadow fragment shader:

```glsl
if(lenDepthMap<lenToLight-0.005){
}else{
    intensity=0.5*(1.0-lenToLight);
}
```

And in the scene fragment shader

```glsl
finalColor.rgb*=(0.4+0.6*color.a);
```

You can adapt the 0.5 coefficient in the shadow shader; it simply means that a single light cannot fully lighten a pixel. In a real application this kind of property should be set through a uniform variable.
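To see why this remap keeps the scene readable, here is the arithmetic of finalColor.rgb *= (0.4 + 0.6 * color.a) as a standalone Java sketch (hypothetical helper, not project code): a fully shadowed pixel keeps 40% ambient brightness, a fully lit one keeps 100%.

```java
// Sketch of the 0.4 + 0.6 * alpha remap from the scene fragment shader:
// a shadow-map alpha in [0,1] becomes a brightness factor in [0.4, 1.0].
public class AmbientRemap {
    static float lightFactor(float shadowAlpha) {
        return 0.4f + 0.6f * shadowAlpha;
    }

    public static void main(String[] args) {
        System.out.println(lightFactor(0.0f)); // fully shadowed: ambient floor
        System.out.println(lightFactor(1.0f)); // fully lit: full brightness
    }
}
```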

Directional lights

Now that we can have multiple lights, we should re-enable the directional light. First, just add it to the light list:

```java
lights.add(new DirectionalLight(this, new Vector3(33, 10, 3), new Vector3(-10, 0, 0)));
```

If you render the scene at this point you will just get weird results, since the code is currently set up for cubemaps. We therefore need to handle two different cases in the shadow shaders, using a sampler2D or a samplerCube accordingly.

In the PointLight class:

```java
public void applyToShader(final ShaderProgram sceneShaderProgram)
{
    final int textureNum = 2;
    depthMap.bind(textureNum);
    sceneShaderProgram.setUniformf("u_type", 2);
    sceneShaderProgram.setUniformi("u_depthMapCube", textureNum);
    sceneShaderProgram.setUniformf("u_cameraFar", camera.far);
    sceneShaderProgram.setUniformf("u_lightPosition", position);
}
```

And in the DirectionalLight class:

```java
public void applyToShader(final ShaderProgram sceneShaderProgram)
{
    final int textureNum = 3;
    depthMap.bind(textureNum);
    sceneShaderProgram.setUniformi("u_depthMapDir", textureNum);
    sceneShaderProgram.setUniformMatrix("u_lightTrans", camera.combined);
    sceneShaderProgram.setUniformf("u_cameraFar", camera.far);
    sceneShaderProgram.setUniformf("u_type", 1);
    sceneShaderProgram.setUniformf("u_lightPosition", position);
}
```

A new uniform called u_type is created to distinguish the two cases, and the u_depthMap uniform is split in two: u_depthMapCube and u_depthMapDir. The textureNum is also changed in the DirectionalLight to avoid conflicts; in a real-world program we should use LibGdx’s texture binder to manage these texture bindings.

And finally the shadow fragment shader becomes:

```glsl
uniform sampler2D u_depthMapDir;
uniform samplerCube u_depthMapCube;
uniform float u_cameraFar;
uniform vec3 u_lightPosition;
uniform float u_type;

varying vec4 v_position;
varying vec4 v_positionLightTrans;

void main()
{
    // Default is to not add any color
    float intensity=0.0;
    // Vector light-current position
    vec3 lightDirection=v_position.xyz-u_lightPosition;
    float lenToLight=length(lightDirection)/u_cameraFar;
    // By default assume shadow
    float lenDepthMap=-1.0;

    // Directional light, check if in field of view and get the depth
    if(u_type==1.0){
        vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5;
        if (v_positionLightTrans.z>=0.0 && (depth.x >= 0.0) && (depth.x <= 1.0) && (depth.y >= 0.0) && (depth.y <= 1.0) ) {
            lenDepthMap = texture2D(u_depthMapDir, depth.xy).a;
        }
    }
    // Point light, just get the depth given the light vector
    else if(u_type==2.0){
        lenDepthMap = textureCube(u_depthMapCube, lightDirection).a;
    }

    // If not in shadow, add some light
    if(lenDepthMap<lenToLight-0.005){
    }else{
        intensity=0.5*(1.0-lenToLight);
    }

    gl_FragColor = vec4(intensity);
}
```

Basically we just have two branches to compute the lenDepthMap variable: the first taken from the directional-light part of this tutorial, and the cubemap one we were using previously.

We now have two point lights and one directional light and everything seems fine.

Caching lights and adding a moving light

Exercise

  • Cache light depth map, update it only when necessary

  • Create a point light at the top of the room (0, 30, 0) that moves in a circle

Notes

  • This is just an optimization: we don’t need to render the light framebuffers each frame, only when necessary (when the scene changes, which doesn’t happen in our case, or when the light moves)

  • The moving light is just a subclass of PointLight; it just needs to move each frame.

Solution

Caching

Currently lights are rendered every frame; this is unnecessary. Instead, lights should be rendered only when the scene changes (if an object moves, for example) or when the light itself moves. So we will add a simple cache system: a flag to set whenever the light needs to be updated.

Update the Light class

```java
public boolean needsUpdate = true;

public abstract void act(float delta);
```

By default the flag is true, as a light needs to be rendered when created. The act function has been added for later use (just use an empty implementation for now).

In the lights’ render functions we check whether a render is necessary:

```java
public void render(final ModelInstance modelInstance)
{
    if (!needsUpdate)
    {
        return;
    }
    needsUpdate = false;
    ...
```

And finally we update the MainScreen act function:

```java
public void act(final float delta)
{
    if (System.currentTimeMillis() - lastScreenShot > 1000 * 1 && Gdx.input.isKeyJustPressed(Keys.F2))
    {
        takeScreenshots = true;
        // Force an update on all lights, else the render function won't be called and no screenshot taken
        for (final Light light : lights)
        {
            light.needsUpdate = true;
        }
        lastScreenShot = System.currentTimeMillis();
    }
    else
    {
        takeScreenshots = false;
    }
    for (final Light light : lights)
    {
        light.act(delta);
    }
    ...
```

We just force an update when taking a screenshot, otherwise the render function would not be called and we could not capture the shadow maps. We also call the lights’ act functions for later use.

If you execute the code the result should be exactly the same, except there should be a huge FPS improvement.
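The dirty-flag pattern behind this gain can be sketched on its own (a hypothetical stand-in class, not the tutorial’s actual Light): a light renders only on the frames where its needsUpdate flag was raised.

```java
// Minimal dirty-flag sketch: render() is skipped while needsUpdate is false.
public class DirtyFlagDemo {
    static class CachedLight {
        boolean needsUpdate = true; // true on creation, as in the Light class
        int renders = 0;

        void render() {
            if (!needsUpdate) {
                return; // cached: skip the expensive framebuffer pass
            }
            needsUpdate = false;
            renders++;
        }
    }

    public static void main(String[] args) {
        CachedLight light = new CachedLight();
        for (int frame = 0; frame < 100; frame++) {
            if (frame == 50) {
                light.needsUpdate = true; // e.g. the light moved this frame
            }
            light.render();
        }
        System.out.println(light.renders); // only 2 renders over 100 frames
    }
}
```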

Moving light

To check that the cache works, we will create a new light at the top of the scene that simply moves in a circle. Create a new class called MovingPointLight that extends PointLight:

```java
public class MovingPointLight extends PointLight
{
    public Vector3 originalPosition = new Vector3();
    public float   angle            = 0;
    public float   distance         = 20f;

    public MovingPointLight(final MainScreen mainScreen, final Vector3 position)
    {
        super(mainScreen, position);
        originalPosition.set(position);
    }

    @Override
    public void act(final float delta)
    {
        angle += delta / 1f;
        position.set(originalPosition.x + MathUtils.cos(angle) * distance, originalPosition.y, originalPosition.z + MathUtils.sin(angle) * distance);
        camera.position.set(position);
        needsUpdate = true;
    }
}
```

And add it to the scene

```java
lights.add(new MovingPointLight(this, new Vector3(0f, 30.0f, 0f)));
```

And that’s it, we now have a moving light


Conclusion

You should now have the basic knowledge to add shadows and lights to your program or game. We only covered the basics here; as you can see the result is not perfect, there are some artefacts, and we don’t handle lighting correctly (we focused on shadows), but it’s a good starting point.

If you run this sample on Android/iOS it will work but might be quite slow, mainly because our scene has complex geometry and memory bandwidth on mobile is limited; with an optimized scene you would get a higher frame rate.

Here are some topics you might be interested in looking into:

PCF (Percentage-closer filtering)

http://developer.nvidia.com/GPUGems/gpugems_ch11.html

It will help soften the edges of shadows

Shadow "acne", bias

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/#Shadow_acne

Deferred shading

http://en.wikipedia.org/wiki/Deferred_shading

http://gamedevelopment.tutsplus.com/articles/forward-rendering-vs-deferred-rendering--gamedev-12342

Shadow mapping

http://en.wikipedia.org/wiki/Shadow_mapping

Just check the different techniques listed on the Wikipedia page

Pack/unpack function to increase depth map precision

http://stackoverflow.com/questions/18453302/how-do-you-pack-one-32bit-int-into-4-8bit-ints-in-glsl-webgl

Updates

And of course you can follow me on Twitter or my blog if you want updates or new tutorials (see below)

About the author

Twitter : @Haedri

Website : http://www.microbasic.net

Email : haedri@microbasic.net

Copyright information

The tutorial itself (this pdf/webpage with associated images) is CC-BY-NC, which means it can be distributed and modified as long as no commercial use is made of it and you give appropriate credit (a link to my website, for example).

The source code / project associated is CC0, which means you can do anything you want with it, no credit needed (but it would be appreciated). Some classes are released under the Apache License as they are taken from LibGdx (see the top of the source file for specific license information).
