Part 2: One directional light
In this part we will see how to render one directional light. A directional light illuminates the scene in a single direction, like a flashlight.
If you already know a bit about shadow mapping you can go through this part quickly, as we go slowly here so beginners can follow.
- Create a new camera on the square light (33,10,3), looking at the center of the room (negative x)
- Render the scene with this camera
- Just create the camera based on the existing one and use it to render the scene
First we have to create a new PerspectiveCamera:
cameraLight = new PerspectiveCamera(120f, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cameraLight.near = 1f;
cameraLight.far = 100;
cameraLight.position.set(33,10,3);
cameraLight.lookAt(-1, 0, 0);
cameraLight.update();
And temporarily replace the camera at render time:
modelBatch.begin(cameraLight);
modelBatch.render(modelInstance);
modelBatch.end();
What we have done here is simply create a regular camera placed on the light object itself: anything visible with this camera is also visible from the light. We've set a high FOV (120°) to enlarge what the light will "see".
The far plane is set to 100 so the camera can see the whole scene, but we could choose a different value depending on the light parameters.
The idea is that everything this camera sees should be bright, and everything else should later be in shadow since the light can't see it. So we need a way to check, from the scene camera, what is really visible from the light camera's point of view.
- Render the scene of the previous exercise to a frame buffer, then revert the main scene to its normal state/camera
- Create a screenshot of this frame buffer to make sure everything is correct
- Use the screenshot factory before ending the frame buffer, so it saves the frame buffer content and not the screen
The main scene is reverted to its original state (using the previous camera) but the scene is now rendered twice:
- Original view, "player" view, on screen
- Light view, in a frame buffer
The new render function is:
public FrameBuffer frameBuffer;
public static final int DEPTHMAPIZE = 1024;
public void renderLight()
{
if (frameBuffer == null)
{
frameBuffer = new FrameBuffer(Format.RGBA8888, DEPTHMAPIZE, DEPTHMAPIZE, true);
}
frameBuffer.begin();
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
modelBatch.begin(cameraLight);
modelBatch.render(modelInstance);
modelBatch.end();
if (takeScreenshots)
{
ScreenshotFactory.saveScreenshot(frameBuffer.getWidth(), frameBuffer.getHeight(), "depthmap");
}
frameBuffer.end();
}
And it should be called from the render method:
public void render(final float delta)
{
act(delta);
renderLight();
renderScene();
}
We now have two screenshots when pressing F2: the main scene
And the frame buffer
This last view is highly distorted (high FOV, and it no longer depends on the window size), but we don't care as the player will never see it.
The size of the frame buffer (DEPTHMAPIZE) can be adjusted for higher/lower shadow quality.
- Instead of rendering the actual scene, render a depth map for the light
- This will need a new shader for the depth map
- Depth is just the length of the vector (position, light position)
- To get a value between [0,1] just divide by the far value of the camera
The idea now is to create a depth map: an image that contains the depth of each point (its distance to the camera/light) instead of the actual image data (texture and such).
This image will be used later to decide what is in shadow or not.
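Putting the hints together, the depth value itself is simple arithmetic. Here is a minimal standalone Java sketch of that computation (class and variable names are ours, not the project's; only the light position and far value come from this part):

```java
// Standalone sketch of the depth computation described above: the distance
// from a world-space point to the light, divided by the light camera's far
// value to land in [0,1]. Names are illustrative, not from the project.
public class DepthNormalization {
    static float normalizedDepth(float[] point, float[] lightPos, float cameraFar) {
        float dx = point[0] - lightPos[0];
        float dy = point[1] - lightPos[1];
        float dz = point[2] - lightPos[2];
        return (float) Math.sqrt(dx * dx + dy * dy + dz * dz) / cameraFar;
    }

    public static void main(String[] args) {
        float[] light = {33f, 10f, 3f};  // the light position used in this part
        float[] point = {-17f, 10f, 3f}; // a point 50 units away along x
        System.out.println(normalizedDepth(point, light, 100f)); // 0.5
    }
}
```

This is exactly what the depth-map fragment shader will do per pixel, just expressed on the CPU.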
What we need first is to copy/paste the scene ShaderProgram and ModelBatch:
shaderProgramDepthMap = setupShader("depthmap");
modelBatchDepthMap = new ModelBatch(new DefaultShaderProvider()
{
@Override
protected Shader createShader(final Renderable renderable)
{
return new DepthMapShader(renderable, shaderProgramDepthMap);
}
});
You can also copy/paste the SimpleTextureShader and create a new one called DepthMapShader; it's basically the same except it doesn't need some uniforms/attributes (texture information, normal matrix, ...). This is purely optional (just in case you need to do some specific work on this shader later).
And of course use this new ModelBatch in the frame buffer:
modelBatchDepthMap.begin(cameraLight);
modelBatchDepthMap.render(modelInstance);
modelBatchDepthMap.end();
Also create the depthmap fragment and vertex shaders by copy/pasting the scene shaders (in the android project, assets).
If you take a screenshot now you should just get the exact same images as before (or a black screen if you have already removed the diffuse texture information).
Now what we want in the frame buffer is the depth map: the distance from each point to the camera/light.
So in the vertex shader we now need to save the vertex coordinates in a varying for the fragment shader:
varying vec4 v_position;
void main()
{
v_position = u_worldTrans*vec4(a_position, 1.0);
gl_Position = u_projViewTrans *v_position;
}
What is done here is that the position is saved after applying the object's transformation (the scene is scaled in this project) but before applying the camera projection.
Then in the fragment shader we need to calculate the length of the vector (v_position, light position). The problem is that this value lies in [0, cameraFar], while we can only write values in [0,1] as colors. So we also need to send the camera far value to the shader before rendering the light:
shaderProgramDepthMap.begin();
shaderProgramDepthMap.setUniformf("u_cameraFar", cameraLight.far);
shaderProgramDepthMap.setUniformf("u_lightPosition", cameraLight.position);
shaderProgramDepthMap.end();
modelBatchDepthMap.begin(cameraLight);
modelBatchDepthMap.render(modelInstance);
modelBatchDepthMap.end();
And retrieve them in the fragment shader:
uniform float u_cameraFar;
varying vec4 v_position;
uniform vec3 u_lightPosition;
void main()
{
gl_FragColor = vec4(length(v_position.xyz-u_lightPosition)/u_cameraFar);
}
The frame buffer content now becomes:
A brighter pixel means the point is farther away from the camera.
At this point we could theoretically use an alpha-only texture for the frame buffer (Format.Alpha instead of Format.RGBA8888) as we don't need the 4 color components, but this format is not well supported on mobile devices (it should be fine on PC though).
We could also use pack/unpack functions to store the depth across all 4 components (32 bits) of the texture, but that makes the screenshots hard to understand.
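To make the pack/unpack idea concrete, here is an illustrative standalone Java sketch (names are ours, not the tutorial's): a [0,1] depth value is split into four base-256 "digits", one per RGBA channel, giving 32 bits of precision instead of 8.

```java
// Illustrative sketch of packing a [0,1] depth value into 4 byte channels
// (the classic GLSL pack/unpack trick, expressed on the CPU for clarity).
public class DepthPacking {
    static int[] pack(float depth) {
        long fixed = (long) (depth * 4294967295.0); // scale to the full 32-bit range
        return new int[] {
            (int) ((fixed >>> 24) & 0xFF), // highest-order byte -> red channel
            (int) ((fixed >>> 16) & 0xFF),
            (int) ((fixed >>> 8) & 0xFF),
            (int) (fixed & 0xFF)           // lowest-order byte -> alpha channel
        };
    }

    static float unpack(int[] rgba) {
        long fixed = ((long) rgba[0] << 24) | ((long) rgba[1] << 16)
                   | ((long) rgba[2] << 8) | (long) rgba[3];
        return (float) (fixed / 4294967295.0);
    }
}
```

Round-tripping a depth through pack/unpack recovers it to far better than 8-bit precision, which is the whole point of the trick.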
- We want to see the light depth map, but from the scene camera; for that we will need 2 renders: first, on the main scene, recalculate the depth map viewed from the light and display it; second, on the main scene, retrieve the light depth map (from the frame buffer) and display it
- You need to send the camera.combined matrix to get the transformation matrix of the light camera in the scene camera
- To get the position inside the texture from the coordinates in the camera projection, the formula is (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5
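The formula from the hint can be sketched on the CPU side to see what it does; this is an illustrative standalone snippet (names are ours), not code from the project:

```java
// Sketch of the hint's formula: dividing clip-space coordinates by w gives
// normalized device coordinates in [-1,1]; *0.5+0.5 then remaps them to the
// [0,1] texture-coordinate range used to sample the depth map.
public class LightSpaceLookup {
    // clip = {x, y, z, w}, as produced by u_lightTrans * worldPosition
    static float[] toDepthMapCoords(float[] clip) {
        float[] uvz = new float[3];
        for (int i = 0; i < 3; i++) {
            uvz[i] = (clip[i] / clip[3]) * 0.5f + 0.5f;
        }
        return uvz;
    }

    public static void main(String[] args) {
        // A point at the center of the light's view lands at (0.5, 0.5).
        float[] uvz = toDepthMapCoords(new float[]{0f, 0f, 0f, 1f});
        System.out.println(uvz[0] + " " + uvz[1]); // 0.5 0.5
    }
}
```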
We are now on the main scene, but we need to do transformations from the light camera's perspective, so we will need to send that camera's transformation matrix to the shader.
In the scene vertex shader, add the new uniform
uniform mat4 u_lightTrans;
And in the MainScreen class add the corresponding uniform before rendering the scene
shaderProgram.begin();
shaderProgram.setUniformMatrix("u_lightTrans", cameraLight.combined);
shaderProgram.setUniformf("u_cameraFar", cameraLight.far);
shaderProgram.setUniformf("u_lightPosition", cameraLight.position);
shaderProgram.end();
modelBatch.begin(camera);
modelBatch.render(modelInstance);
modelBatch.end();
Back to the vertex shader we can now calculate the vertex coordinates in the light camera space
varying vec4 v_positionLightTrans;
varying vec4 v_position;
void main()
{
v_position = u_worldTrans * vec4(a_position, 1.0);
v_positionLightTrans = u_lightTrans * v_position;
gl_Position = u_projViewTrans * v_position;
...
In the fragment shader we just have to override the previous color value to get the depth map:
...
uniform float u_cameraFar;
uniform vec3 u_lightPosition;
varying vec4 v_positionLightTrans;
varying vec4 v_position;
void main()
{
...
float len = length(v_position.xyz-u_lightPosition)/u_cameraFar;
finalColor.rgb = vec3(1.0-len);
gl_FragColor = finalColor;
}
And the resulting image:
What we see here is the distance of each point to the light source.
We will now need the texture resulting from the previous frame buffer, so we have to bind it and add it to the uniforms:
shaderProgram.begin();
final int textureNum = 2;
frameBuffer.getColorBufferTexture().bind(textureNum);
shaderProgram.setUniformi("u_depthMap", textureNum);
shaderProgram.setUniformMatrix("u_lightTrans", cameraLight.combined);
shaderProgram.setUniformf("u_cameraFar", cameraLight.far);
shaderProgram.setUniformf("u_lightPosition", cameraLight.position);
shaderProgram.end();
And in the scene fragment shader
vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5;
float len = texture2D(u_depthMap, depth.xy).a;
finalColor.rgb = vec3(1.0-len);
gl_FragColor = finalColor;
First we calculate the depth vector, which contains the coordinates to look up in the light depth map, then we extract the pixel color and display it.
Now we can see the same depth map but viewed from the light; we can already guess where shadows should be.
From the previous exercise we now have an idea of where the shadows should be; if we apply a differential filter to both images we get something like this:
When values are equal we get pure white, and when values differ we get a gray color. The idea now is to do this programmatically.
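That differential filter can be mimicked in a few lines; this is an illustrative standalone Java sketch (arrays and names are ours, not the tutorial's):

```java
// Per-pixel "differential filter" over two depth images with values in [0,1]:
// equal depths give white (1.0), mismatches darken toward gray, like the
// comparison image shown above.
public class DepthDiff {
    static float[] differential(float[] a, float[] b) {
        float[] out = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = 1.0f - Math.abs(a[i] - b[i]); // 1 when equal, darker otherwise
        }
        return out;
    }
}
```

The shadow test in the next exercise is essentially this comparison, done per fragment in the shader instead of per pixel on two screenshots.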
- Compare both depth maps, render shadows appropriately
- Make the light dimmer farther away from the source
- Clean up the code (create a light class, ...)
- You must compare the depth maps calculated in the previous exercises: the one extracted from the frame buffer texture and the one calculated with the camera projection matrix
A first approach can be to just check if the values are nearly equal:
vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5;
float lenFB = texture2D(u_depthMap, depth.xy).a;
float lenLight = length(v_position.xyz-u_lightPosition)/u_cameraFar;
float diff = lenFB - lenLight;
if (!(diff < 0.01 && diff > -0.01))
    finalColor.rgb *= 0.4;
gl_FragColor = finalColor;
The result is not perfect but we are getting there.
Basically, everything that is not in the light frame buffer texture will not render nicely, which makes sense as it is not visible from the light camera, so we need to handle those cases:
void main()
{
vec4 finalColor = texture2D(u_diffuseTexture, v_texCoords0);
finalColor.rgb = finalColor.rgb*v_intensity;
vec3 depth = (v_positionLightTrans.xyz / v_positionLightTrans.w)*0.5+0.5;
// Make sure the point is in the field of view of the light
// and also that it is not behind it
if (v_positionLightTrans.z>=0.0 &&
(depth.x >= 0.0) && (depth.x <= 1.0) &&
(depth.y >= 0.0) && (depth.y <= 1.0) ) {
float lenToLight=length(v_position.xyz-u_lightPosition)/u_cameraFar;
float lenDepthMap= texture2D(u_depthMap, depth.xy).a;
// If can not be viewed by light > shadows
if(lenDepthMap<lenToLight-0.005){
finalColor.rgb*=0.4;
}
}else{
finalColor.rgb*=0.4;
}
gl_FragColor = finalColor;
}
In practice we don't check for equality; we check whether the point we are currently drawing is behind the calculated depth (lenDepthMap < lenToLight), because there could be objects in front of the light that don't cast shadows but are still affected by the light.
For example, say there are some particles between the light and the monster in our scene (vapor, a bullet trail, anything): we don't want them to cast shadows as it would be too costly (if we were using a cache it would have to be updated every time a particle moves), but we still want them to be darker/lighter depending on whether they are in a shadow or not. This simple comparison acts as if the object were transparent to light while still being darker in shadows.
We also add an error margin (lenToLight - 0.005) because the precision is never high enough to get a perfect 1:1 pixel match between the scene and the light frame buffer. There are many ways to manage this error margin; this is just the easiest one to implement.
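The core comparison, margin included, boils down to one line; here it is as a minimal standalone Java sketch (names are illustrative, the 0.005 margin is the tutorial's):

```java
// Shadow test with the tutorial's 0.005 margin: a fragment is in shadow
// when the depth map records something noticeably closer to the light
// than the fragment itself.
public class ShadowTest {
    static final float BIAS = 0.005f;

    static boolean inShadow(float lenDepthMap, float lenToLight) {
        return lenDepthMap < lenToLight - BIAS;
    }

    public static void main(String[] args) {
        System.out.println(inShadow(0.45f, 0.75f));  // true: something occludes us
        System.out.println(inShadow(0.748f, 0.75f)); // false: within the margin
    }
}
```

Without the margin, tiny precision differences between the two depth computations would make lit surfaces shadow themselves in a speckled pattern (commonly called "shadow acne").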
This is a lot better now.
Another way to view this is to go back to the depth map viewed from the scene: draw a line from the light and check the depth/brightness of the points it crosses.
From its own view the light can only see the point with a brightness/depth of 0.75, as the other one is behind the monster, but we need to check that programmatically using the real depth map.
In the light view only the point with a brightness of 0.75 exists; the other one (0.45) cannot be seen by the light, hence it should be in darkness.
Now we just add a little calculation to make the light dimmer as it gets farther from the source:
if(lenDepthMap<lenToLight-0.005){
finalColor.rgb*=0.4;
}else{
finalColor.rgb*=0.4+0.6*(1.0-lenToLight);
}
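The attenuation factor used above is a simple linear blend; here is an illustrative standalone Java sketch of it (names are ours, the 0.4/0.6 constants are the tutorial's):

```java
// Distance attenuation from the shader above: lit fragments keep an ambient
// floor of 0.4 and gain up to 0.6 extra brightness the closer they are to
// the light (lenToLight is the normalized depth in [0,1]).
public class Attenuation {
    static float litFactor(float lenToLight) {
        return 0.4f + 0.6f * (1.0f - lenToLight);
    }

    public static void main(String[] args) {
        System.out.println(litFactor(0.0f)); // full brightness right at the light
        System.out.println(litFactor(1.0f)); // drops to the ambient 0.4 at the far plane
    }
}
```

Note that at the far plane a lit fragment ends up with the same 0.4 factor as a shadowed one, which is consistent: beyond the light's reach there is no difference between lit and unlit.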
It's not a huge difference when far is set to a high value, but it's still a bit better.
I won't detail the code cleanup; just have a look at the project. The idea is to have an abstract class called Light that could refer to any light, and a child class called DirectionalLight that applies only to directional lights.
At this point, if you have something that works, you can just copy/paste the part2 package of the tutorial project to start the next parts with clean code.