Core Graphics
This page provides a number of use cases and examples of accessing low-level OpenGL operations, such as changing state, creating textures, loading vertex/fragment shaders, and uploading vertex data to the GPU.
Notes:
- A lot of the content on this page relies on thorough knowledge of computer graphics, and of OpenGL specifically.
- All code references assume using namespace zen; using namespace gfxcore;
- All other namespaces are explicitly prefixed.
If you've read the section on regular subsystems, you'll be quite familiar with the content of this section. OpenGL subsystems are those that need to be explicitly initialized and destroyed. Naturally, this could be done in the constructor/destructor, but then there would be no error checking available, and it isn't always desirable to initialize immediately. Thus I opted for explicit initialization, much like the zSubsystem route.
In the Zenderer framework, OpenGL subsystems are built into the following:
- zVertexArray (covered below)
- gfx::zRenderTarget (see the "Graphics" page)
- gfx::zEffect (see the "Graphics" page)
The process for implementing custom OpenGL subsystems is identical to that of regular subsystems; see the relevant page. There are several additional requirements that are OpenGL-specific:
- It must bind the OpenGL object handle via zGLSubsystem::Bind().
- It must unbind the OpenGL object handle via zGLSubsystem::Unbind().
- It must give raw access to the OpenGL object handle via zGLSubsystem::GetObjectHandle().
Naturally, the user can perform raw OpenGL state commands via calls such as glEnable. I aimed to provide a way to access common state changes through an abstracted interface that could potentially expand to support DirectX.
Note: There is a handy debugging macro (enabled with ZEN_DEBUG_BUILD) that will check the OpenGL error state after an OpenGL call, causing an assertion failure if there is one. To use it, simply wrap any OpenGL calls / statements with GL(), e.g. GL(glEnable(GL_BLEND));.
The way Zenderer provides this is via the gfxcore::zRenderer API. This provides commonly used state-change wrappers that allow the user to toggle blending and material states, and gives access to default objects necessary for rendering, such as a simple shader and the current projection matrix. Over time, this wrapper will be expanded to provide more abstracted functionality such as depth or stencil states.
There are a couple of blending operations available in Zenderer without delving into direct calls to glBlendFunc(). All are contained within the BlendFunc enumeration, and are passed to the zRenderer::BlendOperation(BlendFunc) method.
- DISABLE_BLEND — As the name implies, this disables all blending operations and enables depth testing. A call to zRenderer::DisableBlending() is identical to calling zRenderer::BlendOperation(BlendFunc::DISABLE_BLEND).
- ENABLE_BLEND — This enables blending and disables depth testing, using the same blending operation as was in use prior to this call. This means that if you've never passed an actual blending equation to this method or called glBlendFunc() explicitly, the behavior is undefined.
- STANDARD_BLEND — This will perform an ENABLE_BLEND operation, and will then set the standard blending operation most people use when trying to achieve alpha-transparency effects. This equation can be described as Co = AsCs + (1 - As)Cd, where Co is the output color, Cs is the source color, Cd is the destination color, and As is the source alpha.
- ADDITIVE_BLEND — This will also initially perform an ENABLE_BLEND operation, but will then set an additive blending operation. This equation can be described as Co = 1Cs + 1Cd, where Co is the output color, Cs is the source color, and Cd is the destination color.
Here's a handy blending guide if you'd like to perform direct calls to glBlendFunc():
You can disable all existing material state (materials are described in detail in the "Graphics" section) by calling zRenderer::ResetMaterialState(). This can be handy at the end of a drawing sequence if you are unsure of what explicitly needs to be disabled, or are simply too lazy to reference each part of the material separately.
All rendering to a certain framebuffer should reference the same projection matrix. You can obtain an immutable reference to this matrix via zRenderer::GetProjectionMatrix(). It will always point to the projection matrix being used by the current framebuffer.
Instead of needing to create an effect with the default shaders (see the section on effects), you can access it directly via zRenderer::GetDefaultEffect() (or zRenderer::GetDefaultMaterial().GetEffect()). It is immutable, but still allows calls that modify the matrices bound to the effect ("mv" for the model-view matrix and "proj" for the projection matrix). There is no way of knowing what the existing matrix state of the effect is, so it is always recommended that you set these parameters explicitly, or at least the model-view matrix if you aren't using alternate render targets.
Furthermore, there is access to a default texture, achieved with a call to zRenderer::GetDefaultTexture(). This is simply a 1x1 white pixel, and is used throughout the framework in order to properly display color information when rendering untextured primitives.
Finally, there is access to a special fullscreen vertex buffer object, accessible through zRenderer::GetFullscreenVBO(). It's primarily used internally for additive blending of lights and post-processing effects achieved through the gfx::zScene wrapper, but is also publicly accessible if the user wishes to draw full-screen quads directly.
Shaders are an essential aspect of rendering things the "modern" way, whether you use OpenGL or DirectX. They are abstracted away in Zenderer through the high-level effect API (see the relevant section in "Graphics" for more). Shader objects can still be individually created, though it's more practical to simply create shader program objects instead.
Note: This API might be hidden in the private section of shader programs in the near future, because shader objects truly have no use independently, and shader programs don't have a LoadFromShaderObject() method or equivalent.
Implementation Note: Shader objects inherit from asset::zAsset rather than zGLSubsystem, because they focus more on loading data from the disk (or strings) than on binding to OpenGL state.
Sometimes, shader compilation may fail, and it's useful to know what those errors are. They can be generated by OpenGL, or occasionally by the framework itself. The compilation log can be retrieved with a call to GetShaderLog(). Other errors will appear in the global engine log.
Shader programs are sets of shader objects (from above). The relevant API used by Zenderer is zShaderSet. These are typically loaded from disk individually, or from strings. Then, a shader program object is "compiled" on the GPU.
Shader program workflow is typically as follows:
- Bind shader.
- Set custom parameters.
- Do rendering.
- Unbind shader.
In terms of the Zenderer framework, the workflow is identical. Before rendering, a call to zShaderSet::Bind() is performed. Setting parameters is no simpler than with raw OpenGL handles; this is left to the gfx::zEffect interface, which operates at a higher level.
Below is a compare/contrast example of a typical shader use case. A shader object is created using the default shader files on disk, and the current projection matrix is attached to it. Then this same process is repeated using the high-level gfx::zEffect interface.
Note: In both cases, we assume that Assets is an object of type asset::zAssetManager that's been created and initialized without issue.
```cpp
zShader Shader(Assets);
if(!Shader.LoadFromFile("Zenderer/shaders/Default.vs",
                        "Zenderer/shaders/Default.fs"))
{
    // The engine log contains a failure message.
    // Failure data is in Shader.GetError().
    return;
}

Shader.Bind();
GL(glUniformMatrix4fv(Shader.GetUniformLocation("proj"), 1, GL_FALSE,
                      zRenderer::GetProjectionMatrix().GetPointer()));

// -snip-
// Drawing code.
// -snip-

Shader.Unbind();
```
For the higher-level gfx::zEffect interface, the code would look something like this:
```cpp
gfx::zEffect Shader(gfx::EffectType::DEFAULT_EFFECT, Assets);
if(!Shader.Init())
{
    // The engine log contains a failure message.
    // Failure data is in Shader.GetError(), like above.
    return;
}

Shader.Enable();
Shader.SetParameter("proj", zRenderer::GetProjectionMatrix());

// -snip-
// Drawing code.
// -snip-

Shader.Disable();
```
As you can see, the APIs are remarkably similar, except the higher-level interface has much better uniform passing to the GPU and a cleaner loading interface. It should only be necessary to use zShaderSet when a pre-built effect does not exist for your needs. But even then, it's advisable to write a custom Zenderer material file and pass it on to the gfx::zMaterial API, which will provide even more abstraction.
See above for error checking. In this case, the method to call to retrieve the linker log is zShaderSet::GetLinkerLog(). Additionally, you can access all other errors (non-linker OpenGL or otherwise) through zShaderSet::GetError().
Applying and attaching textures to world geometry is essential to any game, and Zenderer provides a low-level interface to interact with directly if users wish to. It's preferable to use the gfx::zMaterial API (covered in detail here), but sometimes bare-metal access is necessary, too.
Textures can be loaded in a variety of ways: directly from image files, from raw pixel data, from existing textures (shallow copy), or from raw OpenGL texture handles (both shallow and deep copies). Since it implements asset::zAsset, it has all of the necessary characteristics. You can directly access raw pixel data (provided that the OpenGL context that this texture is tied to remains valid) by calling zTexture::GetData(). This isn't the fastest operation, and should be used sparingly.
The static method zTexture::GetDefaultTexture() behaves the same way as zRenderer::GetDefaultTexture(), which is detailed above.
Slightly misleading is the zTexture::GetID() method. This is tied to the material sorting system used to optimize draw calls when rendering scenes. It does not give access to the raw OpenGL texture handle; there is no such method. If you want to call OpenGL functions on a texture, call zTexture::Bind() and modify away.
Similarly to shader objects, this class inherits from asset::zAsset rather than zGLSubsystem, because its primary function is loading things from the filesystem rather than controlling OpenGL state.
There are multiple ways to load textures, and they are outlined below.
zTexture::LoadFromFile(string_t&) supports creating OpenGL textures from image files on disk. Zenderer uses the stb_image API for loading its images, so anything that it can load, this framework can, too. Textures must only be in RGB or RGBA formats. Texture objects are always created in RGBA format from disk, forcing an alpha component if one does not exist.
When given another zTexture instance, LoadFromExisting(asset::zAsset*) creates a deep copy of the texture, directly accessing raw pixel data and creating a new texture handle.
When given an OpenGL texture handle, LoadFromExisting(GLuint) creates a shallow copy of the texture, merely assigning the texture handle to itself internally. This means that if the original handle is deleted, this object will no longer have a valid handle.
When giving an OpenGL texture handle to CopyFromExisting(GLuint), on the other hand, a deep copy is created from the texture handle, directly accessing raw GPU pixel data much like the first case of LoadFromExisting() with another zTexture instance.
LoadFromRaw() takes a number of parameters and creates an OpenGL texture from the given pixel data. This is the only place where you can specify a custom pixel format, as opposed to the other zTexture::[Load|Copy]From*() methods, which only create RGBA textures. The given pixel data is offloaded to the GPU directly, so you can safely delete[] the original data you passed in if it was dynamically allocated.
See the OpenGL spec for more details on the parameters.
Just call zTexture::Bind() when you want to use one, and zTexture::Unbind() when you're done. A simple optimization for multiple textures in a scene is to just repeatedly call Bind() for each successive texture, as it will automatically override the previous one. Then it is only necessary to call Unbind() on the last one to remove all OpenGL texture state.
An important part of modern, high-performance graphics applications is letting the GPU do a lot of the work. One of these techniques involves placing vertex data (position, color, etc.) directly into video memory (VRAM). Zenderer internally uses an API especially made for this purpose, and it's available for users as well, should they need direct manipulation of vertex data.
Zenderer adopts an index-based approach to rendering geometry. This allows, for example, a quad (made up of two triangles) to be drawn using 4 vertices and 6 indices, rather than 6 vertices. This seemingly minor savings adds up significantly over time, especially if you can spare the time to optimize vertex duplicates (feature TODO).
First, it's important to define what a "vertex" really is. In computer graphics, vertices come in dozens of different formats and orderings, varying in many ways.
Vertices in Zenderer have 3 distinct parts: 3-dimensional position data, 3-dimensional texture coordinate data, and RGBA color storage, in that order. Assuming sizeof(float) == 4, we can deduce that a vertex (referenced as vertex_t in the framework) will take 40 bytes of data at the minimum. This is a useful fact to keep in mind when offloading large quantities of vertex data to the GPU.
An "index" is defined by index_t, and is merely an unsigned integer referencing a specific index in your vertex array.
Zenderer uses index buffers (see the documentation on glDrawElements()) to render individual vertices in a vertex buffer. This process is performed on the GPU similarly to how you would index into a buffer in any ol' programming language.
For example, if you wanted to render a red 32x32 quad, you would have a vertex array that looks something like this, in pseudo-C++:
```cpp
vertex_t quad[4] = {{ // top-left vertex
    { 0, 0 },       // position
    { 0, 0 },       // texture coordinates
    { 1, 0, 0, 1 }  // color (opaque red)
}, { // top-right vertex
    { 32, 0 },
    { 0, 0 },
    { 1, 0, 0, 1 }
}, { // bottom-right vertex
    { 32, 32 },
    { 0, 0 },
    { 1, 0, 0, 1 }
}, { // bottom-left vertex
    { 0, 32 },
    { 0, 0 },
    { 1, 0, 0, 1 }
}};
```
You would create an index buffer intended to index into the vertex array, drawing each vertex as it iterates over the indices.
```cpp
index_t qindices[6] = { 0, 1, 3, 3, 1, 2 };
```
This would first draw the triangle formed by quad[0], quad[1], and quad[3]. Then it would draw an adjacent triangle formed by the remaining indices. Together, these triangles would form a 32x32 square, as intended.
Think of the process as the following pseudocode:
```
for index in index_buffer:
    draw(vertex_buffer[index])
```