Game Engine Development: Rendering

In this series of articles, I’m going to talk about developing your own game engine from scratch, from the perspective of the hobby game developer. We won’t do much actual development, as there are hundreds of websites that can teach you that (I’ll list the ones I know of at the end of the article).

For me, there’s nothing more enjoyable than building a well-written system with a whole heap of sub-systems, all integrated in an elegant way.

I also enjoy the idea of having done everything myself, or at least as much of it as interests me. That’s because the final draw of game engine development, for me, is learning and getting better at it.

If these sorts of things don’t interest you, then there’s not much point doing them. If you want to make a game, I’d recommend not making an engine. Making a game engine is the number one way to not make a game. Whatever kind of game you want to make, there’s likely an engine that already exists. If one doesn’t, because your game is unique and different enough, then you really should think hard about whether you have the time.

But you already know all that. It’s why you’re here.

The first topic we’re going to discuss is likely the first thing you’d want to do: draw something to the screen. As we go, we’ll also talk about architecture.

2D or 3D rendering?

This is a fairly fundamental thing that has a very large impact on your engine. I have personally written code for both 2D and 3D, and I have to say that 3D is a lot harder than 2D.

With 3D you have things like meshes, models, textures, lighting, bump-mapping and so on. These are all things that you’re going to want to include, I mean, who wants a 3D game without bump-mapping? Maybe your game doesn’t need it; maybe you’re looking to make a surreal 3D low-poly world. It’s still going to be more work than 2D, I guarantee it.

So for your first engine, I’d recommend 2D. With 2D, you don’t need to worry about meshes… well technically you do, but it’s just two triangles. Later on, you can even add some 2.5D effects, like breaking up a sprite into 100 triangles that explode out when a monster dies. You can even add dynamic lighting to your 2D engine.
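To make that concrete, here’s a minimal sketch of what those two triangles look like: a unit quad described as six vertices of (x, y, u, v), which your rendering code can then scale and position however it likes.

    // The "two triangles" of a 2D sprite: a unit quad, one (x, y, u, v) per vertex.
    const float quad_vertices[] = {
        // x     y      u     v
        0.0f, 0.0f,  0.0f, 0.0f,
        1.0f, 0.0f,  1.0f, 0.0f,
        1.0f, 1.0f,  1.0f, 1.0f,   // first triangle

        0.0f, 0.0f,  0.0f, 0.0f,
        1.0f, 1.0f,  1.0f, 1.0f,
        0.0f, 1.0f,  0.0f, 1.0f,   // second triangle
    };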

Now we need to talk about a word here. Engine. What exactly does that mean?

Exactly what is a game engine?

You’ve read a bunch of forum posts and articles. And you’re a little confused. There are amazing engines like Unreal and Unity, sure, they seem sort of focused on specific kinds of games, but man, they are full-featured! They have editors, scripting engines (with visual coding!), wow!

You’re not going to be making all that.

Then there are old games that have gone open-source, and there’s not really any of that stuff in them. So what’s the deal?

Well, a game engine is really just whatever you want it to be. Some engines feature editors and tools. Some feature integrated editors and tools. Some can be used to make several games, others are designed with only a single game in mind. An engine can be devoted to one genre, or to several.

As a rule of thumb, the more advanced the engine is, and the more integrated features it has, the less flexible it becomes. Unity and Unreal are the exceptions to this rule.

What you’ll be making is more akin to a library: a bunch of super useful classes and functions that you can use to help make a game. Eventually, as you progress, you’ll start to make tools on top of these. You’ll also start to design systems and classes that you can re-jig to use again for your next game; eventually they’ll become game-agnostic and you could even share them for other people to use.

Slowly progressing in that direction will net you the most benefit, in my opinion. Here’s how it works:

  1. You write a function to load an image into an OpenGL texture
  2. You write a function that draws an OpenGL texture to a screen
  3. You write a class that encapsulates the texture with a position, dimensions, and even the source rectangle from the texture. It provides a method to render.
  4. You write a class that handles animation, rejigging your previous class to do so
  5. You finally write a sprite class that handles all of the above, with multiple animations that are referenced by a string of text
  6. You discover that adding the code and recompiling to see changes is annoying, so you create a function that can load a data-file and create a sprite class instance from it
  7. You discover that changing the text for a sprite is annoying, so you make a program to edit sprite data that can scan the sprite for boundary boxes that you draw in a specific colour in your image editor.
  8. You find that quitting and re-running your game to see changes is also annoying, so you make a resource manager class that reloads assets when keys are pressed.
  9. You find that when the Resource Manager reloads assets there are occasional errors that crash the program. So you develop a console that hooks into the game to get all the reporting.

That’s how your engine will be developed. During the above steps, you’re making your game, or making several games. Eventually, you come out with what can be described as an engine. The above is a somewhat integrated asset pipeline for your engine, really cool stuff.
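To give you an idea of where step 1 starts, here’s a minimal sketch of loading an image into an OpenGL texture. It assumes you’re using the stb_image library and that an OpenGL context has already been created; the function name and parameters are just illustrative.

    // Sketch: load an image file into an OpenGL texture (step 1 above).
    // Assumes your platform's OpenGL headers and stb_image.h are already included,
    // and that an OpenGL context is current.
    GLuint load_texture_from_file(const char* path, int* out_width, int* out_height)
    {
        int width = 0, height = 0, channels = 0;
        // Force 4 channels so the upload below can always use GL_RGBA.
        unsigned char* pixels = stbi_load(path, &width, &height, &channels, 4);
        if (!pixels)
            return 0; // 0 is never a valid texture name

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        stbi_image_free(pixels);
        if (out_width)  *out_width  = width;
        if (out_height) *out_height = height;
        return texture;
    }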

Rendering library

I’ve found that I typically don’t need a class for my rendering system. This is because I use OpenGL, which is largely a state-driven system.

This sort of system lends itself well to having a series of routines.

Also, I find it cumbersome to render a rectangle by first creating a RectangleShape and then setting its border thickness and colours before submitting it to an instance of the Renderer.

Instead, my code is just a single call to something like render_rectangle with a bunch of parameters.

So I organised my rendering code into two files: DrawFunctions.hpp and DrawFunctions.cpp. Any state data that needs to be retained (and there’s not a lot of it) is just kept in some global variables in DrawFunctions.cpp.

Sure, globals are bad. But not for single developers. You won’t stuff up because you know your own code. And the reality is, as a solo hobby developer, that your code will never be seen by anyone else. In this case, globals are just fine.

Doing things this way means that I don’t need to worry about a massive, cumbersome Renderer class or some kind of annoying Singleton. I wrap everything up in a namespace, so calling the code is much the same as calling methods in a static class.
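To show what I mean, here’s a rough sketch of how such a pair of files might look. The function names and parameters are illustrative, not the exact ones from my engine.

    // DrawFunctions.hpp
    #pragma once
    #include <cstdint>

    namespace draw
    {
        bool initialise(int window_width, int window_height, const char* title);
        void destroy();

        void render_rectangle(float x, float y, float w, float h,
                              uint32_t rgba, float border_thickness = 0.0f);
        // ...more drawing routines...
    }

    // DrawFunctions.cpp
    #include "DrawFunctions.hpp"

    namespace
    {
        // The small amount of state the routines share between calls.
        int  g_window_width  = 0;
        int  g_window_height = 0;
        bool g_initialised   = false;
    }

    namespace draw
    {
        bool initialise(int window_width, int window_height, const char* title)
        {
            g_window_width  = window_width;
            g_window_height = window_height;
            // ...create the window, the OpenGL context, compile shaders...
            g_initialised = true;
            return g_initialised;
        }

        void destroy()
        {
            // ...delete shaders, destroy the context and the window...
            g_initialised = false;
        }

        void render_rectangle(float x, float y, float w, float h,
                              uint32_t rgba, float border_thickness)
        {
            // ...set the shader, upload the two triangles, issue the draw call...
        }
    }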

Rendering Methods

Whatever way you decide to structure your code, there are a few methods that you will definitely need for 2D rendering.

Initialise / Destroy

You’ll need to initialise the renderer, probably creating a new window and a rendering context. In reverse, you’ll need to free up these resources before your game exits back to the operating system. You don’t want your game to crash every time you exit, do you?

Draw

I do this function as a bunch of different overloads:

  • Texture at a destination
  • Sub-Rectangle of a texture at a destination
  • Both of the above with a colour modulation value (passed to the shader)
  • Colour filled rectangle at a destination
  • Colour filled circle at a destination

You get the idea. That way, whenever you want to draw something, you always know the method is called Draw, the first parameter is the kind of thing to draw, followed by a destination, followed by any additional parameters.
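As a sketch, the overload set might look something like this; Texture, Rect, Circle, and Colour here are stand-ins for whatever types your engine actually uses.

    // Sketch of the Draw overload set; the types are placeholders.
    void Draw(const Texture& texture, const Rect& destination);
    void Draw(const Texture& texture, const Rect& source, const Rect& destination);
    void Draw(const Texture& texture, const Rect& destination, const Colour& modulate);
    void Draw(const Texture& texture, const Rect& source, const Rect& destination,
              const Colour& modulate);
    void Draw(const Rect& destination, const Colour& fill);     // filled rectangle
    void Draw(const Circle& destination, const Colour& fill);   // filled circle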

Effects

Now you can start to do some slightly more interesting things. You can set global effects that apply to all future draw calls until further notice.

Because everything is just variables in your source file, they’re easy to reference from any of the drawing functions.
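For example, a global colour modulation might be nothing more than another variable in DrawFunctions.cpp that every subsequent draw call reads; again, the names here are just illustrative and Colour is the same placeholder type as above.

    // In DrawFunctions.cpp: a "global effect" is just another file-scope variable.
    namespace
    {
        Colour g_modulation{ 1.0f, 1.0f, 1.0f, 1.0f }; // white = no change
    }

    namespace draw
    {
        void set_colour_modulation(const Colour& c) { g_modulation = c; }
        void clear_colour_modulation() { g_modulation = { 1.0f, 1.0f, 1.0f, 1.0f }; }
        // Each drawing routine passes g_modulation to its shader as a uniform.
    }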

Render to texture

This sort of thing eventually comes up. You need the next set of drawing calls to not draw to the screen (or backbuffer), but to a different render target, a texture that you’ve created.

Doing this is fairly straight-forward (at least in OpenGL), and will allow you to do a bunch of effects later. It is also useful when rendering text: drawing a string often means drawing each letter individually, so if you render the string to a texture once, you don’t need to repeatedly draw each letter every frame and can instead make one draw call with your already-rendered text.
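In OpenGL this boils down to a framebuffer object. Here’s a minimal sketch, assuming an extension loader (GLEW, glad, or similar) has been initialised and that the target texture already exists at the right size; the helper names are illustrative.

    // Sketch: redirect drawing into a texture instead of the backbuffer.
    GLuint begin_render_to_texture(GLuint target_texture, int width, int height)
    {
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, target_texture, 0);
        glViewport(0, 0, width, height);
        return fbo;
    }

    void end_render_to_texture(GLuint fbo, int screen_width, int screen_height)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default backbuffer
        glViewport(0, 0, screen_width, screen_height);
        glDeleteFramebuffers(1, &fbo);
    }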

Additional Pieces

Other than rendering, you will need a few more bits and bobs. Textures are important, as is the distinction between a “texture” on the graphics card, referenced by some kind of ID, and a “texture” on the application side, which holds the actual pixel data (there may be times when you need both).

Typically, you’d handle this with a Texture class. I handle both cases in the same class, just sometimes I send it to the load_texture_from_file function, and other times I send it to the create_texture function.
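A sketch of what that might look like, with the GPU handle and the optional CPU-side pixels living side by side:

    // Sketch of a texture type that can carry either side, or both.
    #include <cstdint>
    #include <vector>

    struct Texture
    {
        unsigned int id     = 0;  // OpenGL texture name; 0 means "not on the GPU"
        int          width  = 0;
        int          height = 0;
        std::vector<uint8_t> pixels; // empty unless the pixel data is needed CPU-side
    };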

You also can’t forget about text rendering. This is one of those things that’s easy to do simply, and pretty tricky to do really well. There’s a lot of ways to handle text, the simplest method is to have an image file with all the characters you want to be able to draw, and you render them one after another when you want to draw text.

In like 90% of cases, this will do fine. Hell, you could even just have an image file that has the words “Game Over” and just use that. If your game requires so little text that you can get away with it, go for it. Complex text rendering can wait until your third game, when you want to show more.
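As a sketch of that simple approach: a fixed-width glyph atlas and a loop that draws one glyph per character. Texture, Rect, and the Draw overload that takes a source sub-rectangle are the same placeholders as above.

    // Sketch: draw a string from a fixed-width glyph atlas, one Draw call per character.
    void draw_text(const Texture& font, const char* text, float x, float y,
                   int glyph_w, int glyph_h, int columns)
    {
        for (const char* c = text; *c != '\0'; ++c)
        {
            int index = *c - ' '; // assume the atlas starts at the space character
            Rect source{ float((index % columns) * glyph_w),
                         float((index / columns) * glyph_h),
                         float(glyph_w), float(glyph_h) };
            Rect destination{ x, y, float(glyph_w), float(glyph_h) };
            Draw(font, source, destination);
            x += float(glyph_w); // fixed-width advance
        }
    }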

Architecture

This is really the most interesting part to work out for yourself, but to get you started, this is how I laid things out in my engine.

  1. The “rendering” functions make calls to the Windows API, FreeType2, and OpenGL directly
    • Creating the window and rendering context
    • Loading image data from file and storing it in OpenGL textures
    • Rendering text, shapes, and images to screen
    • Loading fonts
  2. Depending on the rendering being done, a different GLSL Shader will be used. So, for example, rendering text uses a different GLSL shader than rendering a coloured rectangle.
  3. Texture Data is kept in a struct that can be passed around and stored on the client side. The struct doesn’t have much other than the width and height and the GLuint ID of the texture in video RAM.
  4. Special FX are all concrete implementations of an abstract class (sketched after this list). All rendering functions can take an array of special FX. Each of these knows how to set up its own uniforms, and every GLSL shader that can use that effect has code in it to handle applying it.
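Here’s a sketch of what that abstract class and one concrete effect might look like; the type names and the uniform name are illustrative.

    // Sketch of the special FX interface from point 4.
    struct SpecialFX
    {
        virtual ~SpecialFX() = default;
        // Called by a rendering function after it binds its shader, so the
        // effect can set whatever uniforms it needs.
        virtual void apply(GLuint shader_program) const = 0;
    };

    struct TintFX : SpecialFX
    {
        float r = 1.0f, g = 1.0f, b = 1.0f, a = 1.0f;

        void apply(GLuint shader_program) const override
        {
            GLint location = glGetUniformLocation(shader_program, "u_tint");
            if (location != -1)
                glUniform4f(location, r, g, b, a);
        }
    };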

I think the below diagram might help.

[Diagram: a small part of the overall engine, showing how the rendering is laid out]

As stated before, I’m keeping it simple. I’m not abstracting between OpenGL, Vulkan and DirectX (not yet anyway). The functions are responsible for setting all of the needed OpenGL state and there’s a general expectation that they put things back to how they were before.

Resources

As promised, here are what I’ve found to be the best resources for learning how to get things drawn to the screen.

In Part 2, I’ll go over my engine and all of its components. You’ll be able to see where I utilise libraries, and where I code my own. This way you can see roughly how much work is involved to get to a 2D engine that you can use to make a game.
