DacquoiseX

Download: Demo Scene (Windows) | Source (VS2010) | Video

Completed on 21st May 2012, DacquoiseX is a first attempt at a simple 3D game engine, written in C++ and HLSL with DirectX 11, DevIL and Assimp as part of the Interactive 3D Graphics Programming module on my course at the University of Derby. DacquoiseX wraps a number of features neatly into a scene graph and content manager, including terrain generation, rendering and interaction, shader and other state binding, model loading and rendering, transformations, user control, lighting, render textures and collision. Although the engine is usable, and quite nice to work with (imho), wrapping low-level features such as constant buffer and shader binding so tightly in the name of flexibility cost it a significant amount of performance, so it is not suitable for large projects. I also found the scene graph to be quite an odd structure for a game, though we were required to implement one for the module.

Below I will attempt to break down some of the key features and their implementations. I will, of course, break up the impending wall of boring text with pretty pictures and videos.

Of a Scene Graph and its Stacks
DacquoiseX provides a scene graph and stacks for dealing with a number of DirectX states and other features. Stacks are maintained for states such as bound shaders, constant buffers, rasteriser states, shader resource views, light managers and cameras, using a template class, BindablePStack, and an interface, IBindable. The top item on a stack is automatically bound whenever a push or pop operation occurs. The framework includes nodes which push and pop common items like these as the scene graph is traversed during the update or draw phase (whichever is appropriate). While managing low-level states like shaders and constant buffers in this way maintains flexibility alongside ease of use, it also (as I realised too late) incurs a significant performance penalty. Transformations are, of course, also managed in the scene graph – see below for the most manly demonstration possible.
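
To give a flavour of the idea, here is a minimal sketch of how BindablePStack and IBindable might fit together. The names are the engine’s, but the exact interface shown is my reconstruction, not the real code:

```cpp
#include <stack>

// Anything that can be bound to the pipeline (a shader, constant
// buffer, rasteriser state, camera, light manager...) implements this.
class IBindable
{
public:
    virtual ~IBindable() {}
    virtual void Bind() = 0; // push this object's state to the device
};

// One stack per state category; whatever sits on top after a push
// or pop is automatically (re)bound.
template <typename T>
class BindablePStack
{
public:
    void Push(T* item)
    {
        stack_.push(item);
        item->Bind(); // the new top takes effect immediately
    }

    void Pop()
    {
        if (stack_.empty())
            return;
        stack_.pop();
        if (!stack_.empty())
            stack_.top()->Bind(); // restore the previous state
    }

private:
    std::stack<T*> stack_;
};
```

A scene-graph node then simply pushes its shader or camera on the way down the traversal and pops it on the way back up, so every descendant is drawn under the right state.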

Terrain
Terrain can be loaded from a custom file format (.ter). This is a text format based on the easily parsed .obj format. Within these files you can specify a heightmap texture; the size of the terrain generated from it in xz units; the y values of black and white and the weights that red, green and blue contribute to this; the texture and its number of repetitions in x and z; the normal map texture and its repetitions; and the terrain’s lighting coefficients. I decided to do this while I was cleaning up the project as a whole, since it allows easy, external customisation of terrain without recompilation, and lets me parse and load terrain into the content manager alongside models and textures.
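
For illustration, a .ter file in the spirit described might look like the snippet below. Every keyword here is invented for the example, as the real format’s keywords aren’t listed:

```
# hypothetical .ter file -- all keywords are illustrative
heightmap  textures/valley_height.png
size       512 512           # terrain extent in xz units
yrange     0.0 80.0          # y values of black and white
rgbweight  0.3 0.4 0.3       # weights of red, green and blue
texture    textures/grass.dds
texrepeat  16 16             # repetitions in x and z
normalmap  textures/grass_n.dds
normrepeat 16 16
lighting   0.2 0.8 0.1 8.0   # ambient, diffuse, specular, shininess
```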

Full processing of the height map, including generation of face and vertex normals and of texture coordinates, occurs when the .ter file is loaded into the content manager. This frees the previously cluttered terrain node to deal only with what is necessary, in the most flexible, readable way possible.
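
As a rough sketch of that load-time work, smooth vertex normals for a regular height grid can be built with central differences, which is equivalent to averaging the surrounding face normals. This is a generic reconstruction (using DirectXMath), not the engine’s exact code:

```cpp
#include <vector>
#include <DirectXMath.h>
using namespace DirectX;

// heights is a row-major width*depth grid; cellSize is the xz spacing.
std::vector<XMFLOAT3> BuildNormals(const std::vector<float>& heights,
                                   int width, int depth, float cellSize)
{
    std::vector<XMFLOAT3> normals(heights.size());

    // Sample the grid, clamping to the edge so borders get normals too.
    auto h = [&](int x, int z)
    {
        x = x < 0 ? 0 : (x >= width ? width - 1 : x);
        z = z < 0 ? 0 : (z >= depth ? depth - 1 : z);
        return heights[z * width + x];
    };

    for (int z = 0; z < depth; ++z)
        for (int x = 0; x < width; ++x)
        {
            // Height differences across two cells in x and z give the
            // slope; the unnormalised normal is (-dx, 2*cellSize, -dz).
            float dx = h(x + 1, z) - h(x - 1, z);
            float dz = h(x, z + 1) - h(x, z - 1);
            XMVECTOR n = XMVector3Normalize(
                XMVectorSet(-dx, 2.0f * cellSize, -dz, 0.0f));
            XMStoreFloat3(&normals[z * width + x], n);
        }
    return normals;
}
```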

Terrain Detail Normal Mapping
Terrain can be loaded with or without detail normal mapping simply by including (or not including) the relevant line in a .ter file. The framework will automatically generate the required vertex data and bind the correct shader. In the normal mapping shader I wrote, detail normal mapping is only applied within a short distance of the viewpoint, and its effect is linearly interpolated out with distance for a smooth transition. Using this technique I can make much more interesting, believable terrain textures that react to lighting, or I can just screw around and put troll-faces everywhere.
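
The fade itself boils down to a saturated linear ramp. The snippet below expresses the weight calculation in C++ for clarity; the real work happens in the HLSL pixel shader, and the function and parameter names are mine:

```cpp
#include <algorithm>

// Weight of the detail normal map: 1 up close, falling linearly to 0
// between fadeStart and fadeEnd units from the eye, so the detail
// fades out smoothly instead of popping.
float DetailNormalWeight(float distToEye, float fadeStart, float fadeEnd)
{
    float t = (distToEye - fadeStart) / (fadeEnd - fadeStart);
    return 1.0f - std::min(std::max(t, 0.0f), 1.0f); // saturate, invert
}
// In the shader, the surface normal would then be something like
// normalize(lerp(geometricNormal, detailNormal, weight)).
```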

I used the awesome NVidia normal map filter to create the troll texture, and some snippets from rastertek for the maths, which at the time of writing I am still working towards understanding.

Rendering to Textures
Rendering to a number of different textures or buffers is an important part of modern computer graphics, useful far beyond putting images on screens within a 3D world. Still, that’s what I wanted to use it for this time around, and I say any excuse for practice is a good one.

I created a node whose children are not drawn in the standard scene graph traversal, but are instead used to update a texture (stored in the content manager) whenever a refresh is requested and the scene beneath the node has changed. This texture can then be used in place of any other, for example via the special model and terrain nodes I created, which let you simply swap out individual materials’ textures. Thus, I can render moving 3D scenes and objects onto surfaces such as the ‘mirror’ pictured below, or even the land itself.
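
In outline, the node only redraws its subtree when both conditions hold, something like the sketch below. The names and structure are illustrative, not the engine’s exact code:

```cpp
#include <d3d11.h>

class RenderTextureNode /* : public SceneNode */
{
public:
    // Called in place of the normal Draw traversal of the children.
    void RefreshIfNeeded(ID3D11DeviceContext* ctx)
    {
        if (!refreshRequested_ || !subtreeChanged_)
            return;

        // Redirect output to the managed texture's render target view.
        ctx->OMSetRenderTargets(1, &rtv_, dsv_);
        float clear[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
        ctx->ClearRenderTargetView(rtv_, clear);

        // DrawChildren(ctx); // the usual traversal, different target

        refreshRequested_ = false;
        subtreeChanged_   = false;
        // The shader resource view over the same texture can now be
        // bound anywhere an ordinary texture would be.
    }

private:
    ID3D11RenderTargetView* rtv_ = nullptr; // view onto the texture
    ID3D11DepthStencilView* dsv_ = nullptr;
    bool refreshRequested_ = true;
    bool subtreeChanged_   = true;
};
```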

Bounding Volumes and Intersection
For this I made use of two classes, BoundingSphere and BoundingBox, which are provided with DirectX 11.1 and which our lecturer kindly ported to DirectX 11 for those of us without access. I wrapped these in a template and another class which hide their differences from higher-level code, allowing flexible use of axis-aligned bounding boxes (AABBs) and bounding spheres. These are also integrated into the scene graph and transformed as required.
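
A minimal sketch of the wrapper idea, assuming the DirectXCollision types and with my own names for everything else:

```cpp
#include <DirectXCollision.h>
using namespace DirectX;

// VolumeT is DirectX::BoundingBox or DirectX::BoundingSphere; both
// provide matching Transform and Intersects members, which is what
// lets one wrapper hide the difference.
template <typename VolumeT>
class BoundingVolume
{
public:
    explicit BoundingVolume(const VolumeT& local)
        : local_(local), world_(local) {}

    // Re-derive the world-space volume whenever the node moves.
    void XM_CALLCONV Update(FXMMATRIX world)
    {
        local_.Transform(world_, world);
    }

    // Both types test against either shape, so AABB-vs-sphere works
    // without knowing the other wrapper's concrete type.
    template <typename OtherT>
    bool Intersects(const BoundingVolume<OtherT>& other) const
    {
        return world_.Intersects(other.World());
    }

    const VolumeT& World() const { return world_; }

private:
    VolumeT local_; // volume in model space, fixed at load time
    VolumeT world_; // volume after the node's world transform
};
```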

Alpha Transparency
A trivial feature, perhaps, but a useful one. I probably shouldn’t have used it as heavily as I did in the demo, but performance was okay, so I won’t worry. With alpha transparency I was able to allow transparent areas on textures (mirror labels in the demo), and even fade the terrain into the sky box when approaching the draw distance, for a smooth transition. The image below is actually a cube with a partly transparent texture on all four sides resting close to the camera, while the terrain is far, far away.
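
For reference, standard over-blending in Direct3D 11 is set up roughly like this; a generic snippet, not necessarily the exact states the demo uses:

```cpp
#include <d3d11.h>

// src * srcAlpha + dest * (1 - srcAlpha) on the first render target.
ID3D11BlendState* CreateAlphaBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state; // bind with ctx->OMSetBlendState(state, nullptr, 0xFFFFFFFF)
}
```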

More on Cameras and Transformation
I decided to experiment a little when implementing transformation in the last iteration of this project. I took the guts out of previous camera classes I’d written, which handled transformation and matrix generation in a useful, efficient manner, and reorganised them to work as transformer modules. These modules derive from a common interface and can be attached to certain nodes, so that different transformation modes (generic, gimbal-lock-prone pitch/yaw/roll rotation; quaternion rotation; or accumulative six-degrees-of-freedom movement) can be swapped in and out of a node. This worked well, with the one downside being that vital data such as position is hidden one layer of classes away from entities like players and enemies.
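
A minimal sketch of the transformer-module idea; the interface and method set are assumptions, with an Euler-angle module standing in for one of the swappable modes:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// A node owns an ITransformer*, so the rotation mode can be swapped
// without touching the node itself.
class ITransformer
{
public:
    virtual ~ITransformer() {}
    virtual void Rotate(float pitch, float yaw, float roll) = 0;
    virtual void XM_CALLCONV Translate(FXMVECTOR delta) = 0;
    virtual XMMATRIX WorldMatrix() const = 0;
};

// The generic, gimbal-lock-prone mode: accumulate Euler angles.
// A quaternion or 6DOF module would replace this class behind the
// same interface.
class EulerTransformer : public ITransformer
{
public:
    EulerTransformer() : position_(0, 0, 0), pitch_(0), yaw_(0), roll_(0) {}

    void Rotate(float pitch, float yaw, float roll) override
    {
        pitch_ += pitch; yaw_ += yaw; roll_ += roll;
    }

    void XM_CALLCONV Translate(FXMVECTOR delta) override
    {
        XMStoreFloat3(&position_,
                      XMVectorAdd(XMLoadFloat3(&position_), delta));
    }

    XMMATRIX WorldMatrix() const override
    {
        return XMMatrixRotationRollPitchYaw(pitch_, yaw_, roll_) *
               XMMatrixTranslation(position_.x, position_.y, position_.z);
    }

private:
    XMFLOAT3 position_;
    float pitch_, yaw_, roll_;
};
```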

Since I saw little point in re-implementing my old camera classes after this, I decided only to implement a dummy camera: it can have a projection and view matrix set on it, is assigned a node in which it dwells, and applies the inverse of that node’s transformation every update. This means that any moving camera can be implemented simply by moving the nodes to which it is attached, or by attaching it to a different node.
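
The core of the dummy camera is then one line, shown here with DirectXMath (the function name is mine):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// The view matrix is just the inverse of the world matrix of the
// node the camera dwells in: move the node, and the view follows.
XMMATRIX XM_CALLCONV ViewFromNodeWorld(FXMMATRIX nodeWorld)
{
    return XMMatrixInverse(nullptr, nodeWorld);
}
```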
