As a wee lad, I became fascinated by computer graphics from the moment I glimpsed my first videogame. For me, most of the appeal of videogames wasn’t actually playing them, but learning how they worked. How could an innocuous-looking plastic box under a TV produce the complex moving images and environments seen on it?
Most people’s first exposure to “3D” real-time graphics in a videogame, if they’re older than 30, was Wolfenstein 3D. Only, it wasn’t actually rendering anything in 3D. The floor and ceiling were a single height throughout the level and every corner was 90 degrees: clear evidence that something fucky was going on in this engine.
It was a raycasting engine, and the level design process was tile based. You had an empty grid, and if you clicked a square in it, the square was “on”, becoming a solid block. If not, it was “off”, an empty, open space of the same size.
The player/camera moved through this 2D maze, and any wall within your field of view was taken from the 2D outline and extrapolated as vertical columns of pixels, their height shrinking with their distance from you.
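To make that concrete, here’s a minimal sketch of the idea in Python. This is not Wolfenstein’s actual code (id used lookup tables and a smarter grid-stepping algorithm); the grid, step size and screen dimensions are all invented for illustration:

```python
import math

# Toy Wolfenstein-style raycaster: 1 = solid block, 0 = open space.
GRID = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_column(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) until it hits a solid tile; return distance."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if GRID[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

def render(px, py, facing, fov=math.pi / 3, screen_w=20, screen_h=10):
    """One ray per screen column: nearer walls become taller pixel columns."""
    columns = []
    for sx in range(screen_w):
        ray = facing + (sx / (screen_w - 1) - 0.5) * fov
        d = cast_column(px, py, ray)
        d *= math.cos(ray - facing)  # correct fish-eye distortion
        height = min(screen_h, int(screen_h / max(d, 0.1)))
        columns.append(height)
    return columns

cols = render(1.5, 1.5, 0.0)  # stand in the open corner, look along +x
```

The single cosine correction is what stops walls from bulging in the middle (the classic fish-eye artifact); everything else is just “nearer wall, taller column.”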
You can think of it as a distortion filter you pass 2D information through which turns it into a 3D looking output. Similar to how the flat tracks in Mario Kart were distorted to give the appearance of depth and perspective.
This type of engine got a lot of mileage early on simply because it was the only game in town, and ran well on even relatively slow machines. Some games expanded greatly on it, such as Rise of the Triad which permitted any distance between floor and ceiling you wanted (though it still couldn’t vary throughout the level), parallaxed sky textures, floating sprite based platforms you could walk on and more.
The next step was to separate these worlds into sectors. Where before the whole world had to have the same floor and ceiling height, in the Doom engine, every sector could have its own unique floor and ceiling height. But only one floor and ceiling per sector, so no rooms on top of other rooms.
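The striking thing is how little extra data this takes. A hypothetical sketch of the idea (illustrative only, not id’s actual structures) might look like:

```python
from dataclasses import dataclass

# Doom-style sector sketch: the map is still 2D, but each region
# carries its own floor and ceiling heights.
@dataclass
class Sector:
    floor_height: int
    ceiling_height: int
    # (the real engine also stores textures, light level, linedefs...)

# Any 2D point belongs to exactly one sector, so there is exactly one
# floor and one ceiling above it -- which is why rooms can't stack.
hallway = Sector(floor_height=0, ceiling_height=128)
platform = Sector(floor_height=64, ceiling_height=192)

def step_height(frm: Sector, to: Sector) -> int:
    """How far the player steps up (or down) crossing between sectors."""
    return to.floor_height - frm.floor_height

print(step_height(hallway, platform))  # 64-unit step up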
This made possible much more varied, creative architecture even though the engine still rendered walls and ceilings in essentially the same way. Then, much as Rise of the Triad expanded tremendously on the features of the Wolfenstein 3D engine, along came Duke Nukem 3D.
Based on Ken Silverman’s BUILD engine, it worked much like Doom except that it permitted sloped sectors, horizontally moving sectors, rotating or stretching sectors, transparent sprites and masked walls, water you could submerge in and so on.
Still not 3D though, despite the name. Still using essentially the same rendering trick. Still couldn’t truly look up or down, though it faked it reasonably well by rendering a very tall, vertically oriented perspective and then showing only a 4:3 section of it, which panned up or down depending on whether you wanted to look up or down.
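This panning trick is usually called “y-shearing.” A rough sketch, with made-up dimensions:

```python
# BUILD-style "y-shearing" sketch: render a tall perspective, then
# show a sliding 4:3 window of it. Dimensions are illustrative.
TALL_H = 600  # full rendered height (assumed)
VIEW_H = 300  # visible 4:3 viewport height (assumed)

def view_window(pitch: float) -> tuple[int, int]:
    """pitch in [-1.0, 1.0]: -1 = look fully down, +1 = look fully up.
    Returns the (top, bottom) rows of the tall image shown on screen."""
    slack = TALL_H - VIEW_H  # how far the window can slide
    # pitch = 0 centres the window; looking up slides it toward row 0.
    top = round((slack / 2) * (1.0 - pitch))
    return top, top + VIEW_H

print(view_window(0.0))  # (150, 450): centred, looking straight ahead
print(view_window(1.0))  # (0, 300): looking fully up
```

Because the window only slides rather than rotating the camera, vertical lines stay vertical, and the perspective looks increasingly wrong the further up or down you “look”.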
Duke 3D managed rooms over other rooms and 3D walkways but by way of trickery. Rooms overlapping other rooms would visually glitch out if you could see into both at once. 3D walkways were constructed from flattened 2D sprites.
Quake was the first truly 3D game engine to break into the mainstream consciousness. Finally, everything was made of textured polygons. You could truly look up and down. You could have rooms over other rooms and 3D stairs/walkways with no compromises.
In many ways this heralded the beginning of the end for 3D engine innovation, because there’s pretty much nothing a “real” 3D engine like Quake’s cannot do. All that remained was the rapid increase of polygon count and the addition of fancy lighting effects and shaders.
But there were some atavistic holdouts. Many games catered to gamers with low end PCs, trying to achieve the Quake look with more rudimentary engines.
Chasm: The Rift was called “The Poor Man’s Quake” and in fact ran on a primitive Rise of the Triad-style engine. However, it could display polygonal 3D objects in the game world, and it exploited this cleverly by placing real 3D architectural set pieces into its levels, blurring the line between it and a fully 3D game like Quake.
The enemies were also polygonal, with a higher polygon count than Quake’s baddies, since the engine didn’t need to devote nearly as much polygon rendering capability to the environment. Many other games used similar tricks.
For example the Star Wars themed FPS “Dark Forces” could display both polygonal objects (used for parked spacecraft for example) and simple voxel objects (used for holograms of the Death Star). Shadow Warrior and Blood (both BUILD engine games) could display voxel objects in the levels, used for stuff like gravestones, keys, health pickups, barrels and so on.
Then, there were the oddballs. Games like Delta Force, where the entire game world and all objects/characters within it were made of voxels. Or like Outcast, where the terrain was voxels but characters were polygonal.
For a while this seemed like it would be the future, and it may still be. At the time of release, this method made possible much more complex and smooth terrain than polygonal engines could manage.
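The best-known version of this technique (the NovaLogic-style heightmap renderer, as in Comanche and the same family as Delta Force) marches rays across a 2D heightmap and draws vertical terrain slices front to back, using a “y-buffer” for occlusion. A toy sketch, with an invented heightmap and made-up constants:

```python
import math

# Heightmap "voxel" terrain sketch: a simple procedural heightmap
# stands in for the game's terrain data.
SIZE = 64
HEIGHT = [[int(20 + 10 * math.sin(x / 6) * math.cos(y / 6))
           for x in range(SIZE)] for y in range(SIZE)]

def render_column(px, py, angle, cam_h=40, screen_h=60, max_dist=50):
    """March one ray across the heightmap, front to back, keeping only
    the highest screen row drawn so far (the y-buffer occlusion trick)."""
    dx, dy = math.cos(angle), math.sin(angle)
    horizon = screen_h // 2
    y_buffer = screen_h              # lowest undrawn screen row
    spans = []                       # (top_row, bottom_row) drawn slices
    for dist in range(1, max_dist):
        x = int(px + dx * dist) % SIZE  # wrap the terrain at the edges
        y = int(py + dy * dist) % SIZE
        h = HEIGHT[y][x]
        # Project terrain height to a screen row: nearer = taller slice.
        row = max(0, int(horizon + (cam_h - h) * 12 / dist))
        if row < y_buffer:           # visible above everything nearer
            spans.append((row, y_buffer))
            y_buffer = row
    return spans

spans = render_column(32.0, 32.0, 0.0)
```

Because slices are drawn strictly front to back and the y-buffer only ever rises, each pixel column is touched at most once per row, which is what made this cheap enough for software rendering.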
This approach was limited mainly by the fact that it was done entirely in software with no 3D acceleration, and indeed no possibility of 3D acceleration unless the industry seriously decided to pursue voxels instead of polygons, at which point GPUs would have begun to be designed to accelerate voxel graphics properly.
There are good reasons why building game worlds out of itty bitty 3D pixels is a good idea, not the least of which is that our own reality consists of little points called atoms, and in a sense could be called voxel based. If the goal is to more closely imitate reality, that seems like a promising direction to move in.
Anyway, after about 1999 or so every 3D engine went fully polygonal and nobody ever looked back. It’s been dull since then for people like me who are fascinated by the elaborate tricks used to achieve good looking real time 3D back before everybody had the hardware for it.
Follow me for more like this! And why not read one of my stories?