Monday, October 16, 2017

Grass Fire Simulation and Rendering

About two weeks ago I was looking through some graphics blogs and was inspired by this article on fire spreading in the game Far Cry. I decided to implement something similar in 3DWorld, since it seemed like a good addition to the existing physics and terrain systems. I already have support for isolated fires, but I didn't yet have a good/efficient system for fire spreading.

Note: This work was started the week before the California wildfires and is unrelated to that disaster.

I decided to start with only grass fires that spread across the terrain mesh but don't spread to other scene objects. The existing uniform mesh x-y grid structure seemed like a fine place to store fire-related data. Each grid cell contains the following fire fields:
  • Hitpoints: The amount of fire/heat damage this cell can take before it catches on fire; this is a function of humidity (grass wetness), current precipitation (rain/snow), and some randomness
  • Available Fuel: The amount of grass fuel available for the fire to burn; consumed over time as the fire burns; this is a function of grass density (blade count)
  • Burn Intensity: The intensity of the fire from 0.0 to 1.0; this controls fuel consumption rate, heat/damage generated, spreading rate, and flame density + height
Weapons such as the plasma cannon and rocket launcher can be used to start fires in grass covered mesh cells. Explosions have a probability of creating one or more nearby fires. Once an isolated fire is burning in close proximity to the mesh, it will start doing damage to reduce the hitpoints of the containing cell. When the hitpoints reach zero, the fire will start to burn with a small intensity. The intensity increases over time until all fuel is consumed, at which point the intensity will decrease over time to zero. High intensity cells do damage to their four neighbors, which will eventually also catch on fire. Rain and snow quickly extinguish fires by reducing their intensity to zero over the course of a few seconds. They also wet the grass, making it more difficult to set it on fire later.
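
As a rough sketch, the per-cell update might look something like the following. The field names, rate constants, and helper signature are my own illustrative choices, not 3DWorld's actual code:

    #include <algorithm>

    struct fire_cell {
        float hp;        // damage the cell can absorb before igniting
        float fuel;      // grass fuel remaining; a function of blade count
        float intensity; // burn intensity in [0.0, 1.0]
    };

    // One update step for a single cell; heat_in comes from nearby fires and
    // high intensity neighbor cells, and dt is the frame time in seconds.
    void update_fire_cell(fire_cell &c, float heat_in, bool precip, float dt) {
        if (c.intensity == 0.0f) { // not burning yet: accumulate damage
            c.hp -= heat_in*dt;
            if (c.hp <= 0.0f && c.fuel > 0.0f) {c.intensity = 0.01f;} // ignite at low intensity
            return;
        }
        if (precip) { // rain or snow: extinguish over a few seconds
            c.intensity = std::max(0.0f, c.intensity - 0.5f*dt);
            return;
        }
        c.fuel -= 2.0f*c.intensity*dt; // fuel consumption rate scales with intensity
        if (c.fuel > 0.0f) {c.intensity = std::min(1.0f, c.intensity + 0.1f*dt);} // grow
        else               {c.intensity = std::max(0.0f, c.intensity - 0.2f*dt);} // burn out
    }

Each burning cell would then contribute heat_in to its four neighbors in proportion to its intensity, which is what produces the spreading behavior.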

Here are some screenshots of fires burning in a scene consisting of dense grass with light tree cover. The fires burn the grass and flowers but not the trees. Fire leaves behind burnt black grass blades and gray ash covered ground where the green grass used to be.

Grass fire burning a circular area between the trees. Oops, looks like flowers aren't burning.
Ring of fire, after flowers were fixed so that they burn as well.

Fire burning the grass and flowers nearby, with heat haze producing rippling effects on the screen.

Fire won't burn areas that are over dirt or rock, or cells that are underwater. Also, once the grass has been burned, it remains black and can't be set on fire again. This image shows multiple fires. If you look closely, there is a shallow pool of water under the grassy area in the bottom right, which is why it's not burning. In addition, the steep slopes are covered in dirt and don't burn. The source of the nearest fire was an explosion in the bottom center of the image, where the grass is pushed away in a circle.

Multiple fires burning in this scene. The puddle near the bottom right produces wet grass that doesn't burn.

The rate and direction of fire spreading is affected by wind speed and direction. The fire will spread more quickly in the direction of strong winds. In the absence of strong wind, fire spreads approximately uniformly in all directions that contain grass, forming a circular burn area and a doughnut shaped ring of active fire. The fire at the center of the circle has consumed all of the grass fuel and has burned out.

This fire implementation is integrated into the physics and gameplay system of 3DWorld through the following effects:
  • Green grass blades are slowly turned black at randomly varying rates
  • Green terrain texture is temporarily turned black and will fade to gray ash over time
  • Light black/gray smoke is produced that will float toward the sky
  • The player takes damage while walking over fire in gameplay mode
  • Objects tagged as explode-able explode when fire contacts them (see video below)
  • A burning sound loop is played when the player is near a fire
  • Heat from nearby fire produces a wavy screen space effect (see image above)
  • Heat sensors can detect the heat from fires
Here's another image showing a fire burning in the back yard of my house scene. It was like that when I got there, I swear! This plasma cannon I'm holding has nothing to do with it!

Fire from the plasma cannon spreading in the back yard.

Given enough time, the fire will eventually spread to all the grass around the house, but it currently won't set anything else on fire.

I recorded two videos of the grass fire effect. The first video shows how a fire spreads, destroying an exploding tank in its path. Near the end I stand in the fire and take damage until I burn to death. Sorry, there's no sound this time.


The second video shows fire spreading in a dense grassy meadow with sparse tree cover. It will climb the hills and reach any area connected by a path through the grass until the entire scene is burned black. You can see how the fire produces emissive lighting when I cycle to night time. At the end of the video, I enable rain, which wets the grass and quickly extinguishes all fires.


This system is very efficient. The entire scene can be burning and I still get about 40 frames per second. More reasonable fires will give me at least 100 FPS. Most of the frame time is taken by the grass and terrain texture updates, as these require new data to be sent to the GPU each frame in many small batches. To reduce the time spent in these operations, grid cells are split into batches and updated sparsely. Each cell is assigned a random number from 0 to 31 every frame, with the following update cycle:
0: Update grass color (green => black)
1: Update terrain texture (grass => dirt or rock, depending on elevation)
2: Update terrain color modulation (white => black, temporary)
3: Generate a smoke puff that rises
4: Damage nearby objects such as exploding tanks
Any other number results in no update, so each of these expensive updates is only applied to 1/32 of the active fire cells in a given frame. This is enough to keep cell update time around 1ms in most cases.
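
Here's a minimal sketch of that randomized slicing; the helper functions and random generator are stand-ins for the real grass, texture, smoke, and damage updates:

    #include <random>
    #include <vector>

    struct fire_cell; // see the earlier sketch
    void update_grass_color(fire_cell &);     // stand-ins for the real updates
    void update_terrain_texture(fire_cell &);
    void update_terrain_color(fire_cell &);
    void gen_smoke_puff(fire_cell &);
    void damage_nearby_objects(fire_cell &);

    void update_active_cells(std::vector<fire_cell*> const &active, std::minstd_rand &rng) {
        for (fire_cell *c : active) {
            switch (rng() & 31) { // random value from 0 to 31
            case 0:  update_grass_color(*c);     break; // green => black
            case 1:  update_terrain_texture(*c); break; // grass => dirt or rock
            case 2:  update_terrain_color(*c);   break; // white => black, temporary
            case 3:  gen_smoke_puff(*c);         break; // rising smoke puff
            case 4:  damage_nearby_objects(*c);  break; // exploding tanks, etc.
            default: break; // values 5-31: no expensive update this frame
            }
        }
    }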

Fire is drawn as a collection of up to 6 variable sized, camera facing quads (billboards) per active cell, for the cells within the camera's view frustum. The number of quads is a function of burning intensity with some random variation. Each quad is textured with a sequence of 16 images from a texture atlas that gives a burning flame animation. They're drawn with additive blending and emissive lighting to produce bright, intense colors. Enabling additive blending and disabling depth testing allows thousands of fire billboards to be drawn without having to sort them back to front for alpha blending. I also use the depth buffer from the previous frame to blend the bottom edges of the flame quads to transparent where they meet the ground, which avoids ugly artifacts at locations where the quads intersect the terrain mesh under them.

For future work, I would like to extend this system to non-grass types of fires. For example, fire could be made to spread to trees and other scene objects. This is more difficult because it's a 3D effect rather than a 2D effect applied to a uniform grid on the ground. It's not clear if my billboard rendering will look good when the fire is up in the air on a tree branch. It's also unclear how to make other scene objects appear burned. I'll be sure to post more screenshots and videos if I ever figure this out.

Sunday, September 24, 2017

Slow Motion Physics Video

I haven't done much work on the 3DWorld source code since my last post. I've been mostly experimenting with the physics system and trying to reduce instability. I took some videos of slow motion physics to help me analyze the quality and problems in my system, and I'll show two of these here.

3DWorld has an interactive timestep adjustment feature that allows the user to slow down the physics timestep by an arbitrary factor, which can be used to make objects move in slow motion. The timestep can also be reduced to 0 to disable physics and lock all objects in place. This doesn't affect the player movement or weapon firing, only dynamic object motion. I can use this feature for analysis and debugging of physics problems. For example, I can fire a rocket at an object, then freeze physics and walk up to the object for a closer view of the collision, then let the rocket continue to explode in slow motion.
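
Conceptually, this amounts to scaling the timestep used for object integration while player updates use the real frame time; a sketch with invented names:

    void update_player_and_weapons(float dt);  // stand-ins for the real update calls
    void run_physics_and_collisions(float dt);

    // A slowmo_factor of 0.1 gives 10x slow motion; 0 freezes dynamic objects.
    void run_frame(float frame_dt, float slowmo_factor) {
        update_player_and_weapons(frame_dt); // unaffected by the physics timestep
        float const phys_dt(frame_dt*slowmo_factor);
        if (phys_dt > 0.0f) {run_physics_and_collisions(phys_dt);} // skipped when frozen
    }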

The following video shows me breaking windows in 3DWorld with the physics timestep set slow enough that the individual triangular glass fragments can be seen spinning through the air and colliding with the scene. They don't collide with each other. The sharp glass fragments damage the player, which is why there is so much blood in the air and on the ground. Blood moves in slow motion as well. Bloody glass fragments will turn a reddish color.

Note that I'm recording video in realtime on 7 of 8 threads (3.5 of 4 cores) and only letting 3DWorld use one thread for physics + rendering + screen capture. Even with this setup, the video compression can't always keep up with the 60 FPS target framerate. This means that 3DWorld sometimes can't produce the results at the recording rate, which is why the video plays back somewhat faster than realtime. However, the physics is slowed down by ~10x, so you can see the fragments moving just fine. Sorry, I still haven't figured out how to capture and record 3DWorld's audio stream, which is why there's no sound.


That big gray sphere at the end is my skull, and the black sphere is one of my eyes. The camera follows the other eye as it bounces on the floor. I have no idea what happened to my red nose. Yes, I bled to death from cuts due to broken glass. It happens, especially when using the baseball bat on windows. I don't recommend trying this at home. Oh, and if you're curious, you can get hurt by wood fragments as well.

If you look closely, there's some instability in the glass fragments resting on the windowsill and on the ground. Some of them rotate in place even though they're no longer falling. Why is that? Fragments rotate about a random axis at a rate that's a function of object velocity. This means that angular velocity scales with linear velocity. Objects that have come to rest on a flat horizontal surface such as the ground are flagged as "stopped". This allows the physics system to skip some of the work when processing stopped objects, and it also disables object rotation. Objects that aren't moving but aren't in a stopped state have a small negative Z velocity impulse related to the effect of gravity that pulls them down. This velocity produces a small rotation that looks odd.
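
In sketch form, the behavior I'm describing looks something like this (names and constants are illustrative):

    #include <cmath>

    float const GRAVITY = 9.8f, ROT_RATE = 2.0f;

    struct fragment {
        float pos[3], velocity[3];
        float rot_angle; // rotation about a per-fragment random axis
        bool  stopped;   // resting on a flat horizontal surface
    };

    void update_fragment(fragment &f, float dt) {
        if (f.stopped) return; // skip physics work and disable rotation
        f.velocity[2] -= GRAVITY*dt; // downward impulse, applied even when resting
        float speed_sq(0.0f);
        for (int i = 0; i < 3; ++i) {
            f.pos[i] += f.velocity[i]*dt;
            speed_sq += f.velocity[i]*f.velocity[i];
        }
        // angular rate scales with linear speed, so the residual gravity velocity
        // on a resting-but-not-stopped fragment makes it slowly spin in place
        f.rot_angle += ROT_RATE*std::sqrt(speed_sq)*dt;
    }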

The problem is that the stopped state logic is approximate, and doesn't work well when an object collides with multiple scene faces at the same time. This is what happens near the outside corner of the windowsills and the inside corner between the wall and the floor. They're not put into a stopped state due to the possible collision with a vertical face that may push the object away at the next timestep. If they land far enough away from vertical surfaces, everything is fine. These fragments look like triangles, but the physics system uses bounding spheres for them. This means that a triangle that appears to be resting on the ground may actually have its bounding sphere touching the wall.

In addition, the wood fragments from the bench are flat 2D polygons, since I hadn't implemented texturing of thick 3D triangle fragments yet. I've partially fixed the glass instability now - the fragments sometimes jitter a small amount, but they don't rotate. I've also added textured thick wood fragments. These look better than flat polygons, but still don't look much like real wood splinters. It's progress, but far from complete. The updated video is below.


I was careful not to die this time.

There are still some issues to fix, but it looks a bit better. Well, at least if you know what to look for. For example, real glass fragments don't land at odd angles; they land with the flat side down. Unfortunately, forcing every fragment to land flat is too simple and doesn't look right either, and drawing triangle fragments that intersect each other on the ground looks even worse. Fragments should only land flat when they don't land on top of each other; when they do, they should form a chaotic pile. That requires collision detection between thousands of fragments, which is something to possibly add later.

I'll post more screenshots or videos if I make progress later.

Tuesday, August 29, 2017

Bouncy Ball Physics

Recently, I've watched some videos of physics simulations in other game engines. This got me interested in experimenting with physics and collisions of large numbers of objects in 3DWorld. 3DWorld supports full physics for spherical objects, including gravity, elastic and inelastic collisions, momentum, friction forces, air resistance, etc. All "large" (non-particle) spheres collide with all objects in the scene, including each other. Spheres are simpler to work with than other object shapes due to their rotational invariance and other factors. Each frame runs multiple physics simulation + collision detection steps on all active (moving) objects. The active object test significantly reduces computation time for common cases where most of the objects have stopped moving. In theory, the collision detection should prevent objects from intersecting each other and the scene.
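
To illustrate the core sphere-vs-sphere step, here's a minimal impulse-based response for two equal-mass spheres with restitution e. This is a textbook sketch rather than 3DWorld's actual code, which also handles friction, air resistance, and more:

    #include <cmath>

    struct ball {float pos[3], vel[3], radius;};

    void collide_spheres(ball &a, ball &b, float e) { // e = coefficient of restitution
        float n[3], dist_sq(0.0f);
        for (int i = 0; i < 3; ++i) {n[i] = b.pos[i] - a.pos[i]; dist_sq += n[i]*n[i];}
        float const rsum(a.radius + b.radius);
        if (dist_sq >= rsum*rsum || dist_sq == 0.0f) return; // not touching
        float const dist(std::sqrt(dist_sq));
        for (int i = 0; i < 3; ++i) {n[i] /= dist;} // unit contact normal from a to b
        float vrel(0.0f); // relative velocity along the contact normal
        for (int i = 0; i < 3; ++i) {vrel += (b.vel[i] - a.vel[i])*n[i];}
        if (vrel >= 0.0f) return; // already separating
        float const j(-0.5f*(1.0f + e)*vrel); // impulse per unit mass (equal masses)
        float const push(0.5f*(rsum - dist)); // split the overlap between the two
        for (int i = 0; i < 3; ++i) {
            a.vel[i] -= j*n[i]; a.pos[i] -= push*n[i];
            b.vel[i] += j*n[i]; b.pos[i] += push*n[i];
        }
    }

The positional correction term resolves any overlap immediately, which is part of what keeps stacked spheres from sinking into each other.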

I added a config file option that reduces weapon fire delay to 0, which allows me to throw a ball every frame. The maximum number of active balls is set to 20,000. At ~100 FPS, it only takes a few minutes to generate 20K balls. Here is a video of me throwing some 10K balls into the office building scene in realtime. Watch the balls fall and bounce along parabolic trajectories. Note that I've disabled player collision detection to prevent the ball recoil from pushing me around wildly.



Collision detection works for dynamic as well as static scene objects. I can push balls around on the ground using the player model or by pushing a movable object such as a crate into them. They'll go up and down inside an elevator, and fall when pushed over a ledge. I can place objects on a glass surface or in a glass box, shatter the glass, and watch them fall. Even though the balls are stopped, the breaking glass will re-activate their physics and put them back into motion.

Balls will also stack when added to a container so that they're non-overlapping, and will spread out to fill in the gaps just like real world spheres. I have a colored glass box on the third floor of the office building scene that I use for lighting experiments. It's fairly large, so it takes a while to fill with balls. I marked the glass as shatterable in the config file so that I can break it with weapon fire. Here is the video showing the glass plate and glass box experiments. Note that the lighting is static/baked into the scene because the glass box isn't supposed to be destroyable, which is why the lighting looks odd near the end of the video.



This is a high resolution screenshot showing the balls stacked in the glass box. See how closely together they're packed. The collision force of the glass on the balls keeps them inside the volume. It takes several seconds for the simulation to converge to this solution and the balls to stop moving. When the glass is broken, the balls will spill out to form a single layer on the floor.

1000 bouncy balls in a colored glass enclosure. Collision forces keep the balls from intersecting each other and the glass.

There are a number of different ways to apply forces to objects in 3DWorld. Weapon explosions produce force radiating outward from a central point, which affects dynamic objects. Here I use a rocket launcher (named "Seek and Destroy") to create explosions that push the balls around the scene. The following video shows the effect of these explosions on the physics state. This also shows off my neat explosion shockwave postprocessing shader effect. The shader distorts the image by simulating a spherical region of compressed gas with higher index of refraction forming a lens.



The rocket launcher is a crazy weapon to use with zero fire delay! I wouldn't recommend this in real gameplay, it's usually instant death. The small random rocket firing error causes two rockets to occasionally collide in mid-air - they're also dynamic objects. This results in an explosion and chain reaction that sometimes propagates back to the player along the chain of close proximity in-flight rockets, resulting in a high amount of area damage. Oops. Better stick to bullets, or some other non-explosive weapon.

Wednesday, August 16, 2017

Screenshot Gallery

I've been busy with other things and haven't worked on 3DWorld much lately. It's been a while since my last blog post. I've mostly been making minor bug fixes, feature additions, and improvements. I don't have enough to say about any particular topic to create a separate post. Instead of writing another wall of text, I'll just show some interesting screenshots. Most of these are new, but a few were captured some time ago and never posted anywhere.

First up, I've added ambient occlusion to tiled terrain scenes. Well, I've had ambient occlusion for a few years now, but this version works with procedurally generated terrain as well as terrain read from heightmaps. And it uses height values computed on the GPU, which is faster. The runtime penalty of ambient occlusion is much lower now, so why not use it? It certainly adds depth to the scene.

Terrain generated using domain warping and precomputed ambient occlusion to darken the deep ravines.

3DWorld gives the user control over temperature, vegetation, atmosphere, and many other physics parameters. These variables can also be set automatically by traveling to other planets through universe mode. Not all terrain is covered with green grass, trees, and water. Here are some other biomes.

Barren moon terrain with no clouds, atmosphere, or water, only small ice caps on the mountain peaks.

I even implemented a lava mode for the water shaders so that volcanic planets can be shown.

Hot lava/rocky planet with strong wind and dense, low clouds.

I've been experimenting with volumetric spotlights that cast glowing particle cones in a dark room. This is a simple and efficient technique that draws a cone using a custom shader rather than expensive GPU ray marching. I found this blog post that explains the technique. It looks okay, but I'm not sure where to use this effect in 3DWorld. There aren't a lot of spotlights in dark, smoky rooms. I'll keep it around for future use.

Experimental volumetric fog effect for spotlights in dark basements.

This is one of my favorite 3DWorld screenshots. It was taken a few months ago. I tried to get all of the different universe objects in the same screenshot: sun, stars, nebula, planet, rings, and asteroids. There's some nice contrast between the yellow tinted foreground and purple tinted background.

One of my favorite universe screenshot images, though it's kind of dark. Stars, nebula, asteroids, and planet with rings.

Finally, here is an older screenshot showing the moment after a cluster grenade has exploded. There are a huge number of particles here, somewhere around 10,000. Each one has physics including gravity and collision detection, and all of them receive dynamic light + shadows. The triangle particles also emit their own light. Smoke and fire are drawn in many depth sorted layers with low alpha to produce a volumetric effect, which is dynamically lit with sun shadows. The light of the fires is what tints the otherwise gray-black smoke a yellowish color.

Cluster grenade explosion screenshot with ~100 light emitting particles, ~10,000 colliding physics particles, smoke, and fire.

That's it for this post. I'll put up more content when I have something new to show.

Monday, June 26, 2017

Procedural City With Buildings

I would like to have large cities and other man-made objects in 3DWorld. Terrain, vegetation, clouds, and water look fine, but the scene needs more. My large array of museum building models from a previous post is interesting, but looks too repetitive, nothing like a real city.

I started looking into procedural city creation a few weeks ago. There are tons of articles and demos of generated cities online. Most of them seem to start with city planning, then add roads (often in a grid), and finally add buildings. This works fine with flat open land areas, but probably not so well in 3DWorld procedural scenes. It's not clear how this top-down approach works with water, mountains, cliffs, and other non-flat terrain features. So I gave up on planning and roads, and went straight to building generation.

I got some of my procedural building ideas from this post by Shamus Young on building generation for the Procedural City project. 3DWorld buildings need to look good both during the day and at night, which means they need textures. At some point I'll probably also need to add detail features such as railings, antennas, AC units, etc.

To simplify things, I have only two textures per building, one for the sides and another for the roof. The easiest textures to use are brick/stone walls and arrays of office building windows. I was able to find some good ones online. A few of them also have normal maps. I'm using per-material texture scales and custom texture mapping to make the windows and bricks/blocks appear to be a consistent size across textures and building shapes. The textures are very repetitive so that they tile properly across the building, which means they don't have any features that really stand out. No doors, etc.
Buildings using plain brick or stone wall textures don't even have windows. I may need to improve on these issues in the future. It would be nice to have a texture library of the same exterior building materials and style but a variety of tiles that can be interchanged. For example, blank walls, rows of windows, door(s), etc. These would need to tile properly so that a window next to a door looks seamless. Unfortunately, it's very difficult to find acceptable texture sets online that are both free and high quality.

I've added the following building types/shapes to 3DWorld, starting with the simplest and most common:
  1. Low brick buildings (rectangle, L, T, and U shapes) with 1-3 levels
  2. Office buildings of 1-8 stacked cubes of decreasing size, possibly with cut corners
  3. Round (cylindrical or ellipsoid) office towers, possibly with a flat side
  4. Polygon shaped office towers with 3-8 sides of 1 or 2 different side lengths
  5. Large buildings composed of multiple cubes of various heights attached together
  6. Large buildings composed of angled geometric pieces attached together
Here are some example screenshots showing how building generation has advanced over time. The first one was early in the development process, and the last few are the most recent.

Early cube buildings of types 1 and 2.

Mixed cube, cylinder, and polygon buildings of types 1-4.



I have a config file format that defines different building classes, which I call materials. Each class has a large number of parameters including frequency, textures, placement rules, size variables, shape probabilities, distribution parameters, etc. The width, height, aspect ratio, number of levels, number of splits, number of sides, allowed wall angles, rotation angles, altitude ranges, and others can all be user configured. The user assigns material parameters (with ranges) to the various building types and probabilities to control their ratios for each city. I asked the system to generate 100K buildings over an area of several square miles, and it came back with around 60K buildings. Buildings over water and on high mountains are skipped, which means that about 60% of the placement attempts within the city radius succeeded. Generation time is about half a second using 4 CPU cores.

Buildings are packed together and checked so that they don't intersect each other. They're placed at the correct height value by querying the terrain height generation function, which is where most of the generation time is spent. This works correctly for both procedural terrain and height values read from a texture. Their bases are adjusted to remove gaps between the bottom of the buildings and the ground. Trees and plants are placed in the gaps between buildings. The scales aren't completely right though. Buildings are too small compared to the player and trees, which makes trees look odd when placed near tall office skyscrapers. Buildings are probably too large compared to mountains and other terrain features. All of these sizes are configurable in 3DWorld, so I'll have to work on this later.
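
A sketch of this kind of rejection-based placement loop, with illustrative types and a stand-in for the terrain height query:

    #include <random>
    #include <vector>

    struct bcube { // 2D building footprint; the Z value comes from the terrain
        float x1, y1, x2, y2;
        bool intersects(bcube const &c) const {
            return (x1 < c.x2 && x2 > c.x1 && y1 < c.y2 && y2 > c.y1);
        }
    };

    float get_terrain_height(float x, float y); // stand-in for the real height query

    // Attempt to place one building inside the city circle; returns true on success.
    bool try_place_building(float cx, float cy, float radius, float water_z, float max_z,
                            std::mt19937 &rng, std::vector<bcube> &placed) {
        std::uniform_real_distribution<float> u(-1.0f, 1.0f);
        float x, y;
        do {x = u(rng); y = u(rng);} while (x*x + y*y > 1.0f); // uniform point in the unit disk
        x = cx + radius*x; y = cy + radius*y;
        float const z(get_terrain_height(x, y)); // most of the generation time is spent here
        if (z < water_z || z > max_z) return false; // skip water and high mountains
        float const hw(10.0f); // footprint half-width; real sizes are config-driven
        bcube const b{x-hw, y-hw, x+hw, y+hw};
        for (bcube const &other : placed) {
            if (b.intersects(other)) return false; // buildings must not intersect each other
        }
        placed.push_back(b);
        return true;
    }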

Buildings are pretty well integrated into 3DWorld now. They're generated on-the-fly when needed each time the terrain changes. The algorithm can be re-seeded by the user to generate a new city. I've implemented shadows from and onto buildings, and building self-shadowing for both ground and tiled terrain mode. The shader used supports dynamic and indirect lighting, and fog. I've added player collision detection with buildings; you can even walk on a building's roof. I've mostly finished the line intersection tests for building ray tracing so that they appear correctly in overhead map view:

Overhead map view of buildings, showing that ray casting is working.

Rendering is efficient because it supports block-based distance and view frustum culling, back face culling, and dynamic level of detail (LOD) for complex buildings. Draw time is 2.7ms for around 5000 visible buildings such as in the screenshots below. I could probably use multiple threads for rendering, but it's much more complex and doesn't help much. I'm currently generating the vertex data (mostly quads) on the CPU and sending it to the GPU every frame. I could precompute this once and store it on the GPU, but that could take a lot of GPU memory for 60K buildings at full detail. Keep in mind that most buildings have flat faces, so the normals are different for the corner vertices, which means the vertex data can't be shared. The data I actually have to send to the GPU is very small compared to the full set, only the front faces of the nearby buildings within the view frustum. I'm guessing that's around 1-2% of the total number of vertices/triangles. Also, it's difficult to do culling and batching of buildings by texture if the data is stored on the GPU. If I eventually add transparency from glass windows, I'll need to sort back to front by depth, and that will also complicate things.

Here are several more images of my generated city. What you see is only a tiny fraction of the total city. It extends off into the distance, out to a radius several times the view/fog distance. The player is free to walk or fly out to the edge of the city to explore it all. It's also possible to specify more than one city at different locations.

Mixed building types 1-5 on rock and grass terrain.

Cube buildings of types 1-2 among various types of trees. Trees are probably too large compared to the buildings.

Here are some screenshots showing close-ups of interesting building types. I believe the most complex building is composed of 192 triangles, though at most 128 are visible and drawn at once. Note that every one of the 60K buildings is unique. I could use instancing, but that doesn't seem to be required to get good performance.

Various buildings with a variety of shapes, of all types. The building in the bottom left is a cube with corners cut off.

Triangular building of type 4 with cut corners on a hillside. The trees are probably too large compared to the buildings.

Complex angled office building of type 6 on a snowy peak.

Maybe I should also place larger/higher buildings in the city center, and smaller buildings near the perimeter. That can actually be done using the config file, where I can define a different placement rectangle or circle radius for each building type. I can even create multiple cities using different center and radius values, or multiple districts within one city that each have a different mix of building types. It's worth experimenting to get a more consistent and realistic look to the city. ... Okay, here is what it looks like when I place the office towers in the city center:

City skyline: Tall office buildings in the city center, smaller brick and stone buildings in the outskirts.

Huge city in a flat area with a few sharp peaks in the distance.

The next step is to generate roads or some other system to tie the buildings together. I don't know quite how to do that yet, since it either requires flattening the terrain, or curving the roads to match it. This is particularly difficult because the terrain hasn't been generated at the time of building placement. It's generated in chunks when needed, whereas the entire city is created at once. It's also unclear whether the roads should form a uniform grid, or curve to follow the terrain contour. A grid road network is likely easier, though it requires the buildings to be oriented parallel to the grid axes rather than randomly rotated like they are now. There are a lot of choices with assorted trade-offs. It will be interesting to see how all of this turns out.

Saturday, June 17, 2017

Procedural Clouds in Tiled Terrain

I've shown tiled terrain clouds in many screenshots over various blog posts, but I've never had a post that shows off the different types/layers of clouds. The 3DWorld sky is rendered back to front as several layers:
  • Blue sky with gradient for atmospheric scattering effect (day) / starfield (night)
  • Sun flare when the sun is in view (day) / moon (night)
  • Fog/haze layer that increases in intensity with distance to blend the terrain with the sky
  • Procedural 2D wispy cloud plane layer that casts shadows on the ground
  • Procedural volumetric 3D clouds that move with the wind
Both cloud layers are drawn using screen aligned quads and noise evaluation in shaders on the GPU. The 2D cloud layer casts shadows on the terrain and grass by tracing a ray from the sun to the point on the ground and intersecting it with the cloud plane to compute light intensity. I haven't figured out how to efficiently produce shadows for the 3D cloud layer. The two types of clouds use different lighting calculations because the shaders have access to different types of data. For example, the cloud plane doesn't have local cloud density information, so it can't compute volumetric lighting like the 3D cloud system can. They don't always blend together perfectly, but in most cases it looks good enough.
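
The cloud plane shadow test reduces to one ray-plane intersection plus a noise lookup; here's a sketch, with cloud_density() standing in for the procedural noise the shader evaluates:

    float cloud_density(float x, float y); // stand-in for the 2D procedural noise

    // Returns a light scale in [0,1] for a ground point; (sx, sy, sz) is a unit
    // vector pointing from the ground toward the sun.
    float cloud_plane_light(float px, float py, float pz,
                            float sx, float sy, float sz, float cloud_z) {
        if (sz <= 0.0f) return 1.0f; // sun at or below the horizon: no plane hit
        float const t((cloud_z - pz)/sz);         // ray parameter at z == cloud_z
        float const cx(px + t*sx), cy(py + t*sy); // intersection with the cloud plane
        float const d(cloud_density(cx, cy));     // 0 = clear sky, 1 = fully opaque
        return 1.0f - d; // denser cloud above => darker terrain below
    }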

3DWorld has a set of config file parameters, key bindings, and UI sliders to control the amount of clouds on the 2D and 3D layers. Clouds are also affected by weather conditions such as wind, rain, and snow. I've added an "auto" mode with day/night cycle that dynamically varies the weather and cloud cover over time to simulate a real environment.

Here is a set of screenshots showing the various weather and cloud conditions that can be selected. I've turned off the birds in the sky for most of these images so that the clouds can be seen clearly.

Sunny day with sun lens flare and minimal cloud cover.

Light, wispy, high clouds, with distant fog.

Medium wispy clouds and haze. These clouds cast dynamic shadows on the ground.

High, dense clouds, with small areas where the sun peeks out.

Scattered small 3D cloud puffs.

Mixed 3D low lying volumetric clouds and high 2D cloud layers.

Heavy cloud cover with mixed cloud types. The lighting doesn't really match between cloud types.

Storm clouds with rain. Are those city buildings in the background?

Clouds colored red at sunset.

Night time clouds in a starry sky.

Thursday, June 15, 2017

Assorted Images and Videos

I'm currently working on procedural building generation, but it's not yet finished. I don't want to show off unfinished work, at least not until I get closer. So here are some images and videos on assorted 3DWorld topics that I've been working on.


Dynamic Light Sources + Shadows + Reflections + Indirect Lighting

I did some experimenting with dynamic lighting with shadows, reflections, and indirect light. Adding shadows works just fine - see the screenshot near the end of my earlier post. I've also had horizontal plane reflection support for a while. Indirect lighting of dynamic objects is new. It works the same way as indirect sun and sky lighting, using multi-threaded ray/path tracing. Unfortunately, the performance vs. quality trade-off isn't there yet. I need at least 50K rays for a noise-free image, and that only runs at around 8 FPS with 4 ray bounces. Maybe this isn't the right approach? At least it looks cool. Here is a blue light casting shadows and (difficult to see) indirect light.

Blue dynamic light source casting shadows at night in the Sponza atrium.

Blue light source with floor reflections and indirect light from multiple sources.


Procedural 3D Clouds

Last month I went back to working on procedural 3D volumetric clouds in tiled terrain mode. I've had static 3D clouds for a while now. It was time to make them animated. There are between 400 and 600 active clouds in the visible area of the scene, with a user controlled cloud density parameter. Clouds move in the direction of the wind and change shape over time using 4D procedural noise in the shader (x, y, z, and time). [It's actually 3D noise; time is added as a vertex offset.] Cloud speed and the rate of shape change for individual clouds increases with wind speed. When a cloud floats outside the view distance, another cloud is generated on the opposite side just out of view distance, so that it floats into the scene. I don't have enough material for a full post, only one video.
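
The advection and respawn logic is simple in concept; here's a 2D sketch (ignoring altitude, with invented names):

    #include <cmath>

    struct vec2 {float x, y;};

    // Move a cloud with the wind; when it drifts past the view distance, respawn
    // it just outside the opposite edge so that it floats back into the scene.
    void update_cloud(vec2 &pos, vec2 const &wind, vec2 const &cam, float view_dist, float dt) {
        pos.x += wind.x*dt; pos.y += wind.y*dt;
        float const dx(pos.x - cam.x), dy(pos.y - cam.y);
        float const dist(std::sqrt(dx*dx + dy*dy));
        if (dist > view_dist) { // mirror the offset through the camera position
            pos.x = cam.x - dx*(view_dist/dist);
            pos.y = cam.y - dy*(view_dist/dist);
        }
    }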


I took some screenshots, which look like all of my previous tiled terrain cloud screenshots because you can't see the clouds moving.


Mandelbrot Set Viewer

I implemented a Mandelbrot Set fractal viewer in 3DWorld's "overhead map view" mode for fun. I spent some time coming up with a custom color mapping that shows off the fractal pattern well. The user can pan around and zoom in and out with the mouse and arrow keys. It's implemented using double precision math so that you can zoom in further before running into precision problems. I don't think I can get good results with double precision on the GPU, so I made this run on multiple threads on the CPU. I normally get around 30 FPS at 1080p resolution, but it competes with video recording for CPU cores and ends up running at only 15 FPS in this video. A 16s video recorded at 60 FPS appears to play back at 4x the speed and lasts for only 4s. Oh well.
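
The inner loop is the standard escape-time iteration, done in double precision; here's a minimal sketch of the CPU version with the rows split across threads via OpenMP (not necessarily how 3DWorld schedules its threads):

    #include <vector>

    // Escape-time iteration count for the point c = (cx, cy).
    int mandel_iters(double cx, double cy, int max_it) {
        double x(0.0), y(0.0);
        int it(0);
        while (x*x + y*y <= 4.0 && it < max_it) {
            double const xt(x*x - y*y + cx);
            y = 2.0*x*y + cy;
            x = xt;
            ++it;
        }
        return it; // mapped to a color by the custom palette
    }

    // Render one frame into iters (size w*h); x0/y0 is the corner of the view
    // and pix_sz sets the zoom level.
    void render_mandelbrot(std::vector<int> &iters, int w, int h,
                           double x0, double y0, double pix_sz, int max_it) {
    #pragma omp parallel for schedule(dynamic) // split rows across CPU threads
        for (int py = 0; py < h; ++py) {
            for (int px = 0; px < w; ++px) {
                iters[py*w + px] = mandel_iters(x0 + px*pix_sz, y0 + py*pix_sz, max_it);
            }
        }
    }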





Thursday, May 18, 2017

Domain Warping Noise

I recently watched this YouTube video and decided to make another pass at landscape height generation for tiled terrain mode in 3DWorld. The existing system allowed for several user-selectable noise variants, starting with my custom sine wave tables and including my recent implementation of GPU ridged simplex noise, which is a more efficient version of ridged Perlin noise. Two interesting variants of noise that I hadn't yet used for terrain height were noise derivatives and domain warping. Since domain warping seemed to be easier to implement in 3DWorld, I decided to start with that. The reference article (with source code) can be found on the Inigo Quilez website here.
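
For reference, the pattern from that article composes fbm noise with itself twice. Applied to a heightmap it looks roughly like this, where fbm() stands in for my ridged simplex noise, and the offsets and the 4.0 scale are the article's example constants:

    float fbm(float x, float y); // stand-in for the fractal (ridged simplex) noise

    // Domain warping: offset the noise domain by noise, then do it again using
    // the first warp's output, before the final height evaluation.
    float warped_height(float x, float y) {
        float const qx(fbm(x, y)), qy(fbm(x + 5.2f, y + 1.3f));
        float const rx(fbm(x + 4.0f*qx + 1.7f, y + 4.0f*qy + 9.2f));
        float const ry(fbm(x + 4.0f*qx + 8.3f, y + 4.0f*qy + 2.8f));
        return fbm(x + 4.0f*rx, y + 4.0f*ry); // final height value
    }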

Domain warping adds a swirly effect to regular noise that can be shown in the following overhead map view screenshot from 3DWorld.

Overhead map view of the central area of my island using domain warping noise, in pseudocolor.

The features in this scene look very realistic, similar to eroded terrain with river valleys and steep peaks. Take a look at this image of North America for reference. It's much rougher than my smooth rolling hills seen in previous screenshots. If I use dirt and rock to replace grass and snow on steep slopes, the complexity of the terrain really stands out. Here is a slice of terrain from ground mode with grass, trees, flowers, and plants. Note that the grass blades are now curved, rather than being single triangles.

Ground mode closeup of terrain using domain warping, with grass, trees, flowers, and plants.

I can get some stunning views in tiled terrain mode, especially with the height scale turned up and lots of water in the scene. Here is a shot of steep cliffs near the edge of the ocean. The front sides of the cliffs look like they have grass stairs cut into them. Notice the shadow of the mountain on the water to the right.

Steep cliffs above the ocean cast shadows to the right.

Keep in mind that 3DWorld creates an endless terrain that can be explored. There are no real limits to procedural generation (other than the wraparound every 2^32 = 4 billion tiles). The player can walk for hours in the same direction and never see the same landscape features. This is the main advantage of procedural generation compared to hand drawn maps. The downside, of course, is that generated terrain often lacks variety compared to what a human can create. This is the part I'm currently trying to improve.

Here is an image taken of the beaches and ocean near sunset, with fog in the background. It looks almost like a photograph. The black dots in the sky are birds.

Sun setting on the mountains near the ocean, with waves, distant pine trees, clouds, and fog.

I've applied a detail normal map to the textures of the sand, dirt, rock, and snow layers. This improves the look of the surface by adding high frequency detail. The distant peaks use per-tile normal maps to produce high frequency lighting details, even though the mesh vertices themselves are drawn at a lower level of detail to improve frame rate. Fog helps to reduce level of detail transitions in the distance.

Here is another view showing deep ravines and high peaks with water and grassy terraced fields below. I've disabled the trees so that the terrain stands out more.

View of an island with snowy peaks and sharp ridgelines. Trees have been disabled so that the underlying terrain can be seen more easily.

3DWorld does have some limited modeling of biomes. Some beaches are sandy and others are dirty/rocky. A low frequency procedural vegetation map is used to make some areas desert where no plants grow. This screenshot shows a strip of sandy desert between a snow-peaked mountain range and the grassy areas behind it, with forest in the distance. No plants grow in the desert, and few plants are found on the rocks.

A strip of desert is caught between the lush green fields and the snow covered peaks.

This image shows a wall of mountains with a deep valley cut into the middle of it. I'll bet this area would be interesting to explore in a game. Domain warping can cause a large section of heightmap noise to be picked up and moved somewhere else in the scene. This effect can create high frequency content that isn't present in most other heightmap generation functions.

Grass fields leading toward a narrow mountain pass.

Finally, here is a screenshot showing a narrow land bridge connecting two islands. This feature was naturally created by my noise function. None of these scenes have been manually edited.

A natural land bridge connects these two islands.

I've also seen the inverse of this effect where rivers are cut into the land. It's not too common, and I forgot to take a screenshot. That's it for now. If I manage to get noise derivatives to work well, I'll post some screenshots for comparison.

Sunday, May 14, 2017

Instancing 3D Models in Tiled Terrain

My grand goal for 3DWorld is to allow an entire city of many square miles to be explored seamlessly. No loading screens, no delays, a huge view distance, shadows, indirect lighting, collision detection, etc. I started with the terrain in tiled terrain mode. Then I added water, trees, grass, flowers, plants, and rocks. That's good for natural scenery, but what about buildings? I haven't implemented procedural generation of buildings yet. In the meantime, I'll stick with importing 3D models in OBJ and 3DS formats.

Over the past few weeks, I've implemented a model instancing system that has allowed me to place a 2D array of - not just a few - but 10 thousand building models into a scene. I'm using the museum model from this post, which contains about 1.5M triangles across all materials. While only 100 or so museums are visible at any viewing location (limited by fog/distance), this is still a huge amount of data. 3DWorld's instancing system also includes sun/moon shadows, indirect sky lighting, and collision detection for the player. I'll discuss these features in more detail below.

These museum models are all placed on the mesh at the proper height/Z value using the terrain height values, in this case from a heightmap source texture. The mesh can be flattened under the buildings to make them level and remove any gaps. Buildings that are far away are obscured by fog and are not rendered.

Rendering / Level of Detail

Each museum model has nearly 1.5M triangles, so rendering all 10K of them would require 15 billion triangles. Clearly, that's no good for realtime. I needed to cut that down by three orders of magnitude (1000x) to something closer to 15M triangles. The obvious first step is to skip drawing of models that are far away. I already do distance culling for terrain, trees, etc. - that's what the distant fog is for. Also, View Frustum Culling (VFC) can be used to skip drawing models that aren't in the camera's view, for example buildings behind the camera. If I use the same visibility distance for buildings, and add VFC, this brings the number of models to draw down to only a hundred or so. Here is a screenshot of them, taken from a viewpoint near the corner of the array. I believe there are about 120 buildings visible.

Large 2D arrays of museum models in various orientations, with shadows and indirect lighting. Around 120 model instances are visible in this view.

Okay, that's 120 * 1.5M = 180M triangles. If I use 3DWorld to brute force draw these, it runs at around 12 Frames Per Second (FPS). Interactive, but not realtime. Now, the buildings have a lot of small objects inside them, and these objects can hardly be seen when the player camera is outside the building such as in the screenshot above. Can you actually see any dinosaur bones in the nearby buildings? Disabling these small objects when the player is a few hundred meters away from a building helps somewhat, and the frame rate increases to 19 FPS. This is definitely helpful, but doesn't quite reach my goal.

Why doesn't this help more? Well, the problem is all that brick texture you see on the buildings. Over half the total triangles are brick material, and it's all one large object with randomly distributed triangle sizes. Most of what you see are the large outer wall polygons. What you don't see from outside are all the tiny triangles from the interior columns, stairs, railings, walkways, etc. Here is an interior screenshot showing indirect lighting and shadows (more on those topics later). The model looks much more complex on the inside than on the outside. Look at all those bricks!

Interior of museum showing indirect lighting. Adjacent museum models are visible through the windows.

I decided to bin the triangles by area in a power-of-two histogram, using up to 10 bins per material. Each bin contains triangles that are on average half the surface area of the triangles in the previous bin. If only the first bin is drawn, this represents the largest 2% to 5% of triangles, which together account for 50% or so of the total surface area of that material. The min/max/average area values are stored for each material, along with the offsets for where each bin starts. The maximum visible bin can be determined based on projected pixel area, which varies as the square of the distance from the camera to the closest point on the model's bounding cube. The further the object is from the camera, the fewer bins need to be drawn.

If the player is inside a model, the distance is zero and all triangle bins are drawn. If the player is far from the model, only the first few bins are drawn, drastically reducing the number of triangles sent to the GPU. The bins with the largest triangle counts happen to be the ones with the smallest triangles, so even dropping the last bin or two can reduce triangle count by a factor of 2. This yields an overall 3-4x speedup, increasing frame rate from 19 FPS to 63 FPS for the view in the first image above. I think I was lucky with this model, because the outer walls don't have any small triangles in them that would produce holes when they're removed. The image with LOD turned on looks almost exactly like the image with LOD turned off. So similar, in fact, that I'm not even going to show both screenshots.
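
Here's a sketch of the binning and bin-selection math; the constants and names are illustrative:

    #include <algorithm>
    #include <cmath>

    int const NUM_BINS = 10; // up to 10 power-of-two area bins per material

    // Bin 0 holds the largest triangles; each following bin holds triangles of
    // roughly half the area of the previous one.
    int get_area_bin(float tri_area, float max_area) {
        int const bin(int(std::log2(max_area/std::max(tri_area, 1.0e-9f))));
        return std::min(std::max(bin, 0), NUM_BINS-1);
    }

    // Projected triangle area falls off as the square of the distance to the
    // model's bounding cube, so fewer bins survive as the model gets further away.
    int get_max_draw_bin(float max_area, float dist_to_bcube, float pix_area_thresh) {
        if (dist_to_bcube <= 0.0f) return NUM_BINS-1; // player inside: draw all bins
        float const proj(max_area/(pix_area_thresh*dist_to_bcube*dist_to_bcube));
        if (proj <= 1.0f) return 0; // only the largest-triangle bin is drawn
        return std::min(int(std::log2(proj)), NUM_BINS-1);
    }

The renderer then draws bins 0 through get_max_draw_bin() for each material using the stored per-bin offsets.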

The final rendering performance improvement is dynamic occlusion culling. I manually added occlusion cubes to the museum model that include the large rectangular walls. Yes, these walls have some windows in them, so the occlusion isn't entirely correct. I was able to exclude the large windows in the roof though. This makes such a big difference in performance that I enabled occlusion culling anyway, even if it's not entirely correct. Each frame, 3DWorld collects a list of the 5 closest visible models to use as occluders. All of the models are checked to see if they're completely enclosed in the projected volume of any of the occluders from the point of view of the camera. If so, drawing of that model is skipped. This optimization has the most effect when the player is at ground level in the middle of the buildings, where the row of nearby museums forms a wall that obscures the other rows of museums behind it. In this case, only a handful of museums are drawn, and frame rate is increased from 60 FPS to 150-250 FPS.

Here is a video showing the array of museum models from various viewpoints, both from the air and from the ground. There are almost no visible LOD artifacts while in the air. There are some artifacts due to occlusion culling when entering and exiting buildings, where the player crosses through a building's occlusion cube. Occluders are disabled when the player is inside them. I'll see if I can fix that somehow later.



This system works pretty well. I'm getting a good trade-off of performance and visual quality. But, I'm still lacking variety. I can't have a scene with the same one building placed over and over again. It's difficult to find high quality 3D building/architecture models for use in 3DWorld, and I don't have the time, tools, or experience to create them myself. Many of the free model files I can find online are poor quality, have missing meshes or textures, only represent one part/side of a building, are in a format that 3DWorld can't import (not OBJ or 3DS format), or have import errors/invalid data. I'll have to invest more time in searching for suitable model files in the future if I want to create a realistic city. However, the low-level rendering technology may be close to completion. That is, assuming there's enough memory for storing shadow maps and indirect lighting for each model. On to those topics next.

Shadows

I enabled standard shadow maps with a 9x9 tap percentage closer filter for 3D models in tiled terrain mode. Shadow map size is defined in the config file and currently set to 1024x1024 pixels. Models cast shadows on the terrain, trees, plants, grass, and water. They're rendered into the individual shadow maps of each terrain tile and cached for reuse across multiple frames. This is no different from how mesh tile and tree shadows work.

Models also cast shadows on themselves. Shadows from directional light sources only depend on light direction, so the shadow maps can be reused in all translated instances of a model that have the same orientation. My arrays of museum models use three different orientations (0, 90, and 180 degree rotations), so three shadow maps are needed.

These shadow maps only need to be regenerated when the light source (sun or moon) moves. They're shared, so updating them only requires rendering one museum model in a depth only pass for each orientation, which is quite cheap. This means that the light sources can move in realtime with only a small reduction in frame rate - for self shadows, anyway. Updating all of the tile shadows can be more expensive, especially for low light positions during sunrise and sunset. This is because the shadow of a single model can stretch far across the landscape, which requires drawing many models into the shadow map of each tile. Note that models out of the player's field of view can still cast shadows that fall within the field of view. For this reason, nearby models have to be loaded and ready to draw, even if they're behind the player.

However, a model can't currently cast a shadow on another nearby model. This breaks the translational invariance, and seems to require many more than three shadow maps. If the models were all on the same plane I could reuse the same shadow map for all interior instances, which are known to have neighbors in all directions. Unfortunately, the models are all placed at different Z height values (based on terrain height), so this approach can't generally be relied on. I'll try to find a workaround for this problem later. As long as the buildings aren't too close together, and the sun isn't too low in the sky, this shouldn't be much of a problem.

Indirect Lighting

I managed to apply my previous work on indirect lighting to tiled terrain models. In particular, I wanted to apply indirect sky lighting to instanced buildings. Indirect lighting is precomputed by casting rays from uniformly distributed points in the upper (+Z) hemisphere in random directions into the scene. A billion points are ray traced along their multiple-bounce reflection paths on all CPU cores using multiple threads. All ray paths are rasterized into a 3D grid that's then sampled on the GPU during lighting of each pixel fragment. The sampled texel contains the intensity and color of the indirect lighting. The resulting lighting information is also saved to disk and reused when the same building model is loaded later.

The nice property of sky light is that it comes from all directions, which means the lighting solution for an isolated model is independent of its position or orientation within the scene. All I needed to do was generate the indirect lighting solution for an isolated museum, and the same solution could be used for all instances. This assumes nearby buildings have little impact on the indirect illumination. It all depends on how close the buildings are to each other. I'm not sure how much influence the other buildings would have on the lighting because I have no easy way to show it. The scene doesn't look obviously wrong, so it must be acceptable to drop this term. Buildings are pretty bright when viewed from the outside, even when in shadow and close to other buildings. The interior lighting mostly comes from the skylights in the roof, which aren't affected by adjacent models.

One additional benefit of my lighting system is that it stores normalized reflection values, rather than absolute color and intensity. This means that the cached lighting values are multiplied by the sky color and intensity during rendering, which allows these values to be changed dynamically. The lighting solution can be reused for all times of day, even at night! Just swap the daytime blue sky color with a much lower intensity night time color. This also works for changes in weather such as cloud cover, where bright blue is replaced with a dim white on cloudy days.

Collision Detection

3DWorld supports simple, limited collision detection for tiled terrain mode that has been extended to models. It's a reduced version of the ray/sphere collision detection system used in ground and gameplay modes. Here it's only used for player collisions, since I haven't implemented any gameplay yet.

Each unique model stores its own Bounding Volume Hierarchy (BVH), which is used across all model instances. This serves as an acceleration structure for fast ray queries. When the player is within the bounding cube of a model, two lines/rays are fired from the player's center point.

One ray points down, in the -Z direction. This is the gravity ray. Gravity is enabled in "walking" mode but not in "flight" mode. The first polygon that this ray hits is the polygon the player is walking on, and is used to set the height (Z) value of the player. This test is what allows for walking up stairs and falling over ledges. I haven't implemented real falling, so walking over a ledge will just teleport the player to the bottom. There are some holes in the stairs of the museum model which can cause the player to fall through the floor. Oops! I'm not sure what I can do to fix this, other than inserting some other tiny model to fill in the gaps like I did in another scene. It's not like I can easily find and fix these polygons in a 56MB file filled with numbers.

The second line extends from the player position in the previous frame to the position in the current frame. This represents the distance the player has walked over the past frame, and is typically very short. If the movement is legal, the line won't intersect any polygons. But if the line does intersect, this means the player has run into a wall. The normal to the polygon is used to produce an updated player position that allows for sliding against a wall but not traveling through it. I haven't implemented anything more complex such as bumping your head on a low ceiling.
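
Putting the two queries together, the per-frame player update is roughly the following. line_intersect() stands in for the per-model BVH ray query and is not 3DWorld's actual interface:

    struct vec3 {float x, y, z;};

    // Returns true if the segment p1-p2 hits a model polygon; cpos/cnorm are the
    // contact point and surface normal (backed by the per-model BVH).
    bool line_intersect(vec3 const &p1, vec3 const &p2, vec3 &cpos, vec3 &cnorm);

    vec3 update_player_pos(vec3 const &prev, vec3 cur, float eye_height) {
        vec3 cpos, cnorm;
        // movement ray: if the path crossed a polygon, we hit a wall; slide along
        // it by removing the motion component along the polygon normal
        if (line_intersect(prev, cur, cpos, cnorm)) {
            vec3 const delta{cur.x - prev.x, cur.y - prev.y, cur.z - prev.z};
            float const d(delta.x*cnorm.x + delta.y*cnorm.y + delta.z*cnorm.z);
            cur.x -= d*cnorm.x; cur.y -= d*cnorm.y; cur.z -= d*cnorm.z;
        }
        // gravity ray: stand on the first polygon below the player's center
        vec3 const below{cur.x, cur.y, cur.z - 1000.0f};
        if (line_intersect(cur, below, cpos, cnorm)) {cur.z = cpos.z + eye_height;}
        return cur;
    }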

This simple collision system is enough to allow for exploring the buildings and terrain by walking. I'll have to find a way to extend this system to volume (cube/sphere) intersections if I want gameplay to work in the future.

Sunday, April 30, 2017

1000 Dynamic Lights

It's been over a month since my last post. For the past month, I've been working on a number of smaller unrelated topics, so I'll probably add a few shorter length posts in quick succession. Recent work has been on the subjects of large numbers of dynamic lights, the game mode freeze gun, instanced placement of complex 3D models in tiled terrain mode, and tiled terrain animated clouds.

The first topic is dynamic light sources. I've discussed my dynamic light implementation in 3DWorld before. Lights are managed on the CPU, then the visible lights are sent to the GPU each frame as a group of 1D and 2D textures. These textures encode the world space position of each light source grouped spatially, so that the GPU fragment shader can efficiently determine which lights affect which pixels. Every fragment (similar to a pixel) looks up its position in the lights list texture using the {x,y} position of the fragment. Then, the contribution of each light in the list corresponding to that grid entry is added to the fragment. Typical texture/grid size is 128x128.
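
On the CPU side, this amounts to inserting each visible light's index into every grid cell that its radius of influence overlaps. Here's a sketch of the binning; the actual texture encoding is more involved:

    #include <algorithm>
    #include <vector>

    int const GRID_SZ = 128; // typical lookup grid/texture size

    struct light_t {float x, y, radius;}; // world-space position and influence radius

    // grid[y*GRID_SZ + x] receives the indices of all lights overlapping that cell.
    void bin_lights(std::vector<light_t> const &lights, float scene_x1, float scene_y1,
                    float scene_sz, std::vector<std::vector<unsigned>> &grid) {
        grid.assign(GRID_SZ*GRID_SZ, std::vector<unsigned>());
        float const cell_sz(scene_sz/GRID_SZ);
        for (unsigned i = 0; i < lights.size(); ++i) {
            light_t const &L(lights[i]);
            int const x1(std::max(0,         int((L.x - L.radius - scene_x1)/cell_sz)));
            int const x2(std::min(GRID_SZ-1, int((L.x + L.radius - scene_x1)/cell_sz)));
            int const y1(std::max(0,         int((L.y - L.radius - scene_y1)/cell_sz)));
            int const y2(std::min(GRID_SZ-1, int((L.y + L.radius - scene_y1)/cell_sz)));
            for (int y = y1; y <= y2; ++y) { // add this light to all overlapped cells
                for (int x = x1; x <= x2; ++x) {grid[y*GRID_SZ + x].push_back(i);}
            }
        }
    }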

This has been working well for a long time. I've been experimenting with 100 dynamic light sources in scenes and have posted screenshots of these in my planar reflection posts such as here and here. I recently experimented to see how many lights I could add to a scene while achieving realtime (60+ FPS) rendering. After various low-level optimizations on the CPU side, I managed to get up to 1000 lights for many scenes. Here are some screenshots of the office building scene with dynamic light sources at night.

Office building scene viewed from outside, at night, with 1000 dynamic, floating, colored light sources.

Back of the office building where the only lighting comes from 1000 dynamic light sources, at 67 FPS.

Keep in mind that all of these glowing spheres are moving, colliding with the scene, and casting light. In addition, they cast shadows from the sun and moon. However, I've created these images on a moonless night, so that there are no other light sources. All lighting comes from dynamic lights. I normally get around 60 FPS (frames per second) with reflections enabled and 90 FPS without.

Here is a view from inside the building lobby. There are a few dozen lights visible in this room alone. I've enabled reflections for the floor so that the glowing sphere reflections are visible.

A room densely filled with light sources, which also reflect off the floor.

Here is a screenshot from a larger room that contains around 100 light sources. The building is huge! This is just one of 4 floors (+ basement), all full of lights.

Another room containing over 100 light sources, with floor reflections.

The max number of lights is actually 1024, because I encode light indices as 10-bit values within the lookup textures. I used to run into problems with this limit when there were a large number of reflected laser beams in the scene during gameplay. I had previously implemented laser beam lighting as a series of small point light sources along the beam path, which quickly adds up when multiple beams cross an entire floor of the building; each beam segment could require over a hundred point lights. Now I'm using analytical line light sources for laser beams, which are rasterized directly into the lighting lookup texture. One beam segment is only one light source.
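The 1024 limit falls directly out of the encoding, since 2^10 = 1024. For illustration, here is one way to pack three 10-bit light indices into a single 32-bit texel (the exact packing 3DWorld uses may differ):

    #include <cstdint>
    #include <cstdio>

    // Pack three 10-bit light indices into one 32-bit value (as in an
    // RGB10_A2 texture format); each index must fit in 10 bits, which is
    // where the 1024-light limit comes from.
    uint32_t pack_light_indices(unsigned i0, unsigned i1, unsigned i2) {
        return ((i0 & 1023u) | ((i1 & 1023u) << 10) | ((i2 & 1023u) << 20));
    }

    unsigned unpack_light_index(uint32_t packed, unsigned slot) { // slot in [0,2]
        return ((packed >> (10u*slot)) & 1023u);
    }

    int main() {
        uint32_t const p = pack_light_indices(5, 900, 1023);
        printf("%u %u %u\n", unpack_light_index(p, 0), unpack_light_index(p, 1), unpack_light_index(p, 2));
        return 0; // prints: 5 900 1023
    }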

I also increased the number of lights in the Sponza atrium scene from 100 to 200. This scene is much smaller (~8x), so lights with the same radius are packed together more closely. Each pixel has contributions from 10-20 light sources. 1000 lights is possible, but overkill: the frame rate drops, and the walls wash out to a very bright white.

Sponza atrium scene with 200 dynamic colored light sources and floor reflections.

These images are also taken at night, when the glowing spheres and fires are the only light sources. Here is the same scene from a different view on the lower level, with smooth reflective floors enabled. These light sources don't cast shadows, so they shine through walls.

Sponza scene lower level, with floating lights and reflective floor. The fire and burning embers also cast light.

Note that the fire on the right of the image also emits dynamic light, with approximate shadows. In addition, it throws out glowing embers, which also emit light. You can see three of these on the floor, reflecting their red glow on the smooth marble.

Here are two older screenshots of the Sponza scene with fewer lights.

Sponza scene at night, lit by only dynamic lights.

Sponza scene at night, with reflections on the floor.

I've also implemented dynamic cube map shadows for moving light sources. This is limited to a small number of lights, currently only 9 (each light needs a cube map of six depth textures, which approaches the limit of 64 textures). The limit is determined by shader size and uniform variable count. I could increase the texture array size for my new system/GPU, but I also want this to work properly on older systems. For now, I have added a reasonable cap on shadowed lights.
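Rendering one of these shadow maps means drawing the scene's depth six times, once per cube face, each with a 90 degree field of view. A sketch of the face setup (hypothetical engine calls; the up vectors follow the standard OpenGL cube map convention):

    #include <cstdio>

    struct vec3 {float x, y, z;};
    struct cube_face {vec3 dir, up;};

    // the six axis-aligned view directions of a cube map, with matching up vectors
    cube_face const cube_faces[6] = {
        {{ 1,0,0},{0,-1,0}}, {{-1,0,0},{0,-1,0}}, // +X, -X
        {{0, 1,0},{0,0, 1}}, {{0,-1,0},{0,0,-1}}, // +Y, -Y
        {{0,0, 1},{0,-1,0}}, {{0,0,-1},{0,-1,0}}  // +Z, -Z
    };

    // stub for the real depth pass, which would bind a depth texture and draw the scene
    void render_depth_pass(vec3 const &pos, vec3 const &dir, vec3 const &up) {
        printf("depth pass from (%g,%g,%g) facing (%g,%g,%g)\n", pos.x, pos.y, pos.z, dir.x, dir.y, dir.z);
    }

    void render_shadow_cube_map(vec3 const &light_pos) {
        for (unsigned face = 0; face < 6; ++face) { // 90 degree FOV frustum per face
            render_depth_pass(light_pos, cube_faces[face].dir, cube_faces[face].up);
        }
    }

    int main() {
        render_shadow_cube_map({1.0f, 2.0f, 3.0f});
        return 0;
    }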

Here is a screenshot showing two shadow casting lights near the stairs in the office building. If you look closely, you can see some shadows cast by the red light on the stairs and support columns.

Two shadow casting, moving lights. Shadows can be seen on the stairs for the red light on the right.

The approach I'm using for local lighting is similar to a standard tiled forward approach, also known as Forward+. Some reference articles and examples can be found here, here, and here. This is an alternative to deferred shading. 3DWorld was originally written using a forward rendering pipeline because I wanted to support a lot of transparency effects. Also, deferred shading wasn't very common back when I started writing 3DWorld in 2001, and I've never gotten around to implementing it. The tiled forward approach works well in my engine, so I've continued to use it.

However, I'm doing it in a nonstandard way. I came up with my system years ago, before deferred shading and tiled forward rendering were well known; it was one of the earliest applications of GPU shaders to replace the fixed function pipeline in 3DWorld. Instead of using screen space tiles, I use world space tiles in the XY plane. This is the floorplan or map view of the building; I use up=Z in 3DWorld. In the past few weeks, I attempted to implement screen space lighting tiles in the standard way. I thought it might yield better performance, but my experiments showed that this was not the case. Why is that? Part of the problem is that it takes more work to transform world space light positions into screen space, but that's a minor issue; maybe I haven't optimized it properly. But there's more to it.

3DWorld's "ground mode" is intended to be a first person shooter environment. The player is usually standing on the ground/floor, unless they have the flight powerup or are in spectate mode. This means that the view direction is often in the XY plane. A common case is to be looking down a hallway that has lights on the sides. Using my world space light tiles, these lights will be in their own texture tiles. The subdivision system effectively splits them up so that pixels along the view direction (along the hallway) will see different tiles at different distance/depth. On the other hand, if screen space tiles are used, the rows of lights will all stack up behind each other at different depths, which means that many of them will contribute to the same pixel. Screen space tiles appear to be slower in this situation.

Maybe I'm doing something wrong here? Or maybe my system just works better for my type of scenes? I suspect that if the lights are small and dense enough, the screen space system wins. For example, if 100 lights all fit within a small area of a single XY tile in 3DWorld, they would all contribute to each pixel, and the tile subdivision is ineffective. With screen space tiles, there would still be subdivision. I guess this case doesn't come up much in 3DWorld because there are no sources of small, dense lights. Explosion particles and shrapnel, the smallest light casting objects, use a system that groups multiple smaller lights into a single larger virtual light to work around this problem, as shown in the sketch below. World space tiles sized on the order of a typical light source's radius work well in most cases.
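The grouping idea is simple: replace a cluster of small lights with one virtual light at their centroid, with a radius that bounds the whole cluster. Here is a sketch using a plain centroid plus bounding-radius merge (hypothetical names; 3DWorld's actual heuristic may differ):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct light_t {float x, y, z, radius, intensity;};

    // Merge a cluster of small lights into one larger virtual light.
    light_t merge_lights(std::vector<light_t> const &lights) {
        light_t m = {0,0,0,0,0};
        if (lights.empty()) return m;
        for (light_t const &L : lights) { // centroid of the group, summed intensity
            m.x += L.x; m.y += L.y; m.z += L.z; m.intensity += L.intensity;
        }
        float const inv_n = 1.0f/lights.size();
        m.x *= inv_n; m.y *= inv_n; m.z *= inv_n;
        for (light_t const &L : lights) { // radius that bounds every source light
            float const d = std::sqrt((L.x-m.x)*(L.x-m.x) + (L.y-m.y)*(L.y-m.y) + (L.z-m.z)*(L.z-m.z));
            m.radius = std::max(m.radius, d + L.radius);
        }
        return m; // one grid entry instead of lights.size() entries
    }

    int main() {
        std::vector<light_t> const shrapnel = {{0,0,0, 0.1f, 1.0f}, {1,0,0, 0.1f, 1.0f}, {0,1,0, 0.1f, 1.0f}};
        light_t const m = merge_lights(shrapnel);
        printf("merged: center (%.2f,%.2f,%.2f) radius %.2f\n", m.x, m.y, m.z, m.radius);
        return 0;
    }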

Some tiled forward systems use 3D tiles (screen X, screen Y, and depth/Z). [I've read about this before, but I can't seem to find the original article.] That would give better resolution along the hallway, fixing this issue. It would seem to guarantee that no matter what the view vector and light placement are, the tiles always split up light sources that are far apart in world space. The only pathologically bad case is where many light sources are close enough together that their spheres of influence overlap. Of course, I could add a 3D texture/array of world space tiles to my system as well. I have experimented with that before, but it turns out the performance is worse: 3D texture lookups are slower than 2D texture lookups, the math is more complex (and slower), and I need more total tiles, which increases memory usage and CPU => GPU transfer costs.
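For reference, a common way to index such 3D tiles (as in clustered shading) is screen tiles plus exponential depth slices, which give more resolution near the camera. A rough sketch with made-up tile counts and names:

    #include <cmath>
    #include <cstdio>

    int const TILES_X = 16, TILES_Y = 16, SLICES_Z = 24; // hypothetical cluster dimensions

    // Map normalized screen coordinates sx, sy in [0,1) plus a view space
    // depth in [znear, zfar] to a 3D cluster index.
    int get_cluster_index(float sx, float sy, float depth, float znear, float zfar) {
        int const tx = (int)(sx*TILES_X);
        int const ty = (int)(sy*TILES_Y);
        // exponential slicing: equal-ratio depth ranges per slice
        int tz = (int)(SLICES_Z*std::log(depth/znear)/std::log(zfar/znear));
        tz = (tz < 0) ? 0 : ((tz >= SLICES_Z) ? SLICES_Z-1 : tz);
        return ((tz*TILES_Y + ty)*TILES_X + tx);
    }

    int main() {
        // example: a fragment at screen center, 10 units away, with znear=0.1 and zfar=100
        printf("cluster index: %d\n", get_cluster_index(0.5f, 0.5f, 10.0f, 0.1f, 100.0f));
        return 0;
    }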

As far as I can tell, the dynamic lighting system I've implemented in 3DWorld works well. It scales to 1000 lights in a large scene with no problems. If I ever need more than 1024 lights, the system will need to change. For now, this seems to be sufficient for even my most demanding scenes.