
Tuesday, January 17, 2017

When Mountains Move

There is an aspect of procedural generation I do not see discussed often. What happens when you have layers of procedural content, add hand-made content on top of it, and then go on to improve the procedural generation?

Last week we released a version of our tools that makes it really simple to create terrain biomes. You can see it in action here:

This system can produce fairly good-looking terrain with just a few clicks. However, there are still some aspects of it I wanted to improve.

One key issue you can see in this video is that it tends to produce straight lines. There is still some "diamond" symmetry we need to address in the core algorithm. The image below shows a really bad case, where the produced heightmap has parallel features forming an almost perfect square:

On the left you can see the generated heightmap; the right panel shows a render of the same heightmap.

This did not take long to fix. Instead of running the generation algorithm on a grid made of squares, switching to irregular quadrilaterals is enough to break most of the parallel lines. The following image shows the results after the fix:
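For readers curious what breaking the symmetry can look like in practice, here is a minimal sketch of the idea, not our actual implementation: instead of sampling on a regular square grid, each grid vertex gets a small deterministic offset, turning the squares into irregular quadrilaterals. The hash function and the offset amplitude below are placeholders:

```python
import math

def jitter(ix, iy, amplitude=0.35):
    """Deterministic pseudo-random offset for grid vertex (ix, iy).
    Hypothetical integer hash; the real algorithm is not described here."""
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    r = (h & 0xFFFF) / 0xFFFF          # uniform-ish value in 0..1
    angle = 2.0 * math.pi * r
    return (amplitude * math.cos(angle), amplitude * math.sin(angle))

def warped_vertex(ix, iy):
    """Vertex of the irregular quadrilateral grid: integer lattice
    position plus a bounded jitter, so cells never fold over."""
    dx, dy = jitter(ix, iy)
    return (ix + dx, iy + dy)
```

Because the offset is a pure function of the vertex coordinates, the warped grid is stable across chunk borders, which matters when neighboring chunks are generated independently.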

A simple fix or improvement like this leaves us with bigger questions: How do we make this backward compatible? Do we make it backward compatible at all, or just apologize and ask human creators to relocate their data?

We could provide an option to turn this improvement off. That would be the most diplomatic approach. But then what about the other improvements we have planned for the future? Do they all get their own setting? At this point, it seems we would be complicating the UI with many options that are meaningless to new users. These would be just switches to make algorithms behave in more primitive forms.

Whenever algorithms create content alongside humans, it becomes a muddy, gray area for me. I still haven't figured this one out.

Wednesday, November 23, 2016

Worm Day

I was testing a new version of the Alien Biome made by Bohan, one of our in-house artists. The biome was looking good. I was making sure the new material rendering in UE4 worked as expected. (In case you have not seen it, here is a video showing just that.)

While I roamed around this terrain, I wondered how easily a non-artist like me could extend the scenery. I decided to test what I could achieve in just one day.

I figured it would be nice to have some massive point of interest. Some sort of giant worm, maybe. I loved the Dune treatment by Jodorowsky and Giger. They collaborated mostly on House Harkonnen. No worms there, but the scale and weight of the concepts were striking. It would be fun to explore a really large formation like the ones they had.

I did not want to create any new textures, materials or props for this. I would be using whatever was already there in this biome. For this reason, I chose to make the giant worm look more like a fossil. Minerals on this planet had taken over every cell of the worm, leaving something that up close looks pretty much like terrain.

The first step was to create a simple model for the worm. I did this using traditional modeling in 3ds Max:

This is just a series of primitives arranged in a worm-like fashion, with some worm-tusks sticking out.

The next step was to paint "meta-materials" on top of this. To make it simpler to identify where each meta-material would go, I picked distinct colors to represent each one of them:

In a normal production project, you would have a greater variety of meta-materials. In this case, however, I was really pushing for the minimum amount of work necessary.

Once I had the textured worm, I imported it as a meta-mesh in Voxel Studio. I made sure it was huge (3 km long) and that it fit nicely into the existing terrain:

I also had to set up some connections between the existing materials and the meta-materials I had painted on top of the model. I chose to use only two meta-materials: one for the tusks and another one for everything else. In this procedural component, each meta-material is subdivided into final materials using artist-created maps. I hoped just two meta-materials would be enough to make it interesting.
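To give an idea of how such a mapping might work, here is a tiny hypothetical sketch (none of these names come from Voxel Farm's API): each painted meta-material owns a list of final materials, and a value sampled from an artist-created map picks one of them:

```python
def resolve_material(meta_material, mask_value, tables):
    """Map a painted meta-material to a final terrain material.
    tables: meta-material name -> ordered list of final materials.
    mask_value: 0..1 sample from an artist-created map; it selects
    an entry from the list. All names here are hypothetical."""
    options = tables[meta_material]
    index = min(int(mask_value * len(options)), len(options) - 1)
    return options[index]

# Illustrative tables for the fossil worm: tusks and "everything else".
worm_tables = {
    "tusk": ["bone", "ivory"],
    "body": ["rock", "sediment", "crystal"],
}
```

With a setup like this, the artist controls the final look purely through the maps, while the painted meta-materials stay coarse and cheap to author.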

This part of the setup was quite simple:

After Voxel Studio processed the new mesh and meta-materials, I got to see the worm for the first time:

I was then able to export into Unreal Engine 4 and see how this looked in the final environment. 

I captured this short video so you can have a better idea of the results so far:

While the model and material assignment could use more detail, the sense of scale and weight was already close to what I was looking for. 

I clearly do not have Giger's talent (or Jodorowsky's hallucinogenics supply) so I was curious about how far I could go by myself, and how much the procedural systems we have built could help me. I think it turned out to be OK, considering I only spent around five hours in total creating this feature.

As usual, I look forward to your opinions and comments.

Tuesday, November 22, 2016

Improved LOD

This is a continuation of an earlier post.

In the past, every Voxel Farm scene could be described by this diagram:

This is a 2D representation of how the entire scene is segmented into multiple chunks. Each chunk may cover a different area (or volume in 3D), but the amount of information in each chunk is roughly the same, regardless of its size.

Thanks to this trick we can use bigger chunks to cover more distant sections of the scene. Since the chunk is far away, it will appear smaller on screen so we can get away with a much lower information density for it.

Until recently, the only criterion we used to decide chunk sizes was the chunk's distance from the viewer, which is the red dot in the image. For some voxel content types, like terrain, we could afford to increase the chunk size quickly with distance from the viewer. The resulting lower-density terrain would still look alright.
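As a rough sketch of this distance-based criterion (the constants are illustrative, not our production values): if chunks double in size with each LOD level, the level can be derived from the logarithm of the viewer distance:

```python
import math

def chunk_lod(distance, base_size=32.0, max_level=8):
    """Pick a chunk LOD level from viewer distance.
    Chunks double in size with each level, so information density
    halves per axis. base_size and max_level are illustrative."""
    if distance <= base_size:
        return 0  # nearest chunks use the finest resolution
    level = int(math.floor(math.log2(distance / base_size)))
    return min(level, max_level)
```

Because chunk size grows geometrically with distance, the total number of chunks in view stays roughly constant no matter how far the horizon is, which is what makes very large scenes affordable.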

Some other types of content, however, required more detail. Imagine there is a tower a few kilometers away from the viewer:

The resolution assigned to the chunk containing this tower is simply too low to capture the detail we want for the tower. We could increase the resolution of all chunks equally, and this would bring the tower into greater detail, but it would be very expensive because many chunks containing just terrain would have their density bumped up as well.

The solution is simple. Imagine that while we build this world, we can compute an error metric for each larger chunk based on the eight (or four in 2D) children chunks it contains. For terrain-only chunks this error would always be zero. For the chunks containing the artist-built tower, this error could be just a counter of how many voxels in the larger chunk failed to capture the detail in the voxels from the smaller children chunks.
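Here is a minimal sketch of such an error counter, assuming binary occupancy grids and a majority-vote downsampling rule; the actual rule in our pipeline may differ:

```python
def chunk_error(parent, children, n):
    """Count parent voxels that fail to capture their children's detail.
    parent: n*n*n occupancy grid (nested lists of 0/1) for the large chunk.
    children: 2n*2n*2n grid combining the eight child chunks.
    A parent voxel 'fails' when it disagrees with the majority of the
    2x2x2 child voxels it covers. Majority-vote is an assumption; the
    error is simply a counter of failed voxels."""
    errors = 0
    for x in range(n):
        for y in range(n):
            for z in range(n):
                filled = sum(
                    children[2 * x + i][2 * y + j][2 * z + k]
                    for i in (0, 1) for j in (0, 1) for k in (0, 1)
                )
                majority = 1 if filled >= 4 else 0
                if parent[x][y][z] != majority:
                    errors += 1
    return errors
```

Smooth terrain downsamples cleanly under a rule like this, so terrain-only chunks naturally score zero, while thin artist-built features like the tower walls rack up errors.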

Starting from the distance-based configuration we can do another round of chunk refinement. Each chunk having an error that we consider too high will be subdivided. We can keep doing this until the errors are below the allowed threshold and the overall scene complexity remains within bounds.
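This refinement round could be sketched as a priority queue over chunk errors: repeatedly split the worst chunk until every error is under the threshold or a chunk budget is reached. All names below are illustrative:

```python
import heapq

def refine(chunks, error_of, subdivide, threshold, budget):
    """Refine a distance-based chunk set by error.
    error_of(chunk) -> numeric error for a chunk.
    subdivide(chunk) -> list of child chunks.
    Chunks whose error exceeds `threshold` are split, worst first,
    until all errors are acceptable or `budget` chunks exist."""
    heap = [(-error_of(c), i, c) for i, c in enumerate(chunks)]
    heapq.heapify(heap)
    counter = len(chunks)   # tie-breaker so tuples never compare chunks
    result = []
    while heap:
        neg_err, _, c = heapq.heappop(heap)
        if -neg_err <= threshold or len(result) + len(heap) >= budget:
            result.append(c)  # good enough, or out of budget: keep as-is
            continue
        for child in subdivide(c):
            counter += 1
            heapq.heappush(heap, (-error_of(child), counter, child))
    return result
```

Processing worst-first means that when the budget runs out, the remaining error is spread over the least offensive chunks, keeping overall scene complexity within bounds.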

This gives us a new scene setup:

As a result of the additional subdivision, we now use higher-density chunks to represent the tower. Since we know how distant these are, we could even pick a chunk size that shows no degradation at all, as all errors become sub-pixel on screen.

The following video shows more about this technique:

In the last part of the video, you can see how the terrain LOD changes while the tower remains crisp all the time. If you would also like to minimize terrain LOD changes, this technique can give you that as well:

Just like we can focus the scene more on fine architectural details, we can "unfocus" sections we know will be fine with lower information density.

There is still one issue this technique does not address. When we are bumping up the level of detail of a distant castle, this may also bring a lot of information we do not necessarily want, like walls and spaces that may be inside the castle.

We found a very elegant way to deal with this. This is what enabled the very complex Landmark builds from the earlier post to display in high detail and still run at interactive rates. How we did it will be the topic of the final post on the LOD series. 

Monday, October 17, 2016

Slaying the LOD monster

LOD, or Levels of Detail, is a technique for managing the very large and dense scenes we have today in open world games. The general idea is that you keep multiple copies of the same content at different resolutions, then switch which copy to use depending on how much detail the scene requires.

LOD switches can be implemented as plain swaps, morphing between states as shown in this earlier post, or even as completely adaptive mesh representations, as shown here.

Sometimes we get fixated on LOD swaps and drift into devising tricks to make them harder to see. I feel this is not the right angle on the problem. While LOD swapping techniques are very important, and progressive swaps are much nicer, etc., the real goal is to have as much detail as possible in the first place. If you find yourself masking LOD swaps, it is probably because your scenes still include levels of detail that are too coarse.

How much detail do you need? Just enough to make LOD transitions close to being sub-pixel. Once LOD swaps are sub-pixel, it does not matter what swapping technique you use. It also means your rendering of the content is as detailed as the screen resolution will allow.
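To make "sub-pixel" concrete: under a standard perspective projection, a world-space error shrinks on screen in proportion to distance. A small sketch, with illustrative screen and field-of-view values:

```python
import math

def projected_error_px(geometric_error, distance,
                       fov_y_deg=60.0, screen_h=1080):
    """Screen-space size, in pixels, of a world-space error seen at a
    given distance under a standard perspective projection.
    fov_y_deg and screen_h are illustrative defaults."""
    half_fov = math.radians(fov_y_deg) / 2.0
    return geometric_error * screen_h / (2.0 * distance * math.tan(half_fov))

def lod_is_safe(geometric_error, distance):
    """A LOD swap is effectively invisible once its geometric error
    projects below one pixel on screen."""
    return projected_error_px(geometric_error, distance) < 1.0
```

For example, a 10 cm error viewed from a kilometer away projects well under a pixel at 1080p, so any swapping technique would do; a 2 m error at 100 m does not, and no amount of cross-fading will fully hide it.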

This sounds like a Holy Grail. Is it feasible with the current hardware generation? We believe it is. Take a look at the following video:

As you can see in the video, buildings preserve their detail, even when viewed from afar. LOD changes, while they do happen, are virtually impossible to detect. All this while keeping interactive frame-rates.

It turns out the same techniques that prevent buildings in the distance from looking like melted crayons also give you smooth LOD transitions. I will be covering how all this works in my next post.

Thursday, September 8, 2016

Introducing Farm Cloud

We have included a preview of our collaborative editing system in the latest Voxel Farm 3 release.

The following short video shows what it takes to set up and share a project from scratch:

We are still working on this system, but hopefully you get the idea. Our focus is to make the setup as simple as possible so people can start collaborating without significant effort.

This is also intended for more than collaboration and source control. This same layer of distributed persistence can be used by any application to store and sync the changes their users make.

This is a very active area of development for us. I will be posting more about this in the future. Meanwhile, as always, I look forward to any questions you may have about this system.