Internal development-related posts.

We usually work with C++ at Anticto, like most people doing video game technology. C++ is a great language with more tools and libraries than any other. OK, I cannot prove that, but you get what I mean. C++ is a systems language with pretty low-level access. As such, it won’t stop you from causing all sorts of crashes. The most common of them are usually related to memory management, and this includes bugs that may only happen sometimes, on some computers, or even in some compiler configurations. They may be hard to catch and solve, but luckily, there is Valgrind.

[Image: the Valgrind logo. Image rights: Valgrind Dev Team]

For most people Valgrind is a memory debugger, but it is actually a framework to develop tools to run together with a program. The best known of them is indeed the memory debugger, but there are several that we use:

  • Memcheck: a memory access error and leak detector
  • Massif: a memory profiler for both heap and stack
  • Cachegrind: a low level performance profiler

Additionally, we use KCachegrind, massif-visualizer, and Qt Creator’s Memcheck integration to get more visual feedback from Valgrind results. We also use Valgrind in our internal Continuous Integration system to find problems before they reach production versions.
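As a minimal illustration of what Memcheck catches (the file name and function below are invented for the example), consider a function that leaks a heap allocation:

```cpp
#include <cstddef>

// Deliberately buggy: callers receive a heap buffer that is never freed
// anywhere in the program, which is exactly what Memcheck's leak check
// is designed to catch.
int* make_buffer(std::size_t n)
{
    int* p = new int[n];
    p[0] = 42; // initialize the first element only; reading p[1..n-1]
               // would trigger Memcheck "uninitialised value" errors too
    return p;
}

// Build with debug info and run under Memcheck, e.g.:
//   g++ -g -O0 leak_demo.cpp -o leak_demo
//   valgrind --leak-check=full ./leak_demo
// The allocation above is then reported as "definitely lost", with the
// full stack trace of the allocation site.
```

Compiling with `-g` matters: without debug info, Valgrind can only show addresses instead of file and line numbers.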

If you are a Windows-only developer, I am sorry for you. Also, you won’t be able to use Valgrind, due to missing OS-level functionality. We use it basically under Linux, but in theory it also supports macOS and Android on several architectures.

After using Valgrind’s memory checker in many projects, I wouldn’t dare to release any software to the public without “valgrinding” it first. It is a humbling experience, and sometimes when you look at the results it makes you think Valgrind is not working properly, but in the end… Valgrind Is Always Right.

The other sort of program defect that is as common as memory errors is the concurrency issue. Valgrind has tools to detect some of them, but we haven’t used them yet. That’s a big TODO on our list.

For performance, Valgrind is OK, but we often resort to manual code instrumentation and platform-specific tools, which are usually great on consoles. Also, Intel and AMD provide some excellent tools like VTune or uProf. For graphics, there are excellent tools as well, but that is an entirely different subject for another time.

In the first of a series of posts related to the tools we use daily at Anticto, we talk about FastBuild: it’s fast, and it builds.

FastBuild (mainly by Franta Fulin) is a great build system that we use to build our tools and in-house technology. Our use case involves supporting C++ on 8 different platforms, and several compiler toolchains and standard libraries, to produce final programs as well as static libraries that we then reuse in other build environments.

In the past I have worked with several build systems, including make, cmake, autotools, scons and waf, in addition to the integrated tools in commercial IDEs, and they all have their own strengths and weaknesses.

Our priorities are:

  • Free software, as in “libre”.
  • Full control: no blackbox/black-magic under-the-hood processing that we cannot understand or modify if needed.
  • It has to be reliable: nothing is worse than finding a bug that is actually caused by the build tool using outdated objects by mistake.
  • It has to be fast in a single workstation.
  • It has to support distributed builds in case we need them. (We don’t, yet).
  • It has to be multiplatform.
  • It has to be self-contained: no system-wide installations, please. And if it is small and has no dependencies like Python, so much the better.
  • It has to be actively developed, and documented.

FastBuild does all of that. The only small drawback is that, well, you have to learn it, but this applies to all build systems. In the past I had a pet project (that didn’t go anywhere) of a C++-based build system where the build files were written in C++ with a thin API. It only required a C++ compiler to work. It was a little bit convoluted, and not very fast, since it required generating a dynamic library every time you changed the build files.

FastBuild is low-level enough to let you precisely control all the build steps and toolchain options. It is extremely fast, and it is getting better every day. In the past we used waf, which is also great, but it required a full Python installation, and modifying it (which we sometimes needed) was outside our comfort zone. FastBuild is lower level, so it fits our needs better.
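As a taste of that low-level control, FastBuild is configured with .bff files where you declare the toolchain and targets explicitly. A minimal sketch (all paths and names here are invented; a real configuration for multiple platforms is much longer):

```
// fbuild.bff -- minimal sketch, not a complete configuration
.Compiler        = '/usr/bin/g++'
.CompilerOptions = '-c "%1" -o "%2"'   // %1 = input file, %2 = output file
.Linker          = '/usr/bin/g++'
.LinkerOptions   = '"%1" -o "%2"'

// Compile every source file found under Source/ into Build/
ObjectList( 'Core-Objects' )
{
    .CompilerInputPath  = 'Source/'
    .CompilerOutputPath = 'Build/'
}

// Link the objects into the final program
Executable( 'Game' )
{
    .Libraries    = { 'Core-Objects' }
    .LinkerOutput = 'Build/game'
}

Alias( 'all' ) { .Targets = { 'Game' } }
```

Nothing happens behind your back: every compiler flag and every build step is spelled out in the file, which is exactly the kind of control we wanted.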

We are very happy with FastBuild, so if your requirements more or less match ours, give it a go!



This is the first of a series of posts discussing the problem of character customization in computer games. I am writing this with several goals:

  • to put in order all the thoughts I have been piling up during the design and implementation of 4 character customization systems for commercial games.
  • to try to help anyone facing the development of such a system, by providing some analysis of its requirements.
  • to explain and justify the design of the Mutable middleware that we have created at Anticto. So… yeah, for self-promotion too.

Please note that while I use “character customization system” all the time, these systems may be used to configure people, cars, butterflies or zorks (whatever they are) depending on the game context.

The target audience is mainly technology programmers, but it may be useful to technical 3D artists as well.

About me and the previous systems

I have been developing graphics technology for games for about 10 years now. That wouldn’t be enough to make me an expert in anything, except for the fact that I have had to develop 4 different character customization systems from scratch during those years.


The first one was developed for a game called “One: Become a Legend“, which became history pretty fast together with its target platform: the Nokia N-Gage. Despite being developed for a non-smartphone with an integer-only ARM processor, the game managed to offer 3D characters with motion-capture animation, normal mapping, and directional+environment lighting, which was pretty cool. The system offered swappable parts with several color layers for every part, and fixed-location decals to add logos to t-shirts, etc. All the characters in the game were designed with the system.



The second one I participated in is the one in “All Points Bulletin” (APB). I only worked on it for a year, but long enough to complete pre-production and start production. The game used Unreal Engine 3, which was heavily under development at that time and didn’t include enough features to implement the ambitious system that the company’s designers wanted. I think it is one of the best character customization systems ever developed for games, and most of the merit should go to the team that stayed there for years developing it. Some years have passed, but you can still see it in action in the re-spawned “APB Reloaded” game. It is a pity that the game didn’t succeed in its initial release, because the plans to grow it over time were promising.



After some years, I worked at Blueside Inc. on the game “Kingdom Under Fire 2” and developed the first iteration of its character system. The game is not out yet, so I cannot really explain much about it. However, I can say this much: this game has the potential to be awesome, so take my advice and keep an eye on it.


Finally, I decided to take some time and develop “the ultimate” character system, and the result is Mutable: what we are offering here at Anticto. I have used this system to develop all the ideas that I had had to discard from other systems due to time and budget constraints.

What is a character customization system?

If you are reading this, you probably don’t need an explanation of what a character customization system is, but maybe a slightly deeper analysis can be useful.

A character customization system is used to let players create their own avatars in games. This usually includes an artist-driven set of modifiers applied to 3D meshes and textures, with some parameters offered to the players. These parameters usually control the general shape of the character, and details like facial features, skin colors, hairstyles, clothing and equipment.

How far you want to go with this is a matter of game and art design. You can go as far as Second Life and let your players import their own assets into the game, or you can keep tight control of the aesthetics by restricting the color palettes and mesh combinations. With more freedom comes more responsibility, and your players probably won’t care about what your artists envisioned for your game: they will try to make the ugliest possible characters, and if they have enough freedom in the decal design, etc., you will have to hire a crew of censors to avoid offensive (and even illegal) designs.

The character system affects two branches of the game development pipeline: asset production and engine development. You need to prepare the assets for your system, so until the design and features are closed and verified, the art production may be subject to changes, which your artists won’t like. But we will focus on the technical side in these posts.

Use cases

A character customization system may need to build characters in several scenarios of a finished game:

  • Loading time: When you are starting the game or entering a new level. In this scenario, you have most resources available to build the characters, including the GPU if necessary, since there isn’t any heavy real-time action going on. You want to build the most optimized version of your in-game data, and you can take some time to do it.

  • Customization lobby: With this, I refer to the scenes where the user is changing parameters in real-time to configure their avatar (or any content). In this scenario you usually have a lot of resources available, since the action is focused on the character being customized. You are still rendering a scene in real-time, and you want the system to reflect updates to the 3D model as fast as possible. Often, you want to use assets with higher quality than the ones used in-game, and you can afford denser meshes, uncompressed textures, and more rendering calls. This means that the generated model doesn’t need to be fully optimized.

  • In-game: This happens when the player is in the middle of real-time action and other players join. This is the most difficult use case since you have to build characters without stalling the CPU or the GPU, and with severe memory constraints.


There are many requirements in tension in a character customization system. They will depend on the game that is going to use it, of course, but to some degree you will always need:

  • Performance in the construction process. It cannot take long to build a character and it cannot require a lot of memory.

  • Optimized data generation. You will want your data to be as optimal as data generated directly by your artists. Optimized geometry: with only the required triangles to avoid overdraw and z-fighting. Optimized textures: without wasting space, channels, and using compressed formats. Optimized draw calls: you cannot use more draw calls for your customized character than you would use for a static one.

  • Flexibility in the range of modifiers that your artists can use to define the customization of characters. These modifiers will probably include mesh merging, morphing and removal, and various image effects to change colors, blend in normal-map effects, apply projections, etc.


  • Reusability is not a usual requirement, since developers tend to focus on single projects when developing customization systems. However, in the case of a general game engine, or middleware like the one we develop, it is a key element.
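To make one of those modifiers concrete: the simplest of them, a mesh morph, is just a weighted blend of vertex positions toward a sculpted target. A sketch (all names invented; real meshes also carry normals, tangents and UVs that need the same treatment):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Blend base vertex positions toward a morph target.
// weight = 0 leaves the base mesh untouched; weight = 1 gives the
// full target shape. Both meshes must share the same vertex order.
void apply_morph(std::vector<Vec3>& vertices,
                 const std::vector<Vec3>& target,
                 float weight)
{
    for (std::size_t i = 0; i < vertices.size(); ++i)
    {
        vertices[i].x += (target[i].x - vertices[i].x) * weight;
        vertices[i].y += (target[i].y - vertices[i].y) * weight;
        vertices[i].z += (target[i].z - vertices[i].z) * weight;
    }
}
```

Mesh merging and removal are harder, since they change topology and vertex counts, but they follow the same pattern: pure data transforms that the system can chain in any order the artists define.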



    Giving control to the artists

    In APB we had a long pre-production stage, where two programmers and two artists worked together defining what it would be possible to customize in the game and how. This included the skin color effects; the skin layers for scars, moles, tattoos, etc.; and how these would affect the normals, specular and other material properties. It also included how we would model the clothing accessories, the morphs in the body and the face, the hairstyle, etc. Then we did the same for the customization of the cars.

    After that long phase, we threw away all the test assets, produced a many-page document for artists, developed a tool to define and preview all this data, and implemented the system in the game engine with those effects in mind. It sounds short now, but it was a huge task in terms of man-months. The system was set in stone, and any change in the customization features, like adding an extra layer to the skin or a different morphing parameter, would have serious implications on the programming side.

    With time, I realized that it is very important to give the control of what can be customized to the artists, so that they can define the whole construction process of the assets without requiring additional programming work. The only way to do this is with a data-driven process: by turning the construction process of the objects into data itself. A little bit like what happened with programmable shading on the GPU: instead of adding stages to the rendering pipeline, at some point the GPU designers realized it was much better to give us shaders.


    Levels of detail

    In an MMO you may have many characters on-screen, but only a few will be close enough to require many pixels in the final rendered frame. The traditional approach to reduce the cost of complex scenes is to use several levels of detail (LODs) for an object and use cheaper ones when it is far away. Cheaper objects have simpler meshes and smaller textures. In the case of customizable characters it is necessary to build these LODs specifically.

    Imagine the case of a necklace. In the highest LOD you probably want to model it with a mesh and a special metal material. In the next LOD it may be enough to model it as a morph of the mesh and a blended patch on the torso color and normal maps. In the last LOD you may want to ignore it completely. Having this support for LODs adds complexity to the customization system, but it can greatly improve the performance of the resulting data and of the build process.


    The real-time updates in the lobby

    Imagine the case in the customization lobby where the player is changing the skin color of a complex character. The player is moving a slider handle and looking at the 3D model to see how it looks, expecting real-time visual feedback. What is going on under the hood?

    In this case you are using the maximum-detail character and the highest resolution textures, maybe a couple of materials with 2048×2048 texture sets including color, normal and specular. Whatever method you choose to customize the color, it will involve some per-pixel operations like interpolations, soft-light or hard-light effects, etc. Moreover, you probably have additional layers on top of the skin, like moles, hair, tattoos, and garments modeled as texture effects (like socks, or tight t-shirts), that you need to bake. This adds up to millions of arithmetic and memory operations that you need to do in a few milliseconds to sustain the frame rate.
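    To make that cost concrete, here is what a single soft-light bake step looks like on the CPU. This uses the “pegtop” formulation of the operator (one common variant among several); a production path would also handle compressed and tiled formats:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Soft-light blend of one value onto another, per channel.
// "Pegtop" formulation: r = (1 - 2b) * a^2 + 2ab, with
// a = base, b = blend layer, all values in [0, 1].
float soft_light(float base, float blend)
{
    return (1.0f - 2.0f * blend) * base * base + 2.0f * base * blend;
}

// Bake a layer into a base 8-bit image in place, channel by channel.
void bake_soft_light(std::vector<std::uint8_t>& base,
                     const std::vector<std::uint8_t>& layer)
{
    for (std::size_t i = 0; i < base.size(); ++i)
    {
        float a = base[i] / 255.0f;
        float b = layer[i] / 255.0f;
        base[i] = static_cast<std::uint8_t>(soft_light(a, b) * 255.0f + 0.5f);
    }
}
```

    A single 2048×2048 RGBA texture is 16 MiB, so baking one layer onto a three-texture material set already means tens of millions of these per-pixel operations.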

    What can you do? Well, the answer is obvious in the 21st century: use the GPU. It is not difficult to move these operations to a shader and just update its parameters while the player changes the skin color. Of course, you would only use this shader in the customization lobby, and you would bake everything when using the character in-game. But if you have complex customization it will not be possible to move all of it to the shader, so you will have to make several shaders depending on which parameters of your model are being edited. Moreover, you will have to specifically encode the process to generate the “partially baked” resources that your shaders will need, for every case.

    This is what we did in some of the systems in the past, and it worked great. But any change in the customizable features of the object implied a lot of work to adjust all these processes and shaders, which makes this incompatible with giving the control to the artists, as discussed in a previous point.


    The memory constraints in the In-game use case

    When you are in-game, you are probably using all of your resources, trying to push the quality to the maximum. Suddenly requiring 2048×2048 pixels × 4 bytes × 3 images (about 48 MB) to apply an image effect between two images onto a third one, for a character you need to build in the background because they are joining the area, may be a problem. On a PC, requesting too much memory is not that terrible: you have a thick OS that will virtualize and swap in and out for you, but it will still be slow. On some consoles and smaller devices, though, you will crash if you exceed the available memory.

    You have to split all the operations into smaller tasks and organize your code and data to use the minimum amount of memory. This can take some time and will slow down the object construction, but it is not especially difficult. However, again, it depends on what operations you require for each object, and when these change, you may need to review these tasks as well.
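    The splitting idea can be sketched in C++ like this (all names invented; real operations work on images and meshes rather than raw bytes): instead of allocating full-size scratch images, an operation between two large images is streamed through a fixed-size strip buffer.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Apply a per-byte operation to two source images, writing the result
// in strips of at most 'strip_bytes'. Peak scratch memory stays bounded
// no matter how large the textures are, and each strip could also be
// dispatched as an independent task.
template <typename Op>
void blend_in_strips(const std::uint8_t* src_a,
                     const std::uint8_t* src_b,
                     std::uint8_t* dest,
                     std::size_t total_bytes,
                     std::size_t strip_bytes,
                     Op op)
{
    std::vector<std::uint8_t> scratch(std::min(strip_bytes, total_bytes));
    for (std::size_t offset = 0; offset < total_bytes; offset += strip_bytes)
    {
        const std::size_t count = std::min(strip_bytes, total_bytes - offset);
        for (std::size_t i = 0; i < count; ++i)
            scratch[i] = op(src_a[offset + i], src_b[offset + i]);
        std::copy(scratch.begin(), scratch.begin() + count, dest + offset);
    }
}
```

    The scratch buffer here is the whole point: its size is chosen by the caller, not by the texture resolution, which is what keeps the in-game build within its memory budget.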


    A possible approach

    My latest attempt to resolve this requirement tension is to use a kind of virtual machine approach. The artists define a diagram with blocks connecting player-controlled parameters, meshes and textures to create an object hierarchy. This is compiled into a set of operations and constant data. This “program” can then be reorganized automatically for the several scenarios described in this post: for maximum performance (trying to generate shader fragments automatically), for minimum memory use, and optimized for the different cases where subsets of parameters are modified at run-time.

    The virtual machine runs this program in different ways for different scenarios, and it has operations like texture packing, image layer effects with small blocks, etc. It can easily run tasks in parallel and it can automatically apply memory constraints to the program execution.
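    A toy sketch of that structure in C++ (everything here is invented for illustration; the real operation set works on meshes and images, not floats): operations read and write numbered registers, and because the program is plain data, it can be reordered, split, or partially re-run when a single parameter changes.

```cpp
#include <cstdint>
#include <vector>

// A customization "program": a flat list of operations over registers.
// Real opcodes would be things like MESH_MERGE, IMAGE_LAYER or
// TEXTURE_PACK; toy arithmetic ops stand in for them here.
enum class OpCode : std::uint8_t { LoadConst, Add, Mul };

struct Op
{
    OpCode code;
    int    dest;       // destination register
    int    arg0, arg1; // source registers (constant index for LoadConst)
};

struct Program
{
    std::vector<Op>    ops;
    std::vector<float> constants; // baked data: in reality, meshes and images
};

// Minimal interpreter. A production version would run independent ops
// in parallel and free each register after its last reader executes,
// which is how the memory constraints are enforced.
float run(const Program& p, int result_register, int register_count)
{
    std::vector<float> regs(register_count, 0.0f);
    for (const Op& op : p.ops)
    {
        switch (op.code)
        {
        case OpCode::LoadConst: regs[op.dest] = p.constants[op.arg0]; break;
        case OpCode::Add: regs[op.dest] = regs[op.arg0] + regs[op.arg1]; break;
        case OpCode::Mul: regs[op.dest] = regs[op.arg0] * regs[op.arg1]; break;
        }
    }
    return regs[result_register];
}
```

    Since the ops declare their inputs and outputs explicitly, the compiler can see the whole dataflow, which is what makes the automatic reorganization for each scenario possible.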


    To be continued…

    In future posts I will try to discuss the specifics of the common modifiers like mesh merging, texture packing and image effects, as well as some open problems. Sometimes I will focus on the context of our approach, but in many cases the information may be useful for general development.