Saturday, 28 June 2014

Finding bugs through autotesting

Some bugs and issues can only be found by playing the game for ages in a single play session, or by triggering lots of random situations. As a small studio we don't have the resources to hire a ton of people to do such tests, but luckily there is a fun alternative: hack automated controls into the game and let it test itself. We have used this method in both Swords & Soldiers and Awesomenauts and found a bunch of issues this way.

Autotests are quite easy to build. The core idea is to let the game press random buttons automatically and leave it running for many hours. However, such a simplistic approach is also pretty ineffective: randomly pressing buttons might mean it takes ages to simply get from the menu to actual gameplay, let alone ever finish any levels. It gets better if you make it a little smarter: increase the likelihood of pressing certain buttons, or even automatically press the right button in certain menus to get through them quickly. With simple modifications like these you can make sure the autotesting touches upon many different parts of the game.
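The weighting trick can be sketched in a few lines of C++. This is a hypothetical simplification (all names here are made up for illustration, not our actual code): buttons that push the autotester towards gameplay, like the menu confirm button, simply get a larger share of the random rolls.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of weighted random button selection for an autotester.
// Buttons with a higher weight are pressed proportionally more often.
struct WeightedButton
{
    std::string name;
    int weight; // higher = pressed more often
};

int totalWeight(const std::vector<WeightedButton>& buttons)
{
    int total = 0;
    for (const WeightedButton& button : buttons)
        total += button.weight;
    return total;
}

// roll must be a random number in [0, totalWeight(buttons)).
std::string pickButton(const std::vector<WeightedButton>& buttons, int roll)
{
    for (const WeightedButton& button : buttons)
    {
        if (roll < button.weight)
            return button.name;
        roll -= button.weight;
    }
    return buttons.back().name;
}
```

Feed it a roll from your random number generator each frame and the autotester presses "confirm" far more often than "back", so it actually reaches gameplay instead of idling in menus.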



This kind of autotesting serves very specific, limited purposes. There are a lot of issues you will never find this way, like animations not playing, text not displaying, or characters glitching through walls. The autotester does not care and keeps pressing random buttons. Basically anything that needs to be seen and interpreted by a human is difficult to find with autotesting, unless you already know what you are looking for and can add a breakpoint in the right place beforehand.

Nevertheless there are important categories of issues that can be found very well through autotesting: crashes, soft-crashes and memory leaks. A soft-crash is a situation where the game does not actually crash, but the user cannot make anything happen any more. This happens for example if the game is waiting for a certain event but the event is never actually triggered. Memory leaks are when the game forgets to clean up memory after usage, causing the amount of memory the game uses to keep rising until it crashes. Especially subtle memory leaks can take many hours before they crash and are thus often never found during normal development and playtesting.

Another category of issues that can be found very well through autotesting is networking bugs. This one is very important for Awesomenauts, which has a complex matchmaking system and features like host migration that are hard to thoroughly test. Our autotesting automatically quits and joins matches all the time, potentially triggering all kinds of timing issues in the networking. If you leave enough computers randomly joining and quitting for long enough, almost any combination of timings is likely to happen at some point.

Recently we needed this in Awesomenauts. After we launched patch 2.5 a couple of users had reported a rare crash. We couldn't reproduce the crash, but did hear that in at least one case the connection was very laggy. Patch 2.5 added Skree, a character that uses several new gameplay features (most notably chain lightning and spawnable collision blocks). This made it likely that the crash was somewhere in Skree's netcode.



We tried reproducing the crash by playing with Skree for hours and triggering all kinds of situations by hand. To experiment with different bad network situations we used the great little tool Clumsy. However, we couldn't reproduce the crash.

I really wanted to find this issue, so I reinvigorated Awesomenauts' autotesting system. We had not used it in a while, so it was no longer fully functional and lacked some features. After some work it ran again. I made the autotester enter and leave matches every couple of minutes. Since I didn't know whether Skree was really the cause, I made the game choose him more often than the others, but not always. I also made the autotester select a random loadout for every match and immediately cheat to buy all upgrades. The autotester is not likely to buy upgrades on its own, so I needed this to get upgrades tested as well.

I ran this test on around ten computers during the soccer match Netherlands-Australia. While we beat the Australians, our computers were beating this bug. Using Clumsy I gave some of those computers really high artificial packet loss, packet reordering and packet duplication.

Watching the computer press random buttons is surprisingly captivating, especially as it might leave the match at any moment. Simple things like a computer being stuck next to a low wall become exciting events: will it manage to press jump before quitting the match?

Here is a video showing a capture of four different computers running our autotest. The audio is from the bottom-left view. Note how the autotester sometimes randomly goes back to the menu, and can even randomly trigger a win (autotesters are not tactical enough to destroy the base otherwise):



And indeed: after only a couple of hours, three computers had crashed! Since I had enabled full crash dumps in Windows, I could load up the debugger and see exactly what the code was doing when it crashed.

The bug turned out to be quite nice: it required a very specific situation in combination with network packets going out of order in a specific way. When Skree dies just after he has started a chain lightning attack, the game first sends a chain lightning packet and then a character destroy packet. If these go out of order because of a really bad internet connection, the character destroy packet can arrive first. In that case the Skree has already been destroyed when his lightning packet is received. Chain lightning always happens between two characters, so the game needs both Skree and his target to create a chain lightning.

Of course we know that this kind of thing can happen when sending messages over the internet, so our code actually did check whether Skree and his target still existed. However, due to a typo it also created the chain lightning if only one of the two characters existed, instead of only if they both existed. This caused the crash. Crashes are often caused by little typos: in this case, accidentally typing || ("OR") instead of && ("AND").
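The shape of the bug can be illustrated with a tiny sketch (hypothetical names, not the actual Awesomenauts code):

```cpp
#include <cassert>

// Hypothetical sketch of the existence check before spawning chain lightning.
// The lightning must only be created when BOTH characters still exist.
bool shouldCreateChainLightning(bool skreeExists, bool targetExists)
{
    // The buggy version used || here, so it also returned true when only one
    // of the two characters existed, and the code then dereferenced the
    // missing one and crashed.
    return skreeExists && targetExists;
}
```

With &&, an out-of-order destroy packet simply means no lightning is created, instead of a crash.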

Once we knew where the bug was, it was really easy to fix it (the fix went live in hotfix 2.5.2). Thus the trick was not in fixing the code, but in reproducing the issue. This is a common situation in game programming and autotesting is a great tool to help reproduce and find certain types of issues.

Saturday, 14 June 2014

Solving path finding and AI movement in a 2D platformer

When we started development of the bots for Awesomenauts, we began with the most complex part: how to do path finding and movement? When people think of path finding, they usually think of A*. This well-known standard algorithm indeed solves finding the path, and in games like RTSes that is all there is to it, since the path can easily be traversed. In a platformer however, the step after finding the path is much more complex: actually moving over it. Awesomenauts features a ton of different platforming mechanics, like jumps, double jumps, flying, hovering, hopping and air control. We also have moving platforms, and the player can upgrade his jump height. How should an AI know which jumps it can make, how to time the jump, how much air control is needed? This turned out to be a big challenge!

Since there are so many potential subtleties in platforming movement, my first thought was that handling it in our behaviour trees might not be doable at all. Behaviour trees are good at making decisions, but not necessarily at doing subtle controls during a jump. Add to this that the AIs only execute their behaviour trees 10 times per second because of performance limitations, and I expected trouble.



The solution I came up with was to record tons of gameplay by real players and generate playable path segments from this. By recording not just the player's position but also his exact button presses, I figured we could get enough information to replicate real movement with precise control. Player movement would be split into short bits for moving from one platform to another. The game could then stitch these together to generate specific paths for going from A to B.

To perform movement this way, the behaviour tree would choose where it wants to go and then execute a special block that takes control and fully automatically handles the movement towards the goal, very much like playing back a replay of segments of a player's previous movement. The behaviour tree could of course stop such movement at any time to engage in combat, which would again be controlled entirely by the behaviour tree.

While the above sounds interesting and workable, the devil is in the details. We would have to write recording and playback code, plus a complex algorithm to analyse the recorded movement and turn it into segments. But it doesn't end there. There were six character classes at the time, each with their own movement mechanics. They could buy upgrades that made them walk faster and jump higher. There are moving platforms, which mean that certain jumps are only achievable in combination with certain timing of the platform position. All of these variations increase the complexity of the algorithms needed, and the amount of sample recordings needed to make it work. Then there would need to be a way to stitch segments together for playback: momentum is not lost instantly, so going into a jump while previously moving to the left is not the same as going into that same jump while previously moving to the right.

The final blow to this plan was that the levels were constantly changing during development. Every change would mean rerecording and reprocessing the movement data.

These problems together made this solution feel just too complicated and too much work to implement. I can still imagine it might have worked really well, but not within the scope of a small indie team building an already too complex and too large multiplayer game. We needed a simpler approach, not something like this.

Looking for a better solution we started experimenting. Programmer Bart Knuiman did an internship at Ronimo at the time and his internship topic was AI, so he started experimenting with this. He made a small level that included platforming, but that did not need path finding because there were no walls or gaps. Bart's goal with this level was to make a Lonestar AI that was challenging and fun to play against, using only our existing behaviour tree systems. Impressively, he managed to make something quite good from scratch in less than a week. Most Ronimo team members lost their first battle against this AI and took a couple of minutes to find the loopholes and oversights one needed to abuse to win. For such a short development time that was a really good result, so we concluded that for movement and combat, the behaviour trees were good enough after all.



The only thing really impossible with the systems we had back then was path finding in complex levels. We designed a system for this and Bart built this as well. The important choice we made here was to split path finding and movement into a local solver and a global solver. I didn't know that terminology back then, but someone told me later that it was a common thing with an official name. For finding the global route towards the goal we used path finding nodes and standard A* to figure out which route to take over them. The nodes are spaced relatively far from each other and the local solver figures out how to get to the next node.



The local solver differs per character class and can use the unique properties of that type of character. A jumping character presses the jump button to get up, while one with a jetpack just pushes the stick upwards. The basics of a local solver are pretty simple, but in practice handling all the complexities of platforming is a lot more difficult, yet still doable.
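In sketch form, the per-class part could look like a simple polymorphic interface. This is a hypothetical illustration, not the actual Awesomenauts code: each class translates "get to the next node" into its own button presses.

```cpp
#include <cassert>

// Hypothetical sketch: each character class gets its own local solver that
// turns "the next path finding node is this much higher/lower" into input.
struct Input
{
    float stickY = 0.0f;     // -1 = down, 1 = up
    bool jumpPressed = false;
};

class LocalSolver
{
public:
    virtual ~LocalSolver() {}
    virtual Input moveTowards(float heightDifference) const = 0;
};

// A jumping character presses the jump button to get up.
class JumpingSolver: public LocalSolver
{
public:
    Input moveTowards(float heightDifference) const override
    {
        Input input;
        input.jumpPressed = heightDifference > 0;
        return input;
    }
};

// A jetpack character just pushes the stick upwards.
class JetpackSolver: public LocalSolver
{
public:
    Input moveTowards(float heightDifference) const override
    {
        Input input;
        input.stickY = heightDifference > 0 ? 1.0f : 0.0f;
        return input;
    }
};
```

The global solver only hands out the next node; which buttons get pressed to reach it is entirely up to the class-specific solver.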



The recording system outlined at the start of this post was incredibly complex, while the solution with the local and global solvers is much simpler. The reason it could be so simple is that although the platforming mechanics in Awesomenauts are diverse, they are rarely complex: no pixel-precise wall jumps or air control are needed like in Super Meat Boy, and moving platforms don't move all that fast, so getting onto them doesn't require super precise timing. These properties simplify the problem enough that creating a local solver in AI is quite doable.

One aspect that I haven't mentioned yet is how we get the path finding graph. How do we generate the nodes and edges that A* needs to find a route from A to B? The answer is quite simple: our designers place them by hand. Awesomenauts has only four levels and a level needs well below one hundred path finding nodes, so this is quite doable.



While placing the path finding nodes we need to take the different movement mechanics into account. Some characters can jump higher than others, and Yuri can even fly. Each class has its own local solver to handle its own movement mechanics, but how do we handle this in the global solver? How do we handle that some paths are only traversable by some characters? Here the solution is also straightforward: any edge we place in the path finding graph can list included or excluded classes. When running A* we simply ignore any edges that are not for the current character type. This was originally mostly needed for flyer Yuri: all other classes were similar enough that they could use the same path finding edges.

A similar problem is that falling down from a high platform is possible even if it is too high to jump towards, and jumppads can also only be used in one direction. These things are easy to include in the path finding graph by making those edges only traversable in one direction.
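Both ideas, per-class edges and one-directional edges, can be sketched with a small edge structure and a neighbour query (hypothetical names; the real implementation is more involved). A full implementation would feed this neighbour list into A*:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of a path finding edge. Edges are directed (from -> to),
// so drop-downs and jumppads naturally only work one way, and each edge can
// be restricted to certain character classes.
struct Edge
{
    int from;
    int to;
    std::set<std::string> allowedClasses; // empty = everyone may use it
};

// Returns the nodes reachable in one step from 'node' for this class.
std::vector<int> neighbours(const std::vector<Edge>& edges, int node,
                            const std::string& characterClass)
{
    std::vector<int> result;
    for (const Edge& edge : edges)
    {
        if (edge.from != node)
            continue; // directed: the reverse traversal is a separate edge
        if (!edge.allowedClasses.empty() &&
            edge.allowedClasses.count(characterClass) == 0)
            continue; // edge is not for this class (e.g. flyers only)
        result.push_back(edge.to);
    }
    return result;
}
```

During A*, filtering simply happens in this expansion step, so the rest of the algorithm stays completely standard.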

Creating the path finding graph by hand has a couple of added benefits. The first is that we can exclude routes that may work but are too difficult for the local solver to traverse, or are not desirable to use (they might be too dangerous for example). Placing the nodes and edges by hand adds some control to make the movement look sensible. Another nicety is that we can add markers to nodes. The AI can use these markers to decide where it wants to go. For example, an AI can ask the global solver for a route towards "FrontTurretTop" or towards "HealthpackBottom".

The path finding nodes were placed by our designers and they also built the final AIs for the bots. Jasper had made the skirmish AIs for Swords & Soldiers and he also designed the bot behaviours for Awesomenauts.

Awesomenauts launched with only six classes and only four of them had a bot AI. Since then many more characters were added, but they also never received bot AIs. Now that modders are making AIs for those we will probably have to update the edges to take things like Swiggins' low jump and the hopping flight of Vinnie & Spike into account.

To me the most interesting aspect of path finding in Awesomenauts is that after prototyping we ended up with a much simpler solution than originally expected. This is a good reminder that whenever we think about building something complex or something that is too much work, we should spend some time prototyping simpler solutions, hoping one of them might work.

PS. The AI tools for Awesomenauts have been released for modders in patch 2.5. Modders and interested developers can try them out, including our pretty spectacular AI debugging tools. They are free for non-commercial use; check the included license file for more details. Visit our basic modding guide and modding subforum for more info on how to use these tools.

Sunday, 1 June 2014

The AI tools for Awesomenauts

With the next Awesomenauts patch (patch 2.5) we will release our AI editor and enable players to load modded AIs in Custom Games. The editor is in beta right now and a surprisingly large number of new AIs have already popped up. Other game developers can also use our AI editor for non-commercial purposes, or contact us to discuss using our tool in a commercial product. This all makes for a great occasion to discuss how we made the AIs and what kinds of tools we developed for this.



Anyone who wants to give making AIs for Awesomenauts a try can check this little starting guide that explains the basics.

I have previously discussed in two blogposts how we made the AI for Swords & Soldiers (part 1 and part 2). Since then we have changed some of the fundamentals and those blogposts are well over three years old now, so I will write this blogpost assuming you didn't read them.

When people think about "AI" they usually think about advanced self-learning systems, maybe even truly intelligent thinking computers. However, those are more theory than practice and attempts in that direction are rarely made in games. AI that really comes up with new solutions is incredibly difficult to build and even more difficult to control: what if it uses lame but efficient tactics and thus kills the game's fun? The goal of game AI is not to be intelligent, but to be fun to play against. As a game developer you usually need control over what kinds of things the AI does. Nevertheless, some games have used techniques that can be described as real AI: Creatures and Black & White are especially known for this. I suppose for them it worked because the AI is at the very core of the game.

What almost all games use instead is an entirely scripted AI. The designer or programmer creates a big set of rules for how the AI should behave in specific circumstances and that's it. Add enough rules for enough situations, plus some randomness, and you can achieve a bot that seems to act very intelligently, although in reality it is nothing but a big rulebook written by the developer.



Awesomenauts is no different. The AI system is a highly evolved version of what we made for Swords & Soldiers. The inspiration for it came from an article Bungie wrote about their behaviour systems in Halo 2. Something similar was also presented at GDC years ago as being used in Spore and a couple of other games that I forgot the names of.

The basic idea in our AIs is that they are a big if-else tree, connecting conditions and actions. If certain conditions are met, certain actions are done. For example, if the player is low on health and enemies are near, he retreats to heal. If he also happens to have a lot of money, he buys a bunch of upgrades.

These big if-else structures are shaped like a tree and are quite easy to read. Certainly much easier than reading real code. The whole principle is best explained by a screenshot from the AI editor:



Before we made our AI editor we tried some other approaches as well. In an old school project I programmed the AI in C++, and for our cancelled 3D action adventure Snowball Earth we used Lua scripting. We were quite unhappy with both: although programming gives the most flexibility, creating such big sets of if-then-else rules is very cumbersome in a real programming language. The endless exceptions and checks quickly become an enormous amount of confusing code.

So we set out to make a tool specifically for making AIs. Our AI editor is structured entirely around these combinations of conditions and actions and makes the problem a lot more workable. It is true that our AI editor is less flexible than code and cannot do certain things (most notably for-loops), but being faster and clearer to work with makes it possible for us to make much better AIs in the same amount of time.

Each type of action and condition in our AIs corresponds to a class in C++. For example, the condition "canPayUpgrade" corresponds to a C++ class called "ConditionCanPayUpgrade". This class looks up the price of the upgrade and the amount of money the player currently has to determine whether the player has enough money to buy the upgrade.
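In sketch form, such a condition block could map to a C++ class like this. This is a hypothetical interface for illustration; the real code obviously looks up live game state instead of taking plain numbers:

```cpp
#include <cassert>

// Hypothetical sketch of how an AI editor block maps to a C++ class.
// The tree evaluates conditions through a common base class.
class Condition
{
public:
    virtual ~Condition() {}
    virtual bool check() const = 0;
};

// Corresponds to a "canPayUpgrade" block in the editor: compares the
// player's money to the upgrade's price.
class ConditionCanPayUpgrade: public Condition
{
public:
    ConditionCanPayUpgrade(int upgradePrice, int playerMoney):
        upgradePrice(upgradePrice),
        playerMoney(playerMoney)
    {
    }

    bool check() const override
    {
        return playerMoney >= upgradePrice;
    }

private:
    int upgradePrice;
    int playerMoney;
};
```

To the AI designer this is a single readable block; all the lookup logic stays hidden inside the C++ class.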

Since the blocks are programmed in C++ they can do very complex things. A core principle is that we try to hide the complexity and performance inside the blocks. If we need to do something in an AI tree that is not possible with simple if-else trees, then we can always add a new type of block that can do that. A great example of this is our block "isCharacterInArea", which under the hood does a collision query and checks for things like line of sight, class and health. There is quite a bit of code behind that block, but to the AI designer it is a simple and understandable block.

Our AI editor evolved and changed significantly from Swords & Soldiers to Awesomenauts. The two biggest differences are the debugging tools and the general structure. At the time of Swords & Soldiers our designers could not see any information on a running AI. To find and debug AI problems they just had to play the game and observe what the AI was doing. AIs in Awesomenauts contain thousands of blocks, so better debugging tools became necessary. Therefore we added the AI observer, internally known as "the F4 editor", since it is opened by pressing F4. The AI observer shows the state of the AI, and we even added a real debugger that can be used to step through AI updates and see the exact path through the AI.



The structure of the AI changed as well when we adapted it for Awesomenauts. In Swords & Soldiers the AI trees were "priority trees", similar to those in Halo 2 and Spore. This means that the goal of the tree is to find one action to perform, for example "flee", "attack", "reload" or "seek cover". The top-most action that has all its conditions satisfied is always executed, and nothing else is.

Priority trees are great when an AI should do only one thing at a time, but they turned out to be way too rigid for us. In practice an AI might want to move somewhere and shoot at whatever it passes and observe the situation to make a choice later. Our designers wanted to perform more than one action per tick so badly that they ended up making all kinds of weird workarounds, so for Awesomenauts we ditched the whole concept of priority trees and instead turned it into simple if-else trees. These are not only more flexible, but also much easier to understand.



The original version of our AI editor was programmed by Ted de Vries, who did an internship at the time and later joined us as a full-time programmer (he currently works on Assassin's Creed at Ubisoft). The AI observer and debugger were also programmed by an intern: Rick de Water.

Next week I will dive into a surprisingly complex aspect of AI: path finding and navigation in a 2D platformer. While standard path finding is pretty easy, since you can mostly just use A*, adding platforming mechanics and different movement mechanics per class made this topic much more interesting than we had expected beforehand. Double jumps, jetpacks, kites, moving platforms: we needed something that could handle all of it.

Wednesday, 7 May 2014

Proun cloning controversy: why indies should complain less about clones

My game Proun was recently 'cloned' by an iOS game called Unpossible. Unpossible isn't the first 'clone' of Proun: games like Synesthetic and Polyrider also copy the core gameplay and many obstacle shapes. Polyrider even goes so far as to copy-paste my main marketing text (DOH!). However, the timing of Unpossible is much more painful as Proun itself is also finally coming to iOS (and Android and 3DS): together with Engine Software I am working on a bigger and better Proun+. Proun+ coming to iOS means that it will be a direct competitor of its alleged clone Unpossible. I saw quite a bit of controversy online about whether Unpossible is a clone or not and whether that would be bad or not, so here is my own take on the matter.

I think most people are worrying way too much about most clones. The only clones that should be considered a big problem are direct rips that add nothing and are clearly only intended as a moneygrab. Most infamous clones are not of this type and indie developers should complain less and instead spend that time making more and better games.

One aspect that seems completely absent from how indie developers react to clones is pride. Usually your game will only be cloned if you make something very good and different from what already exists. Somehow most indies seem to only notice the negative and don't realise what a fantastic compliment being cloned really is. One of the things I am most proud of in my game development career so far is that Swords & Soldiers and Proun have both been 'cloned' many times and that De Blob has directly inspired other games. Indies should focus more on this positive side of cloning!



I think there are three types of clones:

-The "Asset Ripper" is the worst. These clones don't just copy game mechanics: they go so far as to rip assets directly from the game. Graphics, sounds, animations: they are just copied outright. We have actually encountered one of these ourselves: a Swords & Soldiers clone had used some of our sound effects. However, it was a small Flash game that hardly anyone played, so we ignored it and didn't take any action. The "Asset Ripper" is the only type of clone that is definitely illegal, since it violates copyright laws.

-The "Art Replacer" is the next step: practically all game mechanics are copied directly and the only things that are changed are the visuals and audio. This is still clearly a clone, but we are already entering a grey area here. If the visuals are changed from a happy fairytale to gritty sci-fi, then the feel of the game itself also changes a lot. It is still lame to copy mechanics directly like that, but this isn't a complete clone anymore and might cater to a different audience. A famous example in this category is Yeti Town, which ripped Triple Town. They switched to a different setting, but the happy feel remained the same. Add that to a literal copying of every game mechanic and rule in Triple Town and it becomes extremely lame.

-Finally there's the "Mild Changer". This is the category Unpossible belongs to, as do famous 'clones' like Ninja Fishing. Mild Changers have a lot of similarities to another game, but add not just their own visuals: they also change the gameplay in some small ways. The core game keeps feeling the same, which is why people cry 'foul' so often for these kinds of games, but they do change things and thus create something new.

Unpossible is a clear example of this latter group. At first glance this game seems to play exactly like Proun, but it does change some things. The camera goes from third person to first person. This doesn't really make the game feel very different, but it is a real change nevertheless. The cable is thicker in Unpossible, so it is more difficult to see what is coming around bends and what is at the other side of the cable. And most importantly: in Unpossible the game immediately restarts when you hit something, while in Proun you can keep racing. This changes the game quite a bit.

I think developers should complain much less about such "Mild Changers". Genres evolve by copying from existing games and improving and adding to them. This is how we as an industry grow and explore new ground. Truly new things are hardly ever made without borrowing lots of elements from existing things, in games as in most other fields. Classic RTS Dune II lacked group control and regrowing fog of war. It is a good thing that other games 'cloned' Dune II and added these crucial elements, bringing the genre forward. A clone that is significantly better than the original is a great evolution and should not be frowned upon. (Not that I feel Unpossible is significantly better than Proun, but it is still an evolution.)

Strangely, if a game is part of a genre with lots of existing games, it is considered okay to only change some small things. But if a type of game has so few games that it can hardly be called a genre yet, then suddenly it is cloning and considered problematic. It seems like the general public thinks 'clones' are a problem if few games do it, but if lots of games all do it for years, then we call it a 'genre' and it is okay.

An interesting question in such cases is whether the developer of a clone knew the original at all. On the Touch Arcade forums Acceleroto claims to not have known Proun until Unpossible was already playable: "I didn't know about Proun until I shared an Unpossible build with some dev friends." My first inclination is to believe him: when I started working on Proun I also didn't know about F-Zero's Cylinder Wave track. (In case anyone is wondering about 2009's Boost 3D: Proun's first versions are way older than that, as can be seen in this forum post from 2006.) Despite Acceleroto claiming to not have known Proun, when I actually played Unpossible I started to strongly doubt that. The similarities in types of obstacles and game feel are so incredibly strong that I really doubt whether Acceleroto didn't play Proun until most of Unpossible was designed, especially considering how 'original' his other games are.

At Ronimo we are currently developing Swords & Soldiers II. Do I worry about the many similar games and even clones out there? No, I don't. We are trying to make Swords & Soldiers II better and more fun than the others. If we succeed at that, then the clones won't hurt us. If we don't succeed at that, then we should have made a better game and have only ourselves to blame. Ridiculous Fishing is a great example of this: Vlambeer made their sequel to Radical Fishing so incredibly good that it became a gigantic hit despite so-called 'clone' Ninja Fishing launching much earlier.



Another part of handling clones is that indies should simply make better business decisions. Unpossible will compete with Proun+ on iOS, but this is entirely my own fault: Proun has been out for three years now, so I should have jumped on iOS ages ago. Had I done that, Proun+ would have released well before Unpossible and the whole issue would not exist. This situation often arises around clones: the original developer is slow to bring his game to other platforms, and when he finally does, he complains that clones have sprouted in the meantime. This is especially important with games that are simple to build, like Proun. Ronimo's Awesomenauts and my own Cello Fortress are technically so complex that clones are much less likely to appear quickly, if at all.

Before we learned of Unpossible we were already adding more variation and originality to Proun+ by introducing new game modes, more music and new visual styles. Seeing Unpossible can only strengthen our resolve to make the best game we can. If we fail at making a better game, then that is entirely our own fault. If we succeed, then I doubt the existence of so-called 'clones' will be all that relevant to sales of Proun+.

Sunday, 13 April 2014

Why free to play games are inherently less fun

Designing a free to play game with microtransactions is a huge challenge. It is incredibly difficult to find the perfect balance between giving players a strong incentive to pay something while still making the free experience good enough that they keep playing. This challenge is crippling to the game itself. It is impossible to make a game as fun as it could be for both paying and non-paying players. At least one of those groups gets a game that is less fun.

Game design is all about making a certain concept as much fun as possible. By tweaking things like difficulty, flow, reward systems, variation and complexity the game designer tries to create the best experience possible. This "best experience" is an invisible target: you can never know whether you have reached it, or whether tweaking some things would make the game slightly better. It is also something that differs depending on the target audience. Some players like a challenge, others like a more relaxed experience. Some players want to drown all their time in a virtual world, others want a short and condensed experience.

The amount of "fun" in a game can be envisioned as a graph. Design the game in a certain way and you are at the very top of the graph, at the most fun experience. During development you try to tweak the game to get closer and closer to that very top, to that ultimate game. This is of course a theoretical graph: you can never know exactly what it looks like. Also, there are many peaks: one for each of the many possible game concepts and for each of the many different target audiences.



When designing a free to play game, the game designer looks for the best experience, just like when designing a 'normal' paid game. However, when designing for free to play the game designer needs to juggle two balls: some players pay money, others do not, and both groups need to get a good game. Especially the progress and reward structures in the game become very different for paying players. Non-paying players usually get very slow progress, while if you pay you immediately jump ahead. For example, in The Simpsons: Tapped Out you can wait many hours for a building to complete, or pay some real money to have it finished immediately.

Designing a good progress and reward structure is very important for most games. A good RPG usually becomes much less fun if you unlock new skills at half the speed, since it becomes too much of a slow grind. Unlocking things twice as fast does not make a good RPG better either: the player will feel less satisfaction when getting something new, will care less about each new item and might not even try a lot of them because they unlock so quickly. More rewards is not automatically better. There is a perfect rate of progress: not too fast, not too slow.

In most free to play games, the paying players get rewards much faster than the non-paying players. It is impossible that they are both at the top of the "fun" curve. So the designer gets a choice: make them both a bit less fun, or make one of them the ultimate experience and the other a lot less fun. In other words: it is impossible for a free to play game to make both paying and non-paying players have the ultimate experience.

This argument is not just valid for reward structures. It also works for all other aspects of the game: whenever gameplay is sold with real money, it is impossible to make that gameplay perfect for both non-paying and paying players.



The second reason why free to play games cannot achieve the best experience possible is that they are constantly nudging the player towards doing something they don't want to do. Players want to play a game, they don't want to spend money. They might be willing to spend money, but most players would rather not.

This means that the average free to play game is full of things that push the player away from doing what he wants to do and towards paying real money. This can again be seen in an example from The Simpsons: Tapped Out: when you try to build something with in-game currency, the game often first lists all the items that can only be built with real money. You need to scroll through long lists of things you cannot build before you get to the things that you can. In a 'normal' game, the game designer would design these menus to help you find what you want to build as quickly as possible. Here the game does the very opposite, because it needs to tease you with all the items it wants you to pay real money for.

Of course the player needs to pay for paid games as well, but in a paid game she pays up front, outside the game. After that the game tries to give her the best experience it can, instead of constantly trying to sell her something.



The above arguments do not mean that free to play games cannot be fun. I imagine that some readers might want to counter my arguments by giving examples of free to play games that are fun. However, my point is not that free to play games cannot be fun. My point is that free to play games could be more fun if they were not damaged by the free to play design.

Despite these problems free to play can still sometimes be a good idea. Multiplayer games in particular can benefit greatly from free to play, because multiplayer games automatically become more fun when more people play them. The more players there are, the better the game can match players of similar skill to play together. The more players, the better lag can be reduced by matching those who are geographically close to each other, and the higher the chance that your friends are also playing, so you can play with friends instead of strangers. With more players the game can also offer more game modes while still making sure everyone immediately finds opponents to play with.

In short: multiplayer games are better when more people play them. Free to play generally draws a larger crowd and thus often improves the game. So even though the free to play model damages the game itself, the improvement from having more players might mean that the total effect of free to play is a net plus for such games.



Free to play and microtransactions are also sometimes needed purely from a business perspective. In some game genres and on some platforms players are so used to free to play that many simply refuse to pay upfront for a good game, even if it is a better game. In that case free to play might be the only way to make a successful game. Another business reason to include microtransactions might be that support, running servers and developing patches are all expensive to do. The developer might simply need the additional income from microtransactions to be able to keep supporting the game after launch.

Free to play games are inherently less fun because paying and non-paying players cannot both get the best possible experience, and because making money purely through microtransactions requires constantly pushing the player towards doing something she does not want to do. In the case of multiplayer games, having more players might add more fun than is lost due to free to play, but that doesn't change the fact that designing a game around microtransactions always damages some of the fun.

Saturday, 5 April 2014

How we solved the infamous sliding bug

Last month we fixed one of the most notorious bugs in Awesomenauts, one that had been in the game for a very long time: the infamous "sliding bug". This bug is a great example of the complexities of spreading game simulation over several computers in a peer-to-peer multiplayer game like Awesomenauts. The solution we finally managed to come up with is also a good example of how very incorrect workarounds can actually be a really good solution to a complex problem. This is often the case in game development: it hardly ever matters whether something is actually correct. What matters is that the gameplay feels good and that the result is convincing to the player. Smoke and mirrors often work much better in games than 'realism' and 'correctness'.

Whenever the sliding bug happened, two characters became locked to each other and started sliding through the level really quickly. With higher lag, they usually kept sliding until they hit a wall. I have recorded a couple of mild examples of this bug, where the sliding stops quite quickly but still clearly happens.


Note the weird way in which the collision between Froggy and the worms happens.

To understand why this bug happened, I first need to explain some basics of our network structure. Awesomenauts is a purely peer-to-peer game. This means that the simulation of the game is spread out over all the players in the game: every computer is responsible for calculating part of the gameplay. In particular, each player manages his own character. The result is that character control is super fast: your computer can execute button presses immediately and there is no lag involved in your own controls, since there is no server which has final say over your own character. Of course, lag is still an issue in interactions with other characters that are managed on other computers.

Spreading out the simulation like this is simple enough, until you start looking at collisions. What happens when two characters bump into each other? Luckily Awesomenauts does not feature real physics, which would have made this even more complex. Our solution is simply that each character solves only his own collisions. So if two players bump into each other, each of them moves back a bit to make sure they don't collide anymore. They don't interfere with the other character's position at all. This works pretty well and is very easy to build, but it does become difficult to control the exact feel of pushing a character, since lag is part of that equation.
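This idea can be sketched in a few lines of code. This is a hypothetical, simplified version (Awesomenauts' actual code is far more involved): each peer only nudges its own character out of the overlap, based on which side it believes it is on, and never touches the other character's position.

```python
def resolve_own_collision(my_x, my_width, other_x, other_width, push_speed, dt):
    """Move only our own character out of a horizontal overlap.

    Each peer runs this for its own character only. If both peers agree
    on who stands where, they move apart in opposite directions.
    Returns the character's new x position.
    """
    overlap = (my_width + other_width) / 2 - abs(my_x - other_x)
    if overlap <= 0:
        return my_x  # not colliding, nothing to do
    push = min(overlap, push_speed * dt)
    if my_x >= other_x:
        return my_x + push  # we believe we are on the right, move right
    else:
        return my_x - push  # we believe we are on the left, move left
```

As long as both peers agree on the sides, one character moves left and the other moves right, and the overlap disappears within a few frames.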



This works because normally both players will try to resolve their collision in the opposite direction: the character to the right will move to the right and the character to the left will move to the left, thus moving them away from each other.

Which brings us back to the sliding bug. This bug happens when the computers disagree on who is standing to the right and who is standing to the left. If both computers think their player is standing to the right, then they will both try to resolve the collision by moving to the right. However, since they both move in the same direction the collision is not actually solved, so they keep sliding together until they hit a wall.



It is clear how this would cause sliding, but why would the computers disagree on who is standing to the right? This requires both lag and a relatively rare combination of timing and positioning. This is a difficult one to explain, so I'll first explain it in words and then in a diagram. I hope the combination makes it clear what is happening.

Let's look at the situation when two players are both moving to the right. Lonestar is in front and Froggy is behind. Froggy is moving faster, so Froggy is catching up with Lonestar. Now Froggy jumps and lands on top of Lonestar. Because of lag, the jumping Froggy sees a version of Lonestar that is slightly in the past. Since Lonestar is moving to the right, his past version is still a bit more to the left. The resulting positioning is such that Froggy thinks he is further to the right than Lonestar, so Froggy starts resolving his own collision to the right. Lonestar on the other hand sees a past version of Froggy (again because of lag) and thinks he himself is to the right. The lag makes both Froggy and Lonestar think they are on the right side.
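The disagreement can be shown with a tiny simulation. The speeds and positions below are made-up toy numbers, but the mechanism is the one described above: each peer compares its own current position against a lag-delayed snapshot of the other character.

```python
def side_i_think_im_on(my_x, other_lagged_x):
    """Which side a peer believes it is on, given a lagged view of the other."""
    return "right" if my_x >= other_lagged_x else "left"

lag = 0.2             # seconds of network delay (toy value)
froggy_speed = 8.0    # Froggy is behind but moving faster
lonestar_speed = 5.0  # Lonestar is in front, moving slower

# Current positions at the moment Froggy lands on Lonestar:
froggy_x = 10.0
lonestar_x = 10.5

# Each peer sees the other where it was `lag` seconds ago:
lagged_lonestar = lonestar_x - lonestar_speed * lag  # 9.5, behind Froggy
lagged_froggy = froggy_x - froggy_speed * lag        # 8.4, behind Lonestar

froggy_view = side_i_think_im_on(froggy_x, lagged_lonestar)
lonestar_view = side_i_think_im_on(lonestar_x, lagged_froggy)
# Both views are "right": both peers resolve to the right, and they slide.
```

Both characters now run their collision resolving in the same direction, so the overlap is never solved and they slide together until they hit a wall.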



We originally thought this would be a very rare bug, but in practice it turns out that it happened often enough that most Awesomenauts players encountered it occasionally. In fact, there was one top player who was able to aim Froggy's Dash so well that he could trigger this bug almost every time. He used it to attach his opponents to him to do maximum damage with the Tornado after the Dash. Impressive skills! Gameplay mechanics that are so difficult to time are cool because they raise the skill ceiling in a game, but it was a bug so we did want to squash it.

Since we thought it was rare and since we couldn't think of an obvious solution, we first ignored the bug for quite a while, until a couple of months ago I managed to finally come up with an elegant solution. Or at least, so I thought...

The solution I came up with was to turn off collision handling for one of the players whenever the sliding bug occurs. This way they stop sliding together, and the character who still handles collisions will resolve the collision for both of them by moving himself a bit further than he normally would. The collision is only turned off between these two characters and only for a short amount of time.

This requires knowing when the bug is happening, which is not obvious because the bug happens on two different computers over the internet. To detect occurrences of the bug we added a new network message that is sent whenever two players collide. The player with the lowest objectID sends a message indicating which side he believes he is on. This message simply says: "I am Froggy, I am colliding with Lonestar and I think I am to his right". Lonestar receives this message and if it turns out to be inconsistent with what he thinks is happening, then Lonestar turns off his own collision handling and lets Froggy resolve the collision on his own.
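The detection logic boils down to a few small pieces. This is a sketch with hypothetical names (our real implementation is C++ inside the game's networking layer): the lowest objectID sends its believed side, and the receiver compares that claim against its own belief.

```python
from dataclasses import dataclass

@dataclass
class CollisionSideMessage:
    sender_id: int           # objectID of the sending character
    other_id: int            # objectID of the character it collides with
    i_am_on_the_right: bool  # which side the sender believes it is on

def should_send_side_message(my_id, other_id):
    # Only the peer with the lowest objectID reports, so exactly one
    # message is sent per colliding pair.
    return my_id < other_id

def receiver_detects_sliding_bug(msg, i_think_im_on_the_right):
    # Consistent beliefs: exactly one of the two thinks it is on the
    # right. If both claim the same side, lag has made them disagree,
    # and the receiver should back off and let the sender resolve alone.
    return msg.i_am_on_the_right == i_think_im_on_the_right
```

Sending only from the lowest objectID is a cheap way to avoid both peers reporting the same collision twice.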



This is simple enough to build and indeed solves the basic version of the sliding bug, but it turned out to feel pretty broken. There are two reasons for this. The first is that in the above situation, it often happens that a character starts resolving his collision in one direction, and then switches to the other direction. This felt very glitchy, as the character moved in one direction for a bunch of frames and then suddenly moved the other way.

The second and bigger problem is that our collision resolving is done at a relatively low speed. We do this deliberately, because this way when you jump on top of a character, it feels like you slide off of him, instead of instantly being pushed aside. This is a gameplay choice that makes the controls feel good. However, this means that collision resolving is not faster than normal walking, so it is possible for Lonestar to keep walking in the same direction in which Froggy is resolving the collision. This way the collision is never resolved and Froggy keeps sliding without having control. This may sound like a rare situation, but in practice player behaviour turned out to cause this quite often, making this solution not good enough.
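Toy numbers make this second problem concrete. Assuming (hypothetically) that the two characters overlap whenever their centres are less than 1.0 apart, and that walking is at least as fast as the push, Lonestar can simply walk along with Froggy's push forever:

```python
dt = 1.0 / 60.0     # one frame at 60 fps
push_speed = 4.0    # deliberately slow collision-resolving speed
walk_speed = 4.0    # a character walks at least this fast

# Froggy (on the right) is resolving the collision alone; Lonestar (on
# the left) keeps walking right under player control, matching the push.
lonestar_x, froggy_x = 0.0, 0.3   # centres 0.3 apart: overlapping

for _ in range(60):               # simulate one full second
    froggy_x += push_speed * dt   # pushed right, player has no control
    lonestar_x += walk_speed * dt # walking right at the same speed

# After a second they are still less than 1.0 apart: still overlapping,
# so the collision is never resolved and Froggy keeps sliding.
```

Because the push can never outrun the walk, the overlap persists for as long as the player behind Lonestar keeps holding the stick.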



Seeing that this didn't work, I came up with a new solution, which is even simpler: whenever the sliding bug happens, both characters turn off their collision, and it is not turned on again until they don't collide any more. In other words: we don't resolve the collision at all!

This sounds really broken, but it turns out to work wonders in the game: players rarely stand still when that close to an enemy, so they pretty much instantly jump or walk away anyway. In theory they could keep standing in the same spot and notice that the collision is not resolved, but this hardly ever happens. Moreover, even if it does happen, it is not much of a problem: teammates can also stand in the same spot, so two enemies standing in the same spot does not look all that broken.
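The final fix is small enough to sketch in full. This is a hypothetical structure, not our actual code, but it captures the rule: once the sliding bug is detected, collision between the pair is off, and it only comes back on after the characters have stopped overlapping.

```python
class CollisionPair:
    """Tracks whether collision handling is temporarily off for one pair
    of characters, as in the final sliding-bug fix: detect the bug, turn
    collision off for both, and re-enable it only once they separate."""

    def __init__(self):
        self.collision_enabled = True

    def on_sliding_bug_detected(self):
        # Both peers stop resolving this pair's collision entirely.
        self.collision_enabled = False

    def update(self, currently_overlapping):
        # Re-enable only once the characters no longer overlap, so they
        # are never suddenly shoved apart while still inside each other.
        if not self.collision_enabled and not currently_overlapping:
            self.collision_enabled = True
        return self.collision_enabled
```

The key design choice is in update(): waiting until the overlap has ended on its own means the game never has to pick a direction to push anyone, which is exactly where the original bug came from.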



This solution has been live on Steam for over a month now and as far as we know, it is working really well.

As you might have noticed, this has been a pretty long and complex blogpost. The sliding bug is just one tiny part of network programming, so I hope this makes it clear how complex multiplayer programming really is. There are hundreds upon hundreds of topics at least as difficult as this one that all need to be solved to make a fast-paced action MOBA like Awesomenauts. Also, this solution is a very nice example of something that seems really wrong and way too simple from a programming standpoint, but turns out to work excellently when actually playing the game.

Saturday, 29 March 2014

Guys With Guns: concept art for a cancelled Ronimo project, or: Being asked to pitch for a publisher's project

I have previously discussed pitching your games to publishers, but that was from a very specific perspective: if you already have plans for a game and are trying to make a publisher interested in that. Once a studio has gained some credibility the opposite can also happen: sometimes a publisher is looking for a developer for a game they want made, and they ask a bunch of game studios to pitch for that project.

Around four years ago (early in development of Awesomenauts) this happened to us twice, in both cases with well-known publishers looking for a fresh and different take on a famous franchise they owned. Today I would like to show the concept art our artists made for one of those, and explain how these kinds of pitches work.

Note that things like this are almost always under NDA, so I cannot mention the name of the publisher or the franchise involved. An NDA ("Non-Disclosure Agreement") is a common type of contract used in the games industry in which two parties agree to keep something secret.

When a publisher has a project for which they are looking for a developer, they often ask several developers to pitch for it, usually three to five. This way they can choose the plan that best fits what they are looking for, and give each developer a chance to show what spin they would put on the game.

We knew that we were not the only developer in the race, so we tried to come up with something special. Most of us had played and loved the franchise when we were younger, so we were really excited to try our hands at bringing a new and modern spin to a classic.

We created a budget overview and wrote a design document in which we explained how we would evolve and change the gameplay. Our artists created concept art for several possible visual styles the new game could have. Since we wanted to give the publisher some choice, we decided to create art for more than one visual style, so that they could pick what best fitted their vision of the game. I think these all look great, so this blogpost is really just one long excuse to show you these awesome drawings by the Ronimo art team!


Style G, by Olivier and Gijs


Style A, by Tim


Style B, by Gijs


Style H, by Martijn

We sent the art, budget overview and design document to the publisher and then waited for their answer...

...

Nope! Unfortunately, they didn't choose us and went ahead with another developer.

(On second thought: knowing now how great Awesomenauts turned out, I am actually very happy that our pitch wasn't chosen!)

Not winning a bid like this is all part of the game, of course, but we were very disappointed at the reason given: they told us that both our game concept and our art were the best they had received, but they didn't believe we could technically pull off a multi-platform 3D game. This felt very lame, since they could have figured that out before they even asked us to pitch. Not that I think they were wrong: we had never worked on those platforms before and had never released a commercial 3D game, so this would have been a big challenge and I doubt we could have finished it on time and on budget with the experience we had back then. Still, they could have reached that conclusion before asking us.

Still, I can imagine why they asked us anyway. My theory is that they were probably curious to see whether a young and innovative team like us would come up with something very special and insanely awesome. Instead, we 'only' came up with something that was better than the competition, but probably not enough to make them willing to take the risk of working with such a technically inexperienced team. So if our pitch had been even better, we might have gotten the job despite the technical risks. Of course, this is only my own theory, since we never complained to them about the procedure and thus never got a further explanation.

Even though we didn't get the gig, it was an interesting experience to go through this process, and we still have this awesome concept art! ^_^

Below are some more sketches made by our artists. These didn't make it into the actual pitch, so we never sent these to the publisher. Which makes me curious: which of the styles in this blogpost do you like best?


Concept art F1, by Gijs


Concept art F2, by Gijs


Concept art F3, by Gijs


Concept art D, by Olivier