Sunday, 21 February 2021

Blightbound's approach to individual storytelling in a coop game

An interesting design challenge during development of Blightbound was how to combine personal storytelling with coop gameplay. In Blightbound three players play together, but they might be at different spots in the stories of their respective characters. How to make room for storytelling without spoiling stories for other players who might not be there yet, and without making players wait for each other? Today I’d like to dissect the approach we’ve used to solve this problem.

Before I continue, note that this blogpost will spoil some story moments. However, I’ve made sure to only take examples from early in each character’s story, so the videos in this post will only be a minor spoiler to the game.

Blightbound is a coop dungeon crawler, but unlike games like Diablo it doesn’t have a linear campaign. Instead, the player repeatedly plays a number of dungeons while learning each character’s story and gathering loot and more characters. In that sense it’s structured more like the end-game of Diablo, or like a looter shooter.

Blightbound is a 3 player game, with both online and local coop.

Another important aspect of Blightbound is that it has a lot of heroes. Instead of maxing out one hero for dozens of hours, the player gets to unlock a bunch and is invited to try them all. We would like players to vary which character they play. For this reason we wanted to tell a lot of small stories, interwoven with gameplay, instead of telling one big story. Each character has their own little story they go through.

Each hero in Blightbound has an individual story that’s told through the dungeon runs with that character. When 3 players play together, they each play a different character so they’re supposed to get different stories. This is where the issues that I’m discussing today arise. Players shouldn’t see the stories that the other players are getting, because they might not have progressed to that point with that character yet (spoilers!), or might have seen that bit already. Also, Blightbound is a fast-paced game so we don’t want players waiting for each other’s cutscenes.

The solution we came up with is that the storytelling is entirely private. When one player gets a bit of story for their character, other players don’t notice this at all. They might even simultaneously be getting different stories for different characters!

Now how do we make that work without letting players wait for an invisible cutscene? The simplest solution is used a lot in Blightbound: dialogue while gameplay continues. The player gets served portraits, speech bubbles and voice acting while the gameplay continues. So there’s no need to stand still and watch a cutscene.

Two examples of basic story beats during gameplay.

This does introduce a problem: getting extensive dialogue during combat is going to be very distracting and the player likely won’t be able to follow the text while focussing on the fight. Luckily, we needed some pacing anyway. A dungeon that’s 15 minutes of constant combat is both too intense and boring. So we should introduce some quieter moments anyway, and we make dialogue happen during such moments as much as possible. This way players can keep exploring but also have the mindspace to digest the storytelling.

If the player keeps moving, who is that dialogue with then? Here we’ve tried to vary it as much as possible. The simplest ‘dialogues’ are inner monologues of the hero. In other cases there’s a dialogue with an NPC who’s standing in the level. Here too the other players can’t see this non-player character. To avoid waiting, the player can just keep walking: the dialogue starts when interacting with the NPC, but after that the player can run away and the dialogue will continue.

Gameplay continues during dialogues, so teammates don't have to wait for you.

To spice things up, we’ve added a fun trick here. The player is playing with two teammates, who could be any of the other heroes. To make the dialogues more dynamic, these other heroes will actually respond to them. But 21 characters all responding to all the story moments of the 20 other characters is way too many combinations. So instead we’ve defined a bunch of standard responses that each character has a unique version of. These are referred to in the dialogue scripts and then inserted dynamically during gameplay.

We have a total of 11 types of such standard responses and each character has their own line for each type. This way the responses can fit their personality. For example, these three lines show how different characters fill in the sympathy response:

  • "I feel your pain."
  • "You have the clan's sympathy."
  • "Would a moisturizing salve help?" (lol wut)

All eleven types of responses for one of the playable heroes.
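To make the mechanism concrete, here's a minimal sketch of how such standard responses could be slotted into a dialogue script at runtime. The three sympathy lines are quoted from above, but the hero names, the `@respond:` script syntax and the function names are my own hypothetical stand-ins, not Blightbound's actual code.

```python
# Hypothetical sketch: per-character standard responses, inserted
# dynamically based on which teammate hero is present in the party.
# Which hero says which sympathy line is my assumption.
STANDARD_RESPONSES = {
    "sympathy": {
        "hero_a": "I feel your pain.",
        "hero_b": "You have the clan's sympathy.",
        "hero_c": "Would a moisturizing salve help?",
    },
    # ...10 more response types, each with one line per hero
}

def resolve_script(script, teammate_hero):
    """Replace response placeholders with the line belonging to the
    teammate hero that happens to be in this particular party."""
    resolved = []
    for line in script:
        if line.startswith("@respond:"):
            response_type = line.split(":", 1)[1]
            resolved.append(STANDARD_RESPONSES[response_type][teammate_hero])
        else:
            resolved.append(line)
    return resolved

# An invented example script line, followed by a response placeholder.
script = [
    "I still hear their voices in the blight...",
    "@respond:sympathy",
]
print(resolve_script(script, "hero_b"))
# ['I still hear their voices in the blight...', "You have the clan's sympathy."]
```

The nice property of this structure is that writing 11 lines per new character is enough to make that character react to every story moment of every other hero.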

What we have so far is cool, but still rather basic: all storytelling is done through dialogues and monologues and that’s it. From there we looked into how we could spice things up without adding pauses. It also shouldn’t take too much work: Blightbound has more than 100 such story moments, so efficiency is important.

The first element that adds some variation is how story beats are triggered. In some cases it’s simple: enter an area that’s marked and the storytelling starts. Some story events can trigger in any such area, while others need to take place in specific dungeons since they’re linked to that setting.

The more interesting ones require some form of interaction. For example, the mage Korrus looks for vases to investigate, so you need to spot them and interact with them. Other characters see someone standing in the dungeon and need to interact with that character, or see a hallucination. When there’s a hallucination hanging in the level, it’s only visible to the player whose story beat it is. To their teammates, it’s just a regular spot in a dungeon. Again we’ve made sure that we only use short and simple interactions, so that your teammates don’t need to wait for you.
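The per-player gating described here could be sketched roughly as follows. This is an assumption based on the behavior described, not Blightbound's actual implementation; apart from Korrus, the hero and dungeon names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    hero: str
    # How far this player has progressed each hero's story
    story_progress: dict = field(default_factory=dict)

@dataclass
class StoryBeat:
    hero: str              # which hero's story this beat belongs to
    progress_index: int    # position within that hero's story
    dungeon: str = None    # None: may trigger in any suitable area

def beat_is_active_for(player, beat, current_dungeon):
    """True only for the one player this beat is meant for. The NPC,
    hallucination or interactable object is rendered and triggerable
    solely on that player's side; teammates see nothing."""
    return (player.hero == beat.hero
            and player.story_progress.get(beat.hero, 0) == beat.progress_index
            and beat.dungeon in (None, current_dungeon))

# Korrus is real; the other names here are invented for illustration.
korrus_vases = StoryBeat(hero="Korrus", progress_index=2)
me = Player(hero="Korrus", story_progress={"Korrus": 2})
teammate = Player(hero="SomeOtherHero", story_progress={"Korrus": 5})

print(beat_is_active_for(me, korrus_vases, "some_dungeon"))        # True
print(beat_is_active_for(teammate, korrus_vases, "some_dungeon"))  # False
```

Checking the progress index as well as the hero is what prevents spoilers: a teammate who also plays Korrus but is at a different point in his story simply doesn't get the beat.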

One of the more fun types of story events are elite enemies. Some heroes need to fight specific characters to progress their story. To make this work, we’ve made special elite versions of some enemies that only trigger as part of someone’s story. The party of 3 players fights the elite enemy together. The nice thing here is that to your teammates, it’s just another elite enemy with some special skills (we have plenty of those outside story moments as well), while to you this specific enemy is a story moment and triggers dialogue. So this is a coop battle that is also a single player story moment. Quite neat, I think!

An example of a story beat in which we encounter an elite enemy.

Unlocking new heroes to play is a big part of Blightbound’s progression system. Here too a goal was to add some variation. Many heroes are simply found and rescued from a dungeon, but some others can only be unlocked by paying the merchant, by making your village prosper, or by finishing the story of another character.

For the simplest case, where a hero is rescued from a dungeon, we again ran into the problem of this being a coop game. My teammate might already have saved the hero that I’m rescuing, so how does that work? To solve this problem, we’ve introduced the concept of survivors. These are basic characters that you can’t play. When you rescue a survivor, you get a little prosperity bonus to your village and that’s it. Now whenever one player is unlocking a hero by rescuing them in a dungeon, the other players see this as rescuing a generic survivor. This way it never looks like a teammate is reviving thin air: there’s always someone lying on the ground, ready to be saved. It might just happen to be a different character for each player.

Unlocking a hero compared to rescuing an anonymous survivor.

One more nice trick that I would like to mention is one that we’ve so far unfortunately not finished implementing (Blightbound is still in Early Access on Steam). Since levels have these areas that are ideal for triggering story beats (no combat, some space for exploration), it would be nice to do something with that even when there’s no story beat for your specific character triggering in that run. So Roderick (the writer on Blightbound) has written a bunch of dialogues and banter that add some world building and personality building, but aren’t tied to any moment in the story. This way we can always trigger something. Since this is really a non-essential bonus feature we haven’t gotten around to actually implementing these lines yet, but I really like this feature so I can only hope that we’ll find the time to add this at some point.

Before concluding this blogpost I’d like to also have a short look at the implementation. With 21 heroes all having story moments and shouts and banter, Blightbound has a LOT of voice lines. There are more than 100 story events and over 6,000 voice files, ranging from short barks to several sentences. Working efficiently with such large volumes requires some proper tools, so Ronimo programmer Jeroen Stout (known for having made Dinner Date before joining Ronimo) implemented a system he calls Voice-A-Tron.

Voice-A-Tron was tailor-made for Blightbound and it’s a very neat tool that provides a bunch of features, including:

  • A spreadsheet that automatically presents all lines in two formats: per character and as it appears in dialogue (so alternating between lines of different characters).
  • Some simple scripting rules that allow defining dialogues, what portrait to use, where to trigger context-sensitive responses, triggering animations and even triggering related achievements and unlocks.
  • A system that automatically executes these scripts in-game, so that our game designers need to do less work to implement story beats.
  • Special objects that can be placed in levels to control where and when story beats are triggered.

Using Jeroen’s Voice-A-Tron system, Roderick Leeuwenhart wrote and defined all the story beats. Roderick is the writer for all Blightbound dialogues, as well as a series of short stories set in the Blightbound universe. The next step was that Ronimo game designer Thomas van der Klis did the actual implementation, for example defining where story events can trigger and creating special items and elite enemies.

The result of all this work is that in Blightbound, each character has their own story beats and these are told without interrupting the flow of gameplay and without spoiling teammates. At the same time the system is simple enough that it was doable to implement for all 21 heroes in Blightbound. While Blightbound isn't a storytelling game at its core, I feel this adds a lot of personality and purpose to the experience.

Sunday, 6 December 2020

Softening polygon intersections in Blightbound

Our new game Blightbound features many types of foggy effects: mist, dust, smoke and more. Such effects are often made with planes and particles, allowing us to draw fog and effect textures by hand and giving us maximum artistic control. However, one issue with this is that the place where fog planes intersect with geometry creates a hard edge, which looks very fake and outdated. My solution was to implement depth fade. This is a commonly used technique for soft particles, but we use it on lots of objects, not just on particles.

In today’s blogpost I’ll explain how depth fade rendering works. I’ll also show just how widely this technique can be applied, by going through a bunch of examples from Blightbound.

First, let’s have a look at what problem we’re trying to solve here. When putting partially transparent planes in the world, the place where they intersect with other objects creates a straight cut-off line. Sometimes that’s desired, but often those transparent planes represent volumetric effects. They’re not supposed to look like flat planes, but that’s just the easiest and most efficient way of rendering them. This is fine when there are no intersections, but when there are, the hard lines where they touch other objects break the volumetric illusion.

There’s a simple solution for this that’s used in a lot of games: depth fade. The idea is to simply fade out the plane near the intersection. This produces an effect similar to how real fog works: objects that go into the fog seem to smoothly fade out. However, actually figuring out all polygon intersections takes too much performance, so we want a rendering trick instead.

This screenshot from Blightbound shows a fog plane just above the ground. In the top image it is rendered in the standard way, resulting in hard intersections with the characters, rocks and cart. At the bottom the intersections are smoothened by depth fade.

The trick to rendering with depth fade is to first render all normal geometry, excluding any transparent objects. This fills the depth buffer, so for every pixel we know what distance it has from the camera. Then when rendering the objects that need depth fade, the pixel shader looks up the distance in the depth buffer and compares that to its own distance. If these are close to each other, then we assume that we’re near an intersection and fade out this pixel. The nearer, the stronger the fade out, until the object is entirely invisible at the point of intersection.
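In shader terms, the per-pixel computation is just a subtraction and a clamp. Here's a minimal sketch, written in plain Python rather than actual HLSL/GLSL, and assuming both depth values have already been linearized into camera space (a real shader must first linearize the raw depth-buffer value); the function name and parameters are my own.

```python
def depth_fade_alpha(scene_depth, fragment_depth, fade_distance):
    """Extra alpha factor for a transparent fragment: 0.0 right at
    the intersection with opaque geometry, ramping up to 1.0 once
    that geometry is at least fade_distance further away."""
    # How far behind this fragment the opaque geometry sits.
    diff = scene_depth - fragment_depth
    # Clamp to [0, 1], like saturate() in a shader. A negative diff
    # means the opaque geometry is in front of the fragment, where
    # the depth test discards it anyway.
    return max(0.0, min(diff / fade_distance, 1.0))

print(depth_fade_alpha(10.0, 10.0, 0.5))   # 0.0: touching, invisible
print(depth_fade_alpha(10.25, 10.0, 0.5))  # 0.5: halfway faded in
print(depth_fade_alpha(12.0, 10.0, 0.5))   # 1.0: fully visible
```

This factor is multiplied into the fragment's regular alpha. The fade distance is effectively the inverse of a density setting: a denser fog corresponds to a shorter fade distance and thus a narrower smoothing band.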

This technique has a few neat bonus features on top of just smoothing out intersections. By simply setting the distance over which the fade occurs, we can modify the density of the mist. Also, objects don't need to actually intersect with the fog plane to get depth fade applied. Being just beneath the fog plane also makes the effect visible. Thus depth fade is more than just a way to smoothen intersections.

The fog's density setting determines the width of the smoothing of the intersections. At a very high density the smoothing is almost lost. At a very low density the fog almost disappears because the ground is now also considered 'close' to the fog plane.

While this technique is traditionally mostly used for particles, it can easily be used for all objects with transparency. Since the world of Blightbound is covered by the blight (a thick, corrupting fog) we have a lot of types of fog in our game, including many fog planes and particles, as well as smoke and special effects. Our artists can apply depth fade rendering to all of those, not just to particles.

Depth fade is also great for hiding the seams of moving objects, like this fog wall.

A nice property of depth fade is that it doesn't cost all that much performance compared to traditional alpha blending. For each pixel of a particle or fog plane that we render, it costs one extra texture look-up (in the depth buffer) and a distance calculation. Compared to more advanced volumetric techniques, like voxel ray marching, that's a very low price. Since the performance impact of depth fade is low, our artists can use this technique on many objects, not just on the few that really, really need it.

Depth fade can also solve problems with camera-facing glow planes. The glow on this torch is always oriented towards the camera, but that makes it intersect with the wall behind it at certain angles. Using depth fade, the intersection can be hidden. This animation alternates between with and without depth fade.

When I implemented depth fade, I thought I was being pretty clever: I had only ever seen this technique used for soft particles, not for generic object rendering. However, while searching the web a bit to write this blogpost, I found out it's actually a standard feature in the Unreal engine. For Unity I only found the option on particles, but it might exist in a more generic form there as well.

Now that we know how depth fade works, let’s have a look at a bunch of example uses from Blightbound. Special thanks to my colleague Ralph Rademakers, who made most of the levels and is thus the prime user of depth fade in Blightbound. Ralph gave me a nice list of cool spots to show:

A compilation of examples from Blightbound where depth fade is used to great effect, showing both with and without depth fade.

Another application of depth fade is to hide seams of VFX with the world. In this example the smoke effect intersects with a black ground plane.

When I initially implemented depth fade in Blightbound, I thought it would mostly be used on fog planes that float just above the ground, to give the impression of heroes walking through a low hanging, milky fog. As soon as our artists got hold of this technique however they started using it on tons of other objects. This is to me one of the most fun parts of building graphics tech: seeing how much more artists can do with it than I had originally imagined!

Wednesday, 25 November 2020

Arcane Glamour LIVE NOW! Blightbound's biggest update yet!

Today we've launched our biggest update for Blightbound on Steam yet! I'm really proud of what we've achieved with this one, since it not only adds a lot of cool stuff, but also makes big improvements to existing things in the game. Menus have been overhauled, the tutorial has been improved and many minor issues have been fixed. Also, some features that limited players and didn't really achieve their goals have been removed: blight, notoriety and limitations on character select.

Our previous major update came out less than two months ago and I think it's pretty impressive how much we've cranked out in that time. Now for the most important thing: I hope our players will like the new changes and improvements!

The new character in this update, Roland of Stendhall, also comes with a short story about how he joined the refuge. I had a lot of fun reading just how vain Roderick Leeuwenhart has made him. You can read the story (as well as the other ones) here: I, Roland of Stendhall. I'm especially fond of Roderick's invention of the word magesplain: "The geologist saw all sorts of difficulties in this plan and was keen to magesplain them to me."

You can find the full list of changes in the patch notes.

Sunday, 15 November 2020

5 years below minimum wage: the financial history of Ronimo

Starting your own game company is fun and exciting, but it’s also challenging. It takes courage and skill, but above all: patience and perseverance. Some become successful quickly, but in many cases it takes years to achieve financial success and actually make a decent living out of your own game company. It might take a long time to make your first product or get your first deal, and that first achievement might only be a stepping stone towards a next step that brings financial stability. Today I’d like to show an example of just how long that can take by sharing the financials of the first five years of Ronimo, the company I co-founded with 6 friends nearly 14 years ago.

TLDR: It took us 2 years to make any money from Ronimo at all, 4 years to earn (almost) our country’s official minimum wage and 6 years to receive a more decent monthly salary from our own company. During that period we were near bankruptcy twice. Why did it take so long? Read on and you shall know!

A note before I continue: this blogpost is partially about how long it took us to “make a decent living”, but the cost of living differs hugely per country. The Netherlands is a wealthy country so the cost of living is relatively high. A quick internet search shows that the cost of living is much higher in some countries and only half as high in others. Since most revenue is worldwide, the same sales might mean financial stability in one place, but not enough to pay the rent in another.

Also, for anyone used to reading US dollars instead of euros: if you just replace the € sign with a $ sign, you’re in the right ballpark (especially given that the exchange rates between dollars and euros have varied a lot over the years).

In our second year of studying at the Utrecht School of the Arts our classmate Fabian Akker brought up the idea of starting a company together, with a group. Around that time we had done a couple of school projects that had failed quite miserably, so my first thought was: “we suck, let’s not.” However, the third year was to bring the first major game project, so we figured that if we could make something awesome there, then maybe we could also start a company making our own games.

The resulting game was De Blob: a huge success! We put it online and got attention from gaming press and even had some publishers contacting us, wondering whether they could buy the rights to De Blob.

(Note that De Blob was not exactly made by Ronimo: of the 9 students who made De Blob, only 5 were part of the 7 founders of Ronimo.)

Convinced by De Blob’s success, we decided to really go through with starting our own company. However, each of us still had to do a 7-month graduation project. We combined them and made starting Ronimo our graduation project. Getting school to approve of that was a bit of a struggle, but once they did, we even got our own office inside school.

At the time, 'indie' as it's known today hardly existed and we had definitely never heard of it. We thought the only way was to make retail games and that required funding from a publisher. So we set out to make a pitchable prototype: Snowball Earth. This was intended to be a Nintendo Wii game and we hoped to find publisher funding once we had graduated. At this point Ronimo didn’t make any money at all, but that was okay since we were still students.

September 2007. We were so focussed on pitching to publishers that we did our graduation stuff on the side and crunched for what came a few weeks later: Games Convention in Leipzig, Germany! There we pitched to at least a dozen publishers. Some were interested and continued conversations with us afterwards. Hopeful, we continued work on the game, looking to improve it and increase our chances of signing a deal.

Our very first presentation was for a then pretty famous person from a big company. He was so excited that… he fell asleep during our presentation. Jet lag. Or disinterest. Or both. When we woke him up, he proceeded to try to sell us his own middleware and hardly looked at our game.

By this time we had graduated and had moved to our own office in Utrecht. A very small office for 7 people, but it was cosy and exciting. What we didn’t have, however, was money. We did some minor work-for-hire jobs, but since we weren’t fully committed to that, we hardly made any money there. Just enough to pay the rent of our office, but definitely not enough to provide ourselves with any income.

This is something I've seen quite a lot: studios who want to make their own games and finance that with work-for-hire rarely succeed at both. Either they hardly make any money from the work-for-hire, or they spend so much time on that that they can hardly focus on their own game. Often the result is that the game takes many years to build and turns out mediocre because of the lack of focus and time. The reason for this is simple: doing work-for-hire well and making it lucrative is hard and it's rare for that to work as an aside, especially for inexperienced recent graduates.

So, we didn’t make any money and we weren’t students anymore. How did we not starve? This varied amongst the founders. First of all, in September 2007 we managed to sell all the rights to De Blob to THQ, a then major publisher that’s now defunct. (Note that THQ Nordic is a different company that later bought the rights to the name and games of THQ, including De Blob.) For this we were each paid a nice amount (can’t disclose it due to NDA unfortunately), enough to pay the rent for quite a while. However, only 5 of the 7 Ronimo founders were part of the De Blob team, so 2 others didn’t have this.

Six of the founders had an additional source of income: the now defunct WWIK government subsidy. This paid recent art graduates around €600 per month. That’s less than half of the official minimum wage in the Netherlands at the time, but enough to not starve. To live cheaply, three of Ronimo’s founders rented an apartment together with one more person.

I personally didn’t get WWIK because I had some savings and thus didn’t qualify, so I went even cheaper: I kept living with my mum until I was 26 years old. I have a lovely mum though so I totally didn’t mind. Thanks, mum!

This is also a good moment to mention how privileged we are to be doing this in the Netherlands. In many places in the world all of this would have been much harder.

So, how did the pitching go? A few publishers were interested and one even flew over to do due diligence: judging whether we would really be able to make the full game. In the end none of them actually offered us a deal because Snowball Earth was too unique and we were too inexperienced to be trusted with that much money. We were asking for €1.5m development budget. Not much for the big game we envisioned, but definitely too much to give to a bunch of students who had so little clue about business and production processes.

Snowball Earth was too big a game to finish without funding, so we ended up cancelling it altogether. Years later we did release our prototypes, which you can still find here together with videos and screenshots.

What next, then? By this time indie was on the rise and we had managed to get Nintendo Wii devkits. We decided to make something small that we could finish and publish ourselves: Swords & Soldiers for WiiWare (the predecessor of the current Nintendo eShop).

We estimated we could make this game in 3 months. One year later, we finished and launched it. I’m still impressed that we managed to make something of that size and quality in just one year, and I’m even more impressed that we were stupid enough to think we could make something like that in just 3 months...

Throughout this year we still didn’t make any money, but we did hire interns. In the Netherlands internships are a standard part of many schools and are not paid like normal jobs. So for only €200 per month we could have a game student work for us full-time. Despite that low compensation, those interns were getting more money from Ronimo than we were! On average, we had 2 or 3 interns at a time helping with development.

In May 2009 Swords & Soldiers launched on Nintendo Wii. It got critical acclaim, reached the #1 selling spot on WiiWare in Europe and #3 in America. In total it sold 30k copies and made €146k during the first year (and very little on WiiWare afterwards). A big success for us at the time, but not that much money in retrospect.

In August 2009, after 2.5 years of working full-time with seven people, we were finally able to pay ourselves a monthly income. A whopping €600 per month! Oh wait, that’s super little… but it certainly felt like a big step forward!

Something we hadn’t realised yet at the time is the importance of porting our games to different platforms. That is, until Sony offered us money to make a PlayStation 3 port of Swords & Soldiers, including multiplayer.

To make this port and continue work on our next game OMG Space! (which would later be renamed to Awesomenauts) we needed more programmers. Up until this point I had been the only programmer at Ronimo (besides interns) and that wasn’t enough to make a port and a new game. We hired two programmers. Unlike us, those coders were paid real wages (though not very high ones). And so, while we the founders finally made more than our interns, we instead now had employees who made way more than us.

In September 2010 Swords & Soldiers released as a downloadable game on PlayStation 3. Unfortunately it didn’t break even, so the only money we made from this was the initial porting budget we got from Sony.

Now that we realised that porting is a super important source of revenue, we also ported Swords & Soldiers to Steam and released that in December 2010. Making a port is only a fraction of the effort of making a full game, and every new platform is a new roll of the dice: a new chance at success. And indeed, while the PS3 version hardly sold, the Steam version would make us €120k in its first year and €35k in its second year.

Now that we had employees and paid ourselves a little bit, we had significant monthly costs. Too much to carry ourselves, so we were looking for a publisher for Awesomenauts. Near the end of 2010 this was becoming dire: we were only a few months away from being out of money altogether.

We were saved when we signed a publishing deal with DTP (yet another company that doesn’t exist anymore). The total development budget we got from them was €300k. Not much for a game of this size, but it was a lot for us! As is common, we received that money spread out over milestones and not all at once.

Awesomenauts was a very ambitious project, with complex multiplayer and simultaneously launching on two platforms that were new to us (Xbox 360 and PlayStation 3). We needed more programmers to pull that off. Good thing the budget we got from the publisher allowed us to grow a bit more. In the first half of 2011 we hired two more programmers and a producer, bringing the team’s total size to 12 full time employees. On top of that we usually also had 3 or 4 interns working with us.

The funding also allowed us to finally pay ourselves almost minimum wage: €1400 per month. Still less than our employees got, but at least we felt like we were finally making real money.

In March 2012 we managed to secure some additional income: Swords & Soldiers was included in the Humble Android Bundle and this made us €37k. Towards the end of the year it got included as a bonus in another Humble Android Bundle, making us another €10k.

This money was needed desperately, since Awesomenauts had seen numerous delays at this point. I don’t remember the exact original planned release date, but I think in total the console release got delayed by around half a year. The publisher didn’t give us extra budget for that, so we had to make do with the money we had.

In May 2012 Awesomenauts finally launched on Xbox 360 and PlayStation 3. But not before our publisher DTP went insolvent a mere week before launch. This made everything extremely complex and we didn’t know whether we would see any royalties at all. We got lucky: we had some unreleased DLC they wanted so we managed to strike a deal with the trustee for the insolvency so that the DLC would be released and we would still get royalties.

Nevertheless, Awesomenauts initially didn’t sell all that well on consoles and it took a long time before we got any royalties at all. We were nearly out of money but had one more card to play: a Steam port of Awesomenauts. Finances were so tight that we couldn’t pay ourselves anymore for a short period. We continued to pay our employees though, so only the founders were hit.

Then in August 2012 Awesomenauts launched on Steam and this version turned out to sell way better than the console versions. We were saved! And we had gotten lucky again with our publisher: since DTP was insolvent, they couldn’t pay for development of the Steam port of Awesomenauts, and thus we got the full rights to that version.

Awesomenauts kept doing very well so we supported it for 5 more years with tons of additional content. It also allowed us to finally switch what type of company we were: we switched from being a V.O.F. to a B.V. These are Dutch legal terms so let's not go into the details here. What it comes down to, is that a V.O.F. is strongly tied to the owners’ personal finances. If the company goes bankrupt, so does the owner personally. Being a B.V. is much safer, since now the company can go bankrupt without giving creditors the right to come after your personal belongings as well.

Being a B.V. does come with a requirement here in the Netherlands: unless you have good reason not to, the company needs to pay the active owners at least €2300 per month (after taxes). So in February 2013, six years after we started the company, we finally started to make a wage significantly higher than minimum wage. And even then it wasn’t that much: this excludes some insurances that are standard for employees but not for owners, and for me personally as a programmer: I’m pretty sure I could have made more had I worked elsewhere.

As far as I can tell, most game startups take several years to become financially successful. It might have taken us longer than most, but we made it at all and that’s already special. In fact, since the 'indiepocalypse' a few years ago, most people who start a game company never manage to make a living at all (as I've previously said: the future of indie is amateur). With Ronimo we were lucky that we happened to start our company at a time when indie was hip and happening and it was relatively easy to reach players. Today competition is much tougher than it was when we started, so the chances of success are lower as well.

What’s the moral of this very long story? It’s simple: to start a game company, you need not just skill, vision and bravery, but also perseverance and a willingness to make little money for a long while.

Friday, 6 November 2020

Combining 2D and 3D in Blightbound's VFX

An important focus during development of Blightbound was that we wanted to achieve a 2D look but have 3D movement and a 3D camera. A particular challenge was special effects: we had tons of experience with 2D special effects, but now the special effects also needed to communicate depth. For example, how far does that area-of-effect damage reach exactly? In this blogpost I will show a number of tricks we used to combine 2D and 3D in the Blightbound VFX.

This blogpost is based on a conversation I had with Ronimo VFX artist Kees Klop. Unfortunately Kees won’t be staying at Ronimo: for budget reasons we can’t keep him on after November. So, if you’re looking for a stellar VFX artist, be sure to send him a message on his ArtStation or by email.

Let’s start by having a look at the Gravity Well skill that some of the mages have. This is a spell that draws in all the enemies near it, making for an excellent combo with area-of-effect attacks by the Mage’s teammates. Like many of the VFX in Blightbound, this effect combines several types of geometry. Here's what it looks like, and a breakdown of some of its elements.

A video of the Gravity Well effect in-game and in our animation editor.

The next effect I would like to discuss is the Deck of Daggers skill that some rogues have. Several knives are thrown, and those that hit get stuck in the enemy for a while, causing damage over time. These knives are rotated out of the enemy's plane to add depth. It's a subtle effect on its own, but having many such subtle 3D effects throughout the game adds a lot of depth to the 2D drawings in total.

The Deck of Daggers effect or, as Triss would say: "I'm a fan of knives!"

Also, our animation tech helps here: since Blightbound plays skeletal animation in real-time (as opposed to rendering it down to spritesheets as we previously did), objects like these knives can really stick to a body part and move along with it. While that's an obvious option to have on 3D characters, it's a lot less common to be able to do this with 2D animation without using a rather stiff animation technique. Our animation workflow however is a topic for a separate, future blogpost.

One of the most screen-filling effects in the game is the victory at the end of a dungeon, after defeating the dungeon's boss. Here we see a combination of many elements, including one that's not used often in our special effects because it's so all-encompassing: colour grading. The colours of the whole scene get changed during this effect.

The victory effect in-game and in our animation editor. Blight (a corrupting mist) is an important theme in Blightbound. Since this effect marks the end of a successful run, it sucks in and dissolves a lot of mist.

Many special effects in Blightbound are made 3D by overlapping flat planes under various angles.

A lot of the visual effects also feature bits of frame-to-frame animation. Many of those were made by Ronimo while many others were taken from the RTFX Generator pack, including this particular one.

The warrior's Warcry ability shows another technique for combining 2D and 3D. Warcry is an area of effect ability that buffs teammates. Since the range of the ability is very important, the starting point here is a circle on the ground. However, that's very flat and becomes less readable when sticking through things like grass or small stones. So to make the effect more 3D, a vertical cylinder with an animating swirl is added.

The Warcry skill in-game and in our editors.

Here's a little teaser: update 0.5 (coming this November) adds the new Tamed Wolf sword to Blightbound. A special perk of this sword is that when the warrior uses his shield, his teammates are also shielded. This effect visually overlaps with the Warcry skill. As you can see here, this shield effect makes the verticality above the circle even stronger.

On to the assassin's Chakram ability! This is a large projectile that flies, hangs still for a while and then comes back. Unlike most other projectiles it's visible long enough that it's much more than just a flash. Also, it deals damage in an area of a meter or two, so it's quite large.

The Chakram skill in Blightbound.

The Chakram is a circular thingy that flies horizontally. With the relatively low camera of Blightbound it was seen at too extreme an angle, making it look too flat. We didn't want to turn it into a 3D model though, since we wanted to maintain the 2D, handpainted feel of the graphics. The solution is simple: the Chakram is tilted towards the camera a bit. This is subtle enough that it still feels like the Chakram is horizontal, but it adds just enough angle that it looks a lot less flat. This is a technique that we use a lot in Blightbound.

While the main graphic of the Chakram is entirely flat, it's made more spatial by animating the pitch and adding particles and swooshes that move out of the plane.

The special effects shown in this blogpost were all made by Kees Klop, whose work I wanted to celebrate today. However, he is not the only special effects artist who worked on Blightbound: Koen Gabriëls and Luuk van Leeuwen also made a lot of VFX. Koen did most of the early work of establishing the style for the Blightbound effects. Currently Koen and his intern Ayrthon van de Klippe are working on making VFX for upcoming Blightbound updates.

Finding the right combination of 2D and 3D in the VFX for Blightbound was quite a search during early development of Blightbound. Our artists ended up combining a lot of different techniques, including hand-drawn art, frame-to-frame animations, meshes, particles, intersecting planes, screen distortions and even colour grading. I think the end result works really well in-game: it looks good, fits the style and communicates the gameplay well.

Saturday, 24 October 2020

Bending Blightbound's world to lower the horizon

In 3D games perspective is often treated as a given; a law of nature. But it doesn’t have to be that way: with some shader trickery or clever modelling, perspective can be manipulated to achieve certain compositions that may not be realistic, but look more interesting and are still convincing to the player. One such example is how we kept the horizon on screen in our new game Blightbound by subtly bending the world.

At Ronimo we come from a world of 2D games. In 2D, composition can be whatever you like. That’s why our art director Gijs Hermans may sometimes want to ignore standard perspective rules and instead look at what he wants to achieve visually. So early in development Gijs came to me and said he wanted the camera to look down quite a bit, but still have the horizon in view. In fact, he wanted the horizon to be quite a bit below the top of the screen. His reasoning was that visuals look much better when you don’t see just the floor most of the time. The position of the horizon is an important tool for shaping a composition.

The origin of this request is a clash that often happens in game development: pretty visuals versus gameplay clarity. Our artists spend a lot of time on achieving both goals simultaneously. A very successful example of this is the way gameplay objects and backgrounds are drawn in a different style in our previous game Swords & Soldiers 2, as I described in this blogpost.

In a game where depth matters, like Blightbound, a low camera is problematic because it makes it difficult to see whether you are standing in front of an enemy or behind them. A high camera solves this, but a high camera removes the horizon from view, making the image a lot more boring.

When Gijs came to me with this request, I thought of two possible solutions: either give the camera a wider field-of-view, or bend the world to move the horizon down. We tried the easiest solution first: wide field-of-view. However, it turned out this needed to be set so wide that the entire perspective looked skewed. Extreme field-of-view often isn’t very pretty and it definitely wasn't in Blightbound.

The alternative I came up with is bending the world down the further it is from the camera. This is an effect that’s used in a bunch of games to create a sense that the world is very small, making the world feel cutesy and funny. However, Blightbound is intended to be a dark fantasy game, definitely not something cute and funny, so we didn’t want anything that extreme. I figured that with some tweaking it might be possible to achieve a more subtle version of this that still keeps the horizon in view but doesn’t have the funny vibe.

My implementation of this effect is quite simple. In the vertex shader I bend the world down depending on the Z-position of the vertex in the world. The nice thing about implementing it this way is that our gameplay code and level design tools can assume a flat world, making them a lot simpler. The bending only exists during rendering, so gameplay logic doesn't need to take it into account.
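As a rough illustration, a vertex transform like that could look something like this minimal Python sketch. The quadratic falloff and the `bend_start`/`bend_strength` parameters are assumptions for illustration only; the actual Blightbound shader isn't shown in this post.

```python
# Hypothetical sketch of a world-bend vertex transform. The quadratic
# falloff and all parameter values are assumptions, not Blightbound's
# actual shader.

def bend_world_y(y, z, bend_start=10.0, bend_strength=0.02):
    """Lower a vertex's Y based on how far its world-space Z lies beyond
    bend_start. Vertices closer to the camera than bend_start stay
    untouched, so the gameplay area looks flat; distant geometry curves
    down, pulling the horizon lower on screen."""
    overshoot = max(0.0, z - bend_start)
    return y - bend_strength * overshoot * overshoot
```

In a real shader this would of course run per vertex on the GPU; the Python version just makes the math easy to see.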

A minor challenge in implementing this bend is how to handle lighting and shadows. When the camera moves forward and the world bends, we don’t want the lighting on objects to change, since that would make the bending very obvious and would make the player focus on the backgrounds instead of on the gameplay. Also, objects in the background shouldn't be brighter because they are rotated towards the light by the bend. My solution was to calculate all lighting, shadows and fog as if there were no world bend.

Also, a little technical note: since the bend happens on the vertices, objects need to have enough vertices. A big square plane for the ground with no vertices in between can’t be bent. Occasionally this caused bugs where a small object would float above a big object because the big object didn’t have enough vertices to be bent correctly.

The bend effect is quite fun to see in action when set to an extreme value. However, any kind of geometric deformation is quite noticeable when the camera moves, so we chose to fix the bend in the world instead of letting it move with the camera.

A few different settings for the bend, including the final one used in Blightbound.

As you can see in the video, the bend effect is kept quite subtle in Blightbound. We didn’t want that cutesy/funny effect at all, since this is intended to be a dark fantasy game. Our level artist Ralph Rademakers tweaked the effect and the camera a lot until he got it to a point where it felt like there was no bend at all, just a natural camera. However, if you compare with and without bend, you can see that the bend makes a huge difference in what you actually see. And that’s exactly how it was intended: achieve the desired composition but don’t make it look like anything weird is going on.

And then came the fog! The bend effect was implemented when we hadn’t figured out the lore of the world yet. We didn’t know then that we would want to have so much fog. In fact, the working title of the game used to be AwesomeKnights instead of Blightbound! Once we finally decided on the lore we knew that the world of Blightbound is covered in “blight”, a corrupting fog. To match that, Ralph added a lot of fog to all the levels. This creates a great atmosphere, but… hides the horizon!

Does that make the world bend useless? No, definitely not. It's still used in quite a few levels to change the perspective and have a more horizontal view on the background, even if we can’t see as far as before. It’s a more subtle tool than originally intended, but still a very useful tool.

I think the bend effect we used here is a wonderful example of the kind of graphics programming I enjoy most: looking at what’s needed from an artistic standpoint, and then making tech that achieves that. I’m personally not very interested in realistic rendering: 3D is just a tool to make cool art, whatever the shape or type. The bend technique used here makes no sense whatsoever from a physical standpoint, but it adds to making Blightbound a prettier, more compelling game.

Sunday, 18 October 2020

Screen Space Reflections in Blightbound

An important focus during development of our new game Blightbound (currently in Early Access on Steam) is that we want to combine 2D character animation with high quality 3D rendering. Things like lighting, normal maps, depth of field blur, soft particles, fog and real-time shadows are used to make it all gel together and feel different from standard 2D graphics. One such effect I implemented into our engine is SSR: Screen Space Reflections. Today I’d like to explain what SSR is and what fun details I encountered while implementing it into our engine.

A compilation of places with reflections in rain puddles in Blightbound.

Reflections in games can be implemented in many ways. The most obvious way to implement reflections is through raytracing, but until recently GPUs couldn’t do this in any reasonable way, and even now that GPU raytracing exists, too few people have a computer that supports it for it to be a feasible technique. This will change in the coming years, especially with the launch of the Xbox Series X and PlayStation 5, but for Blightbound that’s too late, since we want it to look good on currently common GPUs.

So we need something else. The most commonly used techniques for implementing reflections in games are cubemaps, planar reflections and Screen Space Reflections (SSR). We mostly wanted to use reflections for puddles and such, so it’s important to us that characters standing in those puddles are actually reflected in real-time. That means that static cubemaps aren’t an option. A pity, since static cubemaps are by far the cheapest way of doing reflections. The alternative is dynamic reflections through cubemaps or planar reflections, using render-textures. These techniques are great, but require rendering large portions of the scene again to render-textures. I guessed that the additional rendercalls and fillrate would cost too much performance in our case. I decided to go for Screen Space Reflections (SSR) instead.

The basic idea behind SSR is that since you’re already rendering the scene normally, you might as well try to find what’s being reflected by looking it up in the image you already have.

SSR has one huge drawback though: it can only reflect things that are on-screen. So you can’t use a mirror to look around a corner. Nor can you reflect the sky while looking down at the ground. When you don't focus on the reflections this is rarely a problem, but once you look for it you can see some really weird artefacts in reflections due to this. For example, have a look at the reflections of the trees in this video of Far Cry 5.

So we’re looking up our reflections in the image of the scene we already have, but how do we even do that? The idea is that we cast a ray into the world, and then look up points along the ray in the texture. For this we need not just the image, but also the depth buffer. By checking the depth buffer, we can look whether the point on our ray is in front of the scene, or behind it. If one point on the ray is in front of whatever is in the image and the next is behind it, then apparently this is where the ray crossed the scene, so we use the colour at that spot. It’s a bit difficult to explain this in words, so have a look at this scheme instead:

Since SSR can cast rays in any direction, it’s very well suited for reflections on curved surfaces and accurately handles normal maps. We basically get those features for free without any extra effort.
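As a concrete (and heavily simplified) illustration of that march, here's a hypothetical Python sketch in a single screen dimension. The function name, step count and step size are made up for the example; the real thing runs in a shader against the actual depth buffer.

```python
# Simplified 1D sketch of the SSR ray march: step along the ray and
# report where a sample first goes from 'in front of the scene' to
# 'behind it' according to the depth buffer. All names and values here
# are illustrative.

def march_reflection_ray(origin, direction, depth_at,
                         num_steps=37, step_size=0.1):
    """origin/direction are (screen_x, depth) pairs; depth_at(x) returns
    the scene depth stored in the depth buffer at screen position x.
    Returns the ray parameter t of the first crossing, or None if the
    ray never passes behind the scene within range."""
    prev_behind = False
    for i in range(1, num_steps + 1):
        t = i * step_size
        x = origin[0] + direction[0] * t  # screen position of this sample
        z = origin[1] + direction[1] * t  # depth of this sample on the ray
        behind = z > depth_at(x)
        if behind and not prev_behind:
            return t  # previous sample was in front, this one is behind
        prev_behind = behind
    return None
```

A real implementation would then read the scene colour at the crossing point and use that as the reflected colour.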

A demo in the Blightbound engine of SSR, with a scrolling normal map for waves.

At its core SSR is a pretty simple technique. However, it’s also full of weird limitations. The challenge of implementing SSR comes from working around those. Also, there are some pretty cool ways to extend the possibilities of SSR, so let’s have a look at all the added SSR trickery I implemented for Blightbound.

First off there’s the issue of transparent objects. Since the world of Blightbound is covered in a corrupting mist (the “blight”), our artists put lots of foggy layers and particles into the world to suggest local fog. For the reflections to have the right colours it’s pretty important that these fog layers are included in the reflections. But there are also fog layers in front of the puddles, so we can't render the reflections last either.

The solution I chose is simple: reflections use the previous frame instead of the current frame. This way we always look up what's being reflected in a complete image, including all transparent objects. The downside of this is of course that our reflection isn't completely correct anymore: the camera might have moved since the previous frame, and characters might be in a different pose. However, in practice the difference is so small that this isn't actually noticeable while playing the game.

Transparent objects pose another problem for SSR: they're not in the depth buffer so we can't locate them correctly. Since Blightbound has a lot of 2D animation, lots of objects are partially transparent. However, many objects can use alpha test. For example, the pixels of a character's texture are either fully transparent or not transparent at all. By rendering such objects with alpha test, they can write to the depth buffer without problems.

This doesn't solve the problem for objects with true transparency, like smoke, fog and many other special effects. This is something that I don't think can be reasonably solved with SSR, and indeed we haven't in Blightbound. If you look closely, you can see that some objects aren't reflected at all because of this. However, in most cases this isn't noticeable because if there's an opaque object closely behind it, then we'll see the special effects through that. While quite nonsensical, in practice this works so well that it seems as if most explosions are actually reflected correctly.

Transparent objects like this fire aren't in the depth buffer and thus can't be found for reflections. However, if another object is close behind, like the character on the left, then the transparent object is reflected through that. The result is that it seems as if many special effects are properly reflected.

Having perfectly sharp reflections looks artificial and fake on many surfaces. Reflections in the real world are often blurry, even more so the further the reflected object is from the surface. To get reflections that accurately blur with distance I've applied a simple trick: the rays get a slight random offset applied to their direction. This way objects close to the reflecting surface remain sharp and objects get blurrier with distance. Artists can tweak the amount of blur per object.

However, this approach produces noise, since we only cast one ray per pixel. We could do more, but that would be really heavy on performance. Instead, to reduce the noise a bit, when far enough away I also sample from a low-resolution blurry version of the scene render-texture. This is a bit redundant but helps reduce the noise. Finally, by default in Blightbound we do supersampling anti-aliasing (SSAA) on the screen as a whole. This results in more than one ray being shot per screen-pixel. Only on older GPUs that can't handle SSAA is this turned off.
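The jitter from two paragraphs up can be sketched like this; the uniform distribution and the per-object `roughness` parameter are illustrative assumptions, not the actual shader's math.

```python
# Hypothetical sketch of distance-based reflection blur: each ray gets a
# small random offset, so reflected points further from the surface
# scatter over a wider area and thus appear blurrier.
import random

def jittered_direction(direction, roughness, rng=random):
    """Perturb a 2D reflection direction by a random amount scaled by
    the artist-tweakable roughness, then re-normalize."""
    dx = direction[0] + rng.uniform(-roughness, roughness)
    dy = direction[1] + rng.uniform(-roughness, roughness)
    length = (dx * dx + dy * dy) ** 0.5
    return (dx / length, dy / length)
```

With roughness 0 the direction is returned unchanged; larger values spread neighbouring rays further apart, which reads as blur once many pixels are combined.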

Another issue is precision. For really precise reflections, we would need to take a lot of samples along the ray. For performance reasons that's not doable though, so instead we make small jumps. This however produces a weird type of jaggy artefacts. This can be improved upon in many different ways. For example, if we would render the reflections at a lower resolution, we would be able to take a lot more samples per ray at the same performance cost. However, with how I implemented SSR into our rendering pipeline that would have been quite cumbersome, so I went for a different approach which works well for our specific situation:
  • More samples close to the ray's origin, so that close reflections are more precise.
  • Once the intersection has been found, I take a few more samples around it through binary search, to find the exact reflection point more precisely. (The image below is without binary search.)
  • The reflection fades into the fog with distance. This way the ray never needs to go further than a few meters. This fits the world of Blightbound, which is basically always foggy.
Combined, these give us pretty precise reflections at a cost of 37 samples along a distance of at most 3.2 meters (so we never reflect anything that's further away than that from the reflecting surface).
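As an illustration of the binary search mentioned in the second bullet, here's a small hypothetical Python sketch; the names and the iteration count are assumptions.

```python
# Sketch of binary-search hit refinement: given a ray parameter known to
# be in front of the scene and one known to be behind it, halve the
# interval a few times to pin down the crossing far more precisely than
# extra linear samples would.

def refine_hit(t_front, t_behind, is_behind, iterations=5):
    """is_behind(t) tests whether the ray sample at parameter t lies
    behind the scene; t_front must test in front, t_behind behind."""
    for _ in range(iterations):
        t_mid = 0.5 * (t_front + t_behind)
        if is_behind(t_mid):
            t_behind = t_mid
        else:
            t_front = t_mid
    return 0.5 * (t_front + t_behind)
```

Five iterations shrink the bracket 32-fold, so the refinement costs only a handful of extra depth-buffer reads.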

Any remaining imperfections become entirely unnoticeable since blur and normal maps are often used in Blightbound, masking artefacts even further.

A challenge when implementing SSR is what to do with a ray that passes behind an object. Since our basic hit-check is simply whether the previous sample was in front of an object and the next one is behind an object, a ray that should pass behind an object is instead detected as hitting that object. That's unintended and the result is that objects are smeared out beyond their edges, producing pretty ugly and weird artefacts. To solve this we ignore a hit if the ray dives too far behind an object in one step. This reduces the smearing considerably, but the ray might not hit anything else after, resulting in a ray that doesn't find an object to reflect. In Blightbound we can solve this quite elegantly by simply using the level's fog colour for such failed rays.

By default, SSR produces these kinds of vertical smears. Assuming objects have a limited thickness reduces this problem greatly, but adds other artefacts, like holes in reflections of tilted objects.
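The "dives too far behind" test described above amounts to giving every surface an assumed thickness. A minimal sketch, with an arbitrary thickness value:

```python
# Hypothetical thickness heuristic: a depth-buffer crossing only counts
# as a hit when the ray sample is less than `thickness` behind the
# stored surface, so rays passing far behind thin objects keep marching.

def crossing_is_hit(sample_depth, scene_depth, thickness=0.5):
    """True when the ray sample lies behind the surface in the depth
    buffer, but by less than the assumed object thickness."""
    return scene_depth < sample_depth < scene_depth + thickness
```

This is exactly the trade-off described above: the smearing goes away, at the price of occasional holes behind tilted objects.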

This brings us to an important issue in any SSR implementation: what to do with rays that don't hit anything? A ray might fly off the screen without having found an object, or it might even fly towards the camera. Since we can only reflect what we see, SSR can't reflect the backs of objects and definitely can't reflect anything behind the camera. A common solution is to have a static reflection cubemap as a fallback. This cubemap needs to be generated for each area so that the reflection makes sense somewhat, even if it isn't very precise. However, since the world of Blightbound is so foggy I didn't need to implement anything like that and can just fall back to the fog colour of the area, as set by an artist.

The final topic I would like to discuss regarding SSR is graphics quality settings. On PC users expect to be able to tweak graphics quality and since SSR eats quite a lot of performance, it makes sense to offer an option to turn it off. However, what to do with the reflecting objects when there's no SSR? I think the most elegant solution is to switch to static cubemaps when SSR is turned off. Static cubemaps cost very little performance and you at least get some kind of reflection, even if not an accurate and dynamic one.

However, due to a lack of time that's not what I did in Blightbound. It turned out that just leaving out the reflections altogether looks quite okay in all the spots where we use reflections. The puddles simply become dark and that's it.

For reference, here's the shader code of my SSR implementation. This code can't be copied into an engine directly, since it's dependent on the right render-textures and matrices and such from the engine. However, for reference when implementing your own version of SSR I expect this might be useful.

SSR is a fun technique to implement. The basics are pretty easy, and real-time reflections are a very cool effect to see. As I've shown in this post, the real trick in SSR is how to work around all the weird limitations inherent in this technique. I'm really happy with how slick the reflections ended up looking in Blightbound and I really enjoyed implementing SSR into our engine.