
Direct Game Design

Is it possible for a game to tailor itself to an individual? To adapt its play (or its narrative) to a specific player? To become more suited to the play needs and sensibilities of a given individual? This is the grail that is direct game design. But is it the future, or just a mirage?

The purpose of our psychological and cluster-based explorations of the gaming audience, and indeed the work of Nicole Lazzaro, Katherine Isbister and others, is to understand patterns in play. But models like Lazzaro's Four Keys or our DGD1 are just models. They give us new tools for game design, but this cluster-based approach (although Lazzaro's is derived from direct observation, and even DGD1 is a hybrid of cluster analysis and psychological theory) is a stepping stone, not a destination. Marketing moved beyond cluster analysis into direct marketing - with all the horrors that entails (I'm talking to you, spam). Can game design likewise move into direct game design?

Let me pause to say that direct game design is not an attempt to make all games work for all people. This will never happen. Rather, it is an attempt to find ways to make games work for all people whose play needs and sensibilities are within a reasonable distance of the game's implementation - in other words, an attempt to maximise the audience capture for any given game, and minimise undesired audience frustration. Both seem like worthy goals for commercial games.

There are, in essence, two approaches to direct game design (although they decompose into many variants, of course): firstly, we can have individual games adapt to the player's needs on an ad hoc basis, which I shall term an adaptive gameplay approach. Secondly, we can track a player's play style and needs in a common file, which I shall term a play style passport approach.

Adaptive Gameplay

The barrier to adaptive gameplay is the asymmetry of the data channels between game and player. The game transmits a tremendous volume of data to the player in the form of visual, audio, textual and contextual information. Moreover, by teaching the player the rules and abstractions of its game world, the game enables the player to adapt to it - a situation made possible by the high bandwidth with which the game communicates with the player.

Conversely, the player's capacity to communicate with the game is quite limited. Although we can interpret the player's direct actions (move left, jump and so forth), attempts to guess at the player's underlying motivations for those actions are several orders of magnitude more complex. We could pause to ask the player what they like and don't like, but I hope we can all agree that this is a massively intrusive approach with very little merit. We need to be able to derive information about the player's needs in situ, and this is one hell of a challenge.

For example, the tutorial for Ghost Master was intended to be adaptive. It would observe the actions the player took in context, and provide more information when the player was struggling, whilst automatically fast tracking the player if they showed obvious capability.
There was only one problem with this: it didn't work. Despite the efforts of several talented people endeavouring to make this dream come true, the problems proved insurmountable. Actions the player took by accident were occasionally interpreted as meaningful. Different players learned at different rates, and the game could not automatically detect when learning had concluded, relying instead on the achievement of goal states to demonstrate learning - with the related problem that the game could not identify the cause of a player's misunderstanding or incomplete understanding, and was therefore incapable of supplying the information needed to assist them. Had we had years to develop just the tutorial, we could have made it work, but in the end we had to scale back our ambitions and patch it together as best we could.
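To make the approach concrete, here is a minimal sketch of the kind of pacing heuristic such a tutorial might use. Everything here (the class name, thresholds and return values) is my own hypothetical illustration, not the actual Ghost Master code:

```python
# Hypothetical sketch of an adaptive tutorial pacer: fast-track players who
# reach goals without errors, and offer hints to players who appear stuck.

class TutorialPacer:
    def __init__(self, hint_threshold=3, fast_track_threshold=2):
        self.mistakes = 0             # consecutive actions that missed the current goal
        self.clean_goals = 0          # goals reached without any mistakes
        self.hint_threshold = hint_threshold
        self.fast_track_threshold = fast_track_threshold

    def record_action(self, advanced_goal):
        """Interpret one player action and return a pacing decision."""
        if advanced_goal:
            if self.mistakes == 0:
                self.clean_goals += 1
            self.mistakes = 0
            if self.clean_goals >= self.fast_track_threshold:
                return "fast-track"   # obvious capability: skip ahead
            return "continue"
        self.mistakes += 1
        if self.mistakes >= self.hint_threshold:
            self.mistakes = 0
            return "hint"             # player appears to be struggling: offer help
        return "continue"
```

The weakness described above is visible even in this toy version: an accidental click and a deliberate but mistaken action both increment the same counter, so the pacer cannot tell exploration from confusion.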

To my mind, the clearest sign that we are not yet ready for adaptive gameplay (at least, not ready for anything particularly complicated in this area) is that attempts to create autonomous agents for maintaining preference settings in operating systems have failed. This is a much larger commercial opportunity than adaptive gameplay, and as such has been subject to more research. To the best of my knowledge, this entire research field has failed. (I welcome anecdotes or papers to the contrary!) The problem with these agents is that they need training. They are almost universally based upon neural networks (although there are some rule-based approaches which I believe are even less successful), and neural networks have to be taught everything. Instead of intelligent agents that correctly predict the user's preferences, what resulted were agents requiring so much training by the user that teaching the agent was more work than maintaining the preferences by hand would have been!

An example of the failure of neural networks in this regard can be seen in spam filtering. I use a spam filter to sort the hundred or so messages a day I receive into the twenty meaningful instances and the eighty slices of prime quality spam. But despite the fact that I can tell spam from meaningful email in a heartbeat, my spam filter never quite gets it right. There are always a few that slip the net, even on the highest settings. This is because a neural network is an associative mechanism, and what is needed is a cognitive mechanism. Or to put it another way, we need intelligent AI, and we are still a far cry from achieving this.
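The "associative mechanism" point can be shown with a toy naive Bayes-style filter (purely illustrative, not any real product): it can only score a message by words it has already associated with spam or ham, so a message built from unseen vocabulary carries no signal at all and slips straight through.

```python
import math
from collections import Counter

class ToySpamFilter:
    """A bare-bones associative filter: scores messages by word statistics only."""
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, words, is_spam):
        (self.spam_words if is_spam else self.ham_words).update(words)

    def spam_score(self, words):
        # Log-odds of spam vs ham, with add-one smoothing; positive means spammy.
        score = 0.0
        s_total = sum(self.spam_words.values()) + 1
        h_total = sum(self.ham_words.values()) + 1
        for w in words:
            p_spam = (self.spam_words[w] + 1) / s_total
            p_ham = (self.ham_words[w] + 1) / h_total
            score += math.log(p_spam / p_ham)
        return score

f = ToySpamFilter()
f.train(["cheap", "pills", "offer"], is_spam=True)
f.train(["meeting", "tomorrow", "agenda"], is_spam=False)
print(f.spam_score(["cheap", "pills"]))   # positive: known spam words
print(f.spam_score(["novel", "scam"]))    # 0.0: unseen words carry no signal
```

A human reader recognises "novel scam" as suspicious from its meaning; the associative filter, having never seen the words, has literally nothing to say about them. That is the gap between association and cognition.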

All is not lost, of course. The work of Paul Ekman, for instance, shows that we can derive emotional information from facial expressions (which, incidentally, was a cornerstone of Lazzaro's technique). Through the use of a game camera such as the EyeToy, we might be able to gather more information about the player by analysing facial expressions - effectively increasing the bandwidth of information the game can acquire about the player. This is a task suitable for a neural network, in principle at least. However, we are once again dealing with a technology with broader commercial applications - and that technology hasn't yet been developed. Chances are there are serious problems (including the likelihood that variation between faces requires independent training of the neural networks in question), although they should eventually be soluble.

Even if we can see the player's reactions, though, we still have a long way to go. For instance, if we see the player is frustrated, should we make the game easier? It depends upon their psychological needs. If they are a fiero-seeker (Hard Fun/Type 1 Conqueror), we absolutely do not want to make the game easier, as overcoming the challenge is the reward for players who fit this pattern. In this way we can see that research into psychological patterns of play is probably a prerequisite for implementing adaptive gameplay.
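The point that one signal demands different responses for different players can be put in a few lines. The type labels follow the DGD1 names used in this essay; the policy function itself is my own hypothetical sketch:

```python
def respond_to_frustration(player_type):
    """Hypothetical policy: how an adaptive game might react when its
    (equally hypothetical) sensors report player frustration."""
    if player_type == "conqueror":    # fiero-seeker: Hard Fun / DGD1 Type 1
        return "hold difficulty"      # overcoming the challenge IS the reward
    if player_type == "wanderer":     # DGD1 Type 3: frustration drives them away
        return "ease difficulty"
    return "offer optional hint"      # default: assist without patronising
```

Notice that the sensing problem (detecting frustration) and the response problem (what to do about it) are separate; the second is trivial to code once a model of play tells us the mapping, which is why the psychological research has to come first.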

Of course, simpler solutions for adaptive gameplay are plausible. For instance, a couple of years ago we designed a concept document for a game in which the player is faced with a choice near the beginning: a camera or a gun. This choice determines whether the game will be non-violent with an avatar who cannot be killed (camera), or violent with death and fail-repeat gameplay (gun). (I am omitting the narrative context for brevity, although it is relevant to how this choice would function.) This is a simple adaptive gameplay solution; not perfect, but at the very least interesting and worth exploring. (The back of the box - an overlooked part of the design of a game - would need to foreshadow this choice for optimum effectiveness.)

Adaptive gameplay is therefore still in its infancy. Worth exploring, certainly, but we still have a vast conceptual gulf to cross before anything substantial can be produced in this line.

Play Style Passports

The alternative approach is to learn about the player's preferences over the course of many different games and store this information in a central file, or passport. This approach may also have wider applications to interfaces in general, but it is marginally easier to see how it would work in games (where the passport can be stored in the memory of a particular game console) than in interfaces in general (where the passport would need to be stored by some central information device - probably a mobile phone, since these appear to be emerging as the dominant personal electronic device).

As a very simple example of how this approach would work, every player has a (single) preference for vertical axis orientation in first person view. Some players naturally push up to look up, some players naturally push down to look up. (Some players, like me, can adapt to both relatively rapidly). Once the player has informed the game of her preference, the passport can store this information for all other games to use.

In order for this approach to work, there must be an agreed template of game preference factors. Therefore, the most likely way to implement such a scheme would be for a platform license holder (Sony, Microsoft, Nintendo) to specify such a template, which individual games can then optionally use to personalise their play to individual needs. Each individual user of a console would therefore have their own passport. An extremely minor side effect: your handle is stored in the passport, so high score charts don't have to ask you for your name every time (unless, of course, that is your preference!)

It is possible that the console would include a tool for calibrating a player's passport against the various factors (which would be optional - we should not force the player into rat mazes not of their choosing), but even if this were not used, the passport would simply begin with indeterminate values, substantiating them as and when they become known.
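As a concrete sketch, such a passport could be a simple record whose fields begin indeterminate. The factor names and the `learn` method here are my own illustration, not a proposed platform standard:

```python
# Hypothetical play style passport: per-user preference record where
# None means "not yet known"; games substantiate values as they observe them.

class PlayStylePassport:
    def __init__(self, handle):
        self.handle = handle                # reused by high score charts
        self.invert_y = None                # vertical axis orientation (illustrative)
        self.preferred_difficulty = None    # illustrative factor
        self.tolerates_fail_repeat = None   # illustrative factor

    def learn(self, factor, value):
        """Substantiate a factor once a game observes the player's choice."""
        if not hasattr(self, factor):
            raise KeyError("unknown preference factor: " + factor)
        setattr(self, factor, value)

passport = PlayStylePassport("ihobo")
passport.learn("invert_y", True)   # this player pushes down to look up
```

The unknown-factor check matters: the whole scheme depends on an agreed template, so a game inventing factors of its own would defeat the purpose of a shared passport.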

With this approach, we have come full circle, because in order to construct such a passport it is necessary to know reliable patterns of play, and the most reliable means of investigating this is probably a combination of direct observation and psychologically-motivated studies.

In the Distant Future...

At some point, perhaps not in our lifetimes, there will probably be a single device (or implanted chip or equivalent techno-McGuffin) which records our choices and preferences and narrowcasts this information to other devices according to what we prefer. Don't like receiving calls while you're eating dinner but don't want to have to remember to turn off your phone? No problem. Your preference device will automatically block calls while you're dining, except for cases you have expressly stated. This is fanciful stuff, but it's not hard to imagine.

In the distant future of games, I believe it will be viable for games to tailor themselves to an individual's needs, and for play and narrative to adapt to specific players. I would note, however, that there will always be a place for non-adaptive narrative, as author-driven and player-driven narratives are very different animals, and meet very different needs. That aside, there are no theoretical barriers to adaptation of games to player needs. Direct game design is the future. The only question is: how far in the future?

The opening image is 'Adaptation' by Sheary Clough Suiter, taken from Backdoor Designs. No copyright infringement is intended; I will happily take the image down if asked. The second image is taken from - these images of faces displaying emotions are on sale here. Its use should be considered free promotion for this website; again, I will happily take down the image if asked.



I knew this post would come, I was only uncertain about how far in the future;)

I could ramble on for a book's length on the topic, but here are some condensed points:

1) The "conceptual leap" we need to make for adaptive gameplay is the memetic content paradigm, that is, using recombinant data patterns with implicit autopoiesis. This provides the potential for dynamic content construction and seems to imply evolutionary computing of some sort.

2) Ghost Master probably wasn't a good basis for experimental efforts in adaptive gameplay, given that it was a commercially funded title and adaptive gameplay is indeed uncharted territory. A project that shows promise is Craig Perko's "The Machine City", which takes a basic shoot-em-up design and attempts to make its content and difficulty adaptive. Check it out:

3) While "ad hoc" designs are, in general, on the spectrum from flawed to worse than useless, it is possible to use heuristics in a consistent manner as a content creation tool, mainly for providing constraints on algorithms with large search spaces. A way to make heuristics not "ad hoc" would be to formulate a language which can assemble a wide range of heuristics. The construction of such a language would be a necessary step. Erasmatron uses a similar paradigm to build in-game sentences, but Etron content is non-adaptive.

4) The passport idea has some merit, but strikes me as too homogeneous to be robust. It relies on the assumption that all users will consistently possess a host device, and that their play styles, much less their personalities, are fairly static.

5) The sort of preferences you've listed here are entirely pertinent to a model of gameplay which, I believe, will be obsolete by the time interactive storytelling gets its act together. By then a narrative preference and a difficulty tuning will hinge on the same mechanism, a mechanism which demands some degree of adaptivity to cater to the entire range of participants.

6) Not all adaptivity must be run-time functionality. When I first read "play style passports" my first thought was of the Fireball structure, where the player can essentially grab one to three of three passports down different tracks, each of which provides a distinct sort of gameplay. This is adaptive gameplay at its core, and can be hardwired into the code.

7) "How far into the future?" I have an implementation scheme in mind that would enable any genre, from FPS to the myriad flavors of interactive story, to be built by an aesthetically minded coder, and would allow levels/stages, agents and overall plot direction to be "steered" by the drama manager AI in real time, cued by "memes" the player exhibits along dynamisms coded by the content creator. We have the computational power to have this engine work today; all I need is the money to fund R&D.

It sounds fascinating - although possibly more ambitious than could be achieved without significant funding. I hope you have more luck finding research money than we did for our interactive narrative projects like FreeSpeak. They seemed like such a good idea, but alas, because we could not convincingly demonstrate their relative commercial value, they were ultimately doomed.

Do you think you will have to sell yourself temporarily into corporate servitude to get funds, or are you planning to pursue the soul destroying path of seeking angel investment?

I plan on forgetting about hollywood, if I can do that then I join the international club of people with ideas and capital.

God sends no angels, therefore they must come from below.

I would consider, with tremendous reflection, working for International Hobo, as long as I have freedom to play. I might not get a very good contract to use Erasmatron, so I might be in corporate servitude that way.
