Can genre be used as a skeleton key for learning about how and why people play games? Or is it too vague and variable a system to have any research value?
Pre-requisites: You may need to have read DGD2: How do you play games? to get the most from this post.
I'm still searching for viable options for gathering data for the DGD2 audience model. Case studies are too slow by themselves; we need to get a lot of data together quickly in order to test whatever the initial hypothesis will be, then follow up with case studies. To do this, we need fields of information we can tabulate. I've therefore been wondering whether we can use people's capacity to employ genre definitions freely as a skeleton key for gathering play preference data.
The principle is this: if we ask people to provide a parameterised response to different genre categories (like, dislike, don't care), we will get some sorting effects which will allow us to produce statistical categories once we have sufficient data. As in the past, running a competition seems to be a relatively good way of attracting respondents, especially as this method can draw responses from both Hardcore and Casual players - albeit in an unknown ratio.
Since I am still looking at formulating the hypothesis in terms of Temperament Theory (see here for a briefing on Temperament Theory in the context of DGD2 - it's the same post indexed in the pre-requisites at the start of this piece), a starting point could be to make predictions about the relationships between the skill sets of Temperament Theory (Strategic, Tactical, Logistical, Diplomatic) and genre categories. That everyone uses genre terms slightly differently should not matter, as at a statistical level some coherent pattern should still emerge. (Diplomatic is going to be the problem skill set, as it is much harder to relate to existing games.)
We can then cross-reference this with data gleaned from a hypothetical set of 'micro-games' designed to test the degree to which the player enjoys and/or employs the different skill sets - although this is another problem that needs working on. Alternatively, it may be that the genre method produces sufficiently coherent clusters that it could be used as the basis for a new audience model in its own right (one cannot have too many, in my opinion!)
As a starting point, we need a set of readily understandable genre categories that tell us something about how or why people play games. For example, 'sports' is probably not much help for identifying play style - although it would act as a good test between Casual and Hardcore. However, this is another area I would like to improve upon in DGD2 - a move to a frequency-of-play model rather than a strict 'Hardcore/Casual' split. We might as well try to improve every dimension of the model.
Categories of Response
Before looking at possible genre categories, in what form should we take responses? Several options spring to mind:
- Free Response: gets the richest data, but produces data that cannot be automatically tabulated, so is of lesser value. We could take free responses in addition to a parameterised response, though.
- Simple Scale: perhaps ++, +, 0 (zero), - and -- to indicate a degree of positive or negative response. I favour this over a numerical scale, which is prone to a greater variety of individual response (witness the way some reviewers consider anything below an 80% score to be not worth playing for some reason!)
- Keywords: a choice of categories of response such as 'Favourite', 'Like', 'Okay', 'Dislike', 'Hate'... immediately this looks like the simple scale converted into words. For this to be worthwhile there would have to be some advantage over the simple scale.
I favour the simple scale, but I welcome other perspectives.
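To show what tabulating the simple scale might look like, here is a minimal sketch. The mapping of symbols to integers (-2 to +2), the function names and the sample genres are all my assumptions for illustration, not a settled protocol:

```python
# Hypothetical tabulation of the symbolic scale: each respondent's
# per-genre responses are mapped to integers so they can be averaged
# and compared statistically. The symbol-to-value mapping is an assumption.
SCALE = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

def tabulate(responses):
    """Convert one respondent's {genre: symbol} dict to {genre: int}."""
    return {genre: SCALE[symbol] for genre, symbol in responses.items()}

def genre_mean(tabulated_rows, genre):
    """Average response to a genre across all respondents who rated it."""
    values = [row[genre] for row in tabulated_rows if genre in row]
    return sum(values) / len(values)

# Two illustrative respondents:
rows = [
    tabulate({"Turn-based Strategy": "++", "Puzzle": "0"}),
    tabulate({"Turn-based Strategy": "+", "Puzzle": "--"}),
]
```

Once responses are numbers, the sorting effects mentioned above become a matter of ordinary statistics - means, variances and eventually clusters.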
The goal in defining genres is not to produce a taxonomy but to ensure that the genres listed are understandable to the broadest range of people and, ideally, that some pattern of play style or skill set could be connected to the genre. Therefore, the more formulaic the genre category, the more useful it is.
Here are some possibilities:
- Turn-based Strategy: this seems like a shoo-in for a Strategic skill set indicator, although the Tactical skill set may also apply. I suspect that you must have some Strategic skill set tendencies to enjoy these games, though.
- Real Time Strategy: this is a tricky one. The key pattern of play in most of these games is Logistical, but there are exceptions - and some Strategic or Tactical players might enjoy only the Real Time Strategy games which don't require Logistical mechanisms, making it an unreliable indicator.
- Simulation: just no hope at all, as it's too diverse a genre.
- Sports: again, not much help. Could be used to distinguish between Hardcore and Casual with a mid to low degree of reliability.
- Driving Simulation (e.g. Gran Turismo): this is probably a reasonable indication of Tactical skill set in men, but less so in women. Also, learning the tracks can be very Logistical. The cultural aspect of cars skews its usefulness.
- Racing: could split out kart racers specifically to cut down the cultural effect of cars. Again, Tactical and Logistical play are both possible.
- Puzzle: another genre with such diversity that it's questionable whether we could use it. However, certain games like Tetris and Bust a Move are so widely known that it might be possible to use individual titles as reference points. Because these are the most common non-violent games, the most likely pattern we'd see might be gender based.
- First Person Shooter: highly formulaic; I expect it supports both Logistical and Tactical play.
- Squad Based FPS: I suspect we would lose some of the Logistical play of a typical FPS and lean closer to Tactical and Strategic.
- Survival Horror: this is made tricky by Resident Evil 4 shifting its play away from a focus on the experience (Diplomatic?) and towards gunplay, and Logistical/Tactical play a la FPS (a conscious effort by Capcom to regenerate the brand, no doubt). Silent Hill equally shows different patterns of play with each instalment. Plus, people have different attitudes to horror which have nothing to do with how and why they play games.
- Life Sims: might as well just ask people their opinion on The Sims. It would be interesting to test if this is a chiefly Logistical play experience as we suspect.
- Dancing Games: I presume it is people with well developed Tactical skills (and a bias towards Extroversion) who prefer these games. It could be an excellent reference point.
- EyeToy: probably no skill set implications, but probably a good test for an extroverted play style (like the Type 4 Participant) - we could potentially ask 'EyeToy: played alone' and 'EyeToy: played with friends' for a clearer indication.
- Adventure: the term has become so devalued and vague that I don't think it offers anything useful any more. We could ask about classic text adventures - but then any young respondents would be largely excluded. We suspect these relate to the Strategic skill set, and also potentially Diplomatic.
- CRPG: another tricky one. Hack/Diablo-like games are very Logistical, but the core market for CRPGs appears to be Strategic. There is probably some Tactical appeal in games with sufficiently expressive control mechanics. And the story could support Diplomatic play as well. The diversity of the genre may limit its usefulness.
- Arcade Adventure: suffers from the fact that very few people know what the term refers to!
- Platformer: probably Tactical, maybe with some Diplomatic (they have more co-operative settings than other games).
Having listed a few possible genres, one has to wonder whether it would be more useful to choose a set of 20-50 individual games and use those as the basis for data gathering. There would be less diversity of interpretation. Games from the last ten years should perhaps be chosen to minimise the effect of age on the responses.
If we used individual games, 'Not Played' would have to be added as a response, of course.
It still seems like a vital step in gathering data about skill sets would be to build archetypal games on a simple platform (like Flash) to test how well developed people's Tactical, Strategic, Logistical or Diplomatic skill sets might be. The games would have to be of a style that did not automatically get easier with repeat exposure - although it would be acceptable for the test to only be applicable once, I suppose. Individual variance should disappear once the sample size gets large enough.
I imagine we'd build about 16 micro-games which the player could choose to play for as long as it was enjoyable - then their scoring rates and time spent playing would give us two dimensions of data for each game.
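As a rough illustration of those two dimensions, here is a sketch of how micro-game sessions might be aggregated per skill set. The session format, function name and the scoring-rate calculation are all hypothetical assumptions of mine:

```python
# Hypothetical aggregation of micro-game sessions: each session records
# which skill set the game targets, the score achieved, and seconds played.
# Scoring rate (points per minute) and total time played give the two
# dimensions of data per game proposed above.
def summarise(sessions):
    """Return {skill_set: (mean scoring rate per minute, total seconds played)}."""
    summary = {}
    for skill_set, score, seconds in sessions:
        rates, total = summary.get(skill_set, ([], 0))
        rates.append(60.0 * score / seconds)
        summary[skill_set] = (rates, total + seconds)
    return {k: (sum(r) / len(r), t) for k, (r, t) in summary.items()}

player = summarise([
    ("Tactical", 120, 60),    # 120 points in one minute
    ("Tactical", 90, 90),     # 90 points in 90 seconds
    ("Logistical", 30, 300),  # low scoring rate, but long engagement
])
```

The point of keeping both dimensions separate is that a player might score poorly at a skill set yet still choose to spend a long time with it - enjoyment and proficiency need not coincide.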
This problem is too big to be covered now; it warrants a separate post.
The problem with trying to use game genres as a skeleton key for play styles is that each genre groups together instances which may be superficially similar but which may differ considerably on analysis. For instance, the RTS genre largely consists of highly Logistical games - but a Strategic or Tactical player could have found several games in the genre that they love, causing them to ignore the majority of the titles when assessing at the genre level.
Instead of a genre skeleton key, we might actually do better with a game skeleton key - identifying a set of titles with the maximum chance that the player would have played the game. That some people will like or dislike individual games for purely personal reasons shouldn't matter if the sample size is large enough - the goal is to produce a statistical model, after all.
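To make the statistical goal concrete, here is a minimal sketch of the sorting step, assuming each respondent has been tabulated as a vector of numeric ratings (-2 to +2) over the same fixed list of games. It uses a plain k-means grouping with fixed starting centres purely for illustration; the data and centres are invented:

```python
# A minimal sketch of clustering tabulated ratings into statistical
# categories. Each respondent is a list of numeric ratings over the same
# game list. Starting centres and iteration count are arbitrary assumptions.
def distance(a, b):
    """Squared Euclidean distance between two rating vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, centres, iterations=10):
    """Group each vector with its nearest centre, then re-centre; repeat."""
    for _ in range(iterations):
        clusters = [[] for _ in centres]
        for v in vectors:
            nearest = min(range(len(centres)), key=lambda i: distance(v, centres[i]))
            clusters[nearest].append(v)
        centres = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centres[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two respondents who rate strategy titles highly, one who rates them low:
respondents = [[2, 2, -1], [1, 2, 0], [-2, -2, 2]]
clusters = kmeans(respondents, centres=[[2, 2, 0], [-2, -2, 0]])
```

With real data the number of clusters would not be fixed in advance - the interesting question is precisely how many coherent groupings emerge and whether they line up with the four skill sets.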
If we came up with a list of 50-100 archetypal games that are all still in play circulation, this might in itself give us a new way of looking at what is stereotyped by the Hardcore/Casual split, as we would in effect have a 'game index' - an analogue to stock market indices such as the Dow Jones Industrial Average, the FTSE 100 or the Nikkei 225. The more games the player has played from the 'game index' - the ihobo100, if you will - the further towards the Hardcore stereotype they would be. It could be a useful tool in and of itself; it would probably need a wiki to develop it on, but that could be acquired relatively easily.
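The index calculation itself would be trivial - the hard part is agreeing on the title list. A sketch, with a deliberately short and entirely illustrative stand-in list (the real index would need 50-100 titles chosen collaboratively):

```python
# Hypothetical 'game index' score: the fraction of an agreed list of
# archetypal titles the respondent has played, placing them on a
# Casual-to-Hardcore continuum. This ten-title list is purely illustrative.
GAME_INDEX = [
    "Tetris", "The Sims", "Gran Turismo", "Resident Evil 4", "Silent Hill",
    "Diablo", "Bust a Move", "Half-Life", "Final Fantasy VII", "Super Mario 64",
]

def index_score(played):
    """Fraction of index titles played: 0.0 (Casual end) to 1.0 (Hardcore end)."""
    return len(set(played) & set(GAME_INDEX)) / len(GAME_INDEX)
```

This also dovetails with the frequency-of-play ambition mentioned earlier: a continuous score replaces the binary Hardcore/Casual label.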
Whatever we do, we can't rely on our subjective assessment of the games to relate to skill sets or play styles - it is likely we are going to need the simple Flash games I've proposed to test which skill sets people use or prefer. It's not a very appealing task to work on this problem when we don't have anyone available with the skills to program the test games. Perhaps the next time we have some spare capital we could hire someone to do it - it shouldn't cost much. Alternatively, maybe there's a partner company out there who would be willing to participate in return for the free publicity it would generate.
The bottom line is that we are still in the early days of developing DGD2. We need protocols for data gathering, and until we know what data we can get, it is premature to formulate a hypothesis - although another alternative would be to look at the skill sets and see if a testable hypothesis can be created for each. It won't matter if the hypothesis is eventually disproved - DGD1 was very different from the hypothesis we set off to prove. Being wrong is just as useful as being right when you're building statistical models.
That's roughly where my thoughts on DGD2 lie this month. Thanks in advance for your feedback and opinions!