My advisor, Kurt Squire, has written a new book… and it is awesome. I should probably write a little bit about why it’s awesome, but for now you can buy it here.
I was disturbed when playing “Batman Multiply, Divide and Conquer”. The experience doesn’t make sense. I’m not saying it’s a bad game; some of the side-scrolling parts where you control Batman are fun. But then you get to a point where you have to solve a math problem to move on. Even when there is some attempt to integrate the problem into the game, the attempt seems tacked on. For example, Batman has to solve a math problem to open a door. Being a Batman nerd, I was somewhat insulted. “I’m the goddamn Batman” — so why would I need to solve a math problem to open a door?
The reason this educational game will sell is that it asks players to assume the role of Batman. That makes the simple math problems Batman has to solve stick out like a sore thumb, because they don’t make sense from Batman’s perspective. If Batman had to solve a math problem in any other medium (film, television, comics) it would be a joke, because we assume Bruce Wayne passed elementary math. The challenge doesn’t make sense from the enemy’s perspective either. What supervillain would lock their doors with simple math problems and not expect them to be opened?
Even the reviews of “Batman Multiply, Divide and Conquer” point out the divide between gameplay and content. One reviewer specifically states, “he has to do the math to move on, whether he likes it or not”. In other words, the math is not only disjointed; it actually detracts from the game and is seen as an obstacle between the player and the game. It would be like playing Call of Duty: Black Ops and having to solve a math problem every 3 minutes. In fact, that method might be just as effective.
So, if you have a content goal, how do you tie it back into the game in a way that makes sense and doesn’t seem disjointed? Make it necessary in a way that adds to the experience. I’ll offer the example of Bioshock’s hacking system. The experience one gets from hacking in Bioshock is that hacking is time sensitive, approached as a puzzle, and sometimes doesn’t work. This works perfectly with the tone of Bioshock, which is also time sensitive and dangerous. In this mini-game the idea is to connect puzzle solving with hacking, something that makes sense to those who code. However, if they had actually wanted to teach coding, it would have looked quite different.
Going back to the Batman example, let’s use math in a way that would make sense for the Dark Knight. Batman is a detective, and as such he solves problems. Rather than a math problem, why not introduce a problem that uses math? For example, make the problem a cipher. If addition is what you want to teach, you can make it easy: the message could be scrambled by adding a constant to each letter, and decoded by subtracting the same amount. It’s still a simple concept, but it makes more sense in the context of the game and becomes part of the experience. If only more edu-games would bother mixing the content with the experience…
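The cipher idea is easy to sketch. Here is a minimal Caesar-style version of the add-a-constant/subtract-the-same-constant scheme described above (the function names and shift value are my own illustration, not from any actual game):

```python
def encode(message, shift):
    """Scramble a message by adding a constant shift to each letter."""
    result = []
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # Wrap around the 26-letter alphabet so 'Z' + 1 -> 'A'.
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return ''.join(result)

def decode(message, shift):
    """Decode by subtracting the same constant that was added."""
    return encode(message, -shift)

print(encode("BATCAVE", 3))  # EDWFDYH
print(decode("EDWFDYH", 3))  # BATCAVE
```

A puzzle like this asks the player to actually *do* the addition and subtraction, but frames it as detective work rather than a worksheet bolted onto a door.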
Cost does not necessarily give you grounds to write a bad review; content, gameplay, and the rest of the experience still matter.
Lately, I’ve been upset with the way playtesting and prototyping are received in an academic environment. While most designers appreciate the value of getting something reviewed in its rough form, others do not. I’m a true believer in prototyping and feel that showing off unfinished products early and often saves you time and headaches. Unfortunately, most people are squeamish when it comes to presenting unfinished work. Even worse is when the people you ask to critique your work expect a finished work of art. I’ve experienced both of these recently and have found that working under these conditions and assumptions can be counterproductive and demoralizing. So, what are we supposed to do about it?
Make sure that they know why you’re prototyping or playtesting.
- The primary objective of this whole process is to identify problems with your assumptions and implementation. You’re not supposed to get it right the first time. In fact, you probably won’t get it right. Take this opportunity to reflect on how other people interact with your prototype and listen to their feedback. Chances are, if your testers have problems working with your prototype, or would like something else incorporated, there are also others who share that opinion. If you get a lot of negative feedback try not to see it as an attack on your skills. Instead be glad that you can address the issues early on.
Make sure people know that things are going to break.
- For some reason testers want your prototype to work flawlessly the first time. When playtesting, this demand is unreasonable. So, before you start playtesting, make sure that you tell your testers things are going to break. In fact, tell them how rough the prototype is. If it crashes, tell them. If it uses stock art, tell them. If you haven’t proofread text, tell them. It’s better to paint a very accurate picture of the current state of the project than to have bugs that you already know about come out during the playtest.
Make sure your tester’s feedback feels wanted.
- The testers are your friends. Think about it: if it weren’t for them, YOU would have to test the system. That poses a problem, because you know the prototype. You know the expected input, and you know how you’re supposed to use your system. Chances are, you will not run into the problems that playtesters, who are unfamiliar with your system, will. If you’re looking for feedback about usability, there is no better way to get it than with actual users. For all these reasons, make sure your testers feel wanted! Tell them you’d love to hear what they think, and that they’ll be contributing to a great project.
Hopefully, when you address the misconceptions you’ll get a lot of mileage out of your playtesting.
To me, video games are experiences. Just as we can have good and bad experiences, we can create good and bad games. This is especially true of educational games. While making educational games, most designers seem to miss the forest for the trees. They become so focused on making the curriculum apparent (perhaps because it makes assessment easier) that they forget to design an experience. The result is lackluster and forgettable, comparable to a worksheet with stickers. I want to move beyond that and emphasize both parts equally. I understand that this is a difficult problem, one that brings with it questions of motivation, game design, and storytelling in addition to curriculum construction. I am willing and eager to learn more about these subjects so that I can create rewarding and memorable experiences that help enrich a student’s mind.
When thinking about creating experiences in videogames, especially when creating educational experiences, I can’t help but think of books. A book does not have to insert a certain number of words in order to be considered an educational experience. Even when the overt goal of a book is to teach another subject (for example a textbook) there are still good texts (ones that take the time to scaffold, but challenge the students), and bad texts (ones that simply have a fixed number of examples deemed to be important but that have no context).
Due to my recent transfer to C&I, I’ve had to take a look at my research interests and really investigate what I would like to study. I decided that I would like to focus my energy on trying to answer questions that deal with creating meaningful educational experiences. How can we present educational concepts while maintaining the player’s interest? How can we make educational games that people are excited about and would want to play? How far can we stretch the suspension of disbelief before the ideas become too abstract? How explicit do we have to be about concepts before they can be transferred? I’m sure there are quite a few texts that deal with these subjects, and I’m excited to read more about them.
There is a disconnect when we think about probability and thought. The idea that we not only calculate the probability of events, but do so many times a day subconsciously, can seem quite suspect. Still, it makes sense. If we see that the ground is wet, we immediately deduce that something made it wet. We then use information from our environment to guess the cause. If the sun is shining, there is a strong chance it did not rain. If we see a sprinkler close by, there is a strong chance it could have gotten the ground wet. Even though we do not formally compute P(sprinkler|wet) = P(wet|sprinkler) * P(sprinkler) / P(wet), this formula does yield similar results. It seems that we do something close enough that this model accurately reflects (or at least describes) some of our thought process. This is evident in today’s expert systems, some machine learning algorithms, and computer vision. What interests me more, however, is how we arrive at these prior probabilities, and how often we change them.
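The sprinkler inference can be written out numerically. Here is a minimal sketch of Bayes' theorem applied to it; the probability values are invented purely for illustration:

```python
# Hypothetical priors, purely for illustration.
p_sprinkler = 0.3             # P(sprinkler ran)
p_wet_given_sprinkler = 0.9   # P(ground wet | sprinkler ran)
p_wet = 0.4                   # P(ground wet), the marginal

# Bayes' theorem: P(sprinkler | wet) = P(wet | sprinkler) * P(sprinkler) / P(wet)
p_sprinkler_given_wet = p_wet_given_sprinkler * p_sprinkler / p_wet

print(p_sprinkler_given_wet)  # 0.675
```

With these made-up numbers, seeing wet ground raises our belief that the sprinkler ran from 0.3 to about 0.68 — exactly the kind of revision we seem to make without ever writing the formula down.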
The calculation of some probabilities seems straightforward. “If it is raining, things will get wet” would imply that P(wet|rain) = 1. Yet if something is blocking the rain (like a roof or a tarp), this is not true. The condition of being wet remains the same, but either P(wet) is recalculated dynamically, or we compute not P(rain|wet) but P(rain|wet, tarp). It seems to me that something other than a single application of Bayes’ theorem is being calculated. Do we calculate heuristics? Do we repeatedly nest Bayes’ rule? And how often must we recalculate P(some event)? If we see a purple giraffe after seeing 100 yellow ones, do we change P(purple|giraffe) to 1/2? Or do we treat this as a rare event and still conclude that most (99%) of giraffes are yellow? How many anomalous events would we have to see to change this? How do we decide how much to change this number by? If the event directly affects our well-being, do we change our probabilities more?
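One standard way to formalize the giraffe question is Laplace (add-one) smoothing, under which a single anomaly nudges the estimate rather than flipping it. A sketch, with counts and the two-color assumption chosen just for this example:

```python
def smoothed_probability(count_event, count_total, num_outcomes=2, alpha=1):
    """Laplace (add-alpha) smoothing over num_outcomes possible outcomes.

    A rare observation shifts the estimate a little instead of
    rewriting it wholesale.
    """
    return (count_event + alpha) / (count_total + alpha * num_outcomes)

# Before seeing any giraffes: no data, so purple and yellow are 50/50.
print(smoothed_probability(0, 0))    # 0.5

# After 100 yellow giraffes and 1 purple one:
p_purple = smoothed_probability(1, 101)
print(round(p_purple, 3))            # 0.019 -- nowhere near 1/2
```

Under this model the answer to “do we change P(purple|giraffe) to 1/2?” is no: one purple giraffe in 101 sightings moves the estimate to roughly 2%, and only repeated anomalies would move it further. Whether people actually update this way is, of course, exactly the open question the paragraph raises.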
While reading the assigned papers I was reminded of my first attempt to define intelligence. I was taking an independent study course in artificial intelligence, and I had decided to find out what intelligence was before I attempted to create it myself. The definitions I found were very vague. It seemed everyone had their own idea of intelligence. Needless to say, I did not find a concrete definition.
Perhaps my inability to find a definition was the reason E. G. Boring’s proposal to define intelligence stood out. While I don’t agree with defining intelligence by the measures used in standard IQ tests, I do feel that a more fleshed-out definition would help focus our efforts to create intelligence. The proposed notion that intelligence is both “the ability to adapt to one’s environment” and “the ability to learn from one’s experience” is one I generally agree with. However, as revisionists have noted, this definition does not address certain factors (such as speed) that we associate with intelligence. Furthermore, there are many ways in which people can adapt or learn. Who is to say that one of those is better than any other?
Sure, defining intelligence, and using that definition to create assessments, would have some advantages. With this information we could identify students who may need help and provide them opportunities that may otherwise not be available to them (remedial classes, etc.). Unfortunately, anything that can be measured seems to bring elitism with it. If a standard definition of intelligence were established today, I’m not sure our society would use it altruistically. From an employer’s perspective it may seem easier to fire an employee with a low intelligence score and hire another, more “intelligent” employee. From a school administrator’s point of view it may be very tempting to add an intelligence requirement to the admissions process. I believe that for many this magic number would be terribly influential. The test would not only measure whatever we defined intelligence to be, but could also determine one’s fate.
Perhaps a single number that embodies intelligence wouldn’t bring about such a dystopia. Still, something doesn’t sit well with me about boiling a person’s cognitive ability down to a single number. For that reason I find myself gravitating towards Gardner’s idea of multiple intelligences. Einstein and van Gogh are generally seen as geniuses in their own domains. Is there a magic “g” that would have declared them destined for greatness? To me it seems more likely that each had a different set of skills that, while different, were equally impressive.
It is difficult to think of human intelligence as simply one mechanism that can accurately distinguish about seven inputs. It is also very difficult to think that the only way one can get better is by nesting these mechanisms. I’m not saying that Miller’s magic number has no grounding, but I do wonder if he simply discovered a mechanism humans use when they first come in contact with a new problem. When novices and experts were mentioned, I started to think about what makes one computer program more efficient than another. Given a task, one program is usually deemed superior if it executes faster, if it can keep that speed with a bigger data set, and if it consistently produces accurate results. I would argue that the same criteria can be applied to distinguish an expert from a novice. Experts solve problems faster, can usually handle larger data sets, and are rather consistent in their output (they don’t make many mistakes). In computer programs you generally see a performance boost when a faster algorithm is applied; a substantially modified bubble sort (one of the worst sorting algorithms, short of sorting randomly) does not perform much better than a regular bubble sort. Similarly, I would like to think that experts develop a structure more sophisticated than the seven-input mechanism Miller proposes we use.
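The bubble-sort analogy is easy to check directly. A quick sketch (the input size of 2000 is an arbitrary choice for illustration) comparing a naive quadratic bubble sort against Python's built-in O(n log n) sort:

```python
import random
import timeit

def bubble_sort(items):
    """Classic O(n^2) sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)  # work on a copy
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(2000)]

# Both produce the same result; only the algorithm differs.
assert bubble_sort(data) == sorted(data)

print("bubble sort:", timeit.timeit(lambda: bubble_sort(data), number=1))
print("built-in sorted:", timeit.timeit(lambda: sorted(data), number=1))
```

No amount of micro-tweaking the inner loop changes bubble sort's quadratic shape; the real gains come from switching algorithms — which is the point of the expert/novice analogy: experts seem to restructure the problem, not just run the novice procedure faster.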
I like the idea of using heuristics to find an optimal solution to a problem. When you have many ways to tackle a problem, finding the most useful method will definitely increase your performance on a task. In terms of intelligence, this approach also fits with the idea that intelligence is partially defined by how well we can adapt. The problem I see is the generation of these heuristics. Where do they come from? How do we create them? If intelligence is how well we adapt to a new obstacle, then surely the generation of these heuristics must be an important component. Even with Miller’s idea of chunking, we have to wonder how we create the definition of a chunk. Yes, chunking allows us to process more information faster, but at some point, biologically or by some other means, we must identify the patterns that we can then use as chunking criteria. For some reason I don’t see the addition of dimensions as a sufficient answer.
For my storyboard I chose to represent the task of a player playing King of the Hill offensively. Our group discussed three ways in which a player can play offensively. The first, of course, is to score. In order to score, players must get to the hill and stay there as long as possible; the more time on the hill, the more points you get. The next way to play offensively is to use virtual weapons. There are two types of virtual weapons: bombs and mines. Bombs can be used immediately and affect the surrounding players. Mines can be set at a given location and later detonated from a distance. In all cases players must move to a predefined area, whether to get on the hill or to pick up a weapon. These areas are found by searching for their icons on the in-game map. Weapons are used/detonated by pressing the weapon’s image on the screen.
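The two offensive mechanics above can be sketched in a few lines. Everything concrete here — the point rate, the blast radius, and the one-dimensional positions — is my own illustrative assumption, not part of the storyboard:

```python
POINTS_PER_SECOND = 10  # hypothetical scoring rate

def hill_score(seconds_on_hill):
    """More time on the hill yields proportionally more points."""
    return int(seconds_on_hill * POINTS_PER_SECOND)

def detonate(mine_position, player_positions, blast_radius=2.0):
    """Return the players caught in a mine's blast.

    Positions are simplified to a single axis; a real game would
    use 2-D map coordinates.
    """
    return [p for p in player_positions
            if abs(p - mine_position) <= blast_radius]

print(hill_score(12))                      # 120
print(detonate(5.0, [4.0, 9.0, 6.5]))      # [4.0, 6.5]
```

Even a toy model like this makes the trade-off in the storyboard visible: holding the hill pays continuously, while a mine pays off only when opponents wander into its radius.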