If you’ve been following along, you may have noticed that my first day started slowly, picked up, and then got much better as it went along. The same can be said for the content of the second day. I did not make it to the morning keynote, which looked interesting but not terribly relevant to my work, so I started at rock bottom.
For the first breakout session of the day, the sixth of the summit, I attended Michael and Chen’s Beyond Q&A: Assessment Methods for the Next Generation of Serious Games. Michael and Chen are the co-authors of Serious Games: Games that Educate, Train, and Inform, the book I read on my way out to DC. Unfortunately, this was a round table, so like Prensky they did not prepare a presentation. They did however prepare some questions to prompt the participants.
It’s worth noting that this session was very full and I was trying to take notes while standing in the back. Neither the PowerBook nor the BlackBerry proved terribly comfortable for this, and I wasn’t feeling too well, so I have considerably fewer notes for this session than for the others. I don’t think I missed much, though.
The moderators led in with this question: “What do you see as the future of assessment in serious games?”
Howard Phillips of Microsoft replied that many games have been nothing more than assessment (speaking of the edutainment variety, I presume), and that in the future we will assess fuzzier elements of knowledge. This was something of a running theme throughout the summit. Jim Belanik (spelling?) from the Army Research Institute reported that they are trying to do just that: they are working on technology that will be able to make judgment calls without a human present… but right now the human is necessary, he concluded.
Others were very interested in the use of pre-assessment to level students before beginning a game. At that point someone jumped in to ask how we will get picky teachers to use it. Later, someone offered the opinion that our biggest barrier is teachers… and that if we can help them feel comfortable with games, then we’re in. Much later, someone suggested that games should be promoted as tools for teachers, and that after-school programs might be the gateway to the schools.
This was followed by the question of the difference between a test and an assessment, which, oddly enough, occupied them for quite some time. I hadn’t seen that conversation since I was in my credential program. An English professor named Dennis put this one to rest with the very math-teacher comment that “tests are a subset of assessments.”
Dennis also suggested that success in a game could itself be a form of assessment. Someone echoed this later by saying that “games are assessing the player all the time, you just don’t notice it.” John Fairfield (of Rosetta Stone?) pointed out that a wide variety of skills can produce success in any given game, and that successful players do not necessarily acquire the same skills.
Erick Lauber was sitting up front, though I didn’t see him, and he brought up what he called a serious issue… the transfer of training is not being dealt with head-on. He’s fascinated by the power of serious games to jump this hurdle we’ve been facing for so long. None of these lines of conversation led anywhere, but perhaps they will spark someone else’s thinking if I share them here…
Someone from simSchool mentioned that they are trying to represent how a learner grows… from an AI standpoint.
A representative from the Navy asked another important question… what will you get out of a game that you won’t from traditional training?
Owen (and that’s all of his name he was gonna tell us) said that we want to teach knowledge, skills, and confidence… efficaciousness.
About the time I noted that I was getting bored (my fault, not theirs), Jake Troy, who is involved in language learning games, said “there is a war for kids’ attention” (a metaphor that resonated with me), and that “we can assess when they are getting bored” by tracking when they start exploring instead of pursuing a goal, when they stop playing, when they switch games, etc.
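For the programmers in the audience: here’s a rough sketch of what that kind of boredom tracking might look like over a game’s event log. Everything here is hypothetical — the event names, the window size, and the 75% threshold are my own stand-ins, not anything Jake described in detail.

```python
from dataclasses import dataclass

# Hypothetical telemetry events; a real game would log much richer data.
@dataclass
class Event:
    t: float    # seconds since the session started
    kind: str   # e.g. "objective", "explore", "idle", "quit"

def boredom_signal(events, window=60.0, off_goal_ratio=0.75):
    """Flag time windows where off-goal activity dominates goal pursuit.

    Returns the start times of windows in which the share of
    "explore"/"idle" events meets or exceeds off_goal_ratio -- a crude
    proxy for the disengagement described above.
    """
    if not events:
        return []
    flagged = []
    t = events[0].t
    end = events[-1].t
    while t <= end:
        in_window = [e for e in events if t <= e.t < t + window]
        if in_window:
            off_goal = sum(e.kind in ("explore", "idle") for e in in_window)
            if off_goal / len(in_window) >= off_goal_ratio:
                flagged.append(t)
        t += window
    return flagged
```

A session that opens with goal-directed play and drifts into aimless wandering would flag only the later windows, which is exactly the “getting bored” moment a teacher (or the game itself) might want to catch.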
This question went unanswered… what methodologies are game developers using?
A strand of discussion did come up around Chen’s prompt: when you bring games into the classroom, how do we deal with cheats? And if you put things in player logs, how can we protect those from hacking? I think the most hopeful response was that playing the game has to be the easiest and most engaging way to learn. There was also a comment that this is where the instructor comes in… if a student is running around invulnerable (or whatever), then they are clearly not getting anything out of the game, and the teacher can see that; the teacher must provide a context such that playing the game legitimately is the most rewarding option. Finally, someone suggested using biometrics to combat cheating. Ha! That way lies policing, something I am not at all interested in; with policing comes cheating.
In a related comment, someone expressed the feeling that students must know if and how they are being assessed. That way exploring is not punished. Then, too, they will be more motivated to play by the rules… because of the context.
Another brilliant suggestion was that we can track players’ access to reference or help materials in game. (Oh, and by the way, this guy said, games are cheating… you get to restart and try again!)
Toward the very end, some discussion of multiplayer games arose. How will we assess them? A game like WoW stores amazing amounts of data, but we are back to fuzzy issues when it comes to assessing or evaluating that data. One participant said that in his project he found MMO data overwhelming, but having sat with the guys from Linden Lab at the Games, Learning, and Society conference, I have a sense of how a good programmer can make use of overwhelming data to draw conclusions. (See this previous post.) Someone did point out that the same computer that stores an MMO’s data can be used to sort through that data. Another even suggested a way to sort and analyze the data by defining flag points and comparing players’ and experts’ paths through the game. Others also said this data was just what they wanted, complete with text-to-speech and speech-to-text!
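That flag-point idea struck me as the most concrete analysis method anyone offered, so here is one way it might be implemented — again my own sketch, not the participant’s. If the designer defines a set of checkpoints, each play session reduces to a sequence of flag visits, and comparing a player’s sequence to an expert’s becomes a simple edit-distance problem:

```python
def flag_path_distance(player_flags, expert_flags):
    """Levenshtein distance between two sequences of flag points.

    A small distance means the player's route through the
    designer-defined checkpoints resembled the expert's; a large one
    suggests a very different strategy (or floundering).
    """
    m, n = len(player_flags), len(expert_flags)
    # dp[i][j] = edit distance between first i player flags
    # and first j expert flags
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if player_flags[i - 1] == expert_flags[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # drop a player flag
                           dp[i][j - 1] + 1,      # player skipped a flag
                           dp[i - 1][j - 1] + cost)  # match or swap
    return dp[m][n]
```

The appeal, to my mind, is that this sidesteps the fuzziness: you never try to judge the raw data, only how far a path wanders from paths you already trust.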
Ha ha ha… I just stumbled on Dennis’ weblog, and he has some very detailed notes on the sessions he attended; they rival or exceed mine. In the case of this particular session, they definitely exceed! Check them out. Dennis is D. G. Jerz of Seton Hill University. Check out his blog and his other Serious Games Summit posts as well. The link to his RSS feed is a bit hidden on the page, so I’ve offered it here too.
Thanks for reading. I once again hope some of you find this helpful… even if the link to Dennis’ weblog is the most useful part. ;)