Last week we finished building our module and asked a few people to be our pilot test subjects. Two of the participants were SL users without backgrounds in geology, and our third participant was our content expert. We emailed the participants on Friday to ask them to take part, and asked them to complete the evaluation by Sunday evening so that our group could discuss the results before our presentation. Across the three evaluations, the results were largely inconclusive, since each reviewer commented on different things. One focused on the visual design elements, another was mostly happy, and the third thought there were a few good points but felt the module lacked interactivity. Given such diverse backgrounds, it isn't completely surprising that there was no real consensus. The only thing they all agreed on was the helpfulness of the yellow arrows.
The question is, what do you take from this? Do the three reviews cancel each other out? That is, if one person pointed out something major, why didn't the other two share the same opinion? Or, since two of the reviewers weren't very familiar with Second Life, does that bias their views? At this point, I'm not entirely sure what to take from it. Perhaps the next step would be to capture these responses and see how the other respondents react to them. It could just be a case of no one reviewer noticing everything.
Friday, December 10, 2010