Thursday, March 3, 2011

Reading Response 6---Thoughts on the Evaluation Issue

"From this selection of examples (a word processor, a cell phone, a website that sells clothes an online patient support community), you can see that success of some interactive products depends on much more than just usability. Aesthetic, emotional, engaging, and motivating qualities are important too (p 322).

Evaluations done during design to check that the product continues to meet users' needs are known as formative evaluations. Evaluations that are done to assess the success of a finished product, such as those to satisfy a sponsoring agency or to check that a standard is being upheld, are known as summative evaluations. (p. 323)

The HutchWorld case study: The evaluator also asked the participants to fill out a short questionnaire after completing the tasks, with the aim of collecting their opinions about their experiences with HutchWorld. The questionnaire asked (p. 330):
What did you like about HutchWorld?
What did you not like about HutchWorld?
What did you find confusing or difficult to use in HutchWorld?
How would you suggest improving HutchWorld?

Some practical issues that evaluators routinely have to address include (p. 336):
  • what to do when there are not many users
  • how to observe users in their natural location (i.e., field studies) without disturbing them
  • having appropriate equipment available
  • dealing with short schedules and low budgets
  • not disturbing users or causing them duress or doing anything unethical
  • collecting "useful" data and being able to analyze it
  • selecting techniques that match the evaluators' expertise"

Frankly, I had never systematically or carefully thought about the evaluation issue until I read this chapter. I love the HutchWorld case study. It gives a vivid, detailed example of how the team carried out evaluation from the earliest stage to the last. I realized once again how important evaluation is, and how many things there are to consider when doing it. No joke: I once took it for granted that evaluation or assessment only needs to be done at the end of a project or study (at least until last year). That view was gradually corrected after I took the user-based design class last semester, which widened my horizons considerably. Yes, both formative and summative evaluation are indispensable for design. That insight prompted me to review my group's design product, Feed Me Well. I still remember the user-testing questionnaire we used last December:
1. How do you like the watch? Like it, dislike it, or neutral?
2. Any confusion when you interact with it?
3. Any suggestions or any comments?

Comparing it to HutchWorld's questionnaire, I was delighted to find how similar they are (I swear I did not read and copy it back then :) ). But the similarity is no coincidence: we trusted and followed Preece's idea, mentioned in an earlier chapter, that the goals of interaction design include both usability goals and user experience goals. No wonder evaluation should integrate these goals so that a project forms a complete and coherent picture. Feed Me Well has now entered a new phase, the production phase, in which we mainly focus on its tutorials. We have almost finished them and will move on to the next evaluation phase. We agree that the success of interactive products depends heavily on both usability and user experience, including aesthetic, emotional, engaging, and motivating qualities. With these factors in mind, in Feed Me Well's evaluation phase we should pay more attention to questions such as: How well do the tutorials work? Do users like the multimedia we use to reinforce the tutorials, and in what ways? How does the multimedia actually help the tutorials?

An evaluation paradigm is an approach in which the methods used are influenced by particular theories and philosophies. Four evaluation paradigms were identified:
1. "quick and dirty"
2. usability testing
3. field studies
4. predictive evaluation
Methods are combinations of techniques used to answer a question, but in this book we often use the terms "methods" and "techniques" interchangeably. Five categories were identified:
1. observing users
2. asking users
3. asking experts
4. user testing
5. modeling users' task performance
The DECIDE framework has six parts:
1. Determine the overall goals of the evaluation.
2. Explore the questions that need to be answered to satisfy the goals.
3. Choose the evaluation paradigm and techniques to answer the questions.
4. Identify the practical issues that need to be considered.
5. Decide on the ethical issues and how to ensure high ethical standards.
6. Evaluate, interpret, and present the data.
Drawing up a schedule for your evaluation study and doing one or several pilot studies will help to ensure that the study is well designed and likely to be successful.
--- Key points of Chapter 11 (p. 357)
I boldly attached the whole summary of Chapter 11 here because it is hard for me to keep some points and abandon others, and I cannot overstate how useful the entire chapter is. It arrives like timely rain for our evaluation phase. I expect my FMW group members will agree to use the key points above as a guideline for that phase. In particular, the six-part DECIDE framework reminds us how to organize the tasks of evaluation logically and avoid overlooking anything. I was also impressed by the discussion of pilot studies. As Preece argues, "it is always worth testing plans for an evaluation by doing a pilot study before launching into the main study," and a peer review is exactly such a pilot study, keeping the project on the right track in good time. Thanks to peer reviews, our FMW group has received much valuable feedback and many helpful comments since last semester. It is a quick and inexpensive practice that saves a lot of trouble later and brings fresh, good ideas to the design group. In short, all these elements help ensure the success of the evaluation and, to a large extent, the success of the product itself.
Here is another link, a "Basic Guide to Program Evaluation (Including Outcomes Evaluation)," which covers evaluation in general and might be useful in our workplace settings: http://managementhelp.org/evaluatn/fnl_eval.htm
