Archive for May, 2005

WikiHome

Saturday, May 21st, 2005

WikiHome

Awesome… a wiki about student blogging!

Disadvantages of likert scaling

Saturday, May 21st, 2005

From another class thread…

Disadvantages of Likert scaling–
• participants may not be completely honest – which may be intentional or unintentional
• participants may base answers on feelings toward surveyor or subject
• may answer according to what they feel is expected of them as participants
• scale requires a great deal of decision-making
• can take a long time to analyze the data

I am a fan of judicious use of the Likert scale in surveys, but I found one key disadvantage missing from this list. Unless there is some kind of descriptive rubric involved, respondents may interpret the scale differently from one another, such that one person's four might be equal to another's five… and still another's three. I have often gotten an all-fives review for a course I know was sub-par… but perhaps it was not as bad as the last experience that participant had been through. Similarly, I have gotten fours from people I know enjoyed and got a lot out of a session I felt was stellar, but in talking to them I realized that there is very little that would ever cause them to use a five rating… that they are saving it for something better than anything they have seen before.

Still, on the flip side, including a rubric of some kind can drastically increase the decision-making time needed to respond to a question. In fact, in the CTAP2 iAssessment I've mentioned in a few other posts here, teachers used to complain that it was too long at 45 multiple-choice, Likert-like questions. Now it has been reduced to only 15 questions (or so), but the detailed rubric makes the assessment at least as time consuming!
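As an aside, here is a minimal sketch (my own illustration, not something from the class materials) of one way to soften the "my four is your five" problem without a full rubric: center each respondent's ratings on that respondent's own average before comparing courses, so scores reflect relative preference rather than how generously each person uses the scale. The data are invented.

```python
from statistics import mean

# Raw 1-5 ratings: respondent -> {course: rating}. Invented example data.
ratings = {
    "respondent_a": {"course_1": 5, "course_2": 5, "course_3": 4},
    "respondent_b": {"course_1": 4, "course_2": 3, "course_3": 2},
}

def centered(ratings_by_person):
    """Subtract each respondent's own mean rating, so a habitual
    all-fives rater and a stingy rater become roughly comparable."""
    out = {}
    for person, scores in ratings_by_person.items():
        personal_mean = mean(scores.values())
        out[person] = {course: score - personal_mean for course, score in scores.items()}
    return out

for person, scores in centered(ratings).items():
    print(person, {course: round(value, 2) for course, value in scores.items()})
```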

-Mark

More on student feedback… and two kinds of evaluation

Saturday, May 21st, 2005

From the same thread…

This has turned out to be an interesting thread!

Lisa, you bring up an interesting point when you say that you “like to give the students their rubric (if I am using one) before they begin their assignment.” When I was in my credentialing program, a professor of mine promised to never grade us on anything for which we did not already have the rubric. Though I recognized that this limited the number of things he could grade us on, it did not reduce the quality of tangential discussions we had, and it clearly created a secure and stable environment within which we could learn with far less anxiety. I was so happy with it that I have continued to try to hold myself to the same standard since that time. At this point it seems to me as if this ought to be a sort of law of teaching… that if you are going to grade a student on something, they ought to know what it is they are being graded on and what you expect of them.

This of course is an entirely separate issue from evaluating the effectiveness of your own teaching. With their ASSURE model of instructional design, Smaldino, Russell, Heinich, and Molenda (2005) recommend two forms of evaluation: assessment of learner achievement and evaluation of methods and media.

-Mark

Reference

Smaldino, S. E., Russell, J. D., Heinich, R., and Molenda, M. (2005). Instructional technology and media for learning. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

A bit on student feedback

Saturday, May 21st, 2005

Another response to a thread in class…

I think encouraging student feedback as part of your assessment is an excellent idea

Chris and Mia,

I agree, and this thread has prompted me to share an anecdote that occurred earlier this week.

At the OCDE, which is an event-planning machine if nothing else, there is an event called “Children at Work Day.” On this day, employees bring their children to work. The original intent was to give children exposure to their parents' work environment (and apparently it grew out of an attempt to expose girls to the various careers available at the county office), but it has degenerated into a day of the various departments taking turns entertaining children. At any rate, apparently in recent years they've had trouble getting secondary-aged students to the event. Another woman who is new to the committee and I had some definite ideas about returning the day to its roots and trying to provide secondary students (at least) with exposure to the work environment. We had some creative ideas, and there was some disagreement about what the students would like. I suggested (of course) that we could survey the employees' children and see what they would like to get out of the day. The reactions in the room were astounding. Some reacted as if I had just uttered the solution to achieving world peace, but they were quickly sort of shushed by others who said “no, no, we can just…”

Initially I was shocked that they were shocked. Then I was amazed that the idea was shot down. I don't know why we should not always include our students' feedback as part of our evaluation process whenever possible… they (especially at the secondary level) may know best whether or not our instruction is effective.

-Mark

More on Survey Monkey

Saturday, May 21st, 2005

Another response from class…

This discussion topic has made me realize that as a library staff, and more specifically, the supervisor of bibliographic instruction and Head of Reference, I really need to look more closely at assessing or evaluating our services. Currently, our assessment of each is very informal; almost non-existent.

Charnette,

I was thrilled to read this example of powerful reflection in online discussion!

One service I bring up often, and which I have already brought up in this class this week, but which I think might prove to be a valuable tool for you, is Survey Monkey. I pay $19.99 a month to be able to run an unlimited number of online surveys (though I am charged more if I go over 1,000 responses in a month), but teachers can use the service for free within a limit of 10 questions and up to 100 responses per survey, which should be plenty for many educators' needs. For instance, you could easily do a monthly (or even weekly) survey of your staff to evaluate your services. After a good Google search – or when one is not appropriate – an online survey is now one of my first responses when I require data, or simply wonder something.

I am not attempting to sell anything here, just passing on a link to a service that I have found valuable in many contexts – for my research and for my work. :)

-Mark

More on the CTAP2 iAssessment

Saturday, May 21st, 2005

A response to a classmate…

I would like to use a bi-annual online district survey (pre- and post-training participation) to get an idea of the effectiveness of training at each grade level and content area.

Evelyn, I too am a fan of the online survey, and use Survey Monkey regularly. In my initial post in this forum, I included a sample course evaluation survey. In California, the state has also provided a tool for collecting and reporting data on teacher and student use of educational technology. You can explore this at http://ctap2.iassessment.org if you are interested.

Though I have found this tool useful, I am afraid that many sites simply complete it because they have to (it is a requirement for many state programs and grants) and then never make use of the data to “determine levels of integration and to identify areas of need,” or use “the results [as] rationale for staff development initiatives for the upcoming years,” as you suggest. Naturally, I consider this a waste, and try to encourage people both to use it and to use the data.
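For what it is worth, here is a rough sketch of the kind of pre- and post-training comparison you describe, aggregated by grade level. The data layout, grade bands, and numbers are all invented for illustration; a real district export would look different.

```python
from collections import defaultdict
from statistics import mean

# (grade_level, pre_score, post_score) for each responding teacher -- invented data
responses = [
    ("K-2", 2.1, 3.0),
    ("K-2", 2.8, 3.4),
    ("6-8", 3.0, 3.1),
    ("6-8", 2.5, 3.6),
]

changes_by_grade = defaultdict(list)
for grade, pre, post in responses:
    changes_by_grade[grade].append(post - pre)

for grade, changes in sorted(changes_by_grade.items()):
    print(f"{grade}: mean change {mean(changes):+.2f} across {len(changes)} teachers")
```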

-Mark

Educational Technology Assessment – Part II

Saturday, May 21st, 2005

My response to the second prompt of the week in “Management of Technology for Education”…

Congratulations! You have been instrumental in implementing technology in your school. Thanks to your vision, technical expertise, successfully funded proposal, rapport with senior administrators, technology training and staff development efforts, your school has been using technology in almost all subject areas for the past year. It is now time to evaluate your efforts.

a) Describe the methods and procedures you will undertake to evaluate assessment of technology in teaching. How will you go about collecting data… Questionnaires, personal interviews, observations, small group meetings?? How will you determine if technology has made a difference?

b) Provide detailed information on your program assessment plan. Include sample questions you will incorporate in collecting data from participants in your study.

As always, support your comments with research, and also comment on other students’ posts.

*******

For the purposes of answering this prompt, I could answer as if I were once again running the educational technology program at a school site; however, I think this will be an even more meaningful exercise for me if I look at this as an opportunity to develop the advice I could give a site technology coordinator who asked how to evaluate their programs.

This topic overlaps quite a bit with the previous one, so I will sometimes refer back to my previous post.

In fact, to begin with, as I stated earlier this week, the evaluation of a program depends heavily on the initial needs assessment for the program. Why was the program implemented? Did it do what it was meant to do? Did it meet the identified need(s)?

An effective evaluation also depends on good evaluation design. Oliver (2000) suggests six formal steps to evaluation design:

“1. Identification of stakeholders
2. Selection and refinement of evaluation question(s), based on the stakeholder analysis
3. Selection of an evaluation methodology
4. Selection of data capture techniques
5. Selection of data analysis techniques
6. Choice of presentation format” (Oliver, 2000)
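Purely as an illustration (the field names and example values below are mine, not Oliver's), these six steps lend themselves to a simple checklist, so that an evaluation plan records a decision for every step and undecided steps are easy to spot:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    stakeholders: list = field(default_factory=list)           # step 1
    evaluation_questions: list = field(default_factory=list)   # step 2
    methodology: str = ""                                       # step 3
    data_capture: list = field(default_factory=list)            # step 4
    data_analysis: list = field(default_factory=list)           # step 5
    presentation_format: str = ""                                # step 6

    def missing_steps(self):
        """Return the names of any steps still left undecided."""
        return [name for name, value in vars(self).items() if not value]

plan = EvaluationPlan(
    stakeholders=["teachers", "site administrators", "students"],
    evaluation_questions=["Has technology changed classroom practice this year?"],
    methodology="multi-method (observation, focus groups, survey)",
)
print("Still to decide:", plan.missing_steps())
```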

I also found his articulation of the three elements of evaluating an educational technology helpful as an initial focus:

“• A technology
• An activity for which it is used
• The educational outcome of the activity” (Oliver, 2000)

This focus and the above suggested steps are good broad starting points and a formal framework, but there are many “challenges presented by the evaluative contexts… [and] a large number of possible contextual variables, operating independently and interactively at several levels” (Anderson et al, 2000) within a school's educational technology program; these are made all the more complicated by the various implementations that will occur depending on the subject, department, or faculty member (Anderson et al, 2000).

Two methods for addressing these challenges are to form a multidisciplinary research team and to implement a multi-method research design. Anderson et al (2000) describe the benefits of these evaluation methods:

“There were two fundamental (and familiar) aspects of our approach to evaluation which we felt – both in prospect and in retrospect – put us in a good position to tackle the general challenges outlined above. The first was to have a multi-disciplinary research team, whose members would bring to the investigation not only knowledge about educational technology, evaluation, and learning and teaching in higher education, but also sets of research skills and approaches that were distinctive as well as complementary. The second broad strategy was to have a multi-method research design, which involved capitalising on documentary, statistical and bibliographic materials already in the public domain, reviewing records held and reports produced by the projects themselves, as well as devising our own survey questionnaires and interview protocols in order to elicit new information.” (Anderson et al, 2000)

The authors also suggest a variety of slightly more specific evaluation strategies:

“• tapping into a range of sources of information
• gaining different perspectives on innovation
• tailoring enquiry to match vantage points
• securing representative ranges of opinion
• coping with changes over time
• setting developments in context
• dealing with audience requirements” (Anderson et al, 2000)

I would specifically recommend some techniques that have worked for me:

1. Observations – Walk around the campus. How are computers being used in the classroom? How are they being used in the library or computer labs? What assignments are students completing with their computers, and what products are students creating? How are these things different from the way students were learning and creating in the past? Much can be learned (and many a dose of reality swallowed) when an educational technology coordinator gets into the field on a random day to see how the rubber meets the road.

2. Focus groups and/or informal interviews – A survey or other more formal evaluation instrument can be severely biased if the evaluator simply asks about the things he or she thinks need to be asked. Conducting observations can go a long way toward helping an evaluator understand what needs to be formally evaluated, but not only can he or she observe only a limited subset of the program's entire implementation, the mere presence of the observer can also alter what is being observed. By including others (ideally a multi-disciplinary group, see above) the evaluator can gain valuable insight and create a more effective survey in turn. The following are examples of questions from a focus group discussion I led during a district technology leaders meeting at which representatives from each district were present.

    In what ways do districts currently use OCDE Educational Technology training facilities and trainers?

    In what ways could our training facilities and trainers be used to better meet district technology staff development needs?

    Specifically, what classes would you as district technology leaders like to see offered in the county training labs?

    Specifically, what classes would you as district technology leaders like to see offered through our “custom training” program?

In what ways can our training programs best effect positive changes in student learning?

3. Conduct surveys – Online surveys in particular are now easy to administer (particularly when learners are sitting in front of computers anyway) and to analyze (most services will do simple analysis for you)… though their design should not be approached any less carefully. I use Survey Monkey as a follow-up evaluation for each professional development event that I manage. There is no reason a site tech coordinator or classroom teacher couldn't do the same thing. Samples of the questions we use at the OCDE can be found in this sample survey. (A rough sketch of this sort of quick analysis appears after this list.)

4. Focus groups again – A single-minded and easily biased analysis of the data by the evaluator is not nearly as valuable as organizing, or re-convening, a multi-disciplinary team to review the results.
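To illustrate the “simple analysis” mentioned in item 3, here is a small sketch run against a hypothetical CSV export of survey responses. The file name and column header are made up, and Survey Monkey's actual export format may differ, so treat this as a pattern rather than a recipe.

```python
import csv
from collections import Counter
from statistics import mean

# Hypothetical export file and question column -- adjust to the real export.
with open("course_evaluation_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

question = "Overall, how would you rate this session? (1-5)"
scores = [int(r[question]) for r in rows if r.get(question, "").strip().isdigit()]

print(f"{len(scores)} responses")
print(f"mean rating: {mean(scores):.2f}")
print("distribution:", dict(sorted(Counter(scores).items())))
```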

    In California, any technology coordinator can also use the CTAP2 iAssessment survey for teachers (and for students) to evaluate technology use on their campus.

    Of course, when it comes to the effectiveness of any instructional program, one can look to state test scores as well, though the evaluator may be more concerned about other learning outcomes than what a standardized state test measures. In this respect, it might be best to develop an evaluation strategy for tracking authentic outcomes. In many ways, for this to be effective it might require continued communication with students for years after they leave a school.

    Finally, Shaw and Corazzi (2000) share a set of nine “typical purposes of evaluation” that might be kept in mind when designing an evaluation process specific to a given school site.

    “1. measurement of achievement of objectives of a programme as a whole
    2. judging the effectiveness of course or materials
    3. finding out what the inputs into a programme were- number of staff, number and content of contact hours, time spent by the learner, and so on
    4. ‘mapping’ the perceptions of different participants – learners, tutors, trainers, managers, etc
    5. exploring the comparative effectiveness of different ways of providing the same service
    6. finding out any unintended effects of a programme, whether on learner, clients or open learning staff
    7. regular feedback on progress towards meeting programme goals
    8. finding out the kinds of help learners need at different stages
    9. exploring the factors which appear to affect the outcomes of a programme or service” (Thorpe, 1993, p. 7, as cited in Shaw and Corazzi, 2000)

    Again, I suspect this may be too lengthy to be terribly valuable to others in the class, but it has been a valuable exercise for me.

    -Mark

    References

Anderson, C., Day, K., Haywood, J., Land, R., and Macleod, H. (2000). Mapping the territory: issues in evaluating large-scale learning technology initiatives. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/anderson.html

    Oliver, M. (2000). An introduction to the evaluation of technology. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/intro.html

    Shaw, M. and Corazzi, S. (2000). Avoiding holes in holistic evaluation. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/shaw.html

    Educational Technology Assessment – Part I

    Saturday, May 21st, 2005

    This post is a bit too “research based” for my tastes, but as I summarized for my classmates, it was a valuable exercise…

Teaching Assessment: Describe briefly how you assess your teaching performance in the classroom (or any instruction you give as part of your job). Are you satisfied with this method? What are some of the advantages/disadvantages of the method(s) you currently use?

    I am currently teaching professional development courses in educational technology at the Orange County Department of Education, and have just completed the process of assessing and planning the summer schedule, so I will consider this process from start to finish in answering this prompt.

In my formal and informal assessments, I make an effort to use both qualitative and quantitative methods. Oliver (2000) supports this philosophy.

    “On the one hand, quantitative methods claim to be objective and to support generalisable conclusions. On the other, qualitative methods lay claim to flexibility, sensitivity and meaningful conclusions about specific problems. Quantitative evaluators challenged their colleagues on the ground of reliability, sample validity and subjectivity, whilst qualitative practitioners responded in kind with challenges concerning relevance, reductionism and the neglect of alternative world views.” (Oliver, 2000)

    In reading his paper I also discovered that a “new philosophy has emerged” which seems to mirror my own fierce focus on pragmatism.

    “A new philosophy has emerged that eschews firm commitments to any one paradigm in favour of a focus on pragmatism. Rather than having a theoretical underpinning of its own, it involves a more post-modern view that acknowledges that different underpinnings exist, and adopts each when required by the context and audience.” (Oliver, 2000)

Though it may not be appropriate for the work we will do in academia for Walden, the philosophy Oliver elaborates validates many of my decision-making priorities as a practitioner.

    “Central to this view is the idea of evaluation as a means to an end, rather than an end in itself. Methodological concerns about validity, reliability and so on are considered secondary to whether or not the process helps people to do things. Patton provides various examples of real evaluations that have been perfectly executed, are well documented, but have sat unread on shelves once completed. In contrast, he also illustrates how “quick and dirty” informal methods have provided people with the information they need to take crucial decisions that affect the future of major social programmes.” (Oliver 2000)

Most importantly, he describes such a pragmatic practice as requiring “the creation of a culture of reflective practice similar to that implied by action research, and has led to research into strategies for efficiently communicating and building knowledge” (Torres, Preskill, & Piontek, 1996, as cited in Oliver, 2000).

In implementing this philosophy I begin with what Scanlon et al (2000) would consider the “context” of the evaluation. In order to evaluate the use of educational technology “we need to know about its aims and the context of its use” (Scanlon et al, 2000). Ash (2000) also suggests that “evaluation must be situation and context aware.”

    In order to understand the context of my evaluations, I first performed a broad needs assessment via focus groups (such as the quarterly district technology leaders meeting) and survey (using surveymonkey.com and a listserv) to set the goals for the professional development schedule. I use course descriptions developed in partnership with the instructors and others in my department to further determine the goals of individual courses. Finally, on the day of a course (and sometimes before the first day via email) I always ask the participants to introduce themselves, explain where they work, and what they hope to get out of the class. This helps me to tailor that specific session to the individuals in the room. (I also ask all of the other instructors to do the same.)

During a class I monitor what Scanlon et al (2000) might call “interactions,” because “observing students and obtaining process data helps us to understand why and how some element works in addition to whether it works or not” (Scanlon et al, 2000). I often check for understanding, and always include “interactive modes of instruction” (NSBA, n.d.).

Due to my initial and ongoing assessments, following a course I am able to focus on what Scanlon et al (2000) might call the “outcomes” of a course. “Being able to attribute learning outcomes” to my course can be “very difficult… [so] it is important to try to assess both cognitive and affective learning outcomes e.g. changes in perceptions and attitudes” (Scanlon, 2000). I use formal evaluations, which include both Likert-scale questions and open-ended questions. For some special events, such as the Assistive Technology Institute – which we put on for the first time this spring – I will follow up the initial evaluation of the session with an additional online survey a week later. The real test of my success, though, is an authentic one… it is whether or not the teachers and administrators return to their sites and apply what they have learned. A dramatic example of this sort of authentic evaluation came following the blogging for teachers classes I ran over the past two months. After the first few weeks, it was clear that teachers were not using their blogs (for I had subscribed to all of them using my feed reader). Bringing this up in subsequent training sessions led to productive discussions of the barriers, and eventually (primarily, I believe, because we followed up) they began using them, and I am now often greeted with new posts when I return to my reader.
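Incidentally, the “subscribed to all of them” step is easy to build on. Below is a rough sketch, assuming the third-party feedparser library and made-up feed URLs, of how one might flag participant blogs with no recent posts; it illustrates the idea rather than reproducing the exact reader or script I use.

```python
import calendar
import time

import feedparser  # third-party: pip install feedparser

# Hypothetical participant blog feeds
FEEDS = [
    "http://example.edublogs.org/teacher-one/feed",
    "http://example.edublogs.org/teacher-two/feed",
]
STALE_AFTER_DAYS = 14

def days_since_last_post(feed_url):
    """Return days since the most recent dated entry, or None if none found."""
    feed = feedparser.parse(feed_url)
    timestamps = [
        calendar.timegm(entry.published_parsed)
        for entry in feed.entries
        if getattr(entry, "published_parsed", None)
    ]
    if not timestamps:
        return None
    return (time.time() - max(timestamps)) / 86400.0

for url in FEEDS:
    age = days_since_last_post(url)
    if age is None or age > STALE_AFTER_DAYS:
        print(f"Follow up with: {url} (no recent posts)")
    else:
        print(f"Active: {url} (last post about {age:.0f} days ago)")
```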

    Ultimately, “good assessment enhances instruction” (McMillan, 2000), and I believe that such authentic assessments are the only way for me to know the true impact of my programs. I hope to be able to include more such authentic follow-up assessments in the coming months.

    Because the county programs operate largely on a cost recovery model, by which districts pay for services rendered, cost is also a factor in my assessment of the professional development programs I manage. “An organisation is cost-effective if its outputs are relevant to the needs and demands of the clients and cost less than the outputs of other institutions that meet these criteria” (Rumble, 1997, as cited in Ash, 2000). To determine the cost effectiveness of a program…

    “evaluators need to:

  • listen and be aware of these aspects and others;
  • focus the evaluation towards the needs of the stakeholders involved; and
  • continue this process of communication and discussion, possibly refocusing and adapting to change, throughout the study (what Patton refers to as “active-reactive-adaptive” evaluators).” (Ash, 2000)

Unfortunately, “the area is made complex by a number of issues that remain open for debate” (Oliver, 2000).

    “These include:
    • The meaning of efficiency. (Should it be measured in terms of educational improvement, or cost per day per participant, for example?)
    • The identification of hidden costs. (Insurance, travel costs, etc.)
    • The relationship between costs and budgets. (Are costs lowered, or simply met by a different group of stakeholders, such as the students?)
    • Intangible costs and benefits. (Including issues of quality, innovation and expertise gained.)
• Opportunity costs. (What alternatives could have been implemented? Moreover, if it is problematic costing real scenarios, can costs be identified for hypothetical scenarios and be used meaningfully as the basis for comparison?)
    • The use of ‘hours’ as currency. (Are hours all worth the same amount? If salary is used to cost time, how much is student time worth?)
    • Whether something as complex as educational innovation can be meaningfully compared on the basis of single figures at the bottom of a balance sheet.” (Oliver & Conole, 1998b, as cited in Oliver, 2000)
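To make the simplest of those measures concrete, here is a back-of-the-envelope sketch of cost per participant per day, with hidden costs kept as an explicit input so they are not forgotten. All figures are invented.

```python
def cost_per_participant_day(direct_costs, hidden_costs, participants, days):
    """Total (direct + hidden) cost divided by participant-days."""
    total = direct_costs + hidden_costs
    return total / (participants * days)

example = cost_per_participant_day(
    direct_costs=4500.00,   # e.g. trainer time, lab fees (invented)
    hidden_costs=1200.00,   # e.g. substitute coverage, travel (invented)
    participants=30,
    days=2,
)
print(f"${example:.2f} per participant per day")
```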

I work in a strange hybrid of a business and a public institution, which further complicates this issue, such that sometimes being cost-effective is not the top priority, as long as a service is valuable or, for political reasons, is perceived as valuable.

    This has been a valuable reflection for me. I hope the large blockquotes did not make it too difficult to read, and I look forward to any of your comments.

    -Mark

    References

    Ash, C. (2000). Towards a New Cost-Aware Evaluation Framework. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/ash.html

    McMillan, J. H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation. 7(8). Available http://PAREonline.net/getvn.asp?v=7&n=8

NSBA. (n.d.). Authentic learning. Education Leadership Toolkit: Change and Technology in America's Schools. Retrieved May 20, 2005, from http://www.nsba.org/sbot/toolkit/

    Oliver, M. (2000). An introduction to the evaluation of technology. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/intro.html

    Scanlon, A. J., Barnard, J., Thompson, J., and Calder, J. (2000). Evaluating information and communication technologies for learning. Educational Technology & Society. 3(4). Available http://ifets.ieee.org/periodical/vol_4_2000/scanlon.html

    Macworld: News: E3: ESA outlines vision for future of gaming

    Friday, May 20th, 2005

    Macworld: News: E3: ESA outlines vision for future of gaming

I really like the sound of most of what is said here… and the potential educational implications.

    Trump sounds off on World Trade Center architects

    Thursday, May 19th, 2005

    Trump sounds off on World Trade Center architects

    Hallelujah, Mr. Trump.