Researcher’s Log 2008-02-18

Data collection for my Delphi study was completed as planned on February 8th. (As if on cue, Clark was born the next morning!)

I received 12 completed surveys in response to the final consensus check. Thankfully, this was the minimum number I set out to gather – so the level of attrition was acceptable, especially through rounds 2 and 3 and the final consensus check. Twenty-four experts initially agreed to participate in the study. Of these, only 15 actually completed round 1. Thirteen completed round 2, and twelve completed both round 3 and the final consensus check. So, following round 1, the attrition rate was only 20%. Those who left the study were not among those holding significant dissenting opinions.
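
For anyone tracking the numbers, the attrition figures above reduce to simple arithmetic. Here is a minimal sketch in Python using only the counts reported in this entry:

    # Participation counts reported above, by stage of the study.
    counts = {
        "agreed to participate": 24,
        "completed round 1": 15,
        "completed round 2": 13,
        "completed round 3": 12,
        "completed final consensus check": 12,
    }

    # Attrition after round 1: the drop from round 1 completers to those
    # who completed the final consensus check.
    round1 = counts["completed round 1"]
    final = counts["completed final consensus check"]
    attrition = (round1 - final) / round1
    print(f"Attrition after round 1: {attrition:.0%}")  # -> 20%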

In the final consensus check, consensus was defined in the following way:

For the purposes of this study, consensus is defined as the state in which the results are “at least acceptable to every member [of the expert panel], if not exactly as they would have wished.” (Reid, 1988, as cited in Williams & Webb, 1994, p. 182).

Participants were then asked to rate their level of consensus with each of six summaries on the following scale (they were also invited to leave additional comments after rating each summary):

  • 5. Complete Consensus – I am in agreement with everything stated in this summary. The results are acceptable to me, if not exactly as I would have wished.
  • 4. High Level of Consensus – I agree with most of what is stated in this summary, and I disagree in only minor or insignificant ways. The results are acceptable to me.
  • 3. Moderate Level of Consensus – I agree with much of what is stated in this summary, but I also disagree in some ways. The results are acceptable to me.
  • 2. Low Level of Consensus – I agree with some of what is stated in this summary, but I also disagree in some major or significant ways. However, the results are still acceptable to me.
  • 1. No consensus – I disagree with most or all of what is stated in this summary. The results are not acceptable to me.

These ratings were used to determine the level of consensus among the participants. Though many participants selected “5. Complete Consensus” in response to individual items, no item received that rating from all participants, so it would be inaccurate to report that there was complete consensus on any of the summarized themes. Instead, the participants’ ratings were averaged and the following scale was used to determine the level of consensus for each theme:

  • 5.0 Complete Consensus
  • 4.50-4.99 Very High Level of Consensus
  • 3.50-4.49 High Level of Consensus
  • 2.50-3.49 Moderate Level of Consensus
  • 1.50-2.49 Low Level of Consensus
  • 0.00-1.49 No Consensus

By this scale, there was a high level of consensus on four themes and a very high level of consensus on two additional themes. Out of 72 individual responses (12 participants responded to 6 summarized themes), 34 were “complete consensus,” 27 were “high level of consensus,” 10 were “moderate level of consensus,” and only one was “low level of consensus.” In other words, only one participant responded with a low level of consensus, and even then only responded in this way to one theme. No participants selected “no consensus” in response to any themes. Most dissenting opinions were minor and all will be addressed in the final report.
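
For those who want to see how an averaged rating maps onto that scale in practice, here is a minimal sketch in Python. The example ratings are hypothetical placeholders, not actual participant data:

    def consensus_level(ratings):
        """Average a list of 1-5 ratings and map the mean onto the consensus scale above."""
        mean = sum(ratings) / len(ratings)
        if mean == 5.0:
            label = "Complete Consensus"
        elif mean >= 4.5:
            label = "Very High Level of Consensus"
        elif mean >= 3.5:
            label = "High Level of Consensus"
        elif mean >= 2.5:
            label = "Moderate Level of Consensus"
        elif mean >= 1.5:
            label = "Low Level of Consensus"
        else:
            label = "No Consensus"
        return mean, label

    # Hypothetical ratings from 12 participants for one summarized theme (not the study data):
    example = [5, 5, 4, 5, 4, 5, 5, 4, 3, 5, 5, 4]
    print(consensus_level(example))  # -> (4.5, 'Very High Level of Consensus')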

At this point in the process I have several new tasks ahead of me.

  • Final Data Analysis: I will code and analyze the comments from the final consensus check. The results will be used to revise the summaries one last time before they appear in my report.
  • Confirmability Measures: I will work with two of my colleagues, both of whom have recently completed a doctoral dissertation focused on educational technology. They will aid me by participating in a peer debriefing, by serving as devil’s advocates, and (in one case) by serving as an external auditor of my Delphi methodology.
  • Literature Realignment: I will update my literature review to account for the months between the completion of my proposal and the beginning of this dissertation draft.

Then I can move on to rewrite chapters 1 through 3 to reflect the actual implementation of this study – and to compose chapters 4 and 5 to report my results and discuss their implications. Though I had hoped to have a draft of my dissertation by March 1st, I don’t think that is possible at this point. However, I still hope to have a draft completed sometime in March. I think it’s time to take on no new work until this is done… especially with a new baby in the house.

Researcher’s Log 2008-01-02

Some of my research log cannot be shared here – at least not until the Delphi process is finished. In particular, I can’t share the entries about the content of the participants’ responses. However, here is my latest entry, describing the logistics of the study, which may be of interest to those planning a similar Delphi study or to those following the progress of my particular study.

Here is where I stand with my study. I have sent out 71 invitations to participate. Of these, 24 have indicated interest in participating. Only 22 have actually sent in consent forms and received a link to the first round questions. One of these has dropped out. Currently, I have only 11 responses to the first round questionnaire. I have extended the deadline for Round 1 to this Friday due to the holidays. If I receive even one more response, I’ll have what I considered my “minimum” amount. Regardless, although responses have varied in quality (of course), there is no question that I have plenty of material to proceed with the study.

My preliminary coding process has revealed several topics I did not originally foresee including in Round 2, so the process of beginning with a broad question is working. Also, happily, much of what the participants have brought up relates to issues I cut from my literature review during the revisions that shortened my proposal. I expect I may be able to reuse some of that research and material in the discussion of my results in chapters 4 and/or 5.

Once Round 1 concludes on Friday, I plan to complete my analysis and send out the Round 2 questions as early as Monday the 7th or as late as Friday the 11th. This puts me a bit behind schedule (of course), but if I continue to do preliminary analysis as responses come in, I should be able to make up some time between each round.

Thank you again to all of you who offered assistance after my last post. I wish I were able to use more of that. I can’t wait to post the results here and receive open feedback from all of you as well.

Researcher’s Log 2007-12-18

In the methods chapter of my proposal, one procedure I stated I would follow during the data collection and analysis phases of my study was to keep a research log. Because I am not revealing any sensitive data or sharing results that might skew the study, I have decided to share my experiences here as well. (Entries will appear in an edited form in order not to influence the study if participants happen to read this blog.)

UPDATE: It’s now 2008-01-02 and, with the conclusion of data collection, I am adding back in the paragraph below that does discuss specific results. It begins with the word “Specifically.”

The first round of the study was originally scheduled to conclude tomorrow. However, I have only collected five responses. Out of more than sixty invitations to participate, there are now fourteen confirmed participants, with the potential for two or three more. The good news is that their levels of expertise are very much what I had hoped for, which will add to the credibility of the study, though of course their identities will remain anonymous. However, the minimum number of responses I called for in my proposal was twelve, so I plan to send out reminders today and extend the deadline to at least Friday the 21st. It is a difficult time of year to conduct surveys. I knew this would be the case, and I know I will need to be flexible in order to finish in time to graduate this May.

Nevertheless, once I received my first three responses I began to organize and prepare the data for analysis. I also began early data analysis, using TAMS Analyzer for OS X to create an initial coding scheme from those first three responses. Already the categories (and thus potential questions) I may include in the second round of the Delphi have grown beyond my original six. I’m sure I will need to synthesize and condense the results to allow for a manageable and productive second round.

Specifically, there has been a focus on active learning, depth of learning, and differentiated learning, all of which may fall under my category of constructivist learning, as problem solving might, too. There has been some focus on hard fun, as well as the expected discussion of motivation and engagement. The importance (and inherent educational value) of gameplay has also been mentioned. There has been little mention so far of social benefits, other than some discussion of the natural marriage of games and the ZPD. One category discarded during the proposal stage was 21st Century Skills, but those issues are making an appearance in participant answers, particularly risk taking. Role playing has also reappeared in participant answers. Note that this early analysis has focused on question 1, which addresses the potential benefits of MMORPGs in education; I have not yet begun analysis of question 2, which addresses the potential drawbacks of MMORPGs in education.
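
For anyone curious about the mechanics of condensing these categories back down for Round 2, it amounts to a frequency count over the coded segments. This is only a sketch in Python, not anything TAMS Analyzer produces; it assumes the coded segments have been exported as simple (response, code) pairs, and the codes shown are merely illustrative:

    from collections import Counter

    # Illustrative (response_id, code) pairs, as they might look when exported
    # from an early coding pass.
    coded_segments = [
        (1, "motivation"), (1, "active learning"), (2, "motivation"),
        (2, "role playing"), (3, "risk taking"), (3, "motivation"),
    ]

    # Tally how often each code appears; infrequent codes are candidates for
    # merging into broader categories before Round 2.
    frequencies = Counter(code for _, code in coded_segments)
    for code, count in frequencies.most_common():
        print(f"{code}: {count}")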

This morning, I will be sending an email to the participants thanking those who have completed the first round and prompting others to complete the survey. A few participants who joined later will be receiving their round 1 questions this morning. And finally, a few others I expect might still want to join will receive an invitation or prompt for response.

I also plan to add the two most recent responses to my TAMS Analyzer project and incorporate their content into my coding scheme for question 1. I will also begin reading and analyzing responses to question 2.

In addition, as I review my methods chapter, I am looking ahead to identifying a colleague familiar with the subject matter to serve as a devil’s advocate regarding the results, and a colleague familiar with the method to serve as an external auditor.

Boredom and a Ph.D.

While I’ve always assumed that a doctoral degree is supposed to be somewhere near the pinnacle of intellectual engagement… it turns out that actually writing a lit review is mind-numbingly boring, at least the way I’m doing it. It’s very hard to stay focused (this post is evidence of that), and I find it hard to stay motivated by anything other than the thought that “I’m going to be Dr. Wagner.” That, and I want to actually conduct my own study already!

Unfortunately, I’m almost certainly reading and writing more than I have to… and yet still not covering everything I need to – or feel like I should – and still not writing as well as I should – or would like to.

Thankfully, comments like this are motivating, too… so the blog remains more motivating and stimulating than writing a dissertation, even when it’s the same material. :)

Dissertation Guide Book?

I’ve got some more new posts in the wings, but in the meantime… can anyone recommend a good book on the dissertation writing process? I’ve got lots of little questions now that I am formally writing it and would love to have a reference handy. There’s a wide variety available through Amazon, but I’m hoping someone here can recommend one.

I’ve Been Busy Part IV: Now Back to Doctoral Work

Last week, in the online forum Walden University provides for students with the same faculty mentor, I wrote the following about the writing I’ve been doing for my KAMs (Knowledge Area Modules – writing projects leading up to the dissertation… I’m on my last one):

Incidentally, I’ve been reflecting on KAM writing lately, and find that I don’t get into a flow state the way I do when I’m doing other writing. I’m consistently unengaged with, and disappointed by, my KAMs, and I think I’m realizing why. I’m not writing for an audience (I don’t even have one in mind other than the assessor) and I’m not writing for a purpose (other than to pass the KAM). Also, since the purpose of the KAM is merely to demonstrate my knowledge, I find it difficult to write anything creative (especially when there is so much knowledge to acquire and demonstrate as it is). I’m curious what you all think about this.

I later spent half of Saturday and half of Sunday this past weekend writing about 15 pages of crap for my final remaining KAM. (I’ve been researching, note-taking, and outlining for months in preparation for this.)

More significantly, I brought this up with Eva on Sunday evening and we chatted. I said things like “when I write for my blog or for an article I’m including only what I think is important, but when I’m writing a KAM I’m trying to show that I read all these books and articles.” Of course, she was wise enough to say simply, “you should only be writing what’s important in your KAM, too.” Which, basically, is something my assessors have been trying to tell me for quite some time now. And it’s why I usually write close to 60 pages per section (instead of 30… and each KAM has three sections).

The bottom line is, I had a breakthrough that night, and finally turned a corner I knew I was going to have to turn before finishing the degree. I resolved to no longer write crap for my academic writing… to no longer cobble together quotes and references to show how much I’ve read (which is always too much, btw). I’m going to take the extra time for an additional step (after reading and collecting my notes in an outliner) to decide what I have to say, and then prune away my notes until only those things essential to my point remain. This is going to be painful – and it will take time, but in the end I think it will actually save time… and, of course, I think my academic writing will be better for it. I’m glad I’ve still got a chance to write this way before my dissertation. (All my KAMs will likely end up in my dissertation, though.)

So, starting tonight, I am building an all-new outline for this KAM and whittling down my resources to fit it. I’m excited about the new process and look forward to seeing what I can write. I just can’t believe it took me three years to get here! Or five years of grad school, if you want to look at it that way.

In the meantime, I’m sure I’ll continue to blog sporadically, but I guess I need to own up to the fact that I can’t maintain a blog with daily new content and daily links, as I tried to back in the first few months of the year. Maybe after the baby Ph.D. is born, though by then there might be another kind of baby altogether. Well, I’ll continue to share what I can.

Thanks for reading, as I used to say.

UPDATE: Incidentally, my audience for this final KAM is you guys. :)

UPDATE 2: This is much scarier. Note all my procrastination…

UPDATE 3: A thought from Senge is appropriate… “one of the most painful things in the life of a poet is learning that you often have to leave out your best line in order for the poem to work as a whole.” (Senge et al., 2000, p. 561)

PostDoc Researcher Needed

Is anyone interested in this opportunity? Mark Warschauer at UCI sent this my way and asked me to “please circulate.”

Postdoctoral Researcher Needed
Technology, Afterschool Learning, and Human Development

The Department of Education at the University of California, Irvine seeks a Postdoctoral Researcher for a full-time one-year position in Orange County, California. The position involves a study of learning and human development in a technology-intensive community program. The community center involved has substantial amounts of advanced hardware, software, and other media, and offers a high-quality instructional program focusing on science, technology, engineering, mathematics, and communication targeting Hispanic learners and other low-income minority youth. Youth attend instructional sessions both during the school day (when they are on leave from year-round schools) and after school.

The position involves conducting research at the center, including carrying out observations of instruction and other activities; interviewing participants and staff; examining artifacts and documents produced by the participants and staff; and coding and analyzing qualitative data. The researcher may also be involved in designing a survey of participants, planning a quantitative impact study to be conducted the following year, and conducting discourse analysis of participant interaction. Research will be conducted under the direction of and in collaboration with Mark Warschauer, Associate Professor of Education and Informatics. Principal Investigator for the project is Deborah Vandell, Professor and Chair of Education.

Compensation includes a standard full-time salary and benefits. Prior experience in qualitative research is a must. Other desirable qualifications include an interest in technology-intensive learning; a background in discourse analysis, survey research, or quantitative research; an interest in the education of at-risk learners; an interest in science or technology education; experience in working with Hispanic populations; and outstanding writing ability. Applicants will normally have completed their doctoral studies, but otherwise outstanding candidates without a doctoral degree may also be considered.

The position begins September 1, 2006 and ends August 31, 2007. Dates may be adjusted for an otherwise outstanding candidate. Similarly, outstanding candidates who are not available full-time due to other commitments may also be considered.

To apply, please e-mail a CV; cover letter; writing sample; and the names, email addresses, and phone numbers of three references to Mark Warschauer, markw@uci.edu, with the words “postdoctoral position” on the subject line. Applications will be accepted until the position is filled.

“Research” as Pawn to Support the Status Quo

Clark Aldrich posted to The Learning Circuits Blog for the first time in a while today (“Research” as Pawn to Support the Status Quo) and offered a perspective on the call for research that I wish I had read a day ago.

An administrator in my AB 75 workshop today was asking me what research there was to show that the technologies we were talking about (including iPods, blogging, and video games) actually impacted student achievement. My initial response was that there was very little formal research, but that the anecdotal evidence was overwhelming. (I followed this with an emotional appeal along the lines of “when you see the kids’ eyes light up, you know it’s working.”) I then suggested that perhaps test scores might not be what we find most valuable, and gave my usual 21st century skills pitch.

Thankfully, Christine Olmstead, my co-presenter, then shared some (anecdotal) evidence of scores going up in her district when new technologies (including blogging) were implemented. I conceded that of course there are some technologies that will improve student test scores, too, and then shared some of the studies I did know of. I wish, though, that I had been able to share Clark’s perspective: “the phrase ‘we need to do research’ more often than not is a code phrase for, ‘we just don’t want to move ahead’ without having to justify the action, or to appear in favor of something while trashing it.”

And it’s funny. I’m finding it difficult to write quality posts tonight. This may be partially due to the weight of wanting to keep up with all the things I want to write about… but it may also be part of the process… thinking and creating is hard. In the case of this post, for instance, I have already had the ideas and just need to capture them for the blog, and the typing and composition take time. Unfortunately, that doesn’t feel nearly as productive.

Indexing Scholarly Materials

Indexing Scholarly Materials (Via Blog Juice for Educational Technology.)

“ArchiveGrid allows researchers to discover important content that might normally be hidden when searching on the open Web,” said Ricky Erway, manager of digital resources at RLG, the consortium that designed the database.

With contributions from organizations such as universities and museums, I thought this might be useful to some researchers. Any effort to make these collections available online is a good thing. Most of my research interests are already covered online, however, so on a first run this didn’t turn up any particularly cool new items for me.