Another post I wrote for the survey thread in class…
As you know, response rates for surveys are low. The nursing program likes to have our graduates assess the program after working a few months, to identify gaps or weaknesses in our program. This year we got the most responses (although the rate is still low at 30%) from using an online survey tool. To measure valid outcomes, what type of response rate would you expect?
Most of the online surveys I run have very high response rates, and that is because of one factor that is very different from what you would ordinarily expect of an online survey… the respondents are all in the same room. For instance, I use online surveys as evaluations of a class or class session. Most of the trainings I run have participants sitting in front of computers… it is much more efficient to have them enter data and comments directly into an online survey than to have them fill out paper forms. We save paper, the quantitative results are instantly tabulated and reported, and the qualitative results do not need to be deciphered and keyed in by our clerical staff. (More often than not, these things just don’t happen at all with the paper evals… we would just glance at them, maybe make photocopies – for crying out loud – for our supervisors, and then put them in the cabinet for “audit” purposes.)
That being said, I also use them to survey groups like the district technology leaders and our techlink listserv and get fairly high response numbers, but these folks have a long-term relationship with us and a vested interest in providing feedback.
I did do one open survey, a needs assessment for our next round of classes, which I asked the district technology leaders to push out to their own site-based people and their staff… and got only about a hundred responses… a remarkably small percentage of the tens of thousands of teachers in Orange County. :)
Given my experiences in 8427 and 8437, I think validity relies more on how representative a cross-section of your population the respondents are than it does on pure numbers. Unfortunately, when it comes to representative samples, the educators who will reply to an online survey are not at all representative of all educators. This showed up as bias in the data I received from the needs assessment… there was almost no demand for beginning technology classes (though it is clear that many teachers in the county still need these skills), and a greater demand for the “latest and greatest” than common sense suggests most teachers actually have access to.
I hope this was a helpful response. A more direct answer to your question, though, is a matter of statistics: measuring the margin of error (or confidence interval) associated with a given population proportion. I’ll admit I pulled out the statistics book again, but I think that discussion is probably best left for a statistics class. :(
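For anyone curious, the margin-of-error calculation from the statistics book is only a couple of lines. Here is a minimal sketch in Python using the standard normal-approximation formula for a sample proportion; the example numbers (100 responses, 30% choosing some option) are hypothetical, just to show the scale of uncertainty you get from a small sample:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a sample proportion.

    p -- observed proportion (e.g. 0.30 for 30%)
    n -- number of respondents
    z -- critical value (1.96 gives a 95% confidence level)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 100 respondents, 30% picked a given answer.
moe = margin_of_error(0.30, 100)
print(f"95% margin of error: +/- {moe:.1%}")  # roughly +/- 9 percentage points
```

So even before worrying about non-response bias, a hundred respondents leaves you with about a nine-point window around any reported percentage… and bias in who responds can easily swamp even that.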