
Best Practices

Getting Started

A survey is always a burden to those who receive it, whether they complete it or not. Your job as a researcher is to minimize this burden by conducting surveys only when there is essential information that you cannot obtain in other ways. You must also make optimal use of your respondents' efforts by asking only questions that are well-designed and will provide information that can be used in some important way.

Start by asking yourself "What exactly do I want to know and why?" Don't worry about question design yet. Just think about the information you're after. The "why" part is very important. So many things would be interesting to know, but is that a good enough reason to impose on someone's time? For every piece of information you identify, ask "What will I be able to do with this information?" Only when you have fully thought through your purpose and the information you need should you begin to work on questions.

As you start designing your questions you should keep in mind several goals. First, you want to get at the information you need. If a question doesn't contribute to that goal, throw it out and start over. Second, you want everything about the survey to reduce the burden on your subjects. That means the question should be as straightforward and simple as possible. If someone has to think hard to figure out what you mean, they're either going to skip the question, skip the survey, or worse, misunderstand the question and answer it anyway. Third, the question should be worded in a way that doesn't lead or introduce bias. Ask yourself whether persons with opposite responses would feel equally comfortable with the question.

Take a look at similar surveys for examples of how questions are asked. Here is an external link that presents examples of different question formats (e.g. Likert scale, semantic differential, etc.). The Survey Research Tools section provides information about survey formats (paper versus electronic).

A crucial final step in designing the survey is "pilot-testing." You should give your survey to a few individuals (similar to your target subjects if at all possible) and see how the survey works. Have them respond to it and then ask which questions were confusing or unclear in any way. Talk with them about their thinking on various items to see if they're answering what you thought you were asking.

How Many to Survey — Population vs. Sample

The number of individuals that you should survey depends on a number of things, particularly the expected response rate, the level of accuracy you require, the sizes of any sub-groupings that you will need to look at (e.g. if you want to look at your data by gender, student class year, or faculty tenure status), and the number of response options in your questions.

Our mathematician friends figured out for us that we don't have to have data for everyone in a population in order to gauge how they would respond to a survey. We can make a pretty accurate estimate with an appropriately sized subset of them, called a "sample." There is good news and bad news for surveys of Swarthmore's various populations. The good news is that the rule is fairly easy, but that's because of the bad news. The bad news is that, given Swarthmore's small size, a pretty good rule to follow is "the more the better." But if you have simple questions (e.g. Yes/No), do not care about subgroups, and know you will get a decent response rate, you should consider using a sample rather than surveying everyone.

Take a look at these online links for help in estimating the sample size needed, and remember to do everything you can to increase response rates (see below).
Sample Size Calculator by Raosoft, Inc.
Sample Size Calculator by Creative Research Systems
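
If you want a feel for what those calculators are doing, here is a minimal sketch in Python of the standard sample-size calculation (Cochran's formula with a finite population correction, which matters for a small population like Swarthmore's). The 95% confidence level, 5% margin of error, and population figure below are illustrative assumptions, not recommendations.

  import math

  def sample_size(population, margin=0.05, z=1.96, p=0.5):
      """Estimate completed responses needed to measure a proportion.

      population -- size of the group you are surveying (N)
      margin     -- desired margin of error (0.05 = +/- 5 points)
      z          -- z-score for the confidence level (1.96 ~ 95%)
      p          -- assumed proportion; 0.5 is the most conservative
      """
      n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
      n = n0 / (1 + (n0 - 1) / population)        # finite population correction
      return math.ceil(n)

  print(sample_size(1600))   # about 310 for a hypothetical population of 1,600

Note that the result is completed responses: if you expect a 50% response rate, you would need to invite roughly twice that many people.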

Launching the Survey

When your survey is ready, it's time to think about how you will launch it. If at all possible, you should follow the Dillman method (see below) to enhance responding. This will take some planning, as there are several points of contact and letters to be written. In your cover letter that goes with the survey, you should describe:

  • the purpose of the survey;
  • why they have been selected;
  • who they may contact if they have questions about the survey;
  • how the information will be handled (including whether identifiers will be kept);
  • who will have access to the data; and,
  • to whom summaries of the survey will be provided.

Other correspondence may contain subsets of these items, but should always include the contact information. Make absolutely sure that you don't make any promises you can't keep (for example, about data handling or providing summaries), and be sure to keep the ones you do make!

You should plan the timing of your survey so that it does not interfere with other surveys. You should also consider other events or holidays that are going on and how they may affect responses. For example, a survey about volunteerism might yield different results if administered near Martin Luther King Day than it would at a different time of year.

Increasing Response Rates

The recognized expert on enhancing response rates to surveys is Don Dillman, the namesake of the "Dillman Method." You must strive to make your survey and all correspondence associated with it look as polished and professional as possible. You should make it very easy for your subjects to respond to the survey. Details matter! Here is a summary of Dillman's basic steps in administering a survey:

  • About a week before you will send the survey, send a personalized advance-notice correspondence to everyone in the sample. It should tell them why they have been selected and what the survey is for.
  • Send the survey with a personalized cover letter. The survey should come with instructions on how to return it. A mailed paper survey should come with a stamped return envelope. An electronic survey will have a "Submit" button at the end.
  • Four to eight days after the survey is sent, send a brief follow-up correspondence. It could be a letter, postcard, or email, but it should thank those who have responded and ask those who haven't to complete the survey.
  • About three weeks after the first survey went out, a second survey should go out with a new personalized cover letter. If you are tracking respondents, this should only be sent to those who have not yet responded.
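
If it helps with planning, here is a small sketch that turns those steps into concrete calendar dates; the launch date and the choice of six days for the follow-up (within the four-to-eight-day window above) are hypothetical.

  from datetime import date, timedelta

  def contact_schedule(launch):
      """Map the steps above onto dates, given the survey launch date."""
      return [
          ("advance-notice letter", launch - timedelta(days=7)),
          ("survey with cover letter", launch),
          ("follow-up thank-you/reminder", launch + timedelta(days=6)),
          ("second survey to non-respondents", launch + timedelta(weeks=3)),
      ]

  for step, when in contact_schedule(date(2024, 10, 7)):
      print(f"{when:%a %b %d}: {step}")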

Depending on your circumstances, you may not be able to follow this completely, but it's an ideal to strive for. There are additional steps you can take, such as putting up posters, placing announcements in appropriate publications (e.g. the Phoenix), having announcements made at relevant gatherings, etc. to help increase responding. The general wisdom is that the more points of contact (and variety of methods) the better. But try not to be annoying! The bottom line is that you should always be respectful of the people of whom you are asking this favor, and express your appreciation for their time.

Handling the Data

You've administered your survey and the responses are pouring in — now what?!

If you're conducting an online survey or electronic vendor survey, chances are that you will receive a summary of the responses by question and/or a data file that contains the responses of each individual to each question. Congratulations, you can skip this part! If you have conducted a paper survey, or an electronic survey that does not go into a database, your first job is to get the responses into an electronic datafile which you will use in the analysis stage. (If you've done a short and sweet paper survey, and only need a tally of responses, you need not bother with this; a hand count might be just as easy.)

If you don't have identifiers on the surveys, it is a good idea to number each survey, and to enter the survey number along with the other data. That will allow you to go back through the surveys if you need to look up something or change the way you've coded your data. It is important to spend time at the front end of data entry thinking through how each question should be coded and entered, writing down your rules, and adding clear labels to the column headers to make data entry easier. The easier and more straightforward it is to enter the data, the less likely you are to be plagued by data entry errors. Enter a survey or two as you're setting up your data entry to make sure you've accounted for all the variations in responding.
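
As a concrete illustration of "writing down your rules," here is a hypothetical codebook kept as a small Python script; the question names, scale labels, and numeric codes are made-up examples, not a required scheme.

  # Hypothetical codebook: one entry per question, mapping each allowed
  # response to the numeric code that goes into the data file.
  CODEBOOK = {
      "q1_satisfaction": {              # 5-point Likert item
          "Very dissatisfied": 1,
          "Dissatisfied": 2,
          "Neutral": 3,
          "Satisfied": 4,
          "Very satisfied": 5,
          "": None,                     # blank = skipped; never guess
      },
      "q2_would_recommend": {"No": 0, "Yes": 1, "": None},
  }

  def code_response(question, raw):
      """Look up the code for a raw response; fail loudly on anything
      the codebook doesn't cover, so problems surface during entry."""
      try:
          return CODEBOOK[question][raw.strip()]
      except KeyError:
          raise ValueError(f"Uncoded response for {question}: {raw!r}")

  # Each row starts with the survey number written on the paper copy,
  # so every value can be traced back to its source.
  row = [17, code_response("q1_satisfaction", "Satisfied"),
         code_response("q2_would_recommend", "Yes")]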

At this stage you are merely trying to transfer the responses into a useful format — you should not be interpreting data. You must accurately record how the subject responded, even if it doesn't make sense to you. If it appears that the respondent misunderstood a scale, that's a problem for a later stage. If it appears that many respondents misunderstood a scale, well then, you didn't pilot test your instrument, did you?!

Analyzing the Data

One of the first things you should look at is your final response rate: what proportion of the individuals surveyed actually responded? Across the country and for many different types of surveys, response rates have been declining in recent years. A survey with multiple follow-ups will have a higher response rate than a one-shot survey. Swarthmore students have been very generous in responding to surveys, and college-wide surveys typically achieve a response rate between 40% and 60%.

Next, you should compare respondents to the population using whatever demographic information you may have (e.g. gender, race, major, class year...). This will help you to know how representative the respondents are of the population or sample from which they came. If the respondents are very different, you may not be able to assume that their responses reflect those of the target group.
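
One simple way to make that comparison is a chi-square goodness-of-fit test, sketched below with scipy; the class-year counts and population proportions are invented for illustration.

  from scipy.stats import chisquare

  observed = [120, 95, 90, 85]           # hypothetical respondents by class year
  pop_props = [0.26, 0.25, 0.25, 0.24]   # known population proportions

  n = sum(observed)
  expected = [p * n for p in pop_props]  # counts expected if respondents
                                         # mirrored the population

  stat, pvalue = chisquare(observed, f_exp=expected)
  print(f"chi-square = {stat:.2f}, p = {pvalue:.3f}")

A small p-value suggests your respondents differ from the population on that characteristic, which is a caveat to carry into your interpretation.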

With those findings in mind, you can start summarizing your data and conducting appropriate statistical analyses. Students who need analytical support should contact the instructor or advisor who is sponsoring the research. Faculty members might go to analytically minded colleagues or, depending on the nature of the survey, to their appropriate IT contact or Institutional Effectiveness, Research & Assessment. (Departments conducting assessment-related surveys can come to Institutional Effectiveness, Research & Assessment.) A member of the statistics faculty is given release time to help colleagues with particular projects. Staff members conducting surveys can contact the Office of Institutional Effectiveness, Research & Assessment to discuss analytic support.