This year, I used SurveyMonkey for the first time. Some people in my professional writing organization (STC, the Society for Technical Communication) had adopted it a few years ago for the area's annual salary survey. The version we subscribe to is Unlimited. (The free version allows only 10 questions and up to 100 responses.) The pricing page lists features for the Free, Pro, and Unlimited plans; except for the billing commitment, there appears to be scant difference between Pro and Unlimited.
In February, I finally dived into using the tool when my request (plea?) for a volunteer to conduct the survey yielded a few polite declines. I logged into our SurveyMonkey account to see what was there. Fortunately, I had received a guideline document; otherwise I would have been totally lost. In any case, SurveyMonkey's online help was extremely helpful. The tool itself felt well designed and user friendly, with a lot of intuitive behavior built in for the intermediate user. There were ways to clone surveys, copy questions, move questions, select answer options, …. Expansive explanations covered all the features.
At SurveyMonkey's Design Survey section, I navigated a prototype survey and explored response types. I checked out behaviors for multiple choice (check all that apply), single selection among multiple options ("radio button", dropdown listbox), and open-ended answers ("other", fill-in). There were options for presenting possible responses horizontally, vertically, or in a grid configuration. The advantage of an x-by-y matrix is that it saves vertical space.
Another response setup that saved vertical space was the dropdown listbox for single selection among multiple options, provided there were at least four response possibilities. With fewer than four, there was virtually no vertical space savings. For questions with only a few answer options, I felt it was more user friendly to keep all possible answers visible at the same time.
Why did I concern myself with vertical space? I wanted to present a survey that appeared shorter than it would if every answer possibility were laid out, making it visually long. I sensed that an overly long survey might fatigue participants and decrease the chances they would start or complete it. Putting some response possibilities in grids and others in dropdown listboxes required less vertical space for the entire survey than listing selections line by line.
After setting up the survey in March, I sent a dry-run version to the board using the Collect Responses feature, then did a cursory analysis using Analyze Results. After minor tweaking, I set up the survey for the public and launched it in early April, sending out several emails over about three weeks requesting participation from technical communicators.
After I finished collecting the data, having set a shut-off date in SurveyMonkey, I moved to the data analysis stage. At the Analyze Results section, I re-acquainted myself with graph types and their appropriate uses. The graphs I used were pie, column, and line. Later, when I presented the results, I concluded that in some instances bar graphs would have conveyed the information better than column graphs. But bars should be substituted for columns judiciously and consistently; the strongest case for bars is when a column chart would force its labels into vertically rotated text.
SurveyMonkey's graphing options were actually fun. At the Create Chart option, the following choices were available:
- Chart shape/type
- Number of answer choices to show
- Sort by answer quantity
- Show or hide a chart title, the default text being the question itself
- Labels for response number, percent, both, or none
- Location of the labels, inside or outside the graphic
Clicking Download Chart created the chart. Right-clicking the selection to open the chart in a new window was effective. If I wanted to vary my selections, it was easy to return to the Create Chart option and try something else. On the other hand, I found SurveyMonkey's graphing capability lacking for answers to open-ended questions. To make suitable graphs for those responses, I used Excel formulas.
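For instance, after exporting the raw responses to Excel, free-form answers can be tallied into chartable categories with COUNTIF. Here is a minimal sketch, assuming the "other" answers were pasted into column B of a worksheet (the range and the category keywords are hypothetical):
=COUNTIF(B2:B41, "contractor") <- count of answers exactly matching one category
=COUNTIF(B2:B41, "*freelance*") <- count of answers containing a keyword (wildcard match)
=COUNTA(B2:B41) <- total non-blank answers, handy for computing percentages
Each category count then becomes one slice or column in an Excel chart.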
In May, I presented the salary results at the STC Austin chapter meeting. My presentation was a hybrid: a few updated parts from the previous year's presentation, but mostly graphs of each question's responses for this year. I handed out hardcopies of the report, which reflected mainly a subset of responses: those pertaining to salaried, full-time technical communicators.
During the week after my presentation, I created a supplement document that went into detail about the questions that drew "other" responses and fill-ins. I also wrote about survey design changes from the previous year and flagged specific questions that might be handled differently in future surveys. All three survey documents (report, presentation, and supplement) are available at http://www.stcaustin.org/employment-mainmenu-30/13-salaries/2-salary-survey-results-available.
Conducting this year's salary survey was eye-opening: the methodology, learning SurveyMonkey, the writeups, and coming up with ways to improve the 2011 version. I have listed some survey resources below:
The Excel formulas I used are as follows:
=MIN([cellposition1]:[cellposition2]) <- lowest value
=QUARTILE([cellposition1]:[cellposition2], 1) <- 1st quartile, aka 25th percentile
=AVERAGE([cellposition1]:[cellposition2]) <- mean
=MEDIAN([cellposition1]:[cellposition2]) <- median (midpoint), aka 50th percentile; also calculable as the 2nd quartile, =QUARTILE(..., 2)
=QUARTILE([cellposition1]:[cellposition2], 3) <- 3rd quartile, aka 75th percentile
=MAX([cellposition1]:[cellposition2]) <- highest value
=COUNT([cellposition1]:[cellposition2]) <- quantity of numeric responses
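For example, assuming the salary responses sit in cells B2:B51 (a hypothetical range, used only for illustration), a summary block would read:
=COUNT(B2:B51) <- number of salaries reported
=MIN(B2:B51) <- lowest salary
=QUARTILE(B2:B51, 1) <- 25th percentile
=MEDIAN(B2:B51) <- 50th percentile
=QUARTILE(B2:B51, 3) <- 75th percentile
=MAX(B2:B51) <- highest salary
=AVERAGE(B2:B51) <- mean salary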