
Interpreting Questionnaire Feedback

Recommendation Concerning the Interpretation of the Results from the Revised Student Questionnaire on Courses and Teaching

Based on differences in the items and rating scales between the original Student Questionnaire on Courses and Teaching (Fall 1996 - Summer 2017) and the revised Student Questionnaire on Courses and Teaching (Fall 2017 onward), it is recommended that the results from these two time periods be treated as separate data sources and not aggregated or compared (e.g., in analyses, tables, graphs) for decisions involving these scores (e.g., Annual Performance Evaluation, Promotion and Tenure, Teaching Awards).

Interpreting Fall 2016/Winter 2017 SQCT Scores

In Fall 2016, Western's Student Questionnaires on Courses and Teaching (SQCT) moved from paper administration to online administration. A comparison of means for each of the 16 SQCT items across the Fall terms of 2014, 2015, and 2016 showed the Fall 2016 ratings to be in line with the Fall 2015 ratings and slightly higher than the 2014 ratings. The same comparison was conducted for Western's Winter 2015, 2016, and 2017 mean ratings for the 16 items. For the Winter terms, the 2017 means were slightly lower than the 2015 and 2016 item means. Please consider the possibility that the move of SQCTs online may have affected the SQCT scores of individual courses when making decisions involving these scores (e.g., Annual Performance Evaluation, Promotion and Tenure, Teaching Awards).

Contextualizing Student Questionnaire on Courses and Teaching Feedback

Feedback from Western's Your Feedback Student Questionnaire on Courses and Teaching (SQCT) can be used to make decisions about courses and programming and is also considered in regular faculty review procedures, including promotion and tenure.

In any year, it is important to consider the context in which a course is taught. For example, new instructors, instructors teaching a course for the first time, or instructors who wish to try new teaching techniques or innovations may experience fluctuations in their ratings and in the types of comments they receive. Such contexts must be kept in mind when reviewing SQCT feedback. In particular, instructors wishing to try innovative approaches to teaching, or simply to improve their teaching by trying out techniques that are new to them, should not be inhibited from doing so by a fear of lower SQCT ratings (Hativa, 2014).

In this transitional year, as we move the SQCT online, it is particularly important to remember that the numerical results are not meant to provide a final assessment of an instructor's teaching; they should be seen as a form of constructive feedback that provides insight into the ways in which students learn and interact with instructors and course materials, promoting conversation, reflection, and action. Combined with other indicators of instructional success, questionnaire numerical results construct a wider "picture" of individual instruction at Western, while questionnaire comments provide a context for interpreting the numerical results.

What to Expect When Moving SQCTs Online

While research indicates that response rates often drop when an SQCT moves online, previous research has found no significant difference between the mean rating scores on SQCTs administered online versus in class. In addition, while there is no statistical difference in the type of comments students provide (i.e., positive, negative, or neutral), students are more likely to leave longer and more thoughtful comments on online SQCTs (Stowell, Addison, & Smith, 2012; Venette, Sellnow, & McIntyre, 2010; Dommeyer, Baum, Hanna, & Chapman, 2004).

Tips for Interpreting Numerical Results
  • Look at more than just the mean score. The mean score should be understood in relation to the median score and standard deviation. The mean score is the average of the rating responses for a single question. The median is the middle score of all of the rating responses.

    As an extreme example, in a class of 41 students, 33 students might rate the overall course experience as 6 (very good), while 8 students might rate it as 1 (very poor). The mean (i.e., average) for overall course experience would be approximately 5 (good), despite the fact that 80% of the students had a very good overall course experience.

    The median is calculated by laying out all individual ratings for a question from low to high and choosing the score that is the middle number in that range. In this case, there were 41 responses, so a response of 6 (very good) would be the middle rating if all 41 responses were listed from low to high. So, in this case, at least 50% of students rated the course experience as "very good."

    In both cases, it is important to know that 8 students felt the overall course experience was very poor, but the median shows that most students had a very good overall course experience.

    Standard deviation is an indication of the spread of ratings within student responses. A smaller standard deviation (e.g., 1) indicates that students are generally in agreement in their responses, while a larger standard deviation (e.g., 3) indicates that student answers covered a larger range of ratings.

    For example, if the question "Overall, how would you rate this course as a learning experience?" has a rating mean of 6 with a standard deviation of 0.5, students are generally in closer agreement with each other on this item than if the standard deviation were 1.5.

  • Do not over-interpret small changes in the rating means, and know when changes are statistically significant. Research shows that even instructors and administrators trained in statistical analysis often place too much emphasis on small changes in means, or make decisions by comparing instructor mean scores with only small differences between them when there is no cause to do so (Boysen, 2016; Boysen, 2015; Boysen, Kelly, Raesly, & Casner, 2014). Mean scores for any measurement contain random sources of error. An instructor's "true" scores lie somewhere in a range that extends above and below the reported mean, depending on the amount of this error. Generally, differences at the second decimal place or smaller should be ignored, as they do not indicate a statistically significant difference between mean scores.
  • Research indicates that upper-year and smaller-enrolment courses may have more positive ratings than lower-level courses and those with larger enrolments. Elective courses may also have higher ratings than required courses. It is particularly helpful to keep this in mind when comparing dissimilar types of courses (Feldman, 2007; Hativa, 2014). For example, mean scores from a first-year, large-enrolment, required course should not be directly compared to those from an upper-year, small-enrolment elective course.
  • Global measures are the best indicators of how students experience courses and instruction (Cashin & Downey, 1992; Hativa & Raviv, 1993). On the Your Feedback questionnaire, the global questions are "All things considered, [the instructor] is an effective university teacher" and "Overall, how would you rate this course as a learning experience?" Consider how results from these questions align with questions 1-14.
  • If you have questions about or are looking for support with interpreting SQCT feedback mean and median scores and standard deviation, please contact the Centre for Teaching and Learning.
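The mean, median, and standard deviation distinctions described in the tips above can be sketched in a few lines of Python (standard library only). The numbers reuse the hypothetical 41-student class from the example, so the exact values are illustrative, not real SQCT data:

```python
import statistics

# Hypothetical class from the example above: 33 students rate the
# overall course experience 6 (very good), 8 students rate it 1 (very poor).
ratings = [6] * 33 + [1] * 8

mean = statistics.mean(ratings)      # average of all 41 ratings
median = statistics.median(ratings)  # middle rating when sorted low to high
stdev = statistics.stdev(ratings)    # sample standard deviation (spread)

print(f"mean   = {mean:.2f}")   # ~5.02, pulled down by the eight low ratings
print(f"median = {median}")     # 6: at least half the class rated 6
print(f"stdev  = {stdev:.2f}")  # ~2.01: a large spread signals disagreement
```

Reporting all three together shows what the mean alone hides: most students rated the course very good, while a distinct minority did not.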
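Similarly, the warning above about over-interpreting small mean differences can be made concrete with a rough Welch's t-test, a standard two-sample comparison (this is not something the SQCT reports itself, and the section sizes, means, and standard deviations below are invented for illustration):

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples."""
    # Standard error of the difference between the two means.
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Two hypothetical course sections whose means differ by only 0.2 of a point.
t = welch_t(5.1, 1.5, 30, 5.3, 1.4, 30)
print(f"t = {t:.2f}")  # |t| is about 0.53, well below the ~2.0 typically
                       # needed for significance at conventional thresholds
```

A gap of a few tenths of a rating point between typical class sizes rarely clears the significance bar, which is why such differences should not drive personnel decisions on their own.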

Tips for Interpreting Comments

Deans have access to Section 5 of the SQCT, which asks students to provide any supplementary comments about the course. These comments can provide useful feedback on many aspects of course and degree programming, some of which instructors do not control, such as when courses are scheduled, class size and format, and repetition of content from other courses. This type of feedback is particularly helpful for faculties completing self-studies for the IQAP process, but it can also serve as a source of ongoing insight into course programming, curriculum mapping, and the variety of teaching approaches. Keep the following in mind when reviewing course comments:

  1. We tend to focus on negative comments. Be sure to also acknowledge those things that students believed were good about the course.

  2. It is entirely possible for students to provide positive feedback about an instructor and yet provide negative feedback about the course. Students understand that some elements of the course experience are beyond the control of the instructor, so be sure to compare course commentary with the instructor-based numerical ratings results.

  3. Students often write comments when they have a strong opinion about the course or instructor, so compare written feedback against the numerical results to place comments in the context of the class as a whole. For example, while a written comment might indicate one student believed evaluation practices were unfair, numerical results might indicate that the class as a whole felt otherwise. Likewise, a single comment about a student's course experience might be very positive in nature, but the numerical results on the global course rating item might suggest the majority of students disagree.

  4. Look for repeated patterns in written comments and consider whether comments are mostly positive or negative. You might find it helpful to group comments into categories. Remember that multiple comments about one aspect of courses and teaching--for example, overlap of course material with other courses--should be viewed as a single point of interest, not as multiple concerns or affirmations from multiple students. This Comments Analysis Worksheet, developed by McGill University, may be of help, although the comment categories are directed more at instructors.

  5. Even though an idea in a comment might be mentioned only once, do not automatically discount it. It may reflect the view of students from a marginalized demographic who are challenged by basic assumptions we make about the nature of course design and programming.

  6. When written comments and ratings results align, comments can be a particularly valuable source of information for how to introduce changes into the course and across a program.


References

Adams, C. M. (2012). On-line measures of student evaluation of instruction. In M. E. Kite (Ed.), Effective evaluation of teaching: A guide for faculty and administrators (pp. 50-59). Retrieved from the Society for the Teaching of Psychology website.

Boysen, G. A. (2016). Statistical knowledge and the over-interpretation of student evaluations of teaching. Assessment & Evaluation in Higher Education.

Boysen, G. A. (2015). Uses and misuses of student evaluations of teaching: The interpretation of differences in teaching evaluation means irrespective of statistical information. Teaching of Psychology, 42(2), 109-118.

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641-656.

Dommeyer, C. J., Baum, P., Hanna, R. W., & Chapman, K. S. (2004). Gathering faculty teaching evaluations by in-class and online surveys: Their effects on response rates and evaluations. Assessment & Evaluation in Higher Education, 29(5), 611-623.

Hativa, N. (2014). Student ratings of instruction: A practical approach to designing, operating, and reporting (2nd ed.). Oron Publication.

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465-473.

Theall, M., & Franklin, J. (1991). Using student ratings for teaching improvement. In M. Theall & J. Franklin (Eds.), Effective practices for improving teaching: New Directions for Teaching and Learning (pp. 83-96). San Francisco: Jossey-Bass.

Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 97-111.