Interpreting Questionnaire Results

A main goal of Western's Your Feedback questionnaire on courses and teaching is to help instructors retain the teaching practices that support student learning and to provide feedback that can inform modifications to courses and instruction.

The numerical results are not meant to provide a final assessment of an instructor's teaching; they should be seen as a form of constructive feedback that provides insight into the ways in which students learn and interact with instructors and course materials, promoting conversation, reflection, and action. Combined with other indicators of instructional success, the numerical results construct a wider "picture" of individual instruction at Western, while the written comments provide a context for interpreting the numerical results.


General Tips for Reviewing Questionnaire Results:
  • Set aside some dedicated time when you are in a positive state of mind to review your questionnaire results.
  • Think about the context of your teaching experience when looking over your results. Perhaps you are a new instructor, teaching a new course, or have tried out some new approaches to teaching or course design. Maybe the course has undergone a substantial revision since the last time it was offered. Such contexts may influence questionnaire responses, and you can discuss them with your department Chair or Dean.
  • Research shows that upper-level and smaller-enrollment courses may receive more positive ratings than lower-level courses and those with larger enrollments. Elective courses may also receive higher ratings than required courses. It is particularly helpful to keep this in mind when comparing dissimilar types of courses (Feldman, 2007; Hativa, 2014).
  • Try not to be defensive if your ratings and comments are not as positive as you would like. The Your Feedback questionnaire on courses and teaching is meant to prompt self-reflection on your courses and teaching that can support your professional development as an instructor. Focus on the elements of the questionnaire feedback that are positive and select one or two areas that you might work on improving in the future, rather than trying to address all comments at once.
  • Western's Teaching Support Centre is available to help you interpret your questionnaire feedback and can work with you to devise instructional and course design strategies.


Tips for Interpreting Numerical Ratings
  • Look at more than just the mean score. The mean score should be understood in relation to the median score and the standard deviation. The mean score is the average of the rating responses for a single question. The median is the middle score of all of the rating responses.

    As an extreme example, in a class of 41 students, 33 students might rate the overall course experience as 6 (very good), while 8 students might rate it as 1 (very poor). The mean (i.e., average) for overall course experience would be 5 (good), despite the fact that 80% of the students had a very good overall course experience.

    The median is calculated by laying out all individual ratings for a question from low to high and choosing the score that falls in the middle of that range. In this case, there were 41 responses, so a response of 6 (very good) would be the middle rating if all 41 responses were listed from low to high. So, in this case, at least 50% of students believed the course experience was at least "very good."

    In both cases, it is important to know that 8 students felt the overall course experience was very poor, but the median shows that most students had a very good overall course experience.

    Standard deviation is an indication of how much the ratings vary across student responses. A smaller standard deviation (e.g., 1) indicates that students are generally in agreement in their responses, while a larger standard deviation (e.g., 3) indicates that student answers covered a larger range of ratings.

    For example, if the question, "Overall, how would you rate this course as a learning experience?" has a rating mean of 6 with a standard deviation of 0.5, students are generally in more agreement with each other on this item than if the standard deviation were 1.5. (A short worked example of these calculations appears after this list.)

  • Global measures are the best indicators of how students experience courses and instruction (Cashin & Downey, 1992; Hativa & Raviv, 1993). On the Your Feedback questionnaire, the global questions are "All things considered, [the instructor] is an effective university teacher" and "Overall, how would you rate this course as a learning experience?" Consider how the results from these questions align with the results from questions 1-14.
  • If you have taught the same course several times in a row, look for patterns in the results, as well as for changes over time. Changes might indicate shifting approaches to teaching and learning.
  • Share your numerical results with a trusted colleague and talk about their implications. You can also book an appointment to discuss your questionnaire results with a member of the Teaching Support Centre.
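
To make these measures concrete, the following is a minimal sketch in Python (using only the standard library) of the mean, median, and standard deviation calculations for the hypothetical class described above (33 ratings of 6 and 8 ratings of 1). The data are illustrative only and are not drawn from any real course.

    # Hypothetical ratings for one question in a class of 41 students:
    # 33 students rated it 6 (very good) and 8 students rated it 1 (very poor).
    from statistics import mean, median, stdev

    ratings = [6] * 33 + [1] * 8

    print(round(mean(ratings), 2))   # 5.02: the average masks the split in opinion
    print(median(ratings))           # 6: at least half the class rated the item 6
    print(round(stdev(ratings), 2))  # 2.01: a large spread signals disagreement

Read together, the three numbers tell different parts of the same story: a mean of about 5, a median of 6, and a standard deviation of about 2 indicate that most students were very positive while a smaller group was strongly negative.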

Tips for Interpreting Written Comments

  1. Written comments are a useful guide for planning specific future actions because they provide qualitative feedback on students' experiences in the classroom. Western provides resources to help instructors talk with their students about writing effective comments, so that comments are appropriate, specific, and useful.

  2. Instructors tend to focus on negative comments. Be sure to also acknowledge those things that students believe are good about instruction and the course.

  3. Students often write comments when they have a strong opinion about the course or instructor, so compare written feedback against the numerical results to place comments in the context of the class as a whole. For example, while a written comment might indicate one student believed evaluation practices were unfair, numerical results might indicate that the class as a whole felt otherwise. Likewise, a single comment might be very positive in nature, but the numerical rating on a related item might suggest that an element of the course or instruction could be improved.

  4. Look for repeated patterns in written comments and consider whether the comments are mostly positive or negative. You might find it helpful to group comments into categories. Remember that multiple comments about one aspect of the course and teaching (for example, instructor enthusiasm) should be viewed as a single point of interest, not as multiple concerns or affirmations from multiple students. This Comments Analysis Worksheet, developed by McGill University, may be of help.

  5. Even though an idea in a comment might be mentioned only once, do not automatically discount it. It may reflect the view of students from a marginalized demographic who are challenged by basic assumptions we make about the nature of teaching and instruction.

  6. When written comments and rating results align, comments can be a particularly valuable source of information about how to improve the course and instruction. Discussing the comments with a trusted colleague, your Chair, or a member of the Teaching Support Centre can help foster new ideas for moving forward with instruction and course design.

  7. Comments can help instructors separate concerns about instruction from elements of the course that instructors have no control over, such as scheduling, how often the course is offered, and class size. The latter type of comments can be brought to the attention of Chairs and Deans.

References

Adams, C. M. (2012). On-line measures of student evaluation of instruction. In M. E. Kite (Ed.), Effective evaluation of teaching: A guide for faculty and administrators (pp. 50-59). Retrieved from the Society for the Teaching of Psychology website: http://teachpsych.org/ebooks/evals2012/index.php

Cashin, W. E., & Downey, R. G. (1992). Using global student rating items for summative evaluation. Journal of Educational Psychology, 84(4), 563-572.

Feldman, K. A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 93-129). Dordrecht, The Netherlands: Springer.

Hativa, N. (2014). Student ratings of instruction: A practical approach to designing, operating, and reporting (2nd ed.). Seattle, WA: Oron Publications.

Hativa, N., & Raviv, A. (1993). Using a single score for summative teacher evaluations by students. Research in Higher Education, 34(4), 625-646.

Theall, M., & Franklin, J. (1991). Using student ratings for teaching improvement. In M. Theall & J. Franklin (Eds.), Effective practices for improving teaching: New Directions for Teaching and Learning (pp. 83-96). San Francisco: Jossey-Bass.