Pages

Saturday, October 15, 2016

Class Survey: Week 8, Fall 2016

Earlier today I posted the numbers from the Canvas Survey I did in class; below are the numbers from the Class Survey. Total number of students: 78. Number of responses: 76. You can see the survey here: Class Survey.

The information in this survey is really just relevant to my classes (and I have already gotten some good ideas for improvements!), but I wanted to share the one numerical question that I asked, because I used a format option at Google Forms that I found really helpful: I asked students to rate their learning in the seven different dimensions of the class, and then I asked them to rate the importance of those dimensions. That allowed me to compute a rating that reflects the gap between their learning (from 1 low to 4 high) and the importance they place on it (also from 1 low to 4 high).

Here are the results, and you can see how I set up the questions below. This was a really useful procedure, one I had never used before, which is why I am sharing it here:


The "rating" is a calculation based on the difference between the learning and the interest. If a student rated their interest as 4 but their learning as 3, that would be -1. If their interest was 1 but their learning was 4, that would be +3. If their learning was just 2 but their interest was also just 2, that would be 0. I did that calculation for all students and then summed the results to get the "rating" for each area. It's basically a way to try to quantify whether I exceeded expectations (positive rating) or failed to meet expectations (negative rating).
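For readers who want to try this with their own survey export, here is a minimal sketch of that gap-score calculation in Python. This is not the actual spreadsheet formula used for the survey; the area names and response numbers below are purely hypothetical, just to show the arithmetic of summing (learning - interest) per area.

```python
# Hypothetical responses: for each area, a list of (interest, learning)
# pairs, one per student, each on the survey's 1 (low) to 4 (high) scale.
responses = {
    "Reading":    [(2, 4), (3, 3), (2, 3)],
    "Technology": [(4, 3), (3, 2), (2, 2)],
}

def area_rating(pairs):
    """Sum of (learning - interest) across all students for one area.

    A positive total means learning exceeded the importance students
    placed on the area; a negative total means it fell short.
    """
    return sum(learning - interest for interest, learning in pairs)

# Compute the rating for every area at once.
ratings = {area: area_rating(pairs) for area, pairs in responses.items()}

print(ratings)  # Reading: (4-2)+(3-3)+(3-2) = +3; Technology: -1 + -1 + 0 = -2
```

The same three cases described above fall out directly: a (4, 3) pair contributes -1, a (1, 4) pair contributes +3, and a (2, 2) pair contributes 0.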

I need to remind myself also that numbers are always misleading, so while I am number-crunching here, the range of average ratings on learning runs from 3.0 to 3.4 on a scale of 1 to 4, which is to say, the learning is all basically good, and basically the same kind of good. So too with the range of interest, from 2.8 to 3.5: basically all good too. The numbers are not worrisome; they are just a way (admittedly artificial) to help me make the decisions I do have to make about prioritizing the time I spend on course development.

And here are some thoughts about those numbers:

Reading. I was really surprised that people's expectations were so low for the reading. Yet the reading got the highest ranking for learning! As a result, the rating is highest for reading: I exceeded people's expectations mostly because their expectations were so low. The optimistic thing to say is that I have worked really hard on the readings over the past three years, and my efforts have paid off. On the other hand, if students don't really put much importance on the readings for class, I probably do not need to focus too much on making more improvements to the readings; I need to work on other areas instead.

Technology. This was probably my biggest surprise of all. Students' interest in learning about technology was the lowest of all the seven domains! Honestly, I thought it would be the highest. I think this probably reflects the extremely low importance placed on digital literacy at my school. That worries me. In addition, the students report the lowest learning when it comes to technology, but if they say it is their lowest priority, I'm not really sure what to do. I think that means I need to try to make much more explicit why I place such a high importance on learning about technology. The thing is, I feel like I do that already, so I am kind of stumped. I really need to think about this one some more! 

Revising. I was also very surprised that students expressed so much interest in learning how to revise their writing; it was the highest rated area! That means I also fell somewhat short in meeting their expectations, but not by too much. This is an area of the class I have worked hard on over the past two years by implementing growth mindset. With that overall growth framework in place, I think I now need to do some more direct instruction too. If students really are struggling with writing mechanics, for example, I probably need to do more explicit teaching about that, and also more explicit teaching about writing style and writing process. That will be a real pleasure for me, and I think I will make that my focus for next summer. 

Time Management and Learning to Learn. These areas rate on the high side in student interest and the low side in terms of learning, which means I have work to do here. Right now, I address these areas in the Learn by H.E.A.R.T. and Growth Mindset challenges, but those are extra credit options, and most students do not do them. So, I probably need to rethink that. If the students place a high importance on these domains (and they do), perhaps the time has come for me to find a way to integrate that more directly into the class, instead of having it be something that is extra credit. One possibility might be to let students swap out one reading assignment each week for these meta-learning options. But that will require working really hard on building something more like a time curriculum and a learning curriculum (the challenges I have right now are pretty low key and unstructured). Can I manage to do that next summer also? Perhaps. Although that might end up being the project for Summer 2018 instead.

Anyway, I learned SO MUCH from these numbers, in addition to the very helpful comments on the free response questions (as you can see from the survey, the open-ended questions are VERY open-ended: Class Survey). I'm usually not a fan of numbers, but asking about the importance the students put on the different areas gave me a really useful way to interpret what they reported about their learning in those areas.

Here are the questions as presented in the survey:




2 comments:

  1. Very interesting, indeed, Laura. Regarding the low interest that students have in learning about technology: do you think they perceive "technology" differently than you (and I) do? I often wonder this. People who have grown up immersed in the web their entire lives probably don't see navigating the web as "technology." I think they see tools/devices more as technology. I'm not sure, but it might be worth probing this aspect a bit.

    Thank you, as always, for sharing. I always learn something from your teaching.

    This reminded me of an article (actually, a transcript of a speech) I read seven years ago by Jamie Merisotis, of the Lumina Foundation, titled "It's the learning, stupid" (https://www.luminafoundation.org/news-and-views/it-s-the-learning-stupid). The article argues that improving student learning is at the core of improving the number of Americans with a 4-year degree. The fact that I remember this so clearly seven years later shows how much of an impact that idea had on me at the time. Since then, it's become my passion (as it is yours). But when we step back and compare the questions you are asking on your students' mid-semester eval to the questions asked on formal evals, well, we can see how little has changed.

    Teach on!
    Michelle

  2. Hi Michelle, just having this convo also with Lisa over at G+ re: students and technology. And re: tools, there was far less love for the Canvas mobile app than I expected in the Canvas results. I don't use a mobile phone myself, so I'm not up on the world of apps, but some of them were liking the app very much while others seemed not esp. impressed. I'll let the Canvas people puzzle that one out!

    And I so agree about asking about LEARNING. The idea of evaluating teaching is part of that, but only a part of it. Getting the students to instead think about their own learning, how they determine it, how I can help, etc. is so much more productive than any of the info I get from those pro forma end-of-semester evals.

    And i need to go read that article; I don't recognize it... and that's a title I would remember! :-)


(I have limited this to Google accounts only, but no word verification; meanwhile, if you want to contact me directly, you can do that too! laura-gibbs@ou.edu.)