
August 3, 2019

Data Analytics... no, I don't dig it

This week Jared Stein wrote a blog post about Canvas data, Power to the People with Canvas Data and Analytics (Can You Dig It?). I'm glad that a conversation is happening, and I have a lot to say in response, especially about an opt-out for those of us who don't want to be part of the AI/machine-learning project, and a shut-off so that we can stop intrusive profiling, labeling, and nudging in our Canvas classes. It's not clear from Jared's post just what kind of opt-out and shut-off control we will have, and I hope we will hear more about that in future posts. Also, since Jared does not detail any specific Dig projects, I am relying on Phil Hill's reporting from InstructureCon which describes one such project: profiling a student across courses, including past courses, and using that comprehensive course data to predict and manage their behavior in a current course. (This use of grade data across courses without the student's express consent sure looks like a violation of FERPA to me, but I'll leave that to the lawyers.)

And now, some thoughts:

1. Not everyone digs it. I understand that some people see value in Dig predictive analytics, and maybe they are even passionate about it as Jared says in his post, but there are also people whose passions run in different directions. As I explain below, my passion is for data that emerges in actual dialogue with students, so it is imperative that I be able to stop intrusive, impersonal auto-nudges of the sort that Dig will apparently be generating. The punitive red labels in the Canvas Gradebook are already a big problem for me (my students' work is NOT missing, and it is NOT late, despite all the labels to the contrary). Based on the failure of the Gradebook algorithms in my classes, I do not want even more algorithms undermining the work I do to establish good communication and mutual trust. So, I really hope Instructure will learn a lesson from those Gradebook labels: instructors need to be able to turn off features that are unwelcome and inappropriate for their classes. Ideally, Instructure would give that power directly to the students, or let teachers choose to do so; that's what I would choose. My students voted by a large majority to turn off the labels (which I now do manually, week by week, with a JavaScript script; see the sketch after this paragraph), although a few students would have wanted to keep the labels. I say: let the students decide. And for crying out loud, let them choose the color too; the labels don't need to be red, do they?
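For the curious, here is a rough sketch of how that kind of label-clearing script might work, using the documented Canvas Submissions API (setting a submission's late_policy_status to "none" removes the missing/late label). This is just an illustration, not a polished tool: the host, course id, and assignment id below are placeholders, it assumes a personal API access token in a CANVAS_TOKEN environment variable, and it only handles the first page of submissions.

```javascript
// Sketch: clear Canvas's red "missing"/"late" labels for one assignment.
// Assumes Node 18+ (built-in fetch) and a personal Canvas API token.
const BASE = "https://yourschool.instructure.com"; // placeholder Canvas host
const TOKEN = process.env.CANVAS_TOKEN;            // your personal API token
const COURSE_ID = 123;                             // placeholder course id
const ASSIGNMENT_ID = 456;                         // placeholder assignment id

const headers = {
  Authorization: `Bearer ${TOKEN}`,
  "Content-Type": "application/json",
};

const subsPath =
  `${BASE}/api/v1/courses/${COURSE_ID}` +
  `/assignments/${ASSIGNMENT_ID}/submissions`;

async function clearLabels() {
  // List the submissions for the assignment (first page only here;
  // a real script would follow Canvas's pagination Link headers).
  const res = await fetch(`${subsPath}?per_page=100`, { headers });
  const submissions = await res.json();

  // Reset the late-policy status on each submission, which removes
  // the missing/late label from the Gradebook.
  for (const sub of submissions) {
    await fetch(`${subsPath}/${sub.user_id}`, {
      method: "PUT",
      headers,
      body: JSON.stringify({ submission: { late_policy_status: "none" } }),
    });
  }
}

clearLabels().catch(console.error);
```

Running something like this week by week is doable, but it is exactly the kind of workaround teachers should not have to resort to; a simple off switch would make it unnecessary.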

2. We need to target school deficits, not student deficits. I believe that Instructure's focus on at-risk students comes from good intentions, but that cannot be our only focus. Instead, we need data to help us focus on our own failures, the deficits in our own courses: deficits in design, content, activities, feedback, assessment, etc., along with data about obstacles that students face beyond the classroom. This is a huge and incredibly important topic, way too big for this blog post, so I hope everybody might take the time to read more about the perils of deficit-driven thinking. Here is one place to start:
For a great example of what happens when you invite students to talk about the obstacles they face, see this item by Peg Grafwallner: How I Helped My Students Assess Their Own Writing. Applying that approach to Canvas: instead of labeling students with red ink in the Gradebook ("you messed up!") and then auto-nudging them based on those labels ("don't mess up again!"), the labels could be more like a "what happened?" button, prompting a dialogue where the student could let the instructor know the reason(s) why they missed an assignment or did poorly, etc., and the instructor could then work with the student to find a positive step forward, based on what the student has told them. That is the way I would like to see data-gathering happen: student-initiated, in context and in dialogue.

3. Dig is not just about privacy; it is about Instructure's unilateral appropriation of our data. Jared emphasizes in his post that Instructure is not selling or sharing our data, but there is more at stake here than just data privacy and data sharing. Instructure is using our data to engage in AI experiments, and they have not obtained our permission to do that; I have not consented, and would not give my consent if asked. Dan Goldsmith has stated that users "own" their data, and one of the data tenets announced at InstructureCon was "Empower People, Don't Define Them" (discussed here). Speaking for myself, as someone who does not want to be defined by Instructure's profiling and predictive algorithms, I need to be able to just opt out. In his post, Jared writes about Instructure being a "good partner" in education, "ensuring our client institutions are empowered to use data appropriately." Well, here's the thing about partnerships: they need to work both ways. It's not just about what Instructure empowers institutions to do; it is also about what we, both as institutions and as individuals, empower Instructure to do. By unilaterally appropriating our data for their own experiments, Instructure is not being a good partner to individual students and teachers. If the people at Instructure "see no value or benefit in technology that replaces teachers or sacrifices students' agency" as Jared says in his post, then I hope they will give us the agency we need to opt out of Dig data-mining and remove our courses from the Dig laboratory.

Okay, everybody knows my blog posts are always too long, so I'll stop here to keep this about the same length as Jared's post (and of course I've written a lot about these topics here at the blog already). I hope this conversation can continue, and I also hope that Instructure will explore the options for both data opt-out and feature shut-off as they proceed with the Dig rollout for 2020. Thanks for reading!