July 14, 2019

After InstructureCon: Yes, I'm still hoping for that data opt-out!

Last week, I did a round-up post focused on InstructureCon, summarizing my many concerns about Instructure's new AI experiments. Back in March, CEO Dan Goldsmith announced a big shift for Instructure: instead of just giving teachers and schools access to data for traditional statistics as in the past, Instructure itself would be analyzing our students, profiling them in order to create predictive algorithms for future business growth, doubling the company's TAM (total addressable market), as Goldsmith claimed:

InstructureCon updates on DIG

So, after InstructureCon we know a lot more about this AI project, called DIG. For example, Goldsmith now claims: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."

Personally, I find this claim hard to believe, given that the only data Instructure has to work with is the isolated, low-level data it gathers from Canvas activity: log-ins, page views, quizzes, gradebook entries, etc. Unizin schools add demographics to that Canvas data (which I find even more alarming), but it sounds like Goldsmith is making the claim based on the Canvas data alone.
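To make "low-level" concrete, here is a sketch of the kind of activity records an LMS accumulates; the field names and values are my own invention for illustration, not Instructure's actual Canvas Data schema:

```python
# Hypothetical activity records; the field names and values are invented
# for illustration and are NOT Instructure's actual Canvas Data schema.
events = [
    {"user_id": 1234, "action": "login", "ts": "2019-07-08T09:02:11Z"},
    {"user_id": 1234, "action": "page_view", "page": "week-1-readings",
     "ts": "2019-07-08T09:03:40Z"},
    {"user_id": 1234, "action": "quiz_submit", "quiz": "quiz-1",
     "score": 8, "possible": 10, "ts": "2019-07-08T09:27:05Z"},
]

# Note what such records capture (clicks, timestamps, scores) and what
# they do not (context, understanding, motivation).
```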

In any case, speaking for myself, I do not want Instructure to tell me how to do my job ("we can make recommendations..."), prejudicing my views of students before I have even met them. My school currently does not share a student's GPA with me, and for good reason; as I see it, Instructure's labeling of students in this way is no different than sharing their GPA. In fact, I would suspect that past grade data is a very significant component in Instructure's prediction engine, perhaps even the most significant component. But hey, it's their proprietary AI; I'm just guessing how it might work, which is all we can do with corporate AI/ML experiments.
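Just to show what I mean by guessing, here is a minimal, purely hypothetical sketch of a grade-prediction model, using scikit-learn and synthetic data in which prior GPA dominates by construction; it illustrates my suspicion and is in no way a description of Instructure's actual DIG system:

```python
# A toy sketch of the kind of model I suspect might be behind such
# predictions. Everything here is hypothetical: the features, the
# synthetic data, and the model choice are my assumptions, not anything
# Instructure has disclosed about DIG.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical features of the kind an LMS could derive from its logs.
gpa = rng.uniform(2.0, 4.0, n)        # prior GPA (where available)
logins = rng.poisson(30, n)           # log-ins during the term
page_views = rng.poisson(200, n)      # pages clicked
quiz_avg = rng.uniform(50, 100, n)    # average quiz score

# Synthetic "final grade" in which prior GPA dominates by construction,
# mirroring my guess that past grades drive the prediction.
final_grade = (20 * gpa + 0.05 * logins + 0.01 * page_views
               + 0.1 * quiz_avg + rng.normal(0, 3, n))

X = np.column_stack([gpa, logins, page_views, quiz_avg])
model = LinearRegression().fit(X, final_grade)

for name, coef in zip(["gpa", "logins", "page_views", "quiz_avg"], model.coef_):
    print(f"{name:>10}: {coef:6.2f}")
```

If something like this is what "predicting outcomes" means, then the "outcome" being predicted is a grade, and the predictor doing most of the work is a grade too.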

Notice also the slipperiness of the word "outcome" in Goldsmith's claims about predictive accuracy. When teachers think about outcomes, we are thinking about what students learn, i.e. the learning they can take away with them from the class (what comes out of the class), especially the learning that will be useful to them in their later lives. And that's very complex; there is a whole range of things that each student might learn, directly and indirectly, from a class, and at the time of the class there's no telling what direction their lives might take afterwards and what might turn out to be useful learning along that life path.

But the LMS has no record of those real learning outcomes. In fact, the LMS has no real measures of learning at all; there are only measures of performance: performance on a quiz, performance on a test, attendance, etc. So when Goldsmith talks about predicting the "likely outcome" for a student, what I suspect he means is that Instructure is able to predict the likely final grade that the student will receive at the end of a class (which is why I suspect GPA would be a big component in that prediction). But the grade is not the learning, and it is not the only outcome of a class. In fact, I would argue that we should not be using grades at all, but that is a topic for a separate discussion.

What about a data opt-out?

So, now that we know more about the goals of DIG, what about opting out? There was no announcement about an opt-out, and no mention even of the possibility of an opt-out. Goldsmith even claimed in an interview that there hasn't been any request for an opt-out: "We haven’t had that request, honestly." 

Well, that claim doesn't make sense, as I myself had a long phone conversation with two VPs at Instructure about my opt-out request. What Goldsmith must mean, I suppose, is that they have not had a request at the institutional level for a campus-wide opt-out, which is not surprising at all. While it would be great if we had some institutional support for our preferences as individual users, I would be very surprised if whole institutions decided to opt out. Predictive analytics serve the needs of institutions far more than they do the needs of individual teachers or students, and I can imagine that institutions might be eager to see how they can use predictive analytics to look at school-wide patterns that are otherwise hard to discern. Teachers can grok what is going on in their individual classrooms far more easily than provosts and deans can grok what is going on across hundreds or thousands of classrooms.

Yet... there is hope!

Yet I still have some hope for an opt-out, because I learned from that same Goldsmith interview that individuals OWN their data: "One of our first and primary tenets is that the student, the individual and the institution own the data—that's their asset."

And he says the same in this video interview: "we own our data."

This concession about data ownership really caught me by surprise, in a good way, and renewed my hope for an opt-out. If individuals own their data, then we should be able to take our data out of the Instructure cloud when a course is over if we choose to do so. In other words: a data opt-out, perhaps with the same procedure that Instructure already uses to sunset data from schools that terminate their Instructure contract.

In fact, in the context of ownership, it really sounds more like an opt-in is required. If Instructure wants to use my data (data about me, my behavior, my work, my OWN data), then they should ask me for my permission. They should ask for permission regarding specific timeframes (a year, or two years, or in perpetuity, etc.), and they should ask for permission regarding specific uses. For example, while I strongly object to AI/ML experiments, there might be other research to which I would not object, such as a study of the impact that OER has on student course completion. Not all data uses are the same, so different permissions would be required.
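Sketched in code, that kind of opt-in might look something like this; the structure and field names are entirely my own invention, not any real Instructure API:

```python
# A minimal sketch of granular, opt-in data permissions; the structure
# and field names are my own invention, not a real Instructure API.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class DataPermission:
    user_id: int
    use: str                        # e.g. "oer-completion-study", "ml-profiling"
    granted: bool
    expires: Optional[date] = None  # None = no expiration, if explicitly granted

def allowed(perms: List[DataPermission], use: str, on: date) -> bool:
    """Default-deny: a data use is allowed only with an explicit, unexpired grant."""
    return any(p.use == use and p.granted and
               (p.expires is None or on <= p.expires)
               for p in perms)

# One (hypothetical) user: yes to an OER study for two years, no to ML profiling.
permissions = [
    DataPermission(1234, "oer-completion-study", True, date(2021, 7, 14)),
    DataPermission(1234, "ml-profiling", False),
]

print(allowed(permissions, "oer-completion-study", date(2019, 7, 14)))  # True
print(allowed(permissions, "ml-profiling", date(2019, 7, 14)))          # False
```

The key design choice is the default: absent an explicit, unexpired grant for a specific use, the answer is no.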

Of course, as I've said before, I am not optimistic that Instructure is going to implement an opt-in procedure — even though they should — but I am also not giving up hope for a data opt-out, especially given the newly announced Canvas data tenets.

Canvas Data Tenets

In addition to this surprising concession about data ownership, we learned about these new Canvas data tenets at InstructureCon. In the video interview cited above, Goldsmith promised a post about the data tenets coming soon at the Instructure blog, and there was already this slide in circulation at InstructureCon, which I assume shows the data tenets Goldsmith is referring to in the interview (strangely, even the Instructure staff keynotes were not livestreamed this year, so I am just relying on Twitter for this information). As you can see, one of those tenets is: "Empower People, don't Define Them."

Now, the language here sounds more like marcomm-speak than the legal or technical language I would expect, but even so, I am going to take heart from this statement. If Instructure promises to empower me, then surely they will provide a data opt-out, right? It would not be empowering if Instructure were to take my Canvas data and use it for an experiment to which I do not consent, as is currently the case.

My Canvas Data Doubts

Meanwhile, that tension between empowering people and defining them is what I want to focus on in the final part of this blog post. I saw really mixed messages from InstructureCon this year: the big keynotes from Malcolm Gladwell, Dan Heath, and Bettina Love were all about community, peak moments, love, and creativity... with a corporate counterpoint of big data and a billion Canvas quizzes, as I learned via Twitter:

See also the contradiction between Goldsmith's claim in an interview that Instructure is all about "understanding the individuals, their paths, their passions, and what their interests are" and what we see in the data dashboards: there are no passions and interests on those dashboards (but I do know those red "missing" labels all too well):

Impersonal personalization

There's a single word that I think expresses this dangerous ambivalence in ed-tech generally, and at Instructure in particular; that word is personalization. On the one hand, personalization looks like it would be about persons (personal agency, personal interactions, personal passions), but personalization has also become a codeword for the automation of education. In terms of both philosophy and pedagogy, automation sounds really bad... but personalization: ah, that sounds better, doesn't it?

So, for example, listen to what Dan Goldsmith says in this interview: it's technology inevitabilism, literally (video here): "So when you think about adaptive and personalized learning, I think it's inevitable that we as an educational community need to figure out ways of driving more personalized learning and personalized growth experiences."

I'm not going to rehash here all the problems with the rhetoric of personalization; Audrey Watters has done that for us, as in this keynote (among others): Pigeons and Personalization: The Histories of Personalized Learning. (A good all-purpose rule for thinking about ed tech: READ AUDREY.)

Instead, I will just focus here on the impersonality of Canvas data, listing five big reasons why I mistrust that data and Instructure's claims about it:

1. Canvas data measure behavior, not learning. Canvas is an environment that monitors student behavior: log on, log off; click here, click there; take this quiz, take that quiz; type this many words, download this many files, etc. If your educational philosophy is based on behaviorism, then you might find that data useful (but not necessarily; see the next item in this list). If, however, your educational philosophy is instead founded on other principles, then this behavioral data is not going to be very useful. And consider the keynote speakers at InstructureCon: none of them was advocating behaviorism; just the opposite. Here's Bettina Love, for example, on liberation, not behaviorism (more on her great work below):

2. Canvas fails to gather data about the why. Even for purposes of behavior modification, that superficial Canvas data will not be enough; you need to know the "why" behind that behavior. If a student doesn't log on to Canvas for a week, you need to know why. If a student clicks on a page but spends very little time there, you need to know why. If a student does poorly on a quiz, you need to know why. For example, if a student got a poor score on a quiz because of a lack of sleep, that is very different from getting a poor score because they did not understand the content, which is in turn very different from being bored, or being distracted by problems at home, etc. Just because students completed a billion quizzes in Canvas does not mean Instructure has all the data it needs to accurately profile those students, much less to make predictions about them.

3. Canvas data are not human presence. The keynote speakers consistently emphasized the importance of people, presence, relationships, and community in learning, but numbers are not presence. Does this look like a person to you? This is how Canvas represents a student to me right now; the coming data dashboard (see above) uses the same numbers repackaged, because that is all that Canvas has to offer me: numbers turned into different kinds of visualizations.

Goldsmith claims that Instructure is different from other learning companies because they are all about people's passions and interests, but that claim does not fit with the views I get of my students in the Canvas Dashboard and the Canvas Gradebook: no passions, no interests; just numbers. I don't need percentage grades, much less the faux-precision of two decimal places. Instead, I need to know about students' passions and interests; that is exactly the information that would help me do my job well, but Canvas data cannot provide it.

4. Canvas data do not reflect student agency. The basic pedagogical design of Canvas is top-down and teacher-directed. Student choice is not a driving principle; in fact, it is really a struggle to build courses based on student choice (I will spare you the gory details of my own struggles in that regard). Students cannot even ask questions in the form of search; yes, that's right: students cannot search the course content. The only access to the course content is through the click-here-click-there navigation paths predetermined by the instructor. And, sad to say, there is apparently no fix in sight for this lack of search; as far as I could determine, there was no announcement regarding the deferred search project from Project Khaki back in 2017 (details here).

Think about that lack of search for just a minute. It's no accident that Google started out as a search engine; the questions that people brought to Google, and people's choices in response to the answers, generated the behavioral surplus juggernaut that now powers Google AI. Netflix succeeds as a prediction engine precisely because it is driven by user choice: lots of options, lots of choices, and lots of data about those choices with which to build the prediction engine. The way that Canvas forestalls student choice, including the simple ability to initiate a search, is why I believe their AI project is going to fail. (Meanwhile, if I am wrong and there was an announcement about Canvas search at InstructureCon, let me know!)

And this last item is actually the most important:

5. Canvas data cannot measure obstacles to student learning. By focusing data collection on the students, Instructure runs the risk of neglecting the social, political, and economic contexts in which student learning takes place. Whether students succeed or fail in school is not simply the result of their own efforts; instead, there are opportunities and obstacles, not evenly distributed, which are crucially important. Does Canvas data record when students are hungry or homeless or without health insurance? Does Canvas data record that a course is taught by a poorly paid adjunct with no job security? As Dave Paunesku wrote in Ed Week this week, "When data reveal students' shortcomings without revealing the shortcomings of the systems intended to serve them, it becomes easier to treat students as deficient and harder to recognize how those systems must be changed to create more equitable opportunities." I hope everybody will take a few minutes to read the whole article: The Deficit Lens of the 'Achievement Gap' Needs to Be Flipped. Here's How. (Short answer: another billion quizzes is not how you flip the deficit lens.)

Of course, this is all a topic for a book, not a blog post, so I'll stop for now... but I'll be back next week to start a new approach to these datamongering round-ups: a commentary on Shoshana Zuboff's Surveillance Capitalism. Of all the concepts in play here, the one that is most important to me is what Zuboff calls our "right to the future tense." So, I will work through her book chapter by chapter in the coming weeks, and hopefully that will make it clearer just why I object so strongly to Instructure's predictive analytics experiment.

~ ~ ~

I want to close here with Bettina Love's TED talk; take a look/listen and see what you think. I think she is brilliant! More also at her website.

Speaking for myself, I'll take dance and lyrics over data analytics any day. So, keep on dancing, people! And I'll be back next week with Shoshana Zuboff's book and our right to the future tense. :-)