August 22, 2019

Canvas and the Botched Gradebook Labels: Why haven't they fixed this yet?

I'm taking a Zuboff break this week because I need to document my ongoing battle with Canvas Gradebook labels. The new semester has started, and as of midnight Tuesday, I've been fighting with Canvas to get control of my Gradebook. I think the Gradebook is my space, but Canvas insists on intruding. Yes, it's the labels. If you don't know what I mean, they look like this:


Yep, that would be red ink all over the Gradebook. Here's the story:

Unlike other LMSes I have used, Canvas does not respect the Gradebook space as belonging to teachers and students. Instead, Canvas thinks it knows better than teachers and students what's going on in a class. "MISSING" says the Gradebook in angry red letters (even when the assignment was optional), and "LATE" says the Gradebook (even when the student turned the work in before the deadline). By means of these labels, Canvas is sending negative and incorrect messages to my students.

So, if anyone is curious why it is that I have zero trust in Instructure's use of student data for machine-learning and AI, this is why: Canvas is intruding into the Gradebook with wrong messages for my students... and sending wrong messages to students about grades is just about the worst thing that can happen in a class. It's hard work to turn the Gradebook into a positive, rather than a negative space (my approach: Grading.MythFolklore.net), and Canvas then pulls the rug right out from under me. I tell the students they are in control... but Canvas then tells them the opposite: MISSING shouts Canvas (even when the work is not missing) and LATE shouts Canvas (even when the work is not late).  

Does Canvas have any positive messages to send my students? Nope. Nothing but red ink. MISSING. LATE. Over and over again. And I cannot stop it. 

It's like waking up in the morning to find that someone has thrown garbage on your front lawn.


Luckily, James Jones (the API and scripting guru of the Canvas Community) has written a script that will go pick up the garbage; you can see how he did that here: Removing Missing and Late Labels. Because Canvas built the Gradebook without any course-level control over the labels, the script checks every single assignment item for every single student, adjusting the label data item by item, student by student. Because I use a microassignments approach, that means the script has to check 18,450 records each time, and it does so quickly. Yay for James! Yay for scripts!

But here's the thing: James's script cannot stop the LATE labels from appearing; the LATE labels show up no matter what, and I cannot stop the students from seeing those labels. So I apologize to the students for the incorrect LATE labels and ask them to just ignore them; then I run the script once a week to clear them out.
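
For anyone curious what that clean-up actually involves, here is a minimal sketch of the general idea (NOT James's script; see his post for the real thing). It assumes Node 18+ with built-in fetch, a personal API access token, and a placeholder Canvas host and course id; if I understand the API correctly, the key field is late_policy_status, and setting it to "none" clears the label without touching the score.

```javascript
// Minimal sketch of the label clean-up idea (not James Jones's actual script).
// Assumes Node 18+ (built-in fetch), a personal access token in CANVAS_TOKEN,
// and placeholder host/course values.
const BASE = "https://canvas.example.edu/api/v1"; // placeholder Canvas host
const COURSE = 12345;                             // placeholder course id
const HEADERS = {
  Authorization: `Bearer ${process.env.CANVAS_TOKEN}`,
  "Content-Type": "application/json",
};

async function clearLabels() {
  // List submissions for the whole course (pagination omitted for brevity).
  const listUrl = `${BASE}/courses/${COURSE}/students/submissions?student_ids[]=all&per_page=100`;
  const submissions = await (await fetch(listUrl, { headers: HEADERS })).json();

  for (const sub of submissions) {
    if (sub.late || sub.missing) {
      // Setting late_policy_status to "none" removes the red label
      // without changing the score.
      await fetch(
        `${BASE}/courses/${COURSE}/assignments/${sub.assignment_id}/submissions/${sub.user_id}`,
        {
          method: "PUT",
          headers: HEADERS,
          body: JSON.stringify({ submission: { late_policy_status: "none" } }),
        }
      );
    }
  }
}

clearLabels();
```

Multiply that loop across 18,450 records and several courses, week after week, and you can see why this should not be the teacher's job.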

Picking up the trash off our lawn.

The trash that Canvas put there.

It's not like Instructure doesn't know about this problem. When they first rolled out the red labels in the Beta version of the new Gradebook in September 2017, I documented the problem in great detail at the Canvas Community; that link goes to my blog posts tagged "red ink" and the first one is called "Gradebook Dismay," dated September 9, 2017. I was not the only one who was upset to find Canvas putting labels on my students, and Instructure rolled back that Beta feature from the Gradebook. I was sure they would fix it when we were all forced to go to the new Gradebook, which at my school happened in Spring 2019.

But I was wrong. 

When Spring 2019 began, there were the labels in the Gradebook, just like before. I contacted Canvas support and found out there was nothing I could do about it; I could not disable the labels at the Gradebook level. I could not disable the labels at the Assignment level. I could not change the wording of the labels or the color or alter the algorithm that assigned the labels. All I could do was click on the 18,450 items in my Gradebooks one by one.

So, as I said, James Jones came to my rescue and wrote a script.

But is that really a solution? My guess is that most Canvas users are not going to want to copy a script from GitHub, configure the variables manually in the script, and then run that script in the Javascript Console of their browser separately for each class. And to do that week after week. Yes, it's amazingly cool how it works, and I personally love to watch the network performance monitor go blip-blip-blip as it checks on thousands of records at lightning speed. But I'm a nerd, and you shouldn't have to be a nerd to stop Canvas from putting labels on your students. Especially when those labels are completely inaccurate.

And now, let's talk about why the labels are inaccurate, because that reveals a lot about how the people at Instructure view student learning: Instructure is applying an old-fashioned, deficit-driven approach to education, an approach that is exactly the opposite of what we need in the year 2019 IMO.

What is LATE? Before the new Gradebook, Canvas had a great approach to the late problem: they let you have a soft deadline and a hard deadline. This used to be one of my favorite features of Canvas. The soft deadline is what I tell my students to aim for; such-and-such is due on Tuesday (and I set the soft deadline to Tuesday midnight). But does it really matter if students are finishing up something at midnight as opposed to 2AM? No, that's silly. My students are not Cinderellas riding in pumpkin carriages; midnight is totally arbitrary. So I set up the soft deadline, and then I give everybody a 12-hour extension for every assignment, no questions asked; that is the hard deadline, and it is set for noon the next day (so, noon on Wednesday for an assignment due Tuesday). I call it the grace period. If students make the hard deadline, that is GREAT. That is the whole point; they got the assignment turned in by the deadline: yay! But Canvas does not think so: nope, Canvas thinks the work is late, putting that punitive red label on every assignment turned in during that grace period. I call it a no-questions-asked extension, but with those red LATE labels, Canvas undermines my message. Your teacher may tell you it's okay to use the grace period, but we at Canvas know better: a good student should not need the grace period, and you are not a good student; your work was LATE. I gave the students an extension on purpose, and I want them to use the extension if that helps them to get the work done. But Canvas doesn't care about what I want or what my students need. Canvas is just going to apply its algorithm, using the mind of a machine and trampling our humanity. LATE. LATE. LATE. LATE. As if the students who struggle with time don't already beat themselves up enough as it is, Canvas is going to beat them up some more. I say: students should be praised for getting the work turned in, not shamed.
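
For anyone who wants to see what that soft-deadline-plus-grace-period setup looks like in practice, here is a minimal sketch using the Canvas Assignments API: due_at is the soft deadline the students aim for, and lock_at (the "Available Until" date) is the hard deadline twelve hours later. The host, ids, and token are placeholders, and of course Canvas will still slap a LATE label on anything submitted after due_at, which is exactly the problem.

```javascript
// Minimal sketch: set a soft deadline (due_at) and a 12-hour grace period
// ending at the hard deadline (lock_at). Placeholder host, ids, and token.
const BASE = "https://canvas.example.edu/api/v1";
const HEADERS = {
  Authorization: `Bearer ${process.env.CANVAS_TOKEN}`,
  "Content-Type": "application/json",
};

async function setGracePeriod(courseId, assignmentId, dueISO) {
  const due = new Date(dueISO);                               // e.g. Tuesday midnight
  const lock = new Date(due.getTime() + 12 * 60 * 60 * 1000); // noon on Wednesday
  await fetch(`${BASE}/courses/${courseId}/assignments/${assignmentId}`, {
    method: "PUT",
    headers: HEADERS,
    body: JSON.stringify({
      assignment: { due_at: due.toISOString(), lock_at: lock.toISOString() },
    }),
  });
}

// Example: setGracePeriod(12345, 67890, "2019-08-27T23:59:00-05:00");
```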

What is MISSING? So, those Late labels are pretty bad, but brace yourselves: the MISSING labels are even worse. The way Canvas applies the MISSING label means that you cannot let students choose what assignments to do. I repeat: Canvas will not let you make assignments optional. So, if you think that student choice is important (I do!), and if you want to design your course so that students choose what assignments to complete (I do!), then you better learn how to run James's script because Canvas is going to label every assignment that your students choose not to complete as MISSING. And it is going to freak your students out, understandably. That is how I first found out about the labels back in September 2017; one afternoon I started getting panicked emails from students. "You told us that we could choose what assignments to do, but now Canvas is telling me I have to do them all!" I was baffled; how could Canvas tell my students what to do or what not to do? I didn't even understand what the students were talking about because I had no idea Canvas had started putting labels in my Gradebook. But it's true: Canvas really was telling my students that they had missed assignments. Even though the assignments were not required. Of course my students were upset. And here we are, almost two years later, and the Canvas Gradebook still wants to put MISSING labels on all those student assignments. The only thing that saves me is James's magic script.

What is the point of labels anyway? Even if these labels were correct (and they are not correct in my classes; every single label Canvas applies in my Gradebook is incorrect), these labels are still not going to help students. So, this is not just about Laura-and-her-weird-classes. This is instead about a wrong approach to feedback at Instructure. Students need encouraging, actionable feedback to motivate them to improve their performance. They need to know what they got right, and they also need to know what they can work on in order to do better for next time.

The Late label fails because it disregards the fact that the student DID turn in the work, which is actually good! But instead of praising the student for getting the work turned in, the red label conveys the message "no, you did bad." Negative messages like that are not how you encourage students to do better the next time.

And the Missing label is worse: it sends a negative message, and it is not even clear what the student is supposed to do next. Are they supposed to complete the missing work and turn it in anyway? Or not? Different teachers have different approaches to missing work (if the work really is missing), but Canvas doesn't know that. And Canvas doesn't care. If Canvas cared about that, they would let us configure the labels in our own way, based on our own algorithms, and conveying our own messages to our students. 

About uplift. I'll add one last observation here, and that is about what it means to be "uplifting." I used to be an active member of the Canvas Community, and my last blog post at the Community was about the Gradebook labels, along with my criticisms of Instructure's claims about AI and predictive algorithms. If they can't get the Gradebook right, why should I trust them to get anything else right about student data? At the time, the Community Managers told me I could no longer write posts like that; all contributions to the Community must be uplifting in nature, so say the Community Guidelines. Fair enough: it's their space; they make the rules, and they don't want me complaining in their space. Because I was not willing to self-censor my posts in order to be uplifting all the time, I started blogging again here; that was back in March of this year.

So, what about the Gradebook? Who makes the rules there? Just like Canvas wants an uplifting Community, I want an uplifting Gradebook. Those punitive red labels are NOT uplifting to my students, and I want them out of my Gradebook; those are my Gradebook Guidelines, and Canvas should respect that. The Gradebook belongs to me and my students. It is our space, and we should be able to tell Canvas to get its negative messages out of our space.

And so, in 2000 words (tl;dr I know), that is why I have zero faith in Instructure's ability to do anything useful with data analytics. The devil is in the details, and the details about the Canvas Gradebook are not pretty. 

That's it for this week, but I'll be back with more Zuboff again next time. And I'm glad to say that, aside from the Gradebook labels, my classes are going great! The blog network is up and running; I'll be writing about our adventures at Twitter: @OnlineCrsLady. Happy New Semester, everybody!



August 18, 2019

Zuboff, Chapter 4: How Google Got Away With It

Last week was Zuboff's chapter on the discovery of surveillance capitalism, based on using surplus behavioral data for user profiles and predictions; the parallels to the LMS were, in my opinion, both clear and frightening, and that was the focus of my post. In this week's post, I'll be sharing my notes on Zuboff's chapter about "how Google got away with it," and, coincidentally, this week is also when Google announced two new moves in its effort to automate education: on Wednesday, they announced a plagiarism policing service which got widespread attention in the Twitterverse (my part of the Twitterverse anyway); on Thursday, they announced an AI-powered tutoring tool, Socratic. It is the tutoring tool which I think is far more alarming, although my quick test of the tutor led to some laughable results (see below).

So, for this week, my notes about Zuboff's book will be less detailed since the chapter is mostly about Google, but I would urge everybody to think about Google's very aggressive new moves into the education world. Here is some of the initial coverage in TechCrunch, and I hope we will see some detailed critical analysis soon: Google’s new ‘Assignments’ software for teachers helps catch plagiarism and Google discloses its acquisition of mobile learning app Socratic as it relaunches on iOS.

And now, some notes from Zuboff, Chapter 4: The Moat Around the Castle.

~ ~ ~

Chapter 4 opens with a historical take on capitalism as appropriation and dispossession, a process of taking "things that live outside the market sphere and declaring their new life as market commodities." We've seen this happen most clearly with TurnItIn: students had been writing school work for decades, but it took TurnItIn to figure out how to turn student writing into a billion-dollar business. As Zuboff already explained in detail in the previous chapter, the extraction process reduces our subjective experience to behavioral data for machine learning:
human experience is subjugated to surveillance capitalism’s market mechanisms and reborn as “behavior.” These behaviors are rendered into data, ready to take their place in a numberless queue that feeds the machines for fabrication into predictions
In return, we get "personalized" products (personalized education is inevitable, as Instructure's CEO has proclaimed), but the real story is the larger corporate agenda:
Even when knowledge derived from our behavior is fed back to us as a quid pro quo for participation, as in the case of so-called “personalization,” parallel secret operations pursue the conversion of surplus into sales that point far beyond our interests.
For that corporate agenda to move forward, we must be denied power over the future use of our data, making us "exiles from our own behavior" as Zuboff explains, carrying on with her metaphor of home and sanctuary:
We are exiles from our own behavior, denied access to or control over knowledge derived from its dispossession by others for others. Knowledge, authority, and power rest with surveillance capital, for which we are merely “human natural resources.”
After these preliminaries, Zuboff then moves into a detailed examination of the factors that allowed Google to get away with it, an examination that will carry on for the rest of the book:
“How did they get away with it?” It is an important question that we will return to throughout this book.
Some of the factors are specific to Google as a company, but some of them also parallel moves that we see in the ed tech world, such as the way that we are being asked to simply trust Instructure with our data, without legal protections:
In the absence of standard checks and balances, the public was asked to simply “trust” [Google's] founders. [...] Schmidt insisted that Google needed no regulation because of strong incentives to “treat its users right.”
In the case of education, FERPA is indeed an almost 50-year-old law (well, 45 years; Wikipedia), just the kind of legal framework that Google's Larry Page has scoffed at:
“Old institutions like the law and so on aren’t keeping up with the rate of change that we’ve caused through technology.… The laws when we went public were 50 years old. A law can’t be right if it’s 50 years old, like it’s before the internet.”
Zuboff then provides a detailed analysis of the impact that the events of September 11 had both on Google's corporate agenda and on the government's surveillance efforts. That discussion is not directly relevant to education, but it got me thinking about how the rise of the LMS coincided with the "great adjunctification" of the higher ed workforce. Because of the LMS, schools could experiment with centrally designed courses that could be staffed at a moment's notice with part-time temporary faculty. The LMS was not created in order to make that possible, but the availability of the LMS certainly made the adjunctification of higher ed much easier over the past two decades.

Zuboff also has a chilling section on the role that Google played in the elections of 2008 and 2012, along with an inventory of Google's enormous political lobbying efforts.

Towards the end of the chapter, Zuboff presents this description of Google's Page and Schmidt to sum things up:
Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world’s information.
I have much the same feeling about the engineers at Instructure and other ed-tech companies: without being teachers themselves, and without being directly accountable to teachers (but instead accountable to schools and those schools' IT departments), they exercise control over the organization of our schooling.

We need and deserve better, and so do our students.

~ ~ ~

P.S. Unrelated to Zuboff's book, I tested the new Google Socratic and it was a total failure with both questions I tried. Has anyone else tried it with success? I guess I am glad that it looks to be so bad!

For example, I asked it what do bush cats eat (something I actually was researching earlier today)... and the response from Socratic was a Yahoo Answers item about a house cat who eats leaves from a lilac bush, with the owner worried that they might be poisonous. Poor Socrates didn't recognize that "bush cat" is another name for the African serval. It thought I was asking what-kind-of-bush do cats eat, as opposed to my actual question, which was "what do bush cats eat?" I didn't mean to trick it, but that was pretty funny once I figured out how the computer misunderstood the question. (And yes, I really was learning about bush cats earlier today, ha ha, re: this story: How a Hunter obtained Money from his Friends the Leopard, Goat, Bush Cat, and Cock, and how he got out of repaying them.)

For your viewing pleasure, this is a bush cat (photo by Danny Idelevich):


Then, I asked what I thought would be an easy question: what was the first Cinderella story? But instead of sending me to the Wikipedia article on Cinderella, it sent me to the Wikipedia article about the Toy Story film franchise. I'm not even sure what's up with that one.

Anyway, the official Google post says that Socratic is ready to help... but it sure doesn't look like it to me. Help-not-help ha ha.



and now...
Happy Back-to-School, everybody!


UPDATE: Here are the notes on Chapter 5: The Dispossession Cycle

August 11, 2019

Zuboff, Chapter 3. Google: The Pioneer of Surveillance Capitalism

Chapter 3 is the ESSENTIAL chapter in Zuboff's whole book, and it contains a powerful warning for what is happening in the ed-tech world right now. I took a break from my reading notes blogs last week when I wrote a response to the latest Instructure statement on data gathering and predictive algorithms (Data Analytics... no, I don't dig it), and it is really good timing to move on from that to this chapter of Zuboff's book, where she tells the story of how Google discovered/invented surveillance capitalism. That happened step by step, based on specific choices made by Google executives and employees, and I would contend that Instructure executives and employees are looking at a path very similar to the one that Google followed, a path that might be profitable for the company but which I think will be very bad news for education.

So, as I write my notes here I'll focus on points of comparison that I see between Google's story and the story of Canvas, and for another really powerful ed-tech comparison, see Ben Williamson's piece on ClassDojo, which is also going down the path of big data and behavior modification: Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry.

And now... Chapter 3. Google: The Pioneer of Surveillance Capitalism.


Google and the LMSes. Although Google's search engine and an LMS are quite different software products, they share a key feature that Zuboff emphasizes in a quote from Google's chief economist, Hal Varian, writing about “computer-mediated transactions” and their transformational effects on the modern economy: the computer systems are pervasive, and that pervasiveness has consequences.
Nowadays, there is a computer in the middle of virtually every transaction… now that they are available these computers have several other uses.
And what are some of those other uses? The uses are: data extraction and analysis; new contractual forms due to better monitoring; personalization and customization; and continuous experiments.

Anyone familiar with the evolution of the LMS over the past two decades can see plenty of parallels there: as with Google, so too the LMS. The LMS increasingly puts itself in the middle of transactions between teachers and students, and as a result we are seeing data extraction and analysis that simply didn't happen before, monitoring unlike any attendance system ever used in a traditional classroom, the mounting hype of personalization and customization, along with continuous experiments... including experiments for which we and our students never gave our permission.

As Zuboff narrates the story of Google's discovery/invention of behavioral surplus, she starts with the early days of Google, when "each Google search query produced a wake of collateral data," but the value of that collateral data had not yet been recognized, and "these behavioral by-products were haphazardly stored and operationally ignored." The same, of course, has been true of the LMS until very recently.

Zuboff credits the initial discovery of these new uses for collateral data to Amit Patel, at that time a Stanford grad student:
His work with these data logs persuaded him that detailed stories about each user—thoughts, feelings, interests—could be constructed from the wake of unstructured signals that trailed every online action.
That is the kind of thing we are hearing from the LMSes now too, although personally, I am not convinced by the depth of data they have to work with compared to Google. The user experience of the LMS is so limited and predefined, with so little opportunity for truly individual action (just the opposite of the Google search engine), that I don't think the LMSes are going to be able to do the kind of profiling they claim they will be able to do... not unless/until they get into the kind of surveillance that is taking shape in Chinese schools now as part of the Chinese government's huge investment in edtech and AI; for more on that, see: Camera Above the Classroom by Xue Yujie. See also this new piece on facial recognition experiments in education: Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements.

The LMS started out as a tool for teachers and students to use in order to accomplish teaching and learning tasks, but now it is morphing into a surveillance device so that the LMS company can gather data and take over those tasks, intervening in ways that the LMS never did before, turning into the kind of recursive learning system that Google has become:
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system.
Google then used the predictive power of that system in order to create something completely unprecedented: the Google advertising empire. Zuboff provides a step by step account of just how that happened, and how a similar transformation then took place at Facebook.

What's next for the LMS? So, an obvious question is this: what are the LMS companies going to do with their predictive products? The answer is: they don't know. Yet. Which is why we need to be talking about this now; the future of education is something of public importance, and it is not something that should be decided by software company executives and engineers. It's one thing for companies to let Google take control of their advertising; it is something else entirely for schools to let the LMS take control of schooling.

Here is how Zuboff describes the shift in the relationship between Google and advertisers as the new predictive products took shape; as you read this, think about what this might foretell for education:
In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords—Google would choose them.” [...] Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. [...] Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal.
While it might seem like advertising and education don't have anything to do with each other, they overlap a lot if you look at education as a form of behavior modification (which, sad to say, many people do, including many educators).
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” [...] The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.
That transformation of advertising into a "science" sounds scarily like the way that some would like to see teaching turned into a data-driven science, precise and standardized in its practices. For more on that topic, see the Ben Williamson piece I mentioned above about ClassDojo. In addition, Zuboff is going to have a lot to say about behaviorism, especially the radical behaviorism of B. F. Skinner, later in the book.

Profiling. So, back to the Google story. As Google accumulated more and more of this behavioral data, they began to develop (and patent) what they called UPIs, user profile information:
These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.
So too at Instructure, where they claim that they can already develop predictive profiles of students by combining data across courses; here's CEO Dan Goldsmith: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."

Again, as I said above, I am not really persuaded by the power of Instructure's so-called insights. If they are looking at a student's grades in all their other classes, for example, and using that GPA to predict the student's performance in a new class, sure, the GPA has some predictive power.

What I really want to know, though, is how Instructure has the right to use a student's grade data in that way, when I thought such data was private, protected by FERPA. I am not allowed to see the grades my students receive in their other courses (nor do I want to); I'm not even allowed to see the other courses they are taking — all that data is protected by FERPA. But Instructure is now apparently profiling students based on their grades in other classes (?), and then using that grade-derived data in order to insert itself as an actor in other classrooms, all without the students' permission. Now, if I am wrong in that characterization of their predictive Dig initiative, I will be glad to stand corrected (and I'm hoping for a reply to my blog post last week about those issues); I'm just going on statements made in public by Dan Goldsmith about the Dig project.

As Instructure gathers up all this data, without allowing us to opt out, they are proceeding much as Google did, unilaterally extracting without users' awareness or informed consent:
A clear aim of the [UPI] patent is to assure its audience that Google scientists will not be deterred by users’ exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users. [...] Google’s proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others’ behavior for commercial purposes.
The right to decide. Narrating the Google history year by year, Zuboff shows that the new Google emerged over time; the values and principles of Google today are not the values and principles that Google espoused at the beginning. Are we seeing the same kind of shift happening at Instructure? Re-reading this chapter of Zuboff's book, I am very concerned that this is indeed what we are seeing, a "180-degree turn from serving users to surveilling them." And as I've said repeatedly in my complaints to Instructure about its new data initiatives, this is not just about privacy; instead, it is about the right to decide:
That Google had the power to choose secrecy is itself testament to the success of its own claims. This power is a crucial illustration of the difference between “decision rights” and “privacy.” [...] Surveillance capitalism lays claim to these decision rights. The typical complaint is that privacy is eroded, but that is misleading. In the larger societal pattern, privacy is not eroded but redistributed, as decision rights over privacy are claimed for surveillance capital. [...] Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism. [...] Surveillance is the path to profit that overrides “we the people,” taking our decision rights without permission and even when we say “no.”
I've been writing about Instructure's new data analytics since back in March (My Soylent Green Moment), and I've been saying no... but it is still not clear if I am going to be able to opt out of having data from my courses included in Instructure's machine learning project, and it is also not clear if my students are going to be able to opt out of the type of profiling that Goldsmith has described. I believe that each of us needs to be able to say "no" on an individual level, not just through institutional consent and institutional opt-out, which seems (?) to be what Instructure is offering. So, I'm still hoping we will hear more about that, and sooner rather than later, given that another school year is about to begin.

One last quote... Okay, this has become another too-long blog post, so I'll close with one spectacular Zuboffian sentence... can this woman write? This woman can write! And we need to pay attention to every word here:
The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.
As a teacher at the start of a new school year, I should not have to ponder how to hide from the LMS, or how to help my students do so, but that is the position I am in. When my students create blogs and websites for my classes, they can do all of that using pseudonyms (pseudonyms are great, in fact), and they can keep or delete whatever they want at the end of the semester. But what about Instructure? Is Instructure going to take what it learns about my students in my class and then use that data to profile those students in their other classes, prejudicing those other instructors before those classes even begin? (See Goldsmith quote above re: profiling.) If that is what Instructure is going to do with data from my classes, I need to able to say "no," and so do my students.

Meanwhile, I'll be back with more from Zuboff next weekend. Thanks for reading! You can comment here if you want, or connect at Twitter (@OnlineCrsLady).

P.S. I hope that those who know more about Instructure analytics will chime in, especially anybody who's at a Unizin school. All I know about Unizin I learned from the Chronicle article here: Colleges are Banding Together ... and the claims made there sound even more alarming than Goldsmith's description of Instructure profiling. Which is to say: very alarming indeed. Claims from Brad Wheeler, Unizin cofounder:
Take students’ clickstreams and pageviews on the learning-management system, their writing habits, their participatory clicks during classroom discussions, their grades. Then combine that with information on their educational and socioeconomic backgrounds, their status as transfer students, and so on. You end up with "a unique asset," says Wheeler, in learning what teaching methods work. 
Digital publishers have learned a lot about students who use the publishers’ texts and other resources, he says, but the demographic puzzle pieces are key to discovering impediments to learning.
My school is not a participant in Unizin (I'll add: thank goodness). Here is the list: Unizin Members. If you are at a Unizin school, I would love to know more about what kind of informed consent and opt-out procedures are in place at those schools.

UPDATE: Here are the notes on Chapter 4: How Google Got Away With It

August 3, 2019

Data Analytics... no, I don't dig it

This week Jared Stein wrote a blog post about Canvas data, Power to the People with Canvas Data and Analytics (Can You Dig It?). I'm glad that a conversation is happening, and I have a lot to say in response, especially about an opt-out for those of us who don't want to be part of the AI/machine-learning project, and a shut-off so that we can stop intrusive profiling, labeling, and nudging in our Canvas classes. It's not clear from Jared's post just what kind of opt-out and shut-off control we will have, and I hope we will hear more about that in future posts. Also, since Jared does not detail any specific Dig projects, I am relying on Phil Hill's reporting from InstructureCon which describes one such project: profiling a student across courses, including past courses, and using that comprehensive course data to predict and manage their behavior in a current course. (This use of grade data across courses without the student's express consent sure looks like a violation of FERPA to me, but I'll leave that to the lawyers.)


And now, some thoughts:

1. Not everyone digs it. I understand that some people see value in Dig predictive analytics, and maybe they are even passionate about it, as Jared says in his post, but there are also people whose passions run in different directions. As I explain below, my passion is for data that emerges in actual dialogue with students, so it is imperative that I be able to stop intrusive, impersonal auto-nudges of the sort that Dig will apparently be generating. The punitive red labels in the Canvas Gradebook are already a big problem for me (my students' work is NOT missing, and it is NOT late, despite all the labels to the contrary). Based on the failure of the Gradebook algorithms in my classes, I do not want even more algorithms undermining the work I do to establish good communication and mutual trust. So, I really hope Instructure will learn a lesson from those Gradebook labels: instructors need to be able to turn off features that are unwelcome and inappropriate for their classes. Ideally, Instructure would give that power directly to the students, or let teachers choose to do so; that's what I would choose. My students voted by a large majority to turn off the labels (which I now do manually, week by week, using a JavaScript snippet), although a few students would have wanted to keep the labels. I say: let the students decide. And for crying out loud, let them choose the color too; the labels don't need to be red, do they?

2. We need to target school deficits, not student deficits. I believe that Instructure's focus on at-risk students comes from good intentions, but that cannot be our only focus. Instead, we need data to help us focus on our own failures, the deficits in our own courses: deficits in design, content, activities, feedback, assessment, etc., along with data about obstacles that students face beyond the classroom. This is a huge and incredibly important topic, way too big for this blog post, so I hope everybody might take the time to read some more about the perils of deficit-driven thinking. A few places to start:
For a great example of what happens when you invite students to talk about the obstacles they face, see this item by Peg Grafwallner: How I Helped My Students Assess Their Own Writing. Applying that approach to Canvas: instead of labeling students with red ink in the Gradebook ("you messed up!") and then auto-nudging them based on those labels ("don't mess up again!"), the labels could be more like a "what happened?" button, prompting a dialogue where the student could let the instructor know the reason(s) why they missed an assignment or did poorly, etc., and the instructor could then work with the student to find a positive step forward, based on what the student has told them. That is the way I would like to see data-gathering happen: student-initiated, in context and in dialogue.
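
To make that concrete, here is a purely hypothetical sketch of what a student-initiated "what happened?" note could look like if it simply rode on the existing submission-comment parameter of the Canvas API; no such button exists in Canvas today, and the host, ids, and token below are placeholders.

```javascript
// Purely hypothetical sketch: a student-initiated "what happened?" note,
// posted as a submission comment instead of Canvas applying a red label.
// No such feature exists in Canvas; host, ids, and token are placeholders.
const BASE = "https://canvas.example.edu/api/v1";
const HEADERS = {
  Authorization: `Bearer ${process.env.CANVAS_TOKEN}`,
  "Content-Type": "application/json",
};

async function whatHappened(courseId, assignmentId, studentId, note) {
  // The student's own explanation becomes the data point: student-initiated,
  // in context, and in dialogue with the instructor.
  await fetch(
    `${BASE}/courses/${courseId}/assignments/${assignmentId}/submissions/${studentId}`,
    {
      method: "PUT",
      headers: HEADERS,
      body: JSON.stringify({ comment: { text_comment: note } }),
    }
  );
}

// Example: whatHappened(12345, 67890, 111, "This one was optional for me, so I skipped it.");
```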

3. Dig is not just about privacy; it is about Instructure's unilateral appropriation of our data. Jared emphasizes in his post that Instructure is not selling or sharing our data, but there is more at stake here than just data privacy and data sharing. Instructure is using our data to engage in AI experiments, and they have not obtained our permission to do that; I have not consented, and would not give my consent if asked. Dan Goldsmith has stated that users "own" their data, and one of the data tenets announced at InstructureCon was "Empower People, Don't Define Them" (discussed here). Speaking for myself, as someone who does not want to be defined by Instructure's profiling and predictive algorithms, I need to be able to just opt out. In his post, Jared writes about Instructure being a "good partner" in education, "ensuring our client institutions are empowered to use data appropriately." Well, here's the thing about partnerships: they need to work both ways. It's not just about what Instructure empowers institutions to do; it is also about what we, both as institutions and as individuals, empower Instructure to do. By unilaterally appropriating our data for their own experiments, Instructure is not being a good partner to individual students and teachers. If the people at Instructure "see no value or benefit in technology that replaces teachers or sacrifices students' agency" as Jared says in his post, then I hope they will give us the agency we need to opt out of Dig data-mining and remove our courses from the Dig laboratory.

Okay, everybody knows my blog posts are always too long, so I'll stop here to keep this about the same length as Jared's post (and of course I've written a lot about these topics here at the blog already). I hope this conversation can continue, and I also hope that Instructure will explore the options for both data opt-out and feature shut-off as they proceed with the Dig rollout for 2020. Thanks for reading!