
September 1, 2019

Zuboff, Chapter 5: The Dispossession Cycle

Chapter 5, "The Elaboration of Surveillance Capitalism: Kidnap, Corner, Compete," is another one of the essential chapters in Zuboff's book. This is where she identifies the "dispossession" strategy used by Google and other companies as they implement the extraction imperative, gathering data in ways that at first appear shocking, but which they manage to make into the new normal by means of a carefully orchestrated four-step process: incursion, habituation, adaptation, and redirection.

Zuboff's focus is on the tech giants (Google, Facebook, Microsoft) and also the telecom companies (especially Verizon), but it seems to me that her analysis also applies to Instructure, along with other data-crunching edtech companies like ClassDojo. So, here are my notes on Zuboff, Chapter 5.

~ ~ ~

Frictionless. Magical. Inevitable. I don't know if any of you listened in on the Future Trends Forum with Dan Goldsmith, Instructure CEO, last week (video will be in archive), but it was very revealing. He talked about the LMS as being frictionless software, so that you don't even perceive it's there; elsewhere, he's spoken about the inevitability of data personalization. In a conversation I had with Hilary Scharton at Instructure, she talked about the magical content recommendations that Instructure is going to offer us.

All this vocabulary matches exactly the kind of language we see coming from Google, as in this Larry Page statement quoted by Zuboff:
“Our ultimate ambition is to transform the overall Google experience, making it beautifully simple, almost automagical because we understand what you want and can deliver it instantly.”
Why is Instructure crunching all that data about us, creating student profiles and teacher profiles too? It's so that they can make "automagical" recommendations. Here's Goldsmith back in March 2019:
We can start making recommendations [...] watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that.


Now, just speaking for myself, I don't think there is going to be much magic in these suggestions from Instructure. The depth of data is just not there to support that kind of insight. That can be comforting (it's mostly marcomm bluster, with nothing of real substance behind it), but it can also be alarming, because there is plenty more data Instructure could collect... and which they will have to collect if they want to make automagical recommendations to teachers and students.
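
To make that concrete, here is a purely hypothetical sketch of the kind of recommendation that plain gradebook data can actually support; the function name, threshold, and messages are all invented for illustration and have nothing to do with Instructure's actual systems:

```python
# Hypothetical illustration (not Instructure's code): the sort of
# "recommendation" that ordinary gradebook data can support on its own.
# The function name, threshold, and messages are invented for this sketch.

def recommend(quiz_score: float, passing: float = 0.70) -> str:
    """Suggest a next step based on a single quiz score between 0.0 and 1.0."""
    if quiz_score < passing:
        return "Re-read the chapter and retake the practice quiz."
    return "Move on to the next module."

print(recommend(0.55))  # prints the re-read suggestion
```

Anything more "magical" than a rule like that requires much richer behavioral data, which is exactly where the extraction imperative comes in.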

How far will they go? For example, will Instructure start doing sentiment analysis on discussion board posts? That's a logical step, but one that I imagine a lot of teachers and students would find troubling.
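
For a sense of how low the technical barrier would be, here is a minimal sketch using the open-source VADER model that ships with NLTK; the sample posts are invented, and nothing here reflects anything Instructure has announced:

```python
# Minimal sketch: scoring the sentiment of discussion posts with NLTK's VADER model.
# Requires `pip install nltk` and a one-time nltk.download("vader_lexicon").
# The posts below are invented examples, not real student data.
from nltk.sentiment import SentimentIntensityAnalyzer

posts = [
    "I really enjoyed this week's reading and the discussion questions!",
    "Honestly, I'm frustrated and confused by this assignment.",
]

analyzer = SentimentIntensityAnalyzer()
for text in posts:
    compound = analyzer.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{compound:+.2f}  {text}")
```

The point is not that these scores are meaningful in themselves; it is that bolting this kind of analysis onto text the platform already holds takes almost no effort, which is why consent and opt-out matter so much.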

Here is Zuboff writing about similar imperatives at work in Google's data extraction architecture:
There can be no boundaries that limit scale in the hunt for behavioral surplus, no territory exempted from plunder. [...] The assertion of decision rights over the expropriation of human experience, its translation into data, and the uses of those data are collateral to this process, inseparable as a shadow. [...] Google is a shape-shifter, but each shape harbors the same aim: to hunt and capture raw material. Baby, won’t you ride my car? Talk to my phone? Wear my shirt? Use my map?
In the ed tech world, that might mean, "won't you wear my glasses?" like in the AttentivU project from MIT Media Lab:


Is this the kind of education technology you want to see? I do not, but the business world sets no limits here, and the purveyors of ed tech hardware like this are going to want to partner with the LMSes, and vice versa.

We may find it outrageous, but as Zuboff documents in this chapter's account of Google Glass, the technology companies manipulate the process, step by step, so that these incursions eventually become the new normal. The hardware doesn't matter; it's all about the data the hardware gathers:
It’s not the car; it’s the behavioral data from driving the car. It’s not the map; it’s the behavioral data from interacting with the map. The ideal here is continuously expanding borders that eventually describe the world and everything in it, all the time.
And that includes the world of teachers and students, a world that we thought belonged to us... but which is instead being turned into data, data over which we have very little control.

Dispossession. Here is how Zuboff describes the first stage of the Dispossession Cycle, incursion:
The first stage of successful dispossession is initiated by unilateral incursion into undefended space. [...] Incursion moves down the road without looking left or right, continuously laying claim to decision rights over whatever is in its path. “I’m taking this,” it says. “These are mine now.”
That's exactly how I see Instructure's new move towards AI and machine learning: a unilateral claim made on our data, along with incursions into the Gradebook, labeling my students' work (incorrectly) as MISSING and LATE (yes, with red ink).

In an old-school LMS, that intrusion into the Gradebook would be unthinkable, but the new Canvas LMS has decided it knows better than teachers how to assess and label students' work. As I explain in this detailed post, Instructure actually began intruding into the Gradebook two years ago, and then they pulled back (I breathed a sigh of relief...), but when the new Gradebook launched, there it was: the labels were back, unchanged.

This pattern is the same kind of thing that Zuboff describes at Google: outrageous incursion, followed by what appears to be a retreat but which is not a retreat at all:
[Google] has learned to launch incursions and proceed until resistance is encountered. It then seduces, ignores, overwhelms, or simply exhausts its adversaries. [...] People habituate to the incursion with some combination of agreement, helplessness, and resignation. The sense of astonishment and outrage dissipates. [...] The incursion itself, once unthinkable, slowly worms its way into the ordinary. Worse still, it gradually comes to seem inevitable.
Zuboff documents this cycle in detail for Gmail, Street View, and other Google products, along with Facebook's Like button, Microsoft Cortana, the tracking IDs launched by Verizon, and so on.

Life as data. Just as Google is rendering the real world into data, something similar is now happening in education, where the human world of teaching and learning is being turned into data to be mined and manipulated:
Any public space is a fitting subject for the firm’s new breed of incursion without authorization, knowledge, or agreement. Homes, streets, neighborhoods, villages, towns, cities: they are no longer local scenes where neighbors live and walk, where residents meet and talk. Google Street View, we are informed, claims every place as just another object among objects in an infinite grid of GPS coordinates and camera angles. [...] Google’s prerogative to empty every place of the subjective meanings that unite the human beings who gather there. 
Instead of human beings who gather to learn, teachers and students with our own subjective realities and personal goals, we are now gathering in order to be measured and rendered as data.

Google wraps this up in "empowering people" rhetoric that is very similar to the rhetoric we hear from Instructure: this data is going to empower students, empower teachers, empower institutions, and so on. Here's Google's John Hanke speaking about Street View, for example:
[Hanke] declared that Street View’s information was “good for the economy and good for us as individuals.… It is about giving people powerful information so that they can make better choices.” [...] Hanke’s remarks were wishful thinking, of course, but they were consistent with Google’s wider practice: it’s great to empower people, but not too much, lest they notice the pilfering of their decision rights and try to reclaim them. 
So, I am now wondering about just what we have to do to reclaim our decision rights at Instructure. Dan Goldsmith stated very clearly in the Future Trends Forum meeting that individuals own their data. Not institutions, not Instructure: individuals. He's said similar things before, and of course I am glad to hear it, but I have to ask what it means for us to own our data. What are the powers that come with ownership? Can I export my data? Can I remove my data from my institution's data set for AI and machine learning experiments? Can I remove my data from Instructure's AI and machine learning experiments?

In short: do I have the right to decide just who will use my data and how? And will that be opt-in or opt-out? I am really hoping to hear more about that from Instructure.

First day of school to last day of work. That's Instructure's new company slogan; you can read more about their recent rebranding here: Connecting our Brand and Website to a Mission of Lifelong Learning.



That Instructure rhetoric sounds a lot like the Google rhetoric which Zuboff focuses on in this chapter, where Google wants to be part of all our decisions, big and small, throughout all aspects of our lives:
Google the “copilot” prompts an individual to turn left and right on a path defined by its continuously accruing knowledge of the person and the context. Predictions about where and why a person might spend money are derived from Google’s exclusive access to behavior surplus and its equally exclusive analytic capabilities. [...] Push and pull, suggest, nudge, cajole, shame, seduce: Google wants to be your copilot for life itself.
And it's not just Google; Microsoft's Cortana assistant has similar aspirations:
One Microsoft executive characterizes Cortana’s message: “‘I know so much about you. I can help you in ways you don’t quite expect. I can see patterns that you can’t see.’ That’s the magic.”
Even more so now that Microsoft has acquired LinkedIn, so that Microsoft CEO Nadella can proclaim:
“Today Cortana knows about you, your organization and about the world. In the future, Cortana will also know your entire professional network to connect dots on your behalf and stay one step ahead.”
It's the same kind of rhetoric we're hearing from Instructure with its new overarching school-and-work mission. Instead of offering teachers software and webspace for their courses (which is what Canvas used to be), Instructure is now using both school and work spaces to extract data and build profiles of individuals for educational and professional purposes alike (Canvas, Bridge, and all the other Instructure products).

But do we really want Instructure to "connect the dots on our behalf"? Speaking for myself, I do not. And that's why I need to know how to opt out of the AI experiments that Instructure has started. I've been raising these questions since back in March, and it is still not clear to me what data is, and is not, being used in DIG and other machine-learning experiments. I hope we will get clarification about that soon, along with a procedure so that we can opt ourselves out of those data-mining experiments. We need to be able to remove our own data from those cross-course and cross-institutional data sets, limiting the data use to the courses in which the data was collected.

As Zuboff points out in this chapter, these tech corporations seek to seduce, ignore, overwhelm, or simply exhaust their adversaries... but as regards the data opt-out, I am not giving up! :-)

~ ~ ~

So, those are my thoughts from Chapter 5, and I'll be back next week with my notes on Chapter 6. Thanks for reading!