
October 27, 2019

Zuboff, Chapter 7: Reality Business / School Business

Yep, it's been a month (ugh, not a great month), but I realized during the Can*Innovate conference last week (I presented on randomizers... UN-prediction!) that I need to get back to work on summarizing Zuboff. I know that most people are not going to read her book, but maybe they will read these notes and think more critically about not just the LMS but the way Learning Analytics is now driving the LMS. This week's chapter from Zuboff addresses that directly, too, since this is the chapter where she begins the discussion about behavior modification.

The fact that behaviorist assumptions run deep in education is a big reason why, I suspect, many people uncritically accept the claims of learning analytics and the behavior modification agenda that goes with them. For teachers who approach education as a behavior modification project, learning analytics are just what they need. But for teachers who approach education with a belief in human freedom, we need to be aware of how the LMS constrains our students' freedom, and our freedom as teachers too.

And let there be no mistake about it: right now the emphasis is on learning analytics to monitor and control students, but that is just the beginning; there will also be teaching analytics to monitor and control teachers. So keep those thoughts in mind regarding the Internet of Things, which is the topic of Zuboff's seventh chapter: The Reality Business.


1. The Prediction Imperative

We've been reading more and more in the education news about the data that schools are now seeking to collect about their students in order to create better predictive algorithms. Not just attendance in class, but going to the library, etc. That is what this chapter is about: the need to gather more and more data to create new predictive products:
Even the most sophisticated process of converting behavioral surplus into products that accurately forecast the future is only as good as the raw material available. [...] Surveillance capitalists therefore must ask this: what forms of surplus enable the fabrication of prediction products that most reliably foretell the future? This question marks a critical turning point in the trial-and-error elaboration of surveillance capitalism. It crystallizes a second economic imperative—the prediction imperative—and reveals the intense pressure that it exerts on surveillance capitalist revenues. [...] Compelled to improve predictions, surveillance capitalists such as Google understood that they had to widen and diversify their extraction architectures to accommodate new sources of surplus and new supply operations.
Zuboff presents the data-gathering grab as two different processes: extension and depth. Extension is about reach:
Extension wants your bloodstream and your bed, your breakfast conversation, your commute, your run, your refrigerator, your parking space, your living room.
So, in education, that means not just what is happening in the classroom, but in the dorm room, the library, dining halls, etc.

Then, there is depth:
The idea here is that highly predictive, and therefore highly lucrative, behavioral surplus would be plumbed from intimate patterns of the self. These supply operations are aimed at your personality, moods, and emotions, your lies and vulnerabilities.
In this context, think about "sentiment analysis" and other data-mining that schools want to run on LMS discussion boards or students' social media, their Internet search history, etc. 
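Just to make concrete how little it takes to render a student's words as "behavioral data," here is a minimal sketch of a lexicon-based sentiment scorer of the kind that could be pointed at discussion posts. The word lists and the scoring rule are invented for illustration; this is not any vendor's actual method.

```python
# A toy lexicon-based sentiment scorer: the word lists and the scoring rule
# are made up for illustration only, not any LMS vendor's actual pipeline.

POSITIVE = {"enjoyed", "great", "helpful", "love", "interesting"}
NEGATIVE = {"confused", "frustrated", "boring", "hate", "stressed"}

def sentiment_score(post: str) -> float:
    """Return a crude score in [-1, 1]; below zero reads as 'negative affect'."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I enjoyed the reading but I am stressed about the quiz."))
# one positive hit and one negative hit cancel out -> 0.0
```

The point is not that anyone would deploy something this crude; it is that once discussion posts are flowing through a platform, reducing them to numbers like this is trivial, and the hard questions are about consent and use, not feasibility.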

Beyond the data gathering, broad and deep, is the behavior modification; this is what Zuboff calls economies of action:
Behavioral surplus must be vast and varied, but the surest way to predict behavior is to intervene at its source and shape it. The processes invented to achieve this goal are what I call economies of action. [...] These interventions are designed to enhance certainty by doing things: they nudge, tune, herd, manipulate, and modify behavior in specific directions.
Yep, all the nudges. Educators will claim that they are seeking to modify student behaviors only in positive directions, for positive outcomes. That is why one of the main questions we all need to ask ourselves is how we see our role as educators. Is behavior modification at the heart of our teaching project? Or do we have other ideas about our roles as teachers? One good way to address that question is to ask yourself how you would feel being continuously monitored and nudged to change your behavior as a teacher. For more on that, see this powerful new essay by Alfie Kohn: How Not to Get a Standing Ovation at a Teachers’ Conference.

2. The Tender Conquest of Unrestrained Animals

This section of the chapter is really eye-opening: Zuboff looks at the telemetry used by scientists to monitor animals and climate, gathering data that could never be collected in a zoo or replicated in a laboratory:
It was a time when scientists reckoned with the obstinacy of free-roaming animals and concluded that surveillance was the necessary price of knowledge. Locking these creatures in a zoo would only eliminate the very behavior that scientists wanted to study, but how were they to be surveilled? [...] The key principle was that his telematics operated outside an animal’s awareness.
One such scientist was R. Stuart MacKay:
MacKay’s inventions enabled scientists to render animals as information even when they believed themselves to be free, wandering and resting, unaware of the incursion into their once-mysterious landscapes.  
One of the recurring themes throughout this chapter is the tension between scientific curiosity and capitalist exploitation:
MacKay yearned for discovery, but today’s “experimenters” yearn for certainty as they translate our lives into calculations. [...] Now, the un-self-conscious, easy freedom enjoyed by the human animal—the sense of being unrestrained that thrives in the mystery of distant places and intimate spaces—is simply friction on the path toward surveillance revenues.
That "easy freedom" is something that I am prepared to fight for, as an educator.

3. Human Herds

In this section, Zuboff focuses on work by Joseph Paradiso and his colleagues at the MIT Media Lab, with their quest to build something like a browser for reality itself, a browser not for an Internet of webpages but for that Internet of things... all the things. 
Just as browsers like Netscape first “gave us access to the mass of data contained on the internet, so will software browsers enable us to make sense of the flood of sensor data that is on the way.” [...] Paradiso is confident that “a proper interface to this artificial sensoria promises to produce… a digital omniscience… a pervasive everywhere augmented reality environment… that can be intuitively browsed” just as web browsers opened up the data contained on the internet.
Again, this sense of scientific challenge is no shield against the business ramifications:
For all their brilliance, these creative scientists appear to be unaware of the restless economic order eager to commandeer their achievements under the flag of surveillance revenues.
That is my fear also: yes, there might be things I am curious to know about my students, and things it might even be useful for me to know, but not at the risk of empowering data-gathering processes and markets that extend far beyond my classroom, real or virtual.

4. Surveillance Capitalism’s Realpolitik

In this section, Zuboff shifts from that sense of scientific curiosity to the real business projects based on converting reality into a data stream, with a focus on IBM’s $3 billion investment in the “internet of things,” a project led by Harriet Green. For these projects to succeed, there cannot be "dark data," data that is out of reach:
Because the apparatus of connected things is intended to be everything, any behavior of human or thing absent from this push for universal inclusion is dark: menacing, untamed, rebellious, rogue, out of control. [...] The tension is that no thing counts until it is rendered as behavior, translated into electronic data flows, and channeled into the light as observable data. Everything must be illuminated for counting and herding. [quoting Harriet Green] “You know the amount of data being created on a daily basis—much of which will go to waste unless it is utilized. This so-called dark data represents a phenomenal opportunity… the ability to use sensors for everything in the world to basically be a computer, whether it’s your contact lens, your hospital bed, or a railway track.”
At the same time that ed-tech companies seek to gather all the data of a student's life, they are also de-contextualizing that data, rendering everything as behavior, objectifying everything and everyone:
Each rendered bit is liberated from its life in the social, no longer inconveniently encumbered by moral reasoning, politics, social norms, rights, values, relationships, feelings, contexts, and situations. In the flatness of this flow, data are data, and behavior is behavior. [...] All things animate and inanimate share the same existential status in this blended confection, each reborn as an objective and measurable, indexable, browsable, searchable “it.” [...] His washing machine, her car’s accelerator, and your intestinal flora are collapsed into a single dimension of equivalency as information assets that can be disaggregated, reconstituted, indexed, browsed, manipulated, analyzed, reaggregated, predicted, productized, bought, and sold: anywhere, anytime.
It used to be that student "surveillance" consisted of teachers taking attendance and giving tests. The world of ed-tech surveillance has changed that into something profoundly different, and profoundly alienating for both students and teachers. Our classroom is not our classroom any longer.

5. Certainty for Profit

This section focuses on the way that predictive products fundamentally change the nature of a business like insurance, which is no longer about communities and shared risk, but individualization based on data analytics and predictive algorithms. Does anybody know of a good write-up on how the same process could undermine education? Traditionally, education was a community project, but it seems to me that, by analogy, the predictive analytics that are fundamentally changing the insurance business will change the education business in the same way.

Here are some of Zuboff's comments about telematics in the auto insurance world:
This leads to demutualization and a focus on predicting and managing individual risks rather than communities. [...] Telematics are not intended merely to know but also to do (economies of action). They are hammers; they are muscular; they enforce. Behavioral underwriting promises to reduce risk through machine processes designed to modify behavior in the direction of maximum profitability. [...] Telematics announce a new day of behavioral control.
Another ominous education parallel is the use of gamification (think ClassDojo); when people push back on these metrics as an invasion of privacy, the insurance companies respond by presenting the monitoring as "fun":
If price inducements don’t work, insurers are counseled to present behavioral monitoring as “fun,” “interactive,” “competitive,” and “gratifying,” rewarding drivers for improvements on their past record and “relative to the broader policy holder pool.” [...] In this approach, known as “gamification,” drivers can be engaged to participate in “performance based contests” and “incentive based challenges.”
Of course, gamification does not have to work this way... but it can. And for how that is playing out in education, see Ben Williamson on ClassDojo here: Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry

6. Executing the Uncontract

In this section, Zuboff discusses how what is today the stuff of marketing hype would once have been considered a dystopian nightmare.
Yet now that same nightmare is rendered as an enthusiastic progress report on surveillance capitalism’s latest triumphs. [...] How has the nightmare become banal? Where is our sense of astonishment and outrage?
To answer this question, Zuboff proposes the idea of an uncontract, which has rendered us as passive agents:
The uncontract is not a space of contractual relations but rather a unilateral execution that makes those relations unnecessary. The uncontract desocializes the contract, manufacturing certainty through the substitution of automated procedures for promises, dialogue, shared meaning, problem solving, dispute resolution, and trust: the expressions of solidarity and human agency that have been gradually institutionalized in the notion of “contract” over the course of millennia. [...] The uncontract bypasses all that social work in favor of compulsion.
What Zuboff calls the "substitution of machine work for social work" is an enormous threat in education today, with the most vulnerable populations the most likely to have their agency taken over by machine processes.

7. Inevitabilism

The nightmare has not just become normalized; it has become inevitable.
Among high-tech leaders, within the specialist literature, and among expert professionals there appears to be universal agreement on the idea that everything will be connected, knowable, and actionable in the near future: ubiquity and its consequences in total information are an article of faith. [...] Paradiso’s conception of a “digital omniscience” is taken for granted, with little discussion of politics, power, markets, or governments. As in most accounts of the apparatus, questions of individual autonomy, moral reasoning, social norms and values, privacy, decision rights, politics, and law take the form of afterthoughts and genuflections that can be solved with the correct protocols or addressed with still more technology solutions.
Are data analytics inevitable? The folks at Instructure think so (Instructure CEO Dan Goldsmith: "So when you think about adaptive and personalized learning I think it's inevitable"), but Zuboff reminds us about the three essential questions we must ask, questions whose answers are not inevitable.
What if I don’t want my life streaming through your senses? Who knows? Who decides? Who decides who decides?
There then follows one of the most interesting parts of this chapter: Zuboff talked to Silicon Valley engineers to find out what they thought about inevitabilism. Answer: these insiders at the heart of the "inevitability" know better.
Nearly every interviewee regarded inevitability rhetoric as a Trojan horse for powerful economic imperatives.
Here is a quote from one of those interviewees:
“There’s all that dumb real estate out there and we’ve got to turn it into revenue. The ‘internet of things’ is all push, not pull. Most consumers do not feel a need for these devices. You can say ‘exponential’ and ‘inevitable’ as much as you want. The bottom line is that the Valley has decided that this has to be the next big thing so that firms here can grow.”
Push, not pull: that to me is very much what is happening with analytics in the LMS. And when you push back and say you do not want them, lo and behold, you cannot turn them off. I just want the ability to opt out, but I am growing less and less hopeful about that. And I still can't turn off the (wrong) Canvas Gradebook labeling of my students.

8. Men Made It

The title of this subchapter comes from Steinbeck's Grapes of Wrath, where it refers to the banking system, made by men but beyond their control: "The bank is something more than men, I tell you. It's the monster. Men made it, but they can't control it."


Thus the bitter paradox of using our agency to build systems that deprive us of agency:
Every doctrine of inevitability carries a weaponized virus of moral nihilism programmed to target human agency and delete resistance and creativity from the text of human possibility.
Zuboff insists it does not have to be this way; it is NOT inevitable.
We know that there can be alternative paths to a robust information capitalism that produces genuine solutions for a third modernity. [...] Inevitabilism precludes choice and voluntary participation. It leaves no room for human will as the author of the future. [...] Will inevitabilism’s utopian declarations summon new forms of coercion designed to quiet restless populations unable to quell their hankering for a future of their choice?
And I return again and again to the most distinctive feature in Canvas LMS: there is no choice. You cannot build a course in Canvas predicated on the idea that students will choose to do things, or not to do things. The learning management system turns student agency into compliance.

And data collection.

9. To the Ground Campaign

The final section is about Google's Sidewalk Labs and the creation of "Google Cities." Mutatis mutandis, you can see the same thing happening to universities as they allow themselves to be rendered as data. Ironically, Sidewalk Labs presents itself as a way to combat digital inequality, just as some proponents of learning analytics insist that they, too, want to help students:
Sidewalk Labs’ first public undertaking was the installation of several hundred free internet-enabled kiosks in New York City, ostensibly to combat the problem of “digital inequality.” [...] Sidewalk’s data flows combine public and private assets for sale in dynamic, real-time virtual markets that extract maximum fees from citizens and leave municipal governments dependent upon Sidewalk’s proprietary information.
So, yes, this is what I thought about at Can*Innovate: while people cheer on the ability to track student views of LMS Pages, the real discussions are happening offstage:
The realpolitik of commercial surveillance operations is concealed offstage while the chorus of actors singing and dancing under the spotlights holds our attention and sometimes even our enthusiasm. [...quoting Google's Eric Schmidt] “The genesis of the thinking for Sidewalk Labs came from Google’s founders getting excited thinking of ‘all the things you could do if someone would just give us a city and put us in charge.’”
Do we really want to put the LMS more and more in charge of the education we deliver online? I certainly do not, which is why I am still (still...) hoping for the ability to opt out of Instructure's use of data from my courses for its machine learning experiments and the development of its predictive algorithms.

I don't want to predict my students' futures. I want them to choose their futures, and I will do my best to then help them get there.

September 15, 2019

Zuboff, Chapter 6: Hijacked! The Queen's to Command

Once again, my Zuboff note-taking resonates eerily with what I see happening in the ed-tech world around me. Last week, I skipped writing up notes from Zuboff to document the weird Twitter event surrounding #ChangeWithAnalytics; here's that post: The Buzz and the Buzzkill. When I resumed note-taking this week with Zuboff's Chapter 6, the opening topic of that chapter — conquest by declaration — resonated perfectly with the declarations by the #ChangeWithAnalytics crew:


To make things even more eerie, just as Zuboff analyzes six declarations by Google (see below), there are also six principles being promulgated as part of #ChangeWithAnalytics: "The thoughtful application of the following six principles will accelerate the meaningful use of analytics and take advantage of the power of data to make the decisions and take the actions that just may save higher education. Really." Here are their principles with cutesy graphics; for Google's declarations, read on.


To get started, Zuboff takes us back to 1492:
On December 4, 1492, Columbus escaped the onshore winds that had prevented his departure from the island that we now call Cuba. Within a day he dropped anchor off the coast of a larger island known to its people as Quisqueya or Bohio, setting into motion what historians call the “conquest pattern.” [...] It’s a design that unfolds in three phases: the invention of legalistic measures to provide the invasion with a gloss of justification, a declaration of territorial claims, and the founding of a town to legitimate and institutionalize the conquest.
The unsuspecting inhabitants are now the Queen's to command:
Convinced that the island was “his best find so far, with the most promising environment and the most ingenious inhabitants,” he declared to Queen Isabella, “it only remains to establish a Spanish presence and order them to perform your will. For… they are yours to command and make them work, sow seed, and do whatever else is necessary, and build a town, and teach them to wear clothes and adopt our customs.”
Zuboff brings in the philosopher John Searle's work on speech acts to help us grasp just what is going on with this type of declaration:
A declaration is a particular way of speaking and acting that establishes facts out of thin air, creating a new reality where there was nothing [...] asserting a new reality by describing the world as if a desired change were already true: “All humans are created equal.” “They are yours to command.” As Searle concludes, “All of institutional reality, and therefore… all of human civilization is created by… declarations.”
A key feature of this conquest is the insistence on its inevitability, as we hear also in the ed-tech world (Instructure CEO Dan Goldsmith: "So when you think about adaptive and personalized learning I think it's inevitable..." and also the inevitabilism of the #ChangeWithAnalytics campaign):
As historian Matthew Restall writes: Sixteenth-century Spaniards consistently presented their deeds and those of their compatriots in terms that prematurely anticipated the completion of Conquest campaigns and imbued Conquest chronicles with an air of inevitability. The native people were summoned, advised, and forewarned in a language they could not fathom to surrender without resistance in recognition of authorities they could not conceive.
Zuboff then presents the claims of Google, showing how they serve the same function of conquest:
(1) We claim human experience as raw material free for the taking. 
(2) On the basis of our claim, we assert the right to take an individual’s experience for translation into behavioral data.  
(3) Our right to take, based on our claim of free raw material, confers the right to own the behavioral data derived from human experience.  
(4) Our rights to take and to own confer the right to know what the data disclose.  
(5) Our rights to take, to own, and to know confer the right to decide how we use our knowledge.  
(6) Our rights to take, to own, to know, and to decide confer our rights to the conditions that preserve our rights to take, to own, to know, and to decide.
As Zuboff shows, while it was Google who pioneered this data-dispossession strategy, it is now a pervasive corporate practice, and of course we see it in the new moves by the LMSes, as Instructure has claimed the right to consider what we do inside the LMS as experience free for them to take and to translate into behavioral data (even if that has nothing to do with the goals and purposes of our actions). On that basis, Instructure also claims the right to know what that data disclose (even if we do not need or want an ed-tech company to know these things about us), and to act on that knowledge (even if those actions are misguided or unwelcome, like the Gradebook labeling).

The "division of learning" that Zuboff then invokes is meant to echo the "division of labor," which was a hallmark of late 19th- and 20th-century capitalism.
In our time the division of learning emerges from the economic sphere as a new principle of social order and reflects the primacy of learning, information, and knowledge in today’s quest for effective life. [...]  Today our societies are threatened as the division of learning drifts into pathology and injustice at the hands of the unprecedented asymmetries of knowledge and power that surveillance capitalism has achieved.
Now, instead of a division of labor, it is a division of learning, which involves knowledge, authority, and power, expressed in the form of three questions:
The first question is “Who knows?”
The second question is “Who decides?”
The third question is “Who decides who decides?”
The deskilling of humans in order to invest in machines, which Zuboff describes as an early phase of surveillance capitalism, is exactly the crisis we are now facing in education, as more and more purveyors of ed-tech tell us that it is not the teachers or the students who know; it is the machines... and so we should invest, not in people, but in those machines, turning over not just our money, but also the actual work of education.
The answer to the question Who knows? was that the machine knows, along with an elite cadre able to wield the analytic tools to troubleshoot and extract value from information. [...] How different might our society be if US businesses had chosen to invest in people as well as in machines? [...] Most companies opted for the smart machine over smart people, producing a well-documented pattern that favors substituting machines and their algorithms for human contributors in a wide range of jobs.
Indeed! And how different the whole field of online education would look right now if most schools and colleges had spent the past 20 years investing not in LMS companies and their contraptions, but instead in the teachers and students who are engaged in the actual work of education.

As a result of the data-dispossession process, which is now fully entrenched in education, the LMSes and companies like TurnItIn control what Zuboff describes as the two texts. There is one outward text, which we both read and write: our posts in the LMS discussion boards, the answers we enter for quizzes, the essays we deposit in the dropbox. That first text might feel like it is "us," like it is "ours," but it is in reality just a business mechanism, a way for Instructure and TurnItIn to construct the shadow text on which their data-based businesses depend:
The first text actually functions as the supply operation for the second text: the shadow text. [...] The shadow text is a burgeoning accumulation of behavioral surplus and its analyses, and it says more about us than we can know about ourselves. [...] Worse still, it becomes increasingly difficult, and perhaps impossible, to refrain from contributing to the shadow text. It automatically feeds on our experience as we engage in the normal and necessary routines of social participation. [...] As the source from which all the treasure flows, this second text is about us, but it is not for us. Instead, it is created, maintained, and exploited outside our awareness for others’ benefit.
This is why Instructure is going to find it very hard to accommodate requests to opt out of the data-mining. It used to be that we could just use the LMS, and all the data was just dumped at the end of the course. Now, all that digital exhaust is what the company runs on; they cannot do without it, which means they cannot just let us opt out. And, just like the conquistadors, they did not ask us to opt in. They simply took the data, building the shadow text without securing our permission to do so, at least not in any meaningful way. Instead, they presented us with take-it-or-leave-it terms-of-service that simply made us the Queen's to command.

The dangers are real, and they are much bigger than ed-tech; as Zuboff shows, this is a question about the future of democracy, about the future of "the future" itself:
Surveillance capitalism’s ability to corrupt and control these texts produces unprecedented asymmetries of knowledge and power. [...] These trends [are] incompatible not only with privacy but with the very possibility of democracy, which depends upon a reservoir of individual capabilities associated with autonomous moral judgment and self-determination.
And for those of us who believe in democratic education, these words from Paul Schwartz (Director of the Berkeley Center for Law and Technology) are chilling:
The more that is known about a person, the easier it is to control him. Insuring the liberty that nourishes democracy requires a structuring of societal use of information and even permitting some concealment of information.
So too with ed-tech: knowing "everything" about students in order to more fully control them does not advance the cause of education; just the opposite.

Zuboff closes the chapter with the warning that the struggle we now face is unprecedented. That is also why we were caught off guard:
We were caught off guard because there was no way that we could have imagined these acts of invasion and dispossession, any more than the first unsuspecting Taíno cacique could have foreseen the rivers of blood that would flow from his inaugural gesture of hospitality toward the hairy, grunting, sweating men, the adelantados who appeared out of thin air waving the banner of the Spanish monarchs and their pope as they trudged across the beach.
I remember how these passages about the Conquest floored me when I read the book the first time last spring. For literally 20 years I looked on the LMS as some clunky piece of junk, and I could not understand why teachers were using it when we had so many better alternatives. In my preoccupation with the clunkiness of the LMS, like the awkwardness of those grunting, sweating conquistadors, I failed to realize how sinister the LMS had become until there it was: machine-learning and predictive algorithms that claim to foretell my students' educational outcomes before we even begin the semester, dispossessing us of our right to the future tense. It is our freedom that is at stake here:
These operations challenge our elemental right to the future tense, which is the right to act free of the influence of illegitimate forces that operate outside our awareness to influence, modify, and condition our behavior.
So ends Part I of Zuboff's book, and I'll move on to Part II next week.

~ ~ ~

An Instructure update. Back in July, Jared Stein wrote a blog post at the Canvas blog: Power to the People with Canvas Data and Analytics (and just as Zuboff warns, the cycle of data dispossession likes to wrap itself up in the rhetoric of freedom and empowerment). At the time, Jared said there would be further details in a future blog post, so I just now checked to see if another post had shown up. Nothing yet, but I saw this new post: Growing the Wonderful World of Learning. Check out the first sentence; it's one of those surreal declarations that is gaslight-worthy: "We don’t look at education as a 'business.'"


Anyway, nothing new yet at the Canvas blog about the possibility of a data opt-out, but until I actually hear the words "no, there will be no opt-out," I am going to keep on asking.

And I'll be back with Chapter 7 of Zuboff next week. Thanks for reading!


September 1, 2019

Zuboff, Chapter 5: The Dispossession Cycle

Chapter 5 — The Elaboration of Surveillance Capitalism: Kidnap, Corner, Compete — is another one of the essential chapters in Zuboff's book. This is where she identifies the "dispossession" strategy used by Google and other companies as they implement the extraction imperative, gathering data in ways that at first appear shocking, but which they manage to make into the new normal by means of a carefully orchestrated four-step process: incursion, habituation, adaptation, and redirection.

Zuboff's focus is on the tech giants (Google, Facebook, Microsoft) and also the telecom companies (esp. Verizon), but it seems to me that her analysis also applies to Instructure, along with other data-crunching edtech companies like ClassDojo, etc. So, here are my notes on Zuboff, Chapter 5.

~ ~ ~

Frictionless. Magical. Inevitable. I don't know if any of you listened in on the Future Trends Forum with Dan Goldsmith, Instructure CEO, last week (video will be in archive), but it was very revealing. He talked about the LMS as being frictionless software, so that you don't even perceive it's there; elsewhere, he's spoken about the inevitability of data personalization. In a conversation I had with Hilary Scharton at Instructure, she talked about the magical content recommendations that Instructure is going to offer us.

All this vocabulary fits exactly with the kind of vocabulary we see coming from Google, as for example Larry Page as quoted in Zuboff:
“Our ultimate ambition is to transform the overall Google experience, making it beautifully simple, almost automagical because we understand what you want and can deliver it instantly.”
Why is Instructure crunching all that data about us, creating student profiles and teacher profiles too? It's so that they can make "automagical" recommendations. Here's Goldsmith back in March 2019: "We can start making recommendations [...] watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that."


Now, just speaking for myself, I don't think there is going to be very much magic in these suggestions from Instructure. The depth of data is just not there to support that kind of insight. So, that can be comforting (this is all a lot of marcomm bluster with nothing really of substance), but it can also be alarming, because there is plenty more data Instructure could get... and which they are going to have to get if they want to make automagical recommendations to teachers and students.

How far will they go? For example, will Instructure start doing sentiment analysis on discussion board posts? That's a logical step, but one that I imagine a lot of teachers and students would find troubling.

Here is Zuboff writing about similar imperatives at work in Google's data extraction architecture:
There can be no boundaries that limit scale in the hunt for behavioral surplus, no territory exempted from plunder. [...] The assertion of decision rights over the expropriation of human experience, its translation into data, and the uses of those data are collateral to this process, inseparable as a shadow. [...] Google is a shape-shifter, but each shape harbors the same aim: to hunt and capture raw material. Baby, won’t you ride my car? Talk to my phone? Wear my shirt? Use my map?
In the ed tech world, that might mean, "won't you wear my glasses?" like in the AttentivU project from MIT Media Lab:


Is this the kind of education technology you want to see? I do not, but the business world sets no limits here, and the purveyors of ed tech hardware like this are going to want to partner with the LMSes, and vice versa.

We may find it outrageous, but as Zuboff documents in the story of Google Glass in this chapter, the technology companies are manipulating the system in a step-by-step process so that these outrageous incursions eventually become the new normal. The hardware doesn't matter; it's all about the data which the hardware gathers:
It’s not the car; it’s the behavioral data from driving the car. It’s not the map; it’s the behavioral data from interacting with the map. The ideal here is continuously expanding borders that eventually describe the world and everything in it, all the time.
And that includes the world of teachers and students, a world that we thought belonged to us... but which is instead being turned into data, data over which we have very little control.

Dispossession. Here is how Zuboff describes the first stage of the Dispossession Cycle, incursion:
The first stage of successful dispossession is initiated by unilateral incursion into undefended space. [...] Incursion moves down the road without looking left or right, continuously laying claim to decision rights over whatever is in its path. “I’m taking this,” it says. “These are mine now.”
That's exactly how I see Instructure's new move towards AI and machine learning: a unilateral claim made on our data, along with incursions into the Gradebook, labeling my students' work (incorrectly) as MISSING and LATE (yes, with red ink).

In an old-school LMS, that intrusion into the Gradebook would be unthinkable, but the new Canvas LMS has decided it knows better than teachers how to assess and label students' work. As I explain in this detailed post, Instructure actually began intruding into the Gradebook two years ago, and then they pulled back (I breathed a sigh of relief...), but when the new Gradebook launched, there it was: the labels were back, unchanged.

This pattern is the same kind of thing that Zuboff describes at Google: outrageous incursion, followed by what appears to be a retreat but which is not a retreat at all:
[Google] has learned to launch incursions and proceed until resistance is encountered. It then seduces, ignores, overwhelms, or simply exhausts its adversaries. [...] People habituate to the incursion with some combination of agreement, helplessness, and resignation. The sense of astonishment and outrage dissipates. [...] The incursion itself, once unthinkable, slowly worms its way into the ordinary. Worse still, it gradually comes to seem inevitable.
Zuboff documents this cycle in detail for Gmail, Street View, and other Google products, along with Facebook's Like button, Microsoft Cortana, the tracking IDs launched by Verizon, and so on.

Life as data. Just as Google is rendering the real world into data, something similar is now happening for education, where the human world of education is being transformed into data to be mined and manipulated:
Any public space is a fitting subject for the firm’s new breed of incursion without authorization, knowledge, or agreement. Homes, streets, neighborhoods, villages, towns, cities: they are no longer local scenes where neighbors live and walk, where residents meet and talk. Google Street View, we are informed, claims every place as just another object among objects in an infinite grid of GPS coordinates and camera angles. [...] Google’s prerogative to empty every place of the subjective meanings that unite the human beings who gather there. 
Instead of human beings who gather to learn, teachers and students with our own subjective realities and personal goals, we are now gathering in order to be measured and rendered as data.

Google wraps this up in "empowering people" rhetoric that is very similar to the rhetoric we hear from Instructure: this data is going to empower students, empower teachers, empower institutions, etc. etc. Here's Google's John Hanke speaking about Street View, for example:
[Hanke] declared that Street View’s information was “good for the economy and good for us as individuals.… It is about giving people powerful information so that they can make better choices.” [...] Hanke’s remarks were wishful thinking, of course, but they were consistent with Google’s wider practice: it’s great to empower people, but not too much, lest they notice the pilfering of their decision rights and try to reclaim them. 
So, I am now wondering about just what we have to do to reclaim our decision rights at Instructure. Dan Goldsmith stated very clearly in the Future Trends Forum meeting that individuals own their data. Not institutions, not Instructure: individuals. He's said similar things before, and of course I am glad to hear it, but I have to ask what it means for us to own our data. What are the powers that come with ownership? Can I export my data? Can I remove my data from my institution's data set for AI and machine learning experiments? Can I remove my data from Instructure's AI and machine learning experiments?

In short: do I have the right to decide just who will use my data and how? And will that be opt-in or opt-out? I am really hoping to hear more about that from Instructure.

First day of school to last day of work. That's Instructure's new company slogan; you can read more about their recent rebranding here: Connecting our Brand and Website to a Mission of Lifelong Learning.



That Instructure rhetoric sounds a lot like the Google rhetoric which Zuboff focuses on in this chapter, where Google wants to be part of all our decisions, big and small, throughout all aspects of our lives:
Google the “copilot” prompts an individual to turn left and right on a path defined by its continuously accruing knowledge of the person and the context. Predictions about where and why a person might spend money are derived from Google’s exclusive access to behavior surplus and its equally exclusive analytic capabilities. [...] Push and pull, suggest, nudge, cajole, shame, seduce: Google wants to be your copilot for life itself.
And it's not just Google; Microsoft's Cortana assistant has similar aspirations:
One Microsoft executive characterizes Cortana’s message: “‘I know so much about you. I can help you in ways you don’t quite expect. I can see patterns that you can’t see.’ That’s the magic.”
Even more so now that Microsoft has acquired LinkedIn, so that Microsoft CEO Nadella can proclaim:
“Today Cortana knows about you, your organization and about the world. In the future, Cortana will also know your entire professional network to connect dots on your behalf and stay one step ahead.”
It's the same kind of rhetoric we're hearing from Instructure with its new overarching school-and-work mission. Instead of offering teachers software and webspace for their courses (how Canvas used to be), they are now using both school and work spaces to extract data and profile individuals for both educational and professional projects (Canvas, Bridge, all the Instructure products).

But do we really want Instructure to "connect the dots on our behalf"? Speaking for myself, I do not. And that's why I need to know how to opt out of the AI experiments that Instructure has started. I've been raising these questions since back in March, and it is still not clear to me what data is, and is not, being used in DIG and other machine-learning experiments. I hope we will get clarification about that soon, along with a procedure so that we can opt ourselves out of those data-mining experiments. We need to be able to remove our own data from those cross-course and cross-institutional data sets, limiting the data use to the courses in which the data was collected.

As Zuboff points out in this chapter, these tech corporations seek to seduce, ignore, overwhelm, or simply exhaust their adversaries... but as regards the data opt-out, I am not giving up! :-)

~ ~ ~

So, those are my thoughts from Chapter 5, and I'll be back next week with my notes on Chapter 6. Thanks for reading!


August 18, 2019

Zuboff, Chapter 4: How Google Got Away With It

Last week was Zuboff's chapter on the discovery of surveillance capitalism, based on using surplus behavioral data for user profiles and predictions; the parallels to the LMS were, in my opinion, both clear and frightening, and that was the focus of my post. In this week's post, I'll be sharing my notes on Zuboff's chapter about "how Google got away with it," and, coincidentally, this week is also when Google announced two new moves in its effort to automate education: on Wednesday, they announced a plagiarism-policing service, which got widespread attention in the Twitterverse (my part of the Twitterverse anyway); on Thursday, they announced an AI-powered tutoring tool, Socratic. It is the tutoring tool which I think is far more alarming, although my quick test of the tutor led to some laughable results (see below).

So, for this week, my notes about Zuboff's book will be less detailed since the chapter is mostly about Google, but I would urge everybody to think about Google's very aggressive new moves into the education world. Here is some of the initial coverage in TechCrunch, and I hope we will see some detailed critical analysis soon: Google’s new ‘Assignments’ software for teachers helps catch plagiarism and Google discloses its acquisition of mobile learning app Socratic as it relaunches on iOS.

And now, some notes from Zuboff, Chapter 4: The Moat Around the Castle.

~ ~ ~

Chapter 4 opens with a historical take on capitalism as appropriation and dispossession, claiming "things that live outside the market sphere" and "declaring their new life as market commodities." We've seen this happen most clearly with TurnItIn: students had been writing schoolwork for decades, but it took TurnItIn to figure out how to turn that student writing into a billion-dollar business. As Zuboff explained in detail already in the previous chapter, the extraction process reduces our subjective experience into behavioral data for machine learning:
human experience is subjugated to surveillance capitalism’s market mechanisms and reborn as “behavior.” These behaviors are rendered into data, ready to take their place in a numberless queue that feeds the machines for fabrication into predictions
In return, we get "personalized" products (personalized education is inevitable, as Instructure's CEO has proclaimed), but the real story is the larger corporate agenda:
Even when knowledge derived from our behavior is fed back to us as a quid pro quo for participation, as in the case of so-called “personalization,” parallel secret operations pursue the conversion of surplus into sales that point far beyond our interests.
For that corporate agenda to move forward, we must be denied power over the future use of our data, making us "exiles from our own behavior" as Zuboff explains, carrying on with her metaphor of home and sanctuary:
We are exiles from our own behavior, denied access to or control over knowledge derived from its dispossession by others for others. Knowledge, authority, and power rest with surveillance capital, for which we are merely “human natural resources.”
After these preliminaries, Zuboff then moves into a detailed examination of the factors that allowed Google to get away with it, an examination that will carry on for the rest of the book:
“How did they get away with it?” It is an important question that we will return to throughout this book.
Some of the factors are specific to Google as a company, but some of them also parallel moves that we see in the ed tech world, such as the way that we are being asked to simply trust Instructure with our data, without legal protections:
In the absence of standard checks and balances, the public was asked to simply “trust” [Google's] founders. [...] Schmidt insisted that Google needed no regulation because of strong incentives to “treat its users right.”
In the case of education, FERPA is indeed an almost 50-year-old law (well, 45 years; Wikipedia), just the kind of legal framework that Google's Larry Page has scoffed at:
“Old institutions like the law and so on aren’t keeping up with the rate of change that we’ve caused through technology.… The laws when we went public were 50 years old. A law can’t be right if it’s 50 years old, like it’s before the internet.”
Zuboff then provides a detailed analysis of the impact that the events of September 11 had both on Google's corporate agenda, as well as the government's surveillance efforts. That discussion is not directly relevant to education, but it got me to thinking how the rise of the LMS coincided with the "great adjunctification" of the higher ed workforce. Because of the LMS, schools could experiment with centrally designed courses that could be staffed at a moment's notice with part-time temporary faculty. The LMS was not created in order to make that possible, but the availability of the LMS certainly made the adjunctification of higher ed much easier over the past two decades.

Zuboff also has a chilling section on the role that Google played in the elections of 2008 and 2012, along with an inventory of Google's enormous political lobbying efforts.

Towards the end of the chapter, Zuboff presents this description of Google's Page and Schmidt to sum things up:
Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world’s information.
I have much the same feeling about the engineers at Instructure and other ed-tech companies: without being teachers themselves, and without being directly accountable to teachers (but instead accountable to schools and those schools' IT departments), they exercise control over the organization of our schooling.

We need and deserve better, and so do our students.

~ ~ ~

P.S. Unrelated to Zuboff's book, I tested the new Google Socratic and it was a total failure with both questions I tried. Has anyone else tried it with success? I guess I am glad that it looks to be so bad!

For example, I asked it what do bush cats eat (something I actually was researching earlier today)... and the response from Socratic was a Yahoo Answers item about a house cat who eats leaves from a lilac bush, and the owner is worried that they might be poisonous. Poor Socrates didn't recognize "bush cat" is another name for the African serval. It thought I was asking what-kind-of-bush do cats eat, as opposed to my actual question, which was "what do bush cats eat?" I didn't mean to trick it, but that was pretty funny once I figured out how the computer misunderstood the question. (And yes, I really was learning about bush cats earlier today, ha ha, re: this story: How a Hunter obtained Money from his Friends the Leopard, Goat, Bush Cat, and Cock, and how he got out of repaying them.)

For your viewing pleasure, this is a bush cat (photo by Danny Idelevich):


Then, I asked what I thought would be an easy question: what was the first Cinderella story? But instead of sending me to the Wikipedia article on Cinderella, it sent me to the Wikipedia article about the Toy Story film franchise. I'm not even sure what's up with that one.

Anyway, the official Google post says that Socratic is ready to help... but it sure doesn't look like it to me. Help-not-help ha ha.



and now...
Happy Back-to-School, everybody!


UPDATE: Here are the notes on Chapter 5: The Dispossession Cycle

August 11, 2019

Zuboff, Chapter 3. Google: The Pioneer of Surveillance Capitalism

Chapter 3 is the ESSENTIAL chapter in Zuboff's whole book, and it contains a powerful warning for what is happening in the ed-tech world right now. I took a break from my reading notes blogs last week when I wrote a response to the latest Instructure statement on data gathering and predictive algorithms (Data Analytics... no, I don't dig it), and it is really good timing to move on from that to this chapter of Zuboff's book, where she tells the story of how Google discovered/invented surveillance capitalism. That happened step by step, based on specific choices made by Google executives and employees, and I would contend that Instructure executives and employees are looking at a path very similar to the one that Google followed, a path that might be profitable for the company but which I think will be very bad news for education.

So, as I write my notes here I'll focus on points of comparison that I see between Google's story and the story of Canvas, and for another really powerful ed-tech comparison, see Ben Williamson's piece on ClassDojo, which is also going down the path of big data and behavior modification: Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry.

And now... Chapter 3. Google: The Pioneer of Surveillance Capitalism.


Google and the LMSes. Although Google's search engine and an LMS are quite different software products, they share a key feature that Zuboff emphasizes in a quote from Google's chief economist, Hal Varian, writing about “computer-mediated transactions” and their transformational effects on the modern economy: the computer systems are pervasive, and that pervasiveness has consequences.
Nowadays, there is a computer in the middle of virtually every transaction… now that they are available these computers have several other uses.
And what are some of those other uses? Varian lists them: data extraction and analysis; new contractual forms due to better monitoring; personalization and customization; and continuous experiments.

Anyone familiar with the evolution of the LMS over the past two decades can see plenty of parallels there: as with Google, so too the LMS. The LMS increasingly puts itself in the middle of transactions between teachers and students, and as a result we are seeing data extraction and analysis that didn't used to happen before, monitoring unlike any attendance system ever used in a traditional classroom, the mounting hype of personalization and customization, along with continuous experiments... including experiments for which we and our students never gave our permission.

As Zuboff narrates the story of Google's discovery/invention of behavioral surplus, she starts with the early days of Google, when "each Google search query produced a wake of collateral data," but the value of that collateral data had not yet been recognized, and "these behavioral by-products were haphazardly stored and operationally ignored." The same, of course, has been true of the LMS until very recently.

Zuboff credits the initial discovery of these new uses for collateral data to Amit Patel, at that time a Stanford grad student:
His work with these data logs persuaded him that detailed stories about each user—thoughts, feelings, interests—could be constructed from the wake of unstructured signals that trailed every online action.
That is the kind of thing we are hearing from the LMSes now too, although personally, I am not convinced by the depth of data they have to work with compared to Google. The user experience of the LMS is so limited and predefined, with so little opportunity for truly individual action (just the opposite of the Google search engine), that I don't think the LMSes are going to be able to do the kind of profiling they claim they will be able to do... not unless/until they get into the kind of surveillance that is taking shape in Chinese schools now as part of the Chinese government's huge investment in edtech and AI; for more on that, see: Camera Above the Classroom by Xue Yujie. See also this new piece on facial recognition experiments in education: Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements.

The LMS started out as a tool for teachers and students to use in order to accomplish teaching and learning tasks, but now it is morphing into a surveillance device so that the LMS company can gather data and take over those tasks, intervening in ways that the LMS never did before, turning into the kind of recursive learning system that Google has become:
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system.
Google then used the predictive power of that system in order to create something completely unprecedented: the Google advertising empire. Zuboff provides a step-by-step account of just how that happened, and how a similar transformation then took place at Facebook.

What's next for the LMS? So, an obvious question is this: what are the LMS companies going to do with their predictive products? The answer is: they don't know. Yet. Which is why we need to be talking about this now; the future of education is something of public importance, and it is not something that should be decided by software company executives and engineers. It's one thing for companies to let Google take control of their advertising; it is something else entirely for schools to let the LMS take control of schooling.

Here is how Zuboff describes the shift in the relationship between Google and advertisers as the new predictive products took shape; as you read this, think about what this might foretell for education:
In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords—Google would choose them.” [...] Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. [...] Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal.
While it might seem like advertising and education don't have anything to do with each other, they overlap a lot if you look at education as a form of behavior modification (which, sad to say, many people do, including many educators).
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” [...] The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.
That transformation of advertising into a "science" sounds scarily like the way that some would like to see teaching turned into a data-driven science, precise and standardized in its practices. For more on that topic, see the Ben Williamson piece I mentioned above about ClassDojo. In addition, Zuboff is going to have a lot to say about behaviorism, especially the radical behaviorism of B. F. Skinner, later in the book.

Profiling. So, back to the Google story. As Google accumulated more and more of this behavioral data, they began to develop (and patent) what they called UPIs, user profile information:
These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.
So too at Instructure, where they claim that they can already develop predictive profiles of students by combining data across courses; here's CEO Dan Goldsmith: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."

Again, as I said above, I am not really persuaded by the power of Instructure's so-called insights. If they are looking at a student's grades in all their other classes, for example, and using that GPA to predict the student's performance in a new class, sure, the GPA has some predictive power.
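Just to make the bar concrete: a GPA-only baseline already gets you "pretty high accuracy." Here is a minimal sketch in Python, using made-up synthetic numbers (this is not Instructure's actual Dig model, and the 0.8/0.3 relationship below is purely an assumption for illustration), fitting a one-variable model on prior GPA alone:

```python
# Minimal sketch: how much "prediction" you get from prior GPA alone.
# The data are synthetic and the assumed relationship is invented;
# nothing here reflects Instructure's actual models.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
prior_gpa = rng.uniform(2.0, 4.0, n)                       # hypothetical prior GPAs
course_grade = np.clip(0.8 * prior_gpa + 0.3 + rng.normal(0, 0.4, n), 0.0, 4.0)

X = prior_gpa.reshape(-1, 1)
baseline = LinearRegression().fit(X, course_grade)
print(f"R^2 of the GPA-only baseline: {baseline.score(X, course_grade):.2f}")
```

If Dig's "insights" cannot beat that kind of trivial baseline by a wide margin, then the machine-learning language is mostly marketing; and if they can, it is presumably because they are reaching into exactly the cross-course data that FERPA is supposed to protect.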

What I really want to know, though, is how Instructure has the right to use a student's grade data in that way, when I thought such data was private, protected by FERPA. I am not allowed to see the grades my students receive in their other courses (nor do I want to); I'm not even allowed to see the other courses they are taking — all that data is protected by FERPA. But Instructure is now apparently profiling students based on their grades in other classes (?), and then using that grade-derived data in order to insert itself as an actor in other classrooms, all without the students' permission. Now, if I am wrong in that characterization of their predictive Dig initiative, I will be glad to stand corrected (and I'm hoping for a reply to my blog post last week about those issues); I'm just going on statements made in public by Dan Goldsmith about the Dig project.

As Instructure gathers up all this data, without allowing us to opt out, they are proceeding much as Google did, unilaterally extracting without users' awareness or informed consent:
A clear aim of the [UPI] patent is to assure its audience that Google scientists will not be deterred by users’ exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users. [...] Google’s proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others’ behavior for commercial purposes.
The right to decide. Narrating the Google history year by year, Zuboff shows that the new Google emerged over time; the values and principles of Google today are not the values and principles that Google espoused at the beginning. Are we seeing the same kind of shift happening at Instructure? Re-reading this chapter in Zuboff's book, I am very concerned that this is indeed what we are seeing, a "180-degree turn from serving users to surveilling them." And as I've said repeatedly in my complaints to Instructure about its new data initiatives, this is not just about privacy; instead, it is about the right to decide:
That Google had the power to choose secrecy is itself testament to the success of its own claims. This power is a crucial illustration of the difference between “decision rights” and “privacy.” [...] Surveillance capitalism lays claim to these decision rights. The typical complaint is that privacy is eroded, but that is misleading. In the larger societal pattern, privacy is not eroded but redistributed, as decision rights over privacy are claimed for surveillance capital. [...] Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism. [...] Surveillance is the path to profit that overrides “we the people,” taking our decision rights without permission and even when we say “no.”
Writing about Instructure's new data analytics since back in March (My Soylent Green Moment), I've been saying no... but it is still not clear whether I will be able to opt out of having data from my courses included in Instructure's machine learning project, and it is also not clear whether my students will be able to opt out of the kind of profiling that Goldsmith has described. I believe that each of us needs to be able to say "no" at an individual level, not just through the institutional consent and institutional opt-out that Instructure seems (?) to be offering. So, I'm still hoping we will hear more about that, and sooner rather than later, given that another school year is about to begin.

One last quote... Okay, this has become another too-long blog post, so I'll close with a spectacular Zuboffian sentence... can this woman write? This woman can write! And we need to pay attention to every word here:
The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.
As a teacher at the start of a new school year, I should not have to ponder how to hide from the LMS, or how to help my students do so, but that is the position I am in. When my students create blogs and websites for my classes, they can do all of that using pseudonyms (pseudonyms are great, in fact), and they can keep or delete whatever they want at the end of the semester. But what about Instructure? Is Instructure going to take what it learns about my students in my class and then use that data to profile those students in their other classes, prejudicing those other instructors before those classes even begin? (See Goldsmith quote above re: profiling.) If that is what Instructure is going to do with data from my classes, I need to be able to say "no," and so do my students.

Meanwhile, I'll be back with more from Zuboff next weekend. Thanks for reading! You can comment here if you want, or connect at Twitter (@OnlineCrsLady).

P.S. I hope that those who know more about Instructure analytics will chime in, especially anybody who's at a Unizin school. All I know about Unizin I learned from the Chronicle article here: Colleges are Banding Together ... and the claims made there sound even more alarming than Goldsmith's description of Instructure profiling. Which is to say: very alarming indeed. Claims from Brad Wheeler, Unizin cofounder:
Take students’ clickstreams and pageviews on the learning-management system, their writing habits, their participatory clicks during classroom discussions, their grades. Then combine that with information on their educational and socioeconomic backgrounds, their status as transfer students, and so on. You end up with "a unique asset," says Wheeler, in learning what teaching methods work. 
Digital publishers have learned a lot about students who use the publishers’ texts and other resources, he says, but the demographic puzzle pieces are key to discovering impediments to learning.
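To make concrete what that kind of combining looks like in practice, here is a hypothetical sketch in Python; the tables, columns, and values are all invented for illustration (this is not Unizin's actual schema), but the join itself really is this easy:

```python
# Hypothetical sketch: joining LMS clickstream events to SIS demographics.
# Every table, column, and value here is invented; none of this is Unizin's schema.
import pandas as pd

clicks = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2],
    "event": ["page_view", "discussion_post", "page_view", "quiz_submit", "page_view"],
})
sis = pd.DataFrame({
    "student_id": [1, 2],
    "transfer_student": [False, True],
    "pell_eligible": [True, False],   # a stand-in socioeconomic "puzzle piece"
})

# Count LMS events per student, then attach the demographic columns.
activity = clicks.groupby("student_id").size().rename("event_count").reset_index()
profile = activity.merge(sis, on="student_id")
print(profile)
```

And that ease is exactly the problem: once the clickstream and the demographic "puzzle pieces" sit in the same warehouse, building a profile is a few lines of code, and the hard questions are all about consent, not about technology.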
My school is not a participant in Unizin (I'll add: thank goodness). Here is the list: Unizin Members. If you are at a Unizin school, I would love to know more about what kind of informed consent and opt-out procedures are in place at those schools.

UPDATE: Here are the notes on Chapter 4: How Google Got Away With It

July 28, 2019

Zuboff, Chapter 2. Setting the Stage for Surveillance Capitalism.

Last week, I wrote up notes on the introductory chapter to Zuboff's book (and I'll be tagging all the posts #Zuboff to keep them together); this week's chapter plunges us into the details of her historical argument. Zuboff takes three key moments from the summer of 2011 (not all that long ago, right? but already part of a historical drama) and uses those moments to focus a larger narrative about two massive transformations of the 20th century: on the one hand, the modernization that has produced a new "society of individuals," and on the other, the advent of neoliberalism and free-market capitalism. By tracing the rise of individualization alongside the rise of neoliberalism, Zuboff shows how surveillance capitalism emerged from the convergence of these two very different historical movements.

An aside: my economic ignorance. Zuboff's analysis helps explain how I was caught so completely by surprise by the turn that events have taken: the arrival of the Internet enraptured me (as a reader, a writer, a teacher, it was a dream come true), while my ignorance of economics left me oblivious to what was happening in the corporate world. When I first learned to create webpages in the fall of 1998, I felt so lucky: it was my last year of graduate school, and I would begin my career as an academic with all the freedom that the Internet offered me and my students. Instead of writing "papers" and taking paper-and-pencil tests that all ended up in the trash can, we would create digital artifacts — made of words, images, and even audio and video — and we would share what we created with other learners all over the world. We would no longer have to travel to university libraries to read the books; the books would come to us, wherever we were. No longer would publishers regulate what we read and what we would have to pay to read those publications; we could do all that by ourselves for ourselves.

And, indeed, some of that digital dream has come true... but some of it has turned into a neoliberal nightmare. I'm teaching fully online courses where students create and share what they create with one another (my dream come true), but I am doing that as an adjunct instructor, teaching at a public university where state support for education has plunged and tuition has skyrocketed. My school offers webspace for faculty to use, but most faculty use the LMS instead, keeping their courses closed, deleting everything at the end of the semester, and leaving no digital trail for other teachers and learners to follow and explore. Worse: the LMS has now become a full-blown experiment in surveillance capitalism, gathering data about students to build predictive algorithms, as I wrote about in last week's post.

Dream. Nightmare. That is the story that Zuboff begins to unfold in detail with this chapter. If the Internet had arrived during the era of the New Deal, things might have turned out very differently. Or if the Internet had arrived during the era of the civil rights movement and the Great Society. But instead, the Internet arrived during the era of free markets and neoliberalism... and the result is terrifying. We urgently need to understand what is happening so that we can try to put a stop to the nightmare. And maybe even save some of the dream.

Here are some of the key points I would highlight from the chapter:

Remember the iPod? Before the iPhone, it was the iPod and iTunes that turned Apple into a business colossus. The "i" was about digital info but also about "I" the individual:
Young people's enthusiasm for Napster and other forms of file sharing expressed a new quality of demand: consumption my way, what I want, when I want it, where I want it. [...] Apple was among the first to experience explosive commercial success by tapping into a new society of individuals and their demand for individualized consumption.
This individualized consumption resonated with the modern "society of individuals," feeling like a kind of liberation, but instead it was going to lead to new forms of exploitation and oppression. Although this was not clear at the time — certainly not to digital enthusiasts like myself — there were two vectors at work here:
One vector belongs to the longer history of modernization and the centuries-long societal shift from the mass to the individual. [...] The opposing vector belongs to the decades-long elaboration and implementation of the neoliberal economic paradigm, [...] especially its aim to reverse, subdue, impede, and even destroy the individual urge toward psychological self-determination and moral agency.
Individualization v. individualism. In last week's post, I wrote about the perverse way that ed tech uses the term "personalization" to describe the automation of education, something that I would call de-personalization instead. Zuboff highlights a similar tension between modern individualization and the individualism of neoliberalism:
First let’s establish that the concept of "individualization" should not be confused with the neoliberal ideology of "individualism" that shifts all responsibility for success or failure to a mythical, atomized, isolated individual.
When the Internet arrived, it offered possibilities for discovery and connection that were poised to promote individual liberation and self-determination, just at the moment we needed it:
The burdens of life without a fixed destiny turned us toward the empowering information-rich resources of the new digital milieu as it offered new ways to amplify our voices and forge our own chosen patterns of connection.
But neoliberal economic policies promoted by Hayek, Friedman, et al., hijacked that potential for the benefit of companies and their shareholders:
The absolute authority of market forces would be enshrined as the ultimate source of imperative control, displacing democratic contest and deliberation with an ideology of atomized individuals sentenced to perpetual competition for scarce resources. [...] The disciplines of competitive markets promised to quiet unruly individuals and even transform them back into subjects too preoccupied with survival to complain.
The result is not just neoliberalism, but neofeudalism:
Many scholars have taken to describing these new conditions as neofeudalism, marked by the consolidation of elite wealth and power far beyond the control of ordinary people and the mechanisms of democratic consent. [...] Piketty calls it a return to "patrimonial capitalism," a reversion to a premodern society in which one’s life chances depend upon inherited wealth rather than meritocratic achievement.
What's different, of course, is that while these neofeudalizing forces are at work, we are now modern individuals:
We are not illiterate peasants, serfs, or slaves. We are [...] people whom history has freed both from the once-immutable facts of a destiny told at birth and from the conditions of mass society. [...] We want to exercise control over our own lives, but everywhere that control is thwarted.
An especially cruel paradox is that one way we have sought to exercise control is by using the Internet: searching, connecting, and sharing. Yet all those clicks become the new "behavioral surplus" to be harvested by the digital overlords in this neofeudal arrangement:
Every casual search, like, and click was claimed as an asset to be tracked, parsed, and monetized by some company, all within a decade of the iPod’s debut. [...] The rise of surveillance capitalism betrayed the hopes and expectations of many "netizens" who cherished the emancipatory promise of the networked milieu. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioral data, and all for the sake of others’ gain.
Ed tech and the right to be forgotten. Back in 1998, I saw the Internet as the way that we would empower ourselves as students and teachers, but now the LMS wants to diminish us instead, rendering us as raw material for new ed tech products like predictive algorithms and other forms of automation, just as Zuboff sees happening in the digital world at large:
The result is a perverse amalgam of empowerment inextricably layered with diminishment. [...] Terms whose meanings we take to be positive or at least banal—"the open internet," "interoperability," and "connectivity"—have been quietly harnessed to a market process in which individuals are definitively cast as the means to others’ market ends.
But there are ways to fight back, and Zuboff closes the chapter with the story of a legal battle in Spain for the "right to be forgotten," thus reclaiming our rights to the future tense and to sanctuary:
The new harms we face entail challenges to the sanctity of the individual, and chief among these challenges I count the elemental rights that bear on individual sovereignty, including the right to the future tense and the right to sanctuary. [...] The extreme asymmetries of knowledge and power that have accrued to surveillance capitalism abrogate these elemental rights as our lives are unilaterally rendered as data, expropriated, and repurposed in new forms of social control, all of it in the service of others’ interests and in the absence of our awareness or means of combat.
I began reading Zuboff's book as a result of learning that Instructure had unilaterally laid claim to the data about me and about my students in their LMS, exploiting that data to create predictive models as part of their ongoing quest for market dominance (details). Zuboff does not talk about LMSes in this book, but the "right to be forgotten" is something we must fight for in the ed tech world. As companies like Instructure now want to gather data about us "from the first day of school until the last day of work" (their new company slogan), we have to fight for the right to our own futures, futures that we choose, not futures determined by the data in Instructure's possession, data that they are using to profile us, both to predict our behavior and also to modify it.

(Instructure home page)

The story of Instructure's data grab parallels events at Google and its evolution. A big difference, of course, is that while Google does offer enormous benefits, I cannot say the same about the LMS, which makes it all the more perverse that we have put the LMS at the center of our digital education efforts, letting it demand our money, time, attention, and other precious resources even before it started amassing our data for machine-learning purposes. More on that later; for now, here is Zuboff on Google and the importance of the EU Court of Justice decision on the right to be forgotten:
Google’s mission to "organize the world’s information and make it universally accessible and useful"—starting with the web—changed all of our lives. There have been enormous benefits, to be sure. But for individuals it has meant that information that would normally age and be forgotten now remains forever young, highlighted in the foreground of each person’s digital identity. [...] The Court of Justice’s decision, so often reduced to the legal and technical considerations related to the deletion or de-linking of personal data, was in fact a key inflection point at which democracy began to claw back rights to the future tense from the powerful forces of a new surveillance capitalism determined to claim unilateral authority over the digital future.
In these brief notes and highlights, I cannot convey the fullness of Zuboff's story, but I hope that my comments here might inspire you to read the book too, and also to take action, clawing back our rights to our own digital future.

For me, that action has been scrutinizing developments at Instructure and lobbying for the right to opt out of their AI and machine-learning initiatives. I keep watching the three company blogs ⁠— Instructure, Canvas, Bridge ⁠— for a promised post about their new "data tenets."

Meanwhile, I will be back again next week with notes for Chapter 3 of Zuboff's book. Thoughts? You can comment here or find me at Twitter; I'm @OnlineCrsLady there. And thanks for reading!