
Sunday, August 18, 2019

Zuboff, Chapter 4: How Google Got Away With It

Last week was Zuboff's chapter on the discovery of surveillance capitalism, based on using surplus behavioral data for user profiles and predictions; the parallels to the LMS were, in my opinion, both clear and frightening, and that was the focus of my post. In this week's post, I'll be sharing my notes on Zuboff's chapter about "how Google got away with it," and, coincidentally, this week is also when Google announced two new moves in its effort to automate education: on Wednesday, they announced a plagiarism policing service which got widespread attention in the Twitterverse (my part of the Twitterverse anyway); on Thursday, they announced an AI-powered tutoring tool, Socratic. It is the tutoring tool which I think is far more alarming, although my quick test of the tutor led to some laughable results (see below).

So, for this week, my notes about Zuboff's book will be less detailed since the chapter is mostly about Google, but I would urge everybody to think about Google's very aggressive new moves into the education world. Here is some of the initial coverage in TechCrunch, and I hope we will see some detailed critical analysis soon: "Google’s new ‘Assignments’ software for teachers helps catch plagiarism" and "Google discloses its acquisition of mobile learning app Socratic as it relaunches on iOS."

And now, some notes from Zuboff, Chapter 4: The Moat Around the Castle.

~ ~ ~

Chapter 4 opens with a historical take on capitalism as appropriation and dispossession, which works by claiming "things that live outside the market sphere" and "declaring their new life as market commodities." We've seen this happen most clearly at TurnItIn, where students had been writing school work for decades; it took TurnItIn to figure out how to turn student writing into a billion-dollar business. As Zuboff explained in detail already in the previous chapter, the extraction process reduces our subjective experience into behavioral data for machine learning:
human experience is subjugated to surveillance capitalism’s market mechanisms and reborn as “behavior.” These behaviors are rendered into data, ready to take their place in a numberless queue that feeds the machines for fabrication into predictions
In return, we get "personalized" products (personalized education is inevitable, as Instructure's CEO has proclaimed), but the real story is the larger corporate agenda:
Even when knowledge derived from our behavior is fed back to us as a quid pro quo for participation, as in the case of so-called “personalization,” parallel secret operations pursue the conversion of surplus into sales that point far beyond our interests.
For that corporate agenda to move forward, we must be denied power over the future use of our data, making us "exiles from our own behavior" as Zuboff explains, carrying on with her metaphor of home and sanctuary:
We are exiles from our own behavior, denied access to or control over knowledge derived from its dispossession by others for others. Knowledge, authority, and power rest with surveillance capital, for which we are merely “human natural resources.”
After these preliminaries, Zuboff then moves into a detailed examination of the factors that allowed Google to get away with it, an examination that will carry on for the rest of the book:
“How did they get away with it?” It is an important question that we will return to throughout this book.
Some of the factors are specific to Google as a company, but some of them also parallel moves that we see in the ed tech world, such as the way that we are being asked to simply trust Instructure with our data, without legal protections:
In the absence of standard checks and balances, the public was asked to simply “trust” [Google's] founders. [...] Schmidt insisted that Google needed no regulation because of strong incentives to “treat its users right.”
In the case of education, FERPA is indeed an almost 50-year-old law (well, 45 years; Wikipedia), just the kind of legal framework that Google's Larry Page has scoffed at:
“Old institutions like the law and so on aren’t keeping up with the rate of change that we’ve caused through technology.… The laws when we went public were 50 years old. A law can’t be right if it’s 50 years old, like it’s before the internet.”
Zuboff then provides a detailed analysis of the impact that the events of September 11 had both on Google's corporate agenda and on the government's surveillance efforts. That discussion is not directly relevant to education, but it got me thinking about how the rise of the LMS coincided with the "great adjunctification" of the higher ed workforce. Because of the LMS, schools could experiment with centrally designed courses that could be staffed at a moment's notice with part-time temporary faculty. The LMS was not created in order to make that possible, but the availability of the LMS certainly made the adjunctification of higher ed much easier over the past two decades.

Zuboff also has a chilling section on the role that Google played in the elections of 2008 and 2012, along with an inventory of Google's enormous political lobbying efforts.

Towards the end of the chapter, Zuboff presents this description of Google's Page and Schmidt to sum things up:
Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world’s information.
I have much the same feeling about the engineers at Instructure and other ed-tech companies: without being teachers themselves, and without being directly accountable to teachers (but instead accountable to schools and those schools' IT departments), they exercise control over the organization of our schooling.

We need and deserve better, and so do our students.

~ ~ ~

P.S. Unrelated to Zuboff's book, I tested the new Google Socratic and it was a total failure with both questions I tried. Has anyone else tried it with success? I guess I am glad that it looks to be so bad!

For example, I asked it "what do bush cats eat?" (something I actually was researching earlier today)... and the response from Socratic was a Yahoo Answers item about a house cat who eats leaves from a lilac bush, whose owner is worried that the leaves might be poisonous. Poor Socrates didn't recognize that "bush cat" is another name for the African serval. It thought I was asking what-kind-of-bush do cats eat, as opposed to my actual question: what do bush cats eat? I didn't mean to trick it, but that was pretty funny once I figured out how the computer misunderstood the question. (And yes, I really was learning about bush cats earlier today, ha ha, re: this story: How a Hunter obtained Money from his Friends the Leopard, Goat, Bush Cat, and Cock, and how he got out of repaying them.)

For your viewing pleasure, this is a bush cat (photo by Danny Idelevich):


Then, I asked what I thought would be an easy question: what was the first Cinderella story? But instead of sending me to the Wikipedia article on Cinderella, it sent me to the Wikipedia article about the Toy Story film franchise. I'm not even sure what's up with that one.

Anyway, the official Google post says that Socratic is ready to help... but it sure doesn't look like it to me. Help-not-help ha ha.



and now...
Happy Back-to-School, everybody!



Sunday, August 11, 2019

Zuboff, Chapter 3. Google: The Pioneer of Surveillance Capitalism

Chapter 3 is the ESSENTIAL chapter in Zuboff's whole book, and it contains a powerful warning for what is happening in the ed-tech world right now. I took a break from my reading notes blogs last week when I wrote a response to the latest Instructure statement on data gathering and predictive algorithms (Data Analytics... no, I don't dig it), and it is really good timing to move on from that to this chapter of Zuboff's book, where she tells the story of how Google discovered/invented surveillance capitalism. That happened step by step, based on specific choices made by Google executives and employees, and I would contend that Instructure executives and employees are looking at a path very similar to the one that Google followed, a path that might be profitable for the company but which I think will be very bad news for education.

So, as I write my notes here I'll focus on points of comparison that I see between Google's story and the story of Canvas, and for another really powerful ed-tech comparison, see Ben Williamson's piece on ClassDojo, which is also going down the path of big data and behavior modification: Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry.

And now... Chapter 3. Google: The Pioneer of Surveillance Capitalism.


Google and the LMSes. Although Google's search engine and an LMS are quite different software products, they share a key feature that Zuboff emphasizes in a quote from Google's chief economist, Hal Varian, writing about “computer-mediated transactions” and their transformational effects on the modern economy: the computer systems are pervasive, and that pervasiveness has consequences.
Nowadays, there is a computer in the middle of virtually every transaction… now that they are available these computers have several other uses.
And what are some of those other uses? Varian names four: data extraction and analysis; new contractual forms due to better monitoring; personalization and customization; and continuous experiments.

Anyone familiar with the evolution of the LMS over the past two decades can see plenty of parallels there: as with Google, so too the LMS. The LMS increasingly puts itself in the middle of transactions between teachers and students, and as a result we are seeing data extraction and analysis that simply did not happen before, monitoring unlike any attendance system ever used in a traditional classroom, the mounting hype of personalization and customization, along with continuous experiments... including experiments for which we and our students never gave our permission.

As Zuboff narrates the story of Google's discovery/invention of behavioral surplus, she starts with the early days of Google, when "each Google search query produced a wake of collateral data," but the value of that collateral data had not yet been recognized, and "these behavioral by-products were haphazardly stored and operationally ignored." The same, of course, has been true of the LMS until very recently.

Zuboff credits the initial discovery of these new uses for collateral data to Amit Patel, at that time a Stanford grad student:
His work with these data logs persuaded him that detailed stories about each user—thoughts, feelings, interests—could be constructed from the wake of unstructured signals that trailed every online action.
That is the kind of thing we are hearing from the LMSes now too, although personally I am not convinced that the LMSes have anything like the depth of data that Google has to work with. The user experience of the LMS is so limited and predefined, with so little opportunity for truly individual action (just the opposite of the Google search engine), that I don't think the LMSes are going to be able to do the kind of profiling they claim they will be able to do... not unless/until they get into the kind of surveillance that is taking shape in Chinese schools now as part of the Chinese government's huge investment in edtech and AI; for more on that, see: Camera Above the Classroom by Xue Yujie. See also this new piece on facial recognition experiments in education: Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements.

The LMS started out as a tool for teachers and students to use in order to accomplish teaching and learning tasks, but now it is morphing into a surveillance device so that the LMS company can gather data and take over those tasks, intervening in ways that the LMS never did before, turning into the kind of recursive learning system that Google has become:
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system.
Google then used the predictive power of that system in order to create something completely unprecedented: the Google advertising empire. Zuboff provides a step-by-step account of just how that happened, and how a similar transformation then took place at Facebook.

What's next for the LMS? So, an obvious question is this: what are the LMS companies going to do with their predictive products? The answer is: they don't know. Yet. Which is why we need to be talking about this now; the future of education is something of public importance, and it is not something that should be decided by software company executives and engineers. It's one thing for companies to let Google take control of their advertising; it is something else entirely for schools to let the LMS take control of schooling.

Here is how Zuboff describes the shift in the relationship between Google and advertisers as the new predictive products took shape; as you read this, think about what this might foretell for education:
In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords—Google would choose them.” [...] Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. [...] Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal.
While it might seem like advertising and education don't have anything to do with each other, they overlap a lot if you look at education as a form of behavior modification (which, sad to say, many people do, including many educators).
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” [...] The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.
That transformation of advertising into a "science" sounds scarily like the way that some would like to see teaching turned into a data-driven science, precise and standardized in its practices. For more on that topic, see the Ben Williamson piece I mentioned above about ClassDojo. In addition, Zuboff is going to have a lot to say about behaviorism, especially the radical behaviorism of B. F. Skinner, later in the book.

Profiling. So, back to the Google story. As Google accumulated more and more of this behavioral data, they began to develop (and patent) what they called UPIs, user profile information:
These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.
So too at Instructure, where they claim that they can already develop predictive profiles of students by combining data across courses; here's CEO Dan Goldsmith: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."

Again, as I said above, I am not really persuaded by the power of Instructure's so-called insights. If they are looking at a student's grades in all their other classes, for example, and using that GPA to predict the student's performance in a new class, sure, the GPA has some predictive power.
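Just to make concrete how unimpressive that kind of "prediction" can be, here is a toy sketch in plain JavaScript. The numbers are invented, and this has nothing to do with Instructure's actual Dig models; it just fits a straight line from prior GPA to course grade and calls the result a forecast.

// A toy baseline "predictor": ordinary least squares on prior GPA alone.
// The data points are invented for illustration; this is NOT Instructure's model.

// [prior GPA, final course grade on a 4.0 scale] for a handful of imaginary students
const history = [
  [3.9, 3.7], [3.2, 3.0], [2.5, 2.4], [3.6, 3.8], [2.0, 2.2], [3.0, 2.8],
];

// Fit grade = a + b * gpa by least squares and return a prediction function.
function fitLine(pairs) {
  const n = pairs.length;
  const meanX = pairs.reduce((sum, [x]) => sum + x, 0) / n;
  const meanY = pairs.reduce((sum, [, y]) => sum + y, 0) / n;
  let num = 0, den = 0;
  for (const [x, y] of pairs) {
    num += (x - meanX) * (y - meanY);
    den += (x - meanX) ** 2;
  }
  const b = num / den;
  const a = meanY - b * meanX;
  return gpa => a + b * gpa;
}

const predict = fitLine(history);
// A "prediction" for a student with a 3.4 GPA, before they ever set foot in the classroom:
console.log(predict(3.4).toFixed(2));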

What I really want to know, though, is how Instructure has the right to use a student's grade data in that way, when I thought such data was private, protected by FERPA. I am not allowed to see the grades my students receive in their other courses (nor do I want to); I'm not even allowed to see the other courses they are taking — all that data is protected by FERPA. But Instructure is now apparently profiling students based on their grades in other classes (?), and then using that grade-derived data in order to insert itself as an actor in other classrooms, all without the students' permission. Now, if I am wrong in that characterization of their predictive Dig initiative, I will be glad to stand corrected (and I'm hoping for a reply to my blog post last week about those issues); I'm just going on statements made in public by Dan Goldsmith about the Dig project.

As Instructure gathers up all this data, without allowing us to opt out, they are proceeding much as Google did, unilaterally extracting without users' awareness or informed consent:
A clear aim of the [UPI] patent is to assure its audience that Google scientists will not be deterred by users’ exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users. [...] Google’s proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others’ behavior for commercial purposes.
The right to decide. Narrating the Google history year by year, Zuboff shows that the new Google emerged over time; the values and principles of Google today are not the values and principles that Google espoused at the beginning. Are we seeing the same kind of shift happening at Instructure? Re-reading this chapter of Zuboff's book, I am very concerned that this is indeed what we are seeing, a "180-degree turn from serving users to surveilling them." And as I've said repeatedly in my complaints to Instructure about its new data initiatives, this is not just about privacy; instead, it is about the right to decide:
That Google had the power to choose secrecy is itself testament to the success of its own claims. This power is a crucial illustration of the difference between “decision rights” and “privacy.” [...] Surveillance capitalism lays claim to these decision rights. The typical complaint is that privacy is eroded, but that is misleading. In the larger societal pattern, privacy is not eroded but redistributed, as decision rights over privacy are claimed for surveillance capital. [...] Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism. [...] Surveillance is the path to profit that overrides “we the people,” taking our decision rights without permission and even when we say “no.”
Writing about Instructure's new data analytics since back in March (My Soylent Green Moment), I've been saying no... but it is still not clear whether I will be able to opt out of having data from my courses included in Instructure's machine learning project, and it is also not clear whether my students will be able to opt out of the kind of profiling that Goldsmith has described. I believe that each of us needs to be able to say "no" at the individual level, not just through the institutional consent and institutional opt-out which seem to be (?) what Instructure is offering. So, I'm still hoping we will hear more about that, and sooner rather than later, given that another school year is about to begin.

One last quote... Okay, this has become another too-long blog post, so I'll close with one spectacular Zuboffian sentence... can this woman write? This woman can write! And we need to pay attention to every word here:
The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.
As a teacher at the start of a new school year, I should not have to ponder how to hide from the LMS, or how to help my students do so, but that is the position I am in. When my students create blogs and websites for my classes, they can do all of that using pseudonyms (pseudonyms are great, in fact), and they can keep or delete whatever they want at the end of the semester. But what about Instructure? Is Instructure going to take what it learns about my students in my class and then use that data to profile those students in their other classes, prejudicing those other instructors before those classes even begin? (See Goldsmith quote above re: profiling.) If that is what Instructure is going to do with data from my classes, I need to be able to say "no," and so do my students.

Meanwhile, I'll be back with more from Zuboff next weekend. Thanks for reading! You can comment here if you want, or connect at Twitter (@OnlineCrsLady).

P.S. I hope that those who know more about Instructure analytics will chime in, especially anybody who's at a Unizin school. All I know about Unizin I learned from the Chronicle article here: Colleges are Banding Together ... and the claims made there sound even more alarming than Goldsmith's description of Instructure profiling. Which is to say: very alarming indeed. Claims from Brad Wheeler, Unizin cofounder:
Take students’ clickstreams and pageviews on the learning-management system, their writing habits, their participatory clicks during classroom discussions, their grades. Then combine that with information on their educational and socioeconomic backgrounds, their status as transfer students, and so on. You end up with "a unique asset," says Wheeler, in learning what teaching methods work. 
Digital publishers have learned a lot about students who use the publishers’ texts and other resources, he says, but the demographic puzzle pieces are key to discovering impediments to learning.
My school is not a participant in Unizin (I'll add: thank goodness). Here is the list: Unizin Members. If you are at a Unizin school, I would love to know more about what kind of informed consent and opt-out procedures are in place at those schools.


Saturday, August 3, 2019

Data Analytics... no, I don't dig it

This week Jared Stein wrote a blog post about Canvas data, Power to the People with Canvas Data and Analytics (Can You Dig It?). I'm glad that a conversation is happening, and I have a lot to say in response, especially about an opt-out for those of us who don't want to be part of the AI/machine-learning project, and a shut-off so that we can stop intrusive profiling, labeling, and nudging in our Canvas classes. It's not clear from Jared's post just what kind of opt-out and shut-off control we will have, and I hope we will hear more about that in future posts. Also, since Jared does not detail any specific Dig projects, I am relying on Phil Hill's reporting from InstructureCon, which describes one such project: profiling a student across courses, including past courses, and using that comprehensive course data to predict and manage their behavior in a current course. (This use of grade data across courses without the student's express consent sure looks like a violation of FERPA to me, but I'll leave that to the lawyers.)


And now, some thoughts:

1. Not everyone digs it. I understand that some people see value in Dig predictive analytics, and maybe they are even passionate about it, as Jared says in his post, but there are also people whose passions run in different directions. As I explain below, my passion is for data that emerges in actual dialogue with students, so it is imperative that I be able to stop intrusive, impersonal auto-nudges of the sort that Dig will apparently be generating. The punitive red labels in the Canvas Gradebook are already a big problem for me (my students' work is NOT missing, and it is NOT late, despite all the labels to the contrary). Based on the failure of the Gradebook algorithms in my classes, I do not want even more algorithms undermining the work I do to establish good communication and mutual trust. So, I really hope Instructure will learn a lesson from those Gradebook labels: instructors need to be able to turn off features that are unwelcome and inappropriate for their classes. Ideally, Instructure would give that power directly to the students, or let teachers choose to do so; that's what I would choose. My students voted by a large majority to turn off the labels (which I now do manually, week by week, with a bit of JavaScript; a rough sketch of that kind of script appears after this list), although a few students would have preferred to keep the labels. I say: let the students decide. And for crying out loud, let them choose the color too; the labels don't need to be red, do they?

2. We need to target school deficits, not student deficits. I believe that Instructure's focus on at-risk students comes from good intentions, but that cannot be our only focus. Instead, we need data to help us focus on our own failures, the deficits in our own courses: deficits in design, content, activities, feedback, assessment, etc., along with data about obstacles that students face beyond the classroom. This is a huge and incredibly important topic, way too big for this blog post, so I hope everybody might take the time to read some more about the perils of deficit-driven thinking. A few places to start:
For a great example of what happens when you invite students to talk about the obstacles they face, see this item by Peg Grafwallner: How I Helped My Students Assess Their Own Writing. Applying that approach to Canvas: instead of labeling students with red ink in the Gradebook ("you messed up!") and then auto-nudging them based on those labels ("don't mess up again!"), the labels could work more like a "what happened?" button that prompts a dialogue. The student could let the instructor know why they missed an assignment or did poorly, and the instructor could then work with the student to find a positive step forward, based on what the student has told them. That is the way I would like to see data-gathering happen: student-initiated, in context and in dialogue.

3. Dig is not just about privacy; it is about Instructure's unilateral appropriation of our data. Jared emphasizes in his post that Instructure is not selling or sharing our data, but there is more at stake here than just data privacy and data sharing. Instructure is using our data to engage in AI experiments, and they have not obtained our permission to do that; I have not consented, and would not give my consent if asked. Dan Goldsmith has stated that users "own" their data, and one of the data tenets announced at InstructureCon was "Empower People, Don't Define Them" (discussed here). Speaking for myself, as someone who does not want to be defined by Instructure's profiling and predictive algorithms, I need to be able to just opt out. In his post, Jared writes about Instructure being a "good partner" in education, "ensuring our client institutions are empowered to use data appropriately." Well, here's the thing about partnerships: they need to work both ways. It's not just about what Instructure empowers institutions to do; it is also about what we, both as institutions and as individuals, empower Instructure to do. By unilaterally appropriating our data for their own experiments, Instructure is not being a good partner to individual students and teachers. If the people at Instructure "see no value or benefit in technology that replaces teachers or sacrifices students' agency" as Jared says in his post, then I hope they will give us the agency we need to opt out of Dig data-mining and remove our courses from the Dig laboratory.
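As promised back in point 1, here is a rough sketch of the kind of label-clearing script I mentioned there. To be clear, this is a sketch rather than my exact script: the host name, token, and ids below are placeholders, and the late_policy_status parameter is my reading of the Canvas Submissions API documentation, so double-check it against your own Canvas version before trying anything like this.

// Sketch: clear the "missing"/"late" status pill on one submission via the
// Canvas Submissions API. The host, token, and ids below are placeholders.

const CANVAS_HOST = 'https://canvas.example.edu'; // placeholder: your institution's Canvas URL
const API_TOKEN   = 'YOUR_API_TOKEN';             // a personal access token generated in your Canvas account settings
const COURSE_ID   = 12345;                        // placeholder course id

// Set the submission's late policy status to "none", which (as I read the API
// docs) removes the late/missing flag without changing the grade itself.
async function clearLabel(assignmentId, userId) {
  const url = `${CANVAS_HOST}/api/v1/courses/${COURSE_ID}/assignments/${assignmentId}/submissions/${userId}`;
  const resp = await fetch(url, {
    method: 'PUT',
    headers: {
      'Authorization': `Bearer ${API_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ submission: { late_policy_status: 'none' } }),
  });
  if (!resp.ok) throw new Error(`Canvas returned ${resp.status}`);
  return resp.json();
}

// Example: clear the label for one (hypothetical) student on one assignment.
// clearLabel(67890, 11111).then(() => console.log('label cleared'));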

Okay, everybody knows my blog posts are always too long, so I'll stop here to keep this about the same length as Jared's post (and of course I've written a lot about these topics here at the blog already). I hope this conversation can continue, and I also hope that Instructure will explore the options for both data opt-out and feature shut-off as they proceed with the Dig rollout for 2020. Thanks for reading!