So, as I write my notes here, I'll focus on the points of comparison that I see between Google's story and the story of Canvas. For another really powerful ed-tech comparison, see Ben Williamson's piece on ClassDojo, which is also going down the path of big data and behavior modification: Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry.
And now... Chapter 3. Google: The Pioneer of Surveillance Capitalism.
Google and the LMSes. Although Google's search engine and an LMS are quite different software products, they share a key feature that Zuboff emphasizes in a quote from Google's chief economist, Hal Varian, writing about “computer-mediated transactions” and their transformational effects on the modern economy: the computer systems are pervasive, and that pervasiveness has consequences.
Nowadays, there is a computer in the middle of virtually every transaction… now that they are available these computers have several other uses.

And what are some of those other uses? Varian names four: data extraction and analysis; new contractual forms due to better monitoring; personalization and customization; and continuous experiments.
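To make Varian's list concrete, here is a minimal sketch (in Python) of the kind of record that a computer "in the middle" of a teaching transaction can log; every field name here is hypothetical, not any vendor's actual schema. The point is that one record like this serves all four uses at once: it can be mined (extraction and analysis), checked against rules and deadlines (monitoring), fed to a recommender (personalization), and bucketed into A/B tests (experiments).

```python
# A hypothetical LMS event record (invented field names, not any vendor's
# actual schema). One record like this supports all four of Varian's uses.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LmsEvent:
    user_id: str          # who acted
    course_id: str        # where
    action: str           # e.g. "page_view", "quiz_submit", "discussion_post"
    resource: str         # which page, quiz, or thread
    timestamp: datetime   # when, to the second
    context: dict = field(default_factory=dict)  # device, referrer, session...

event = LmsEvent(
    user_id="student-123",
    course_id="myth-folklore-fall",
    action="quiz_submit",
    resource="week-2-reading-quiz",
    timestamp=datetime.now(timezone.utc),
)
```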
Anyone familiar with the evolution of the LMS over the past two decades can see plenty of parallels in Varian's list: as with Google, so too the LMS. The LMS increasingly puts itself in the middle of transactions between teachers and students, and as a result we are seeing data extraction and analysis that simply didn't happen before, monitoring unlike any attendance system ever used in a traditional classroom, the mounting hype of personalization and customization, and continuous experiments... including experiments for which we and our students never gave our permission.
As Zuboff narrates the story of Google's discovery/invention of behavioral surplus, she starts with the early days of Google, when "each Google search query produced a wake of collateral data," but the value of that collateral data had not yet been recognized, and "these behavioral by-products were haphazardly stored and operationally ignored." The same, of course, has been true of the LMS until very recently.
Zuboff credits the initial discovery of these new uses for collateral data to Amit Patel, at that time a Stanford grad student:
His work with these data logs persuaded him that detailed stories about each user—thoughts, feelings, interests—could be constructed from the wake of unstructured signals that trailed every online action.

That is the kind of thing we are hearing from the LMSes now too, although personally, I am not convinced by the depth of data they have to work with compared to Google. The user experience of the LMS is so limited and predefined, with so little opportunity for truly individual action (just the opposite of the Google search engine), that I don't think the LMSes are going to be able to do the kind of profiling they claim they will be able to do... not unless/until they get into the kind of surveillance that is taking shape in Chinese schools now as part of the Chinese government's huge investment in edtech and AI; for more on that, see Camera Above the Classroom by Xue Yujie. See also this new piece on facial recognition experiments in education: Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements.
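To put my skepticism in concrete terms: the "wake of signals" in an LMS is drawn from a small, predefined menu of actions, while a search query is free text that the user composed. A toy contrast, with made-up examples:

```python
# Toy contrast (made-up examples). An LMS can only record which of its own
# buttons a user clicked; a search engine records language the user authored.
lms_signal = {"action": "page_view",          # one of a few dozen possible actions
              "resource": "week-2-reading"}
search_signal = {"query": "why do i dread going to class"}  # one of infinitely many possible strings
```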
The LMS started out as a tool for teachers and students to use in order to accomplish teaching and learning tasks, but now it is morphing into a surveillance device so that the LMS company can gather data and take over those tasks, intervening in ways that the LMS never did before, turning into the kind of recursive learning system that Google has become:
Google’s engineers soon grasped that the continuous flows of collateral behavioral data could turn the search engine into a recursive learning system.

Google then used the predictive power of that system to create something completely unprecedented: the Google advertising empire. Zuboff provides a step-by-step account of just how that happened, and how a similar transformation then took place at Facebook.
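For a sense of what "recursive learning system" means mechanically, here is a schematic sketch; the names `model`, `serve`, and `collect` are stand-ins, and I am not claiming any resemblance to Google's actual pipeline, just the shape of the loop, where predictions shape behavior and behavior retrains the predictions:

```python
# Schematic only: the feedback loop of a recursive learning system.
# `model`, `serve`, and `collect` are hypothetical stand-ins, not real APIs.
def recursive_learning_loop(model, serve, collect, rounds=10):
    log = []
    for _ in range(rounds):
        behavior = collect()              # harvest the behavioral "exhaust"
        log.append(behavior)
        model.fit(log)                    # retrain on the ever-growing log
        serve(model.predict(behavior))    # predictions decide what users see next,
                                          # which shapes the next round of behavior
```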
What's next for the LMS? So, an obvious question is this: what are the LMS companies going to do with their predictive products? The answer is: they don't know. Yet. Which is why we need to be talking about this now; the future of education is something of public importance, and it is not something that should be decided by software company executives and engineers. It's one thing for companies to let Google take control of their advertising; it is something else entirely for schools to let the LMS take control of schooling.
Here is how Zuboff describes the shift in the relationship between Google and advertisers as the new predictive products took shape; as you read this, think about what this might foretell for education:
In this new approach, he insisted that advertisers “shouldn’t even get involved with choosing keywords—Google would choose them.” [...] Operationally, this meant that Google would turn its own growing cache of behavioral data and its computational power and expertise toward the single task of matching ads with queries. [...] Google would cross into virgin territory by exploiting sensitivities that only its exclusive and detailed collateral behavioral data about millions and later billions of users could reveal.

While it might seem like advertising and education don't have anything to do with each other, they overlap a lot if you look at education as a form of behavior modification (which, sad to say, many people do, including many educators).
Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” [...] The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.

That transformation of advertising into a "science" sounds scarily like the way that some would like to see teaching turned into a data-driven science, precise and standardized in its practices. For more on that topic, see the Ben Williamson piece I mentioned above about ClassDojo. In addition, Zuboff is going to have a lot to say about behaviorism, especially the radical behaviorism of B. F. Skinner, later in the book.
Profiling. So, back to the Google story. As Google accumulated more and more of this behavioral data, they began to develop (and patent) what they called UPIs, user profile information:
These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.

So too at Instructure, where they claim that they can already develop predictive profiles of students by combining data across courses; here's CEO Dan Goldsmith: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."
Again, as I said above, I am not really persuaded by the power of Instructure's so-called insights. If they are looking at a student's grades in all their other classes, for example, and using that GPA to predict the student's performance in a new class, sure, the GPA has some predictive power.
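In fact, you can get a prediction like Goldsmith's from almost nothing. Here is a minimal sketch with invented numbers, and I am certainly not claiming this is Instructure's actual model; the point is that prior GPA alone, fed through utterly standard machinery, yields a "likely outcome" before the student ever sets foot in the classroom:

```python
# Invented data; not Instructure's model. Prior GPA alone predicts
# pass/fail in a new course with unremarkable off-the-shelf machinery.
import numpy as np
from sklearn.linear_model import LogisticRegression

prior_gpa = np.array([[3.9], [3.4], [2.1], [3.7], [2.5], [1.8]])  # incoming GPA
passed    = np.array([1, 1, 0, 1, 1, 0])                          # passed the new course?

model = LogisticRegression().fit(prior_gpa, passed)
print(model.predict_proba([[3.0]])[0, 1])  # "likely outcome" for a 3.0 student
```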
What I really want to know, though, is how Instructure has the right to use a student's grade data in that way, when I thought such data was private, protected by FERPA. I am not allowed to see the grades my students receive in their other courses (nor do I want to); I'm not even allowed to see the other courses they are taking — all that data is protected by FERPA. But Instructure is now apparently profiling students based on their grades in other classes (?), and then using that grade-derived data in order to insert itself as an actor in other classrooms, all without the students' permission. Now, if I am wrong in that characterization of their predictive Dig initiative, I will be glad to stand corrected (and I'm hoping for a reply to my blog post last week about those issues); I'm just going on statements made in public by Dan Goldsmith about the Dig project.
As Instructure gathers up all this data, without allowing us to opt out, they are proceeding much as Google did, unilaterally extracting without users' awareness or informed consent:
A clear aim of the [UPI] patent is to assure its audience that Google scientists will not be deterred by users’ exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users. [...] Google’s proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others’ behavior for commercial purposes.

The right to decide. Narrating the Google history year by year, Zuboff shows that the new Google emerged over time; the values and principles of Google today are not the values and principles that Google espoused at the beginning. Are we seeing the same kind of shift happening at Instructure? Re-reading this chapter in Zuboff's book, I am very concerned that this is indeed what we are seeing: a "180-degree turn from serving users to surveilling them." And as I've said repeatedly in my complaints to Instructure about its new data initiatives, this is not just about privacy; instead, it is about the right to decide:
That Google had the power to choose secrecy is itself testament to the success of its own claims. This power is a crucial illustration of the difference between “decision rights” and “privacy.” [...] Surveillance capitalism lays claim to these decision rights. The typical complaint is that privacy is eroded, but that is misleading. In the larger societal pattern, privacy is not eroded but redistributed, as decision rights over privacy are claimed for surveillance capital. [...] Instead of people having the rights to decide how and what they will disclose, these rights are concentrated within the domain of surveillance capitalism. [...] Surveillance is the path to profit that overrides “we the people,” taking our decision rights without permission and even when we say “no.”

Writing about Instructure's new data analytics since back in March (My Soylent Green Moment), I've been saying no... but it is still not clear whether I will be able to opt out of having data from my courses included in Instructure's machine learning project, and it is also not clear whether my students will be able to opt out of the kind of profiling that Goldsmith has described. I believe that each of us needs to be able to say "no" on an individual level, not just through institutional consent and institutional opt-out, which seems (?) to be what Instructure is offering. So, I'm still hoping we will hear more about that, and sooner rather than later, given that another school year is about to begin.
One last quote... Okay, this has become another too-long blog post, so I'll close with one spectacular Zuboffian sentence... can this woman write? This woman can write! And we need to pay attention to every word here:
The remarkable questions here concern the facts that our lives are rendered as behavioral data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor foretell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing; and that encryption is the only positive action left to discuss when we sit around the dinner table and casually ponder how to hide from the forces that hide from us.
As a teacher at the start of a new school year, I should not have to ponder how to hide from the LMS, or how to help my students do so, but that is the position I am in. When my students create blogs and websites for my classes, they can do all of that using pseudonyms (pseudonyms are great, in fact), and they can keep or delete whatever they want at the end of the semester. But what about Instructure? Is Instructure going to take what it learns about my students in my class and then use that data to profile those students in their other classes, prejudicing those other instructors before those classes even begin? (See Goldsmith quote above re: profiling.) If that is what Instructure is going to do with data from my classes, I need to be able to say "no," and so do my students.
Meanwhile, I'll be back with more from Zuboff next weekend. Thanks for reading! You can comment here if you want, or connect at Twitter (@OnlineCrsLady).
P.S. I hope that those who know more about Instructure analytics will chime in, especially anybody who's at a Unizin school. All I know about Unizin I learned from the Chronicle article here: Colleges are Banding Together ... and the claims made there sound even more alarming than Goldsmith's description of Instructure profiling. Which is to say: very alarming indeed. Claims from Brad Wheeler, Unizin cofounder:
Take students’ clickstreams and pageviews on the learning-management system, their writing habits, their participatory clicks during classroom discussions, their grades. Then combine that with information on their educational and socioeconomic backgrounds, their status as transfer students, and so on. You end up with "a unique asset," says Wheeler, in learning what teaching methods work.
Digital publishers have learned a lot about students who use the publishers’ texts and other resources, he says, but the demographic puzzle pieces are key to discovering impediments to learning.
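To see what Wheeler's "unique asset" amounts to mechanically, here is a minimal sketch of the join he is describing; all the field names are hypothetical, and this shows the shape of the operation, not Unizin's actual schema:

```python
# Hypothetical field names; the shape of the join, not Unizin's schema.
import pandas as pd

clickstream = pd.DataFrame({
    "student_id":        ["s1", "s2"],
    "lms_pageviews":     [412, 97],
    "discussion_clicks": [33, 4],
    "current_grade":     [91.5, 72.0],
})

demographics = pd.DataFrame({
    "student_id":            ["s1", "s2"],
    "transfer_student":      [False, True],
    "socioeconomic_bracket": ["high", "low"],  # the "demographic puzzle pieces"
})

# One merge, and behavior is no longer separable from background:
profile = clickstream.merge(demographics, on="student_id")
print(profile)
```

Once a merge like that exists, any model trained on the combined table has the demographic columns available as features, whether or not anyone downstream intended them to matter.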
My school is not a participant in Unizin (I'll add: thank goodness). Here is the list: Unizin Members. If you are at a Unizin school, I would love to know more about what kind of informed consent and opt-out procedures are in place at those schools.
UPDATE: Here are the notes on Chapter 4: How Google Got Away With It