
April 21, 2019

Data-Mongering (2): Algorithms and Agency, GIGO, and more

This is my second of these round-ups; you can see them all here: Data-Mongering Round-Ups. I'm also annotating as I go in Diigo, so you can see the Diigo files here: Data-Mongering articles. The editorial comments there are just copied and pasted from the blog posts.

It's depressing to keep on reading and learning about this, but especially now that I'm reading Shoshana Zuboff's The Age of Surveillance Capitalism, I can see that the shift is happening at Instructure just as it has in one company after another: under its new CEO, Instructure has realized that its "collateral" data can be commodified, turning user behavior into the product being sold (Soylent Canvas). Who would have thought that the main outcome of the digital revolution in education would be the triumphant return of behaviorism...? Egad. Skinner would be so happy. And I am not.

Anyway, here are some of the things I read this week that made me stop and think:


Postdigital: 10 years later, Algorithms and Agency by Lawrie Phipps. This piece gets at so many of my deep concerns right now, and its "looking back" perspective on the past 10 years shows what a dramatic shift there has been in the normalizing of expansive digital networks, both good and bad. For example, re: TurnItIn and their ilk: "Would the sector have been so fast to sign up to a plagiarism service 10 years ago, if they had known all the student IP would one day be the property of a publishing company?" I too was wildly naive 10 years ago, and guilty of some techno-evangelism I guess. I still love teaching online, but more and more I see the technological space in which I am supposed to do my work (Canvas) as a threat, not a resource. To quote: "The naive utopia we described in our 2009 postdigital paper probably only exists in the minds of idealists and tech evangelists. People have designed digital tools, platforms, and other environments with political and financial motives. In our current postdigital world, digital does not serve the social, but through the manipulation of people, it is driving a particular kind of society, one that exploits the weaknesses and fears of people; enables the rise of racism and xenophobia, and intensifying inequality."

Why ‘learning analytics’? Why ‘Learning Record Stores’? by Donald Clark. I don't always agree with Donald Clark, but I think he is spot on in his criticism of learning analytics hype here: "Perhaps the best use of data is dynamically, to create courses, provide feedback, adapt learning, text to speech for podcasts and so on. This is using AI in a precise fashion to solve specific learning problems. The least efficient use of data is storing it in huge pots, boiling it up and hoping that something, as yet undefined, emerges" (that last bit sounds just like the pie-in-the-sky claim from Instructure's CEO that, just because they have lots of data, they can get lots of use out of it). Specifically on AI and learning behavior: "Recording what people just ‘do’ is not that revealing if they are clickthrough courses, without much cognitive effort. Just showing them video, animation, text and graphics, no matter how dazzling is almost irrelevant if they have learnt little. This is a classic GIGO problem (Garbage In, Garbage Out)."
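
Clark's GIGO point is easy to demonstrate with a toy sketch (my own illustration, not from his post, and all the numbers are invented): if clicking through a course takes no real cognitive effort, then the logged activity is statistically independent of whatever learning did or didn't happen, and no amount of aggregation will conjure signal out of it.

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy learners: the LMS logs clicks, but actual learning happens
# (or doesn't) independently of all that clicking.
n = 2000
clicks   = [random.randint(10, 200) for _ in range(n)]   # what gets logged
learning = [random.random() for _ in range(n)]           # what never does

print(f"correlation(clicks, learning) = {pearson(clicks, learning):+.3f}")
# ~0.00: garbage in, garbage out, no matter how big the data pot.
```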

One Way to Reduce Gender Bias in Performance Reviews by Lauren Rivera and András Tilcsik. This is a fascinating piece at Harvard Business Review that warns us to be suspicious of any measurement, because the measuring stick itself shapes the data in ways we never realized or intended, like the way women face more rating discrimination on a 10-point scale than on a 6-point scale. So, before we start measuring everything, we need to stand back and think about the prejudices that are going to inform/deform every supposedly objective measurement we make.
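
To see how a measuring stick can deform the data, here is a toy simulation (entirely my own sketch, not Rivera and Tilcsik's method, with all effect sizes invented): suppose raters are reluctant to give women the very top score, and suppose only a "perfect 10" carries enough weight to trigger that reluctance. A gap then shows up on the 10-point scale and vanishes on the 6-point one.

```python
import random

random.seed(1)

def rate(latent, top):
    """Map latent quality in [0, 1] to an integer rating from 1 to `top`."""
    return 1 + round(latent * (top - 1))

def rate_woman(latent, top):
    score = rate(latent, top)
    # Toy assumption: only a "perfect 10" carries enough cultural weight
    # to trigger top-score reluctance; a "6 out of 6" does not.
    if top == 10 and score == 10:
        score = 9
    return score

# Both groups drawn from the *same* latent quality distribution.
men   = [random.random() for _ in range(100_000)]
women = [random.random() for _ in range(100_000)]

for top in (10, 6):
    m = sum(rate(x, top) for x in men) / len(men)
    w = sum(rate_woman(x, top) for x in women) / len(women)
    print(f"{top}-point scale: men {m:.2f}, women {w:.2f}, gap {m - w:+.2f}")
```

By construction the two groups are identical in quality; the gap is produced entirely by how the scale interacts with the bias, which is exactly the point about the measuring stick shaping the data.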


Institutions’ Use of Data and Analytics for Student Success by Amelia Parnell, Darlena Jones, Alexis Wesaw, and D. Christopher Brooks. This is part of an Educause research project, and it's a good reference point for the ways that schools are trying to use data to improve student success. It is such a slippery slope, and insofar as these systems rely on numbers, and grades in particular, I am dubious. My main concern, though, is that in their eagerness to run their own data experiments, schools have given companies like Instructure far too much freedom to commodify and monetize student data for purposes that go far beyond any local initiative. From this report, I learned that my school is not alone in its strong focus on first-year retention, and the report also shows that those efforts are mostly going to advising, tutoring, and counseling, which is again the case at my school. In my opinion, we need to focus on the direct educational mission, not just on ancillary supports. Sadly, the report does not recommend strengthening the faculty's role or involvement; instead, the recommendation is to "identify and expand institutionally appropriate roles for IR, IT, and student affairs." There was also this bizarre quote plunked down in the middle of the discussion of admin and support services: "As algorithms become more sophisticated, there will increasingly be opportunities for faculty to become more engaged in the delivery of interventions." The one bright spot was Recommendation 4: "Increase the use of qualitative data, especially from students." Yes, I say, yes! Student voices, please!

Developing Minds in the Digital Age: Towards a Science of Learning for 21st Century Education. Big book (250 pages) from Patricia Kuhl et al. at the OECD, which I learned about from Ben Williamson on Twitter. I haven't read the book yet, but I was really struck by the capitalization of Big Data and Artificial Intelligence, as if they were gods or something; what's up with that??? This image is from Ben Williamson's tweet:

[image from Ben Williamson's tweet]

Anyway, the book looks useful, and I will give it a read this summer. 


Insurers Want to Know How Many Steps You Took Today by Mark Raymond. I already knew a lot of the content covered in this NYTimes article from reading Cathy O'Neil's Weapons of Math Destruction, and the whole "health management" business shows just what dangers await us in the "learning management" business: "As machine learning works its way into more and more decisions about who gets coverage and what it costs, discrimination becomes harder to spot." College pricing is already a nightmare (the sticker price versus what individual students actually end up paying), and that's just one nightmare scenario I can see playing out in the future, as colleges become increasingly convinced, rightly or wrongly, that they can accurately predict just how students are going to perform (creating self-fulfilling prophecies through the biases they institutionalize along the way...).
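
Here is a minimal sketch of why that kind of discrimination is hard to spot (my own toy example, not from the article; the "steps" setup and all the numbers are invented): a pricing rule that never sees group membership can still reproduce a group disparity, as long as some input it does see, like step counts, happens to correlate with the group.

```python
import random

random.seed(0)

# Toy population: group membership is never shown to the pricing rule,
# but average daily step count happens to correlate with group.
def person(group):
    mean = 7000 if group == "A" else 5500
    return {"group": group, "steps": random.gauss(mean, 1500)}

people = [person("A") for _ in range(5000)] + [person("B") for _ in range(5000)]

# A "neutral-looking" rule that only ever looks at step counts.
def gets_discount(p):
    return p["steps"] > 6500

for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate = sum(gets_discount(p) for p in members) / len(members)
    print(f"group {g}: {rate:.0%} get the healthy-steps discount")
# Audit the rule's inputs and you will find no "group" column anywhere;
# the disparity lives in the proxy, which is what makes it hard to spot.
```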

And from @YujieXuett, a screenshot (which of course I cannot read) of data-mongering in Chinese schools:

[screenshot from @YujieXuett's tweet]