
May 18, 2019

Data Mongering (5): Dashboards Eat the World

This is my fifth round-up; you can see them all here: Data-Mongering Round-Ups. I'm also annotating as I go in Diigo, so you can see the Diigo files here: Data-Mongering articles. (The editorial comments in Diigo are just copied and pasted from the blog posts.)

DISQUANTIFIED: Higher Education in the Age of Metrics. There was an amazing speaker line-up at this conference on May 16-17 in Santa Barbara, and I'm hoping we will see materials from these talks appearing online soon. See the detailed abstracts of the presentations at the website, along with lots of materials for a related Reading Group. In the Twitter traffic, I was intrigued by this screenshot from Ben Williamson's presentation, Student Counting: Assembling the Data Infrastructure of Higher Education. I'm calling this "dashboards eat the world." Thanks to Phil Hill for the photo:
[image: slide from Ben Williamson's presentation, photo by Phil Hill]

Audrey Watters gave one of the keynotes at Disquantified, and she has been writing on datamongering in education for a long time now; if people don't know, she's collected all those fabulous keynotes into a series of books: The Monsters of Education Technology, The Revenge of the Monsters of Education Technology, The Curse of the Monsters of Education Technology, and The Monsters of Education Technology 4. With pigeons, of course:
[image: covers of the Monsters of Education Technology books, with pigeons]

And on the subject of dashboards eating the world, the big news this week was the SAT's new adversity score: "The new adversity score is meant to be one such gauge. It is part of a larger rating system called the Environmental Context Dashboard that the College Board will include in test results it reports to schools," as reported in SAT’s New ‘Adversity Score’ Will Take Students’ Hardships Into Account by Anemona Hartocollis in the NYTimes and in many other media outlets. This is a perfect example of datamongering since it is, ultimately, all about the SAT trying to stay competitive in the college admissions marketplace. One of the things that I find mind-boggling and scary about this new score is that it will be reported to colleges, but the students themselves will not be able to see the data being reported about them. There are so many problems here, and I will not even try to unpack them all. Thomas Chatterton Williams's editorial in the NYTimes, The SAT’s Bogus ‘Adversity Score,’ focuses on the pseudoscientific faux accuracy of rating something on a scale of 1-100, for example.

(Not directly on the subject of datamongering, but I was really intrigued by a remark about software engineering in this ASCD Education Update piece, Are Grades Reliable? Lessons from a Century of Research by Susan M. Brookhart and Thomas R. Guskey: quote "The current resurgence of the 100-point percentage grade scale has been attributed to the use of computerized grading programs that typically are developed by software engineers who have scant knowledge of this history of their unreliability.")

The AI Supply Chain Runs on Ignorance by Sidney Fussell in the Atlantic. You can read all about the Ever photo service here, along with the larger problems of surveillance and datamongering: quote "Companies can mine data to be scraped and used later, with the original user base having no clue what the ultimate purpose down the line is. In fact, companies themselves may not actually know who’ll buy the data later, and for what purpose, so they bake vague permissions into their terms of service." This expresses my concerns about what could happen to our Instructure data down the line. It's not just about privacy (i.e., Instructure removing personally identifiable information); it's about the products that will be created with that data. I have strong personal beliefs about what is and is not ethical in education, but after Instructure obtains data about me by surveilling my use of their software, I have no control over what kinds of products they will create, on their own or in conjunction with their partners. Those products might include plagiarism police, robograders, tutorbots, etc., and I do not want to support the creation of such products with my data or with data harvested from my students. So, I keep hoping for a data re-use opt-out (more on that here).

How Much Artificial Intelligence Should There Be in the Classroom? by Betsy Corcoran and Jeffrey R. Young in EdSurge, reporting on a conference about AI in education organized by a Chinese company called Squirrel AI, including an interview with Derek Li, one of Squirrel's co-founders. This quote from the article pretty much sums it up: "And he believes that having AI-driven tutors or instructors will help them each get the individual approach they need. He closed his remarks by saying that he “hopes to provide each child with a super-power AI teacher that is the combination of Einstein and Socrates, that way it feels like she has more than 100 teachers in front of her.”" Yeah, right. There is nothing in the article to suggest that Squirrel AI actually understands the world of teaching and learning, and it was disappointing to see it written up in EdSurge without any actual facts or details that would lend some credibility to this pie-in-the-sky AI hype. Instead, for a really detailed report on a different company in China and the Chinese government's endorsement of massive AI experiments in education, see this important article: Camera Above the Classroom by Yujie Xue.

And speaking of pie-in-the-sky hype, let's not forget about Knewton. Here's some reporting on its demise: End of the Line for Much-Hyped Tech Company by Lindsay McKenzie in InsideHigherEd: quote "The educational technology company, which famously boasted about the power of its adaptive learning platform to “semi-read” students’ minds, has been acquired by publisher Wiley Education." And a quote from Phil Hill in the article: "It’s a fire sale. In the press release, they don’t say they’re buying Knewton the company, they say they’re acquiring its assets -- they don’t even try and sugarcoat it." See also Tony Wan's write-up in EdSurge, Wiley to Acquire Knewton’s Assets, Marking an End to an Expensive Startup Journey. For Michael Feldstein's "snake-oil" description of Knewton's hype, see this NPR report from a few years ago, Meet The Mind-Reading Robo Tutor In The Sky: quote "But wait. Learning is a lot more complicated than algorithmic reading of multiple-choice responses. What about a student's level of motivation or persistence or, say, the empathy, passion and insight of a good teacher or dozens of other factors? ‘He is overselling the kind of power that Knewton can do,’ says Michael Feldstein, a digital education consultant and co-founder of the ed blog e-Literate. ‘I would go so far as to say that he is selling snake oil.’ Claims about the wonders of tech to revolutionize learning, Feldstein argues, vastly oversimplify the complexity, beauty and mystery of how humans learn."

And for more on algorithms and the companies that sell them: Civitas Learning has a new CEO, Chris Hester. Although the website claims the company is "built by educators for education," Hester is not an educator. He comes to Civitas from the health care industry, with "twenty years of experience leading companies that create social impact through the application of technology and analytics." My school is a Civitas customer (or so I've read), but faculty have never received any information about that partnership, so I actually have no idea what data services they provide for us, or at what cost.

Finally, for the "if-it-can-be-gamed-it-will-be-gamed" files (and from the field of health care surveillance, in fact), here's some data gaming for you (via Twitter):

[embedded tweet]