
Saturday, May 18, 2019

Data Mongering (5): Dashboards Eat the World

This is my fifth round-up; you can see them all here: Data-Mongering Round-Ups. I'm also annotating as I go in Diigo, so you can see the Diigo files here: Data-Mongering articles. (The editorial comments there in Diigo are just copied and pasted from the blog posts.)

DISQUANTIFIED: Higher Education in the Age of Metrics. There was an amazing speaker line-up at this conference on May 16-17 in Santa Barbara, and I'm hoping we will see materials from these talks appearing online soon. See the detailed abstracts of the presentations at the website, along with lots of materials for a related Reading Group. In the Twitter traffic, I was intrigued by this screenshot from Ben Williamson's presentation, Student Counting: Assembling the Data Infrastructure of Higher Education. I'm calling this "dashboards eat the world." Thanks to Phil Hill for the photo:


Audrey Watters gave one of the keynotes at Disquantified, and she has been writing on datamongering in education for a long time now; if people don't know, she's collected all those fabulous keynotes into a series of books: The Monsters of Education Technology, The Revenge of the Monsters of Education Technology, The Curse of the Monsters of Education Technology, and The Monsters of Education Technology 4. With pigeons, of course:


And on the subject of dashboards eating the world, big news this week was the SAT's new adversity score: "The new adversity score is meant to be one such gauge. It is part of a larger rating system called the Environmental Context Dashboard that the College Board will include in test results it reports to schools," as reported in SAT’s New ‘Adversity Score’ Will Take Students’ Hardships Into Account by Anemona Hartocollis in the NYTimes, and in many other media outlets. This is a perfect example of datamongering since it is, ultimately, all about the SAT trying to stay competitive in the college admissions marketplace. One of the things I find mind-boggling and scary about this new score is that it will be reported to colleges, but students will not even be able to see the data being reported about them. There are so many problems here, and I will not even try to unpack them all. Thomas Chatterton Williams's editorial in the NYTimes, The SAT’s Bogus ‘Adversity Score’, focuses on the pseudoscientific faux accuracy of rating something on a scale of 1-100, for example.
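To see why that 1-100 scale conveys faux accuracy, here is a purely hypothetical sketch of how a composite "adversity" index might get computed. The College Board has not published its formula, so the factor names and weights below are pure invention on my part; the point is just how easily an official-looking score falls out of arbitrary choices:

```python
# Purely hypothetical sketch: the College Board has not published its
# formula, so the factors and weights here are invented. The point is
# how easily an official-looking 1-100 score falls out of arbitrary choices.

def adversity_index(factors, weights):
    """Weighted average of 0-100 factor scores, rounded to a tidy integer."""
    total = sum(weights.values())
    return round(sum(factors[name] * w for name, w in weights.items()) / total)

student_env = {
    "neighborhood_poverty": 62,  # invented inputs on a 0-100 scale
    "school_free_lunch": 71,
    "local_crime": 38,
}

equal_weights = {"neighborhood_poverty": 1, "school_free_lunch": 1, "local_crime": 1}
tilted_weights = {"neighborhood_poverty": 2, "school_free_lunch": 1, "local_crime": 1}

print(adversity_index(student_env, equal_weights))   # 57
print(adversity_index(student_env, tilted_weights))  # 58
```

Same coarse inputs, two equally arbitrary weightings, two different official-looking scores.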

(Not directly on the subject of datamongering, but I was really intrigued by this remark about software engineering in this ASCD Education Update piece, Are Grades Reliable? Lessons from a Century of Research by Susan M. Brookhart and Thomas R. Guskey: quote "The current resurgence of the 100-point percentage grade scale has been attributed to the use of computerized grading programs that typically are developed by software engineers who have scant knowledge of the history of their unreliability.")

The AI Supply Chain Runs on Ignorance by Sidney Fussell in the Atlantic. You can read all about the Ever photo service here, along with the larger problems of surveillance and data mongering; quote "Companies can mine data to be scraped and used later, with the original user base having no clue what the ultimate purpose down the line is. In fact, companies themselves may not actually know who’ll buy the data later, and for what purpose, so they bake vague permissions into their terms of service." This expresses my concerns about what could happen to our Instructure data down the line. It's not just about privacy (i.e., Instructure removing personally identifiable information), but about the products that will be created with that data. I have strong personal beliefs about what is and is not ethical in education, but after Instructure obtains data about me by surveilling my use of their software, I have no control over what kinds of products they will create, alone or in conjunction with their partners. Those products might include plagiarism police, robograders, tutorbots, etc., and I do not want to support the creation of such products with my data or data harvested from my students. So, I keep hoping for a data re-use opt-out (more on that here).

How Much Artificial Intelligence Should There Be in the Classroom? by Betsy Corcoran and Jeffrey R. Young in EdSurge, reporting on a conference about AI in education organized by a Chinese company called Squirrel AI, including an interview with Derek Li, one of Squirrel's co-founders. This quote pretty much sums it up: "And he believes that having AI-driven tutors or instructors will help them each get the individual approach they need. He closed his remarks by saying that he “hopes to provide each child with a super-power AI teacher that is the combination of Einstein and Socrates, that way it feels like she has more than 100 teachers in front of her.”" Yeah, right. There is nothing in the article to suggest that Squirrel AI actually understands the world of teaching and learning, and it was disappointing to see it written up in EdSurge without any actual facts or details that would lend some credibility to this pie-in-the-sky AI hype. Instead, for a really detailed report on a different company in China and the Chinese government's endorsement of massive AI experiments in education, see this important article: Camera Above the Classroom by Yujie Xue.

And speaking of pie-in-the-sky hype, let's not forget about Knewton. Here's some reporting on its demise: End of the Line for Much-Hyped Tech Company by Lindsay McKenzie in InsideHigherEd; quote "The educational technology company, which famously boasted about the power of its adaptive learning platform to “semi-read” students’ minds, has been acquired by publisher Wiley Education." And a quote from Phil Hill in the article: "It’s a fire sale. In the press release, they don’t say they’re buying Knewton the company, they say they’re acquiring its assets -- they don’t even try and sugarcoat it."

See also Tony Wan's write-up in EdSurge, Wiley to Acquire Knewton’s Assets, Marking an End to an Expensive Startup Journey, along with Michael Feldstein's "snake-oil" description of Knewton's hype, as reported a few years ago by NPR in Meet The Mind-Reading Robo Tutor In The Sky; quote "But wait. Learning is a lot more complicated than algorithmic reading of multiple-choice responses. What about a student's level of motivation or persistence or, say, the empathy, passion and insight of a good teacher or dozens of other factors? 'He is overselling the kind of power that Knewton can do,' says Michael Feldstein, a digital education consultant and co-founder of the ed blog e-Literate. 'I would go so far as to say that he is selling snake oil.' Claims about the wonders of tech to revolutionize learning, Feldstein argues, vastly oversimplify the complexity, beauty and mystery of how humans learn."

And for more on algorithms and the companies that sell them, Civitas Learning has a new CEO: Chris Hester. Although the website claims the company is "built by educators for education," Hester is not an educator. He comes to Civitas from the health care industry, bringing "twenty years of experience leading companies that create social impact through the application of technology and analytics." My school is a Civitas customer (or so I've read), but faculty have never received any information about that partnership, so I actually have no idea what data services they provide for us, or at what cost.

Finally, for the "if-it-can-be-gamed-it-will-be-gamed" files (from the field of health care surveillance, in fact), here's some data gaming for you (via Twitter):



Sunday, May 5, 2019

Data Mongering (4): Canvas, Capitalism, Nudges and Smacks

This is the fourth of these round-ups; you can see them all here: Data-Mongering Round-Ups. I'm also annotating as I go in Diigo, so you can see the Diigo files here: Data-Mongering articles. The editorial comments there are just copied and pasted from the blog posts. I'll be out of town this coming week with only intermittent Internet access... so there won't be a post next weekend, but I'll be back on May 19 with more datamongering to report. The fun never stops in the world of Big Data. :-)

The most important discovery I made this week was an article in the University of British Columbia's student newspaper calling out Instructure's data takeover: Canvas is tracking your data. What is UBC doing with it? by Zak Vescera. I would love to see some reporting like this by our student newspaper at OU! UBC recently adopted Canvas, and there is a journalism student there, Bryan Short, who is expressing concerns (which I share) about the way local student data is now part of Instructure's global operations and product development; quote: "But there’s technically little stopping Instructure or Amazon from using the de-aggregated data to improve their own services — and that’s what has Short worried. 'This is stuff that’s private personal information that you’re sharing in the context of your education of your university, but this information is now accessible to a company in the United States,' argued Short."


Platform Capitalism and the Governance of Knowledge Infrastructure by Leslie Chan. Powerful stuff here; I'll let the abstract speak for itself; quote: "The dominant academic publishers are busy positioning themselves to monetize not only on content, but increasingly on data analytics and predictive products on research assessment and funding trends. Their growing investment and control over the entire knowledge production workflow, from article submissions, to metrics to reputation management and global rankings means that researchers and their institutions are increasingly locked into the publishers’ “value chain”. I will discuss some of the implications of this growing form of “surveillance capitalism” in the higher education sector and what it means in terms of the autonomy of the researchers and the academy. The intent is to call attention to the need to support community-governed infrastructure and to rethink our understanding of “openness” in terms of consent and social values."


Code Acts in Education: Learning from Surveillance Capitalism by Ben Williamson. These are his thoughts on Shoshana Zuboff's The Age of Surveillance Capitalism. I hope to finish the book on my travels this week, so I will have some thoughts of my own to share. Williamson has three main takeaways from this book for education, and I am most interested in this one: quote "3) Programmable policies. A third line of inquiry would be into the idea of ‘policies’. Education policy studies have long engaged critically with the ways government policies circumscribe ‘correct’ forms of educational activity, progress, and behaviour. With the advance of AI-based technologies into schools and universities, policy researchers may need to start interrogating the policies encoded in the software as well as the policies inscribed in government texts. These new programmable policies potentially have a much more direct influence on ‘correct’ behaviours and maximum outcomes by instrumenting and orchestrating activities, tasks and behaviours in educational institutions." For related resources, check out the Twitter thread.

Predictive Analytics: Nudging, Shoving, and Smacking Behaviors in Higher Education by Kevin Desouza and Kendra Smith. This is a piece from 2016, but I just learned about it now. I definitely use "nudges" in my classes, lots of them in fact, but they are not automated; each one depends on what I know about the individual student (Small Data and Microassignments). I am not interested in automating that process of nudging, and as for shoves and smacks, no, thank you. And that's even if everybody's intentions are pure, as the authors are willing to suppose, quote: "The answer is not simple. Perhaps the deepest concern lies in the definition of the problem and in who decides the direction of nudges. Nudging can easily become shoving or smacking. Obviously, the intentions behind most higher education practices are pure, but with new technologies, we need to know more about the intentions and remain vigilant so that the resulting practices don’t become abusive. The unintended consequences of automating, depersonalizing, and behavioral exploitation are real. We must think critically about what is most important: the means or the end."

CWiC Courseware in Context. This is a framework for evaluating courseware that I learned about this week (details: Student Agency: The Latest Casualty in the Marcomm War for Control of Online Learning), so I thought I would comment on it here. The framework does contain some basic privacy criteria (FERPA compliance certification; US / EU Safe Harbor certification; ability to ensure that data will not reside in foreign data centers), but there is no discussion of a user opt-out, which is the key criterion I am looking for; I am still hoping for some kind of opt-out in Instructure's new turn toward predictive products.

Maybe Universities Shouldn't Be Putting Amazon Echos in Student Dorms by Eric Stoller. I remember being appalled that Instructure was promoting an Alexa "skills" interface for Canvas ("Alexa, tell me what my grade is," etc.), and there are schools that have jumped on the Alexa bandwagon, which is to say, the Alexa surveillance bandwagon. Even Eric Stoller now has some doubts: "Yes, I'm still an advocate for experimentation with technologies that can enhance the student experience. However, there's a cautiousness that's been creeping into my consciousness. When it was disclosed that Amazon employs thousands of people who listen to what you say (directly or indirectly) to Alexa, warning bells went off inside my head. The reason for this invasive monitoring, according to Amazon, is to make the product better for users. My guess is that an always-on microphone-laden device serves as an excellent surveillance instrument to feed the Amazon marketing/data machine."

And while it is not on the subject of datamongering directly, this great piece from Jess Mitchell, Age of Awareness - The Damage We Do: Assessment, gives us a lot to think about when we ponder all that data and what people are going to do with it. Quote: "When we are confronted with complexity and uncertainty, we lean back on simplicity and completeness and sameness. We approach the world with a transactional check-book accounting expectation — if we document our inputs and outputs, it should all reconcile neatly in the end. Tell me how I’ll be measured and only then will I know how to perform. Tell me the desired outputs and I’ll manhandle the inputs to make them conform."

With thanks to Bonni Stachowiak for the tweet, which is how I learned about the article:




Student Agency: The Latest Casualty in the Marcomm War for Control of Online Learning

I got into a Twitter discussion/argument last week over the term "personalization," one of those words that has been co-opted in marcomm Newspeak, so that it now often means the opposite of what you might expect: instead of actual persons choosing what they want to learn for their own personal reasons, "personalized learning" now means machine-driven learning that is, in fact, completely impersonal.

Of course, personalization is not the only term that has been corrupted in the new era of automated learning and the marcomm that goes with it. Interactive, for example, no longer means two or more people interacting with one another. Instead, interactive refers to a student taking a quiz online, "interacting" with the course content. The student clicks the answers to a quiz and then gets the score back from the computer: presto! That's interactive learning online.
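To make it concrete just how thin that "interaction" is, here is a minimal sketch (the questions and answer key are invented, of course) of everything such a system actually does:

```python
# Minimal sketch of "interactive" learning in the marcomm sense; the
# question IDs and answer key are invented for illustration.

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

def score_quiz(responses):
    """Count the clicks that match the key and hand back a percentage."""
    correct = sum(1 for q, choice in responses.items()
                  if ANSWER_KEY.get(q) == choice)
    return round(100 * correct / len(ANSWER_KEY))

# The entire "interaction": the student clicks, the computer replies.
print(score_quiz({"q1": "b", "q2": "c", "q3": "a"}))  # 67
```

That is the whole transaction; no person ever interacts with another person.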

Or take "immersive," a term that was once used to refer to language learning that happened totally in the context of the target language. If you learned Turkish by going to live in Ankara, that would be immersive learning. Role-playing games can also offer immersive experiences, so a program like Reacting to the Past (I am a fan!) can make a good claim to being immersive. But in marcomm Newspeak, anything online is now "immersive," as you can see here: Online Courses and Marketing Fluff: What is an immersive history course?

So, I already knew that personalized, interactive, and immersive were lost causes as vocabulary goes, but what I learned from the Twitter conversation last week is that even the term agency has been co-opted, and that really surprised me... not in a good way. As I see it, machine-driven learning is the exact opposite of learner agency, but the proponent of machine-driven learning in the Twitter convo (Donald Clark) insisted that there are lots of great examples of machine-driven learning that are based on learner agency. I asked for specifics. He told me to look at CogBooks (only later did I find out he is an adviser to the company). So, I looked at CogBooks... and that's where I saw, for the first time I think, the appropriation of the term agency for something that is anything but agency; here's a screenshot:


If I understand the information here correctly, "student agency" now means the following: students self-score their learning on a scale of 0 to 100 (of course! there must be a percentage), and if they rate themselves poorly (below the sacred number 70? I'm not sure), then the computer gives them one item to read/watch, plus two more that they can choose from. More content, of course, must be what the student wants. Or, at least, you'd better hope that is what the student wants, because that's all the courseware apparently has on offer. A sketch of that logic appears below.
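Here is that whole flow as a little sketch. To be clear: this is my hypothetical reconstruction from the screenshot, not CogBooks' actual code, and the 70-point threshold is just my guess:

```python
# Hypothetical reconstruction of the "student agency" flow described
# above -- NOT CogBooks' actual code. The threshold of 70 is a guess.

MASTERY_THRESHOLD = 70  # assumed cutoff for a "poor" self-rating

def student_agency(self_score):
    """Map a 0-100 self-rating to whatever content the system serves next."""
    if self_score < MASTERY_THRESHOLD:
        return {
            "assigned": "one required item to read/watch",
            # The advertised "agency": two more items to choose from.
            "choose_from": ["extra item A", "extra item B"],
        }
    return {"assigned": "next topic", "choose_from": []}

print(student_agency(65))
# {'assigned': 'one required item to read/watch',
#  'choose_from': ['extra item A', 'extra item B']}
```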

And that's... student agency.

In response, then, to what seems to be a poor understanding of the term "agency," I will offer two readings and a graphic to deepen that understanding:

Tech, Agency, Voice (On Not Teaching) by Chris Friend; quote: "Paulo Freire talks of “problem-posing education,” in which learners identify and/or construct the problems they see as pressing and worthy of attention or study. They don’t respond to the questions asked of them by teachers; they create the questions and ask them of the world. In problem-posing education, learning becomes real, essential—I dare say life-giving. Learners are compelled to seek answers because they want to know. Not because it’s been assigned. Not because they’re submitting. This is the heart of liberatory education — and should be our goal when using technology in classes: Get students learning on their own terms, following their own interests, seeking their own satisfaction."

Trust, Agency, and Connected Learning by Jesse Stommel; quote: "Very little about what happens in a classroom should be fixed in advance. And I mean fixed chairs, inflexible reading lists, predetermined outcomes, and assignments with rules not designed for breaking. It is good to offer guidance and also protections for difference. But, for me, the best outcome for a learning experience is something I never could have anticipated in advance. Trajectories can be mapped, but never at the expense of epiphanies."

Connected Learning from the Connected Learning Research Network and Digital Media & Learning Research Hub; quote: "Connected learning prizes the learning that comes from actively producing, creating, experimenting, and designing, because it promotes skills and dispositions for lifelong learning, and for making meaningful contributions to today's rapidly changing work and social conditions."



The click-this-or-that claim to agency at CogBooks is the opposite of the open-ended, learner-driven freedom that you can read about in these articles and in the connected learning graphic. 

Interestingly, when I looked up a review of CogBooks courseware at EdSurge, I found that it was rated very low on learner autonomy under EdSurge's Courseware in Context (CWiC) Framework.


I then looked online to learn more about the framework:


So, as usual, despite the very frustrating Twitter conversation, I did manage to learn something new while poking around afterwards. I had not seen this CWiC framework before (no surprise, as I am not a user of courseware and not interested in using it). What CWiC looks at for learner autonomy still falls short of what I aspire to in my classes, but at least it is a component in the overall rating. Here are the questions CWiC asks about learner autonomy:


Admittedly, the phrase they are using here is "learner autonomy," not agency. So if the word "agency" has indeed been co-opted by the forces of marcomm, maybe we can hang on a little longer to the word autonomy before it also becomes part of the marketing campaign for machine-driven learning.

And hey: no worries! Those of us who have been teaching online successfully for years and years have apparently never gotten it right before, but finally, CogBooks offers online learning that really works. Everything is doubleplusgood, as you can see: