
Sunday, July 14, 2019

After InstructureCon: Yes, I'm still hoping for that data opt-out!

Last week, I did a round-up post focused on InstructureCon, summarizing my many concerns about Instructure's new AI experiments. Back in March, CEO Dan Goldsmith announced a big shift for Instructure: instead of just giving teachers and schools access to data for traditional statistics as in the past, Instructure itself would be analyzing our students, profiling them in order to create predictive algorithms for future business growth, a move that Goldsmith claimed would double the company's TAM (total addressable market):


InstructureCon updates on DIG

So, after InstructureCon we know a lot more about this AI project, called DIG. For example, Goldsmith now claims: "We can predict, to a pretty high accuracy, what a likely outcome for a student in a course is, even before they set foot in the classroom."

Personally, I find this claim hard to believe, given that the only data Instructure has to work with is the isolated, low-level data they gather from Canvas activity: log-ins, page views, quizzes, gradebook entries, etc. Unizin schools add demographics to that Canvas data (which I find even more alarming), but it sounds like Goldsmith is making the claim on the basis of Canvas data alone.

In any case, speaking for myself, I do not want Instructure to tell me how to do my job ("we can make recommendations..."), prejudicing my views of students before I have even met them. My school currently does not share a student's GPA with me, and for good reason; as I see it, Instructure's labeling of students in this way is no different than sharing their GPA. In fact, I would suspect that past grade data is a very significant component in Instructure's prediction engine, perhaps even the most significant component. But hey, it's their proprietary AI; I'm just guessing how it might work, which is all we can do with corporate AI/ML experiments.

Notice also the slipperiness of the word "outcome" in Goldsmith's claims about predictive accuracy. When teachers think about outcomes, we are thinking about what students learn, i.e. the learning they can take away with them from the class (what comes out of the class), especially the learning that will be useful to them in their later lives. And that's very complex; there is a whole range of things that each student might learn, directly and indirectly, from a class, and at the time of the class there's no telling what direction their lives might take afterwards and what might turn out to be useful learning along that life path.

But the LMS has no record of those real learning outcomes. In fact, the LMS has no real measures of learning at all; there are only measures of performance: performance on a quiz, performance on a test, attendance, etc. So when Goldsmith talks about predicting the "likely outcome" for a student, what I suspect he means is that Instructure is able to predict the likely final grade that the student will receive at the end of a class (which is why I suspect GPA would be a big component in that prediction). But the grade is not the learning, and it is not the only outcome of a class. In fact, I would argue that we should not be using grades at all, but that is a topic for a separate discussion.
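To make that concrete, here is a minimal sketch of what a grade-prediction model built from this kind of LMS data might look like. To be clear, this is purely illustrative: I have no inside knowledge of how DIG actually works, and the features (past GPA, logins, page views, quiz averages) and the numbers are my own assumptions, not anything Instructure has disclosed.

```python
# Purely hypothetical sketch of a grade-prediction model built from
# LMS-style activity data. This is NOT how DIG works (I have no way of
# knowing that); the feature names and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one imaginary student:
# [past GPA, logins per week, page views per week, average quiz score]
# Note what is absent: anything about what the student actually learned.
X = np.array([
    [3.8, 5, 120, 0.92],
    [2.1, 1,  15, 0.55],
    [3.2, 3,  60, 0.78],
    [2.9, 2,  40, 0.65],
])
# Label: 1 = finished the course with a high grade, 0 = did not.
# The "outcome" being predicted is itself just a grade.
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# "Prediction before the student sets foot in the classroom": with no
# activity yet, the prediction can only lean on prior performance (GPA).
new_student = np.array([[3.9, 0, 0, 0.0]])
print(model.predict_proba(new_student))
```

Whatever weights a model like that actually learns, notice that every input and the label itself are performance proxies; the "outcome" it predicts can only ever be a grade, never the learning.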

What about a data opt-out?

So, now that we know more about the goals of DIG, what about opting out? There was no announcement about an opt-out, and no mention even of the possibility of an opt-out. Goldsmith even claimed in an interview that there hasn't been any request for an opt-out: "We haven’t had that request, honestly." 


Well, that claim doesn't make sense, as I myself had a long phone conversation with two VPs at Instructure about my opt-out request. What Goldsmith must mean, I suppose, is that they have not had a request at the institutional level for a campus-wide opt-out, which is not surprising at all. While it would be great if we had some institutional support for our preferences as individual users, I would be very surprised if whole institutions decided to opt out. Predictive analytics serve the needs of institutions far more than they do the needs of individual teachers or students, and I can imagine that institutions might be eager to see how they can use predictive analytics to look at school-wide patterns that are otherwise hard to discern. Teachers can grok what is going on in their individual classrooms far more easily than provosts and deans can grok what is going on across hundreds or thousands of classrooms.

Yet... there is hope!

Yet I still have some hope for an opt-out, because I learned from that same Goldsmith interview that individuals OWN their data: "One of our first and primary tenets is that the student, the individual and the institution own the data—that's their asset."


And he says the same in this video interview here: "we own our data."


This concession about data ownership really caught me by surprise, in a good way, and renewed my hope for an opt-out. If individuals own their data, then we should be able to take our data out of the Instructure cloud when a course is over if we choose to do so. In other words: a data opt-out, perhaps with the same procedure that Instructure already uses to sunset data from schools that terminate their Instructure contract.

In fact, in the context of ownership, it really sounds more like an opt-in is required. If Instructure wants to use my data — data about me, my behavior, my work, my OWN data — then they should ask me for my permission. They should ask for permission regarding specific timeframes (a year, or two years, or in perpetuity, etc.), and they should ask for permission regarding specific uses. For example, while I strongly object to AI/ML experiments, there might be other research to which I would not object, such as a study of the impact that OER has on student course completion. Not all data uses are the same, so different permissions would be required.

Of course, as I've said before, I am not optimistic that Instructure is going to implement an opt-in procedure — even though they should — but I am also not giving up hope for a data opt-out, especially given the newly announced Canvas data tenets.

Canvas Data Tenets

In addition to this surprising concession about data ownership, we learned about these new Canvas data tenets at InstructureCon. In the video interview cited above, Goldsmith promised a post about data tenets coming soon at the Instructure blog, and there was already this slide in circulation at InstructureCon, which I assume shows the data tenets Goldsmith is referring to in the interview (strangely, even the Instructure staff keynotes were not livestreamed this year, so I am just relying on Twitter for this information). As you can see, one of those tenets is: Empower People, don't Define Them.


Now, the language here sounds more like marcomm-speak than the legal or technical language I would expect, but even so, I am going to take heart from this statement. If Instructure promises to empower me, then surely they will provide a data opt-out, right? It would not be empowering if Instructure were to take my Canvas data and use it for an experiment to which I do not consent, as is currently the case.

My Canvas Data Doubts

Meanwhile, that tension between empowering people and defining them is what I want to focus on in the final part of this blog post. I saw really mixed messages from InstructureCon this year, as the big keynotes from Malcolm Gladwell, Dan Heath, and Bettina Love were all about community, peak moments, love, and creativity... with a corporate counterpoint of big data and a billion Canvas quizzes, as I learned via Twitter:


See also the contradiction between Goldsmith's claim in an interview that Instructure is all about "understanding the individuals, their paths, their passions, and what their interests are" and what we see in the data dashboards: there are no passions and interests on those dashboards (but I do know those red "missing" labels all too well):




Impersonal personalization

There's a single word that I think expresses this dangerous ambivalence in ed-tech generally, and at Instructure in particular; that word is personalization. On the one hand, personalization looks like it would be about persons (personal agency, personal interactions, personal passions), but personalization has also become a codeword for the automation of education. In terms of both philosophy and pedagogy, automation sounds really bad... but personalization: ah, that sounds better, doesn't it?

So, for example, listen to what Dan Goldsmith says in this interview: it's technological inevitabilism, literally (video here): "So when you think about adaptive and personalized learning, I think it's inevitable that we as an educational community need to figure out ways of driving more personalized learning and personalized growth experiences."


I'm not going to rehash here all the problems with the rhetoric of personalization; Audrey Watters has done that for us, as in this keynote (among others): Pigeons and Personalization: The Histories of Personalized Learning. (A good all-purpose rule for thinking about ed tech: READ AUDREY.)

Instead, I will just focus here on the impersonality of Canvas data, listing five big reasons why I mistrust that data and Instructure's claims about it:

1. Canvas data measure behavior, not learning. Canvas is an environment that monitors student behavior: log on, log off; click here, click there; take this quiz, take that quiz; write this many words, download this many files, etc. If your educational philosophy is based on behaviorism, then you might find that data useful (but not necessarily; see the next item in this list). If, however, your educational philosophy is instead founded on other principles, then this behavioral data is not going to be very useful. And consider the keynote speakers at InstructureCon: none of them was advocating behaviorism; just the opposite. Here's Bettina Love, for example, on liberation, not behaviorism (more on her great work below):


2. Canvas fails to gather data about the why. Even for purposes of behavior modification, that superficial Canvas data will not be enough; you need to know the "why" behind that behavior. If a student doesn't log on to Canvas for a week, you need to know why. If a student clicks on a page but spends very little time there, you need to know why. If a student does poorly on a quiz, you need to know why. For example, if a student got a poor score on a quiz because of a lack of sleep, that is very different from getting a poor score because they did not understand the content, which is in turn very different from being bored, or being distracted by problems at home, etc. Just because students completed a billion quizzes in Canvas does not mean Instructure has all the data it needs for accurately profiling those students, much less for making predictions about them.

3. Canvas data are not human presence. The keynote speakers consistently emphasized the importance of people, presence, relationships, and community in learning, but numbers are not presence. Does this look like a person to you? This is how Canvas represents a student to me right now; the coming data dashboard (see above) uses the same numbers repackaged, because that is all that Canvas has to offer me: numbers turned into different kinds of visualizations.


Goldsmith claims that Instructure is different from other learning companies because they are all about people's passions and interests, but that claim does not fit with the views I get of my students in the Canvas Dashboard and the Canvas Gradebook: no passions, no interests; just numbers. I don't need percentage grades, much less the faux-precision of two decimal points. Instead, I need to know about students' passions and interests; that is exactly the information that would help me do my job well, but Canvas cannot provide it.

4. Canvas data do not reflect student agency. The basic pedagogical design of Canvas is top-down and teacher-directed. Student choice is not a driving principle; in fact, it is really a struggle to build courses based on student choice (I will spare you the gory details of my own struggles in that regard). Students cannot even ask questions in the form of search; yes, that's right: students cannot search the course content. The only access to the course content is through the click-here-click-there navigation paths pre-determined by the instructor. And, sad to say, there is apparently no fix in sight for this lack of search; as far as I could determine, there was no announcement regarding the deferred search project from Project Khaki back in 2017 (details here). Think about that lack of search for just a minute. It's no accident that Google started out as a search engine; the questions that people brought to Google, and people's choices in response to those answers, generated the behavioral surplus juggernaut that now powers Google AI. Netflix succeeds as a prediction engine precisely because it is driven by user choice: lots of options, lots of choices, and lots of data about those choices with which to build the prediction engine. The way that Canvas forestalls student choice, including the simple ability to initiate a search, is why I believe their AI project is going to fail. (Meanwhile, if I am wrong and there was an announcement about Canvas search at InstructureCon, let me know!)

And this last item is actually the most important:

5. Canvas data cannot measure obstacles to student learning. By focusing data collection on the students, Instructure runs the risk of neglecting the social, political, and economic contexts in which student learning takes place. Whether students succeed or fail in school is not simply the result of their own efforts; instead, there are opportunities and obstacles, not evenly distributed, which are crucially important. Does Canvas data record when students are hungry or homeless or without health insurance? Does Canvas data record that a course is taught by a poorly paid adjunct with no job security? As Dave Paunesku wrote in Ed Week this week, "When data reveal students' shortcomings without revealing the shortcomings of the systems intended to serve them, it becomes easier to treat students as deficient and harder to recognize how those systems must be changed to create more equitable opportunities." I hope everybody will take a few minutes to read the whole article: The Deficit Lens of the 'Achievement Gap' Needs to Be Flipped. Here's How. (Short answer: another billion quizzes is not how you flip the deficit lens.)



Of course, this is all a topic for a book, not a blog post, so I'll stop for now... but I'll be back next week to start a new approach to these datamongering round-ups: a commentary on Shoshana Zuboff's Surveillance Capitalism. Of all the concepts in play here, the one that is most important to me is what Zuboff calls our "right to the future tense." So, I will work through her book chapter by chapter in the coming weeks, and hopefully that will make it clearer just why I object so strongly to Instructure's predictive analytics experiment.

~ ~ ~

I want to close here with Bettina Love's TED talk; take a look/listen and see what you think: I think she is brilliant! More also at her website.


Speaking for myself, I'll take dance and lyrics over data analytics any day. So, keep on dancing, people! And I'll be back next week with Shoshana Zuboff's book and our right to the future tense. :-)

Sunday, July 7, 2019

Data Mongering (12): Special InstructureCon Edition

I began this #datamongering project back in March when I first learned of Instructure's plans to exploit existing user data to create predictive algorithms built with AI and machine learning: Soylent Canvas. I am still hoping that those of us who are opposed to the use of predictive algorithms in education will be able to OPT OUT so that Instructure will not be able to use our data to develop its algorithms and train its AI system (even better would be opt-in, but I don't actually have any hope for that one).


I submitted a question about data opt-out to the InstructureCon Engineering panel (my question). Coincidentally (?), Instructure then published a blog post about its privacy policy, so I reiterated that my question is not about privacy; it is about opting out of Instructure's plans to mine my data, all our data, for machine learning (my follow-up):


In talking to people about this, I've found that many educators are still not really sure just what AI and predictive algorithms mean for education, how LMS companies do data mining, what the difference is between machine learning and traditional statistical analysis, etc. etc. Over the past five months, I've been collecting online materials on these topics, so in this special "InstructureCon Edition" of my #datamongering round-ups, I've listed what I see as some of the most valuable resources people can use to learn more. Read on:


1. Instructure: Plans to expand beyond Canvas LMS into machine learning and AI by Phil Hill. This blog post is where I first learned about the big shift at Instructure, and you will find extensive quotes from Instructure's new CEO, Dan Goldsmith. This is a must-read for anyone whose school is using Canvas LMS:


For more on ed tech companies and their data, see also: EdTech Companies With The Most Student Data by Justin Menard.

2. Despite Dan Goldsmith's claims about Instructure's database, there is nowhere near enough data in Canvas to model real learning by real students. What kind of surveillance will be required to get the data they actually need? China has a Next Generation Artificial Intelligence Development Plan (NGAIDP) that is bringing full-scale student surveillance to the classroom; there is detailed reporting here from Xue Yujie: Camera Above the Classroom. If you are going to read just one article on AI in education, this is the one to read.



3. For a student perspective, you can listen to the story of Bryan Short, a student at the University of British Columbia in Canada. There is an interview with Bryan at EdSurge: Inside a Student’s Hunt for His Own Learning Data (podcast with transcript), plus an article from the UBC student newspaper that puts Bryan's story in context: Canvas is tracking your data. What is UBC doing with it? by Zak Vescera.


As you can see, if the LMS does not give students the opportunity to opt out, things get very complicated as Bryan learned when he opted out on his own. This is why Instructure needs to give individuals more control over who is allowed to use their data and for what purposes.

4. Increasing surveillance of students is an issue of great concern for both higher ed and for K-12. On K-12, see this important piece by Benjamin Herold in EdWeek: Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming.



5. For resisting surveillance, and LMS surveillance in particular, you will find a good discussion here: Ethics and LMS Surveillance, which is part of #DHSI19: Balancing Issues of Critical Digital Pedagogy, with contributions from Chris Friend and many others.


And for more, see also Erin Glass writing at HASTAC: Ten weird tricks for resisting surveillance capitalism in and through the classroom.

6. By mining student work to create new products, Instructure is following the lead of TurnItIn, a company which recently sold for $1.75 billion (not a typo). For an overview, see Automating Mistrust by Ben Williamson.


Also, this piece on TurnItIn from two years ago is still as relevant as ever: A Guide for Resisting Edtech: the Case against Turnitin by Sean Michael Morris and Jesse Stommel.

7. Did you notice that Canvas rebranded itself in June as a platform, not just an LMS? (details at the official Canvas blog). For an idea of just what the platforming of education means, here's a great piece, also from Ben Williamson: The platform university: a new data-driven business model for profiting from HE.


And for more on education-as-platform, see also Platform Capitalism and the Governance of Knowledge Infrastructure by Leslie Chan.

8. Matt Crosslin is more optimistic than I am that there is real value in data analytics, though he recognizes some real pitfalls too; this blog post provides a great overview: So What Do You Want From Learning Analytics?


And for some perspective over time, see Lawrie Phipps's disenchanted take on algorithms: Postdigital: 10 years later, Algorithms and Agency.

9. Anyone going forward with algorithms needs to be aware of the dangers involved, and there are indeed many dangers. This resource from MIT points out some of them: AI Blindspot: A discovery process for spotting unconscious biases and structural inequalities in AI systems.


And here's another good read from a teacher's perspective: 10 Ways Data Can Sabotage Your Teaching by Terry Heick.

10. Finally I want to close with a brilliant film from sava saheli singh's project Screening Surveillance. The film is not about education, but it's easy to see just how this model employee could be re-imagined as a model student. Leila Khalilzadeh is the director, with a screenplay by Tim Maughan: Model Employee.



So, keep on reading, people! We cannot afford to be ignorant about AI, because . . . The AI Supply Chain Runs on Ignorance.

And if anybody is at the Instructure Engineering panel at InstructureCon on Thursday (July 11) at 4:20 PM in the Long Beach Convention Center, GB-B, please let me know if they say anything about a data opt-out. I don't know if my question will make the cut or not... but I have not given up hope yet.



Sunday, June 30, 2019

Data Mongering (11): A TurnItIn-Amazon-Gates Trifecta

This is my eleventh round-up; you can see them all here: Data-Mongering Round-Ups. As always, no shortage of items to report on!

I want to start with an announcement for Canvas users: at InstructureCon, there will be an AMA-style panel with engineering leadership from Instructure, and you can submit questions in advance here. I submitted a question about data-mining, and also one about search (yep, they mine our data but we cannot search our own course content; details). So, chime in at the Canvas Community in advance and, if you'll be at InstCon, the panel itself is Thursday, Jul 11 at 4:20-5:00 PM.


And now, this week in datamongering:

An important new blog post from Ben Williamson on TurnItIn: Automating mistrust. I see TurnItIn as the ominous harbinger of an approach we now see spreading throughout the related world of the LMS, so this is an important read for all educators, not just those of us who teach writing. quote "Turnitin is also reshaping relationships between universities and students. Students are treated by default as potential essay cheats by its plagiarism detection algorithm. [...] Turnitin’s continued profitability depends on manufacturing and maintaining mistrust between students and academic staff, while also foregrounding its automated algorithm over teachers’ professional expertise." Ben's post contains lots of links in turn to pieces by Jesse Stommel, John Warner, Lucas Introna, and others, and he also discusses an aspect of TurnItIn's operations that I find especially troubling: the WriteCheck service, which allows students to TurnItIn-proof their work before they submit it, for a steep fee of course. The student who first alerted me to the existence of WriteCheck dubbed it "Write-Me-A-Check" ($8 per paper, discounts for repeat users).

Plus more about TurnItIn in the news this week at CampusTechnology: Turnitin Partnership Adds Plagiarism Checking to College Admissions. In response to that, an excellent comment from Susan Blum:


Susan would know; she is the author of My Word!: Plagiarism and College Culture (the Kindle is just $7.99, people!). Table of contents: 1 A Question of Judgment / 2 Intertextuality, Authorship, and Plagiarism / 3 Observing the Performance Self / 4 Growing Up in the College Bubble / 5 No Magic Bullet.

Meanwhile, this piece from Anya Kamenetz at NPR has a theme that is really relevant to the question of (mis)trust: instead of monitoring, we need to be mentoring! At Your Wits' End With A Screen-Obsessed Kid? Read This. quote "Heitner advises that families like this one need to switch from monitoring to mentoring. Policing their kids' device use isn't working. They need to understand why their kids are using devices and what their kids get out of those devices so they can help the kids shift their habits." (Devorah Heitner is the author of Screenwise: Helping Kids Thrive (and Survive) in Their Digital World.) This same advice applies IMO to teachers: if students are not writing well, policing with TurnItIn is not going to give us the information we need to do better. Instead, we need to understand why students write well, or not, and what we can do to create more meaningful writing/learning experiences.

And now, moving on from TurnItIn this week to... Amazon. There is a great piece by Will Oremus at OneZero: Amazon Is Watching. quote "Imagine Ring surveillance cameras on cars and delivery drones, Ring baby monitors in nurseries, and Amazon Echo devices everywhere from schools to hotels to hospitals. Now imagine that all these Alexa-powered speakers and displays can recognize your voice and analyze your speech patterns to tell when you’re angry, sick, or considering a purchase. A 2015 patent filing reported last week by the Telegraph described a system that Amazon called “surveillance as a service,” which seems like an apt term for many of the products it’s already selling." 


Amazon has yet to make its big play for education; will it be Alexa in schools everywhere...? (More on EchoDot for kids, plus a lawsuit on Amazon child surveillance.) And don't forget the drones: With Amazon’s New Drone Patent, The Company’s Relationship With Surveillance Is About To Get Even More Complicated.

And on Amazon Rekognition, see this important piece: Amazon's Facial Analysis Program Is Building A Dystopic Future For Trans And Nonbinary People by Anna Merlan and Dhruv Mehrotra at Jezebel. This is a long and detailed article, with both big-picture information and also results of a specific Rekognition experiment. quote "Rekognition misgendered 100% of explicitly nonbinary individuals in the Broadly dataset. This isn’t because of bad training data or a technical oversight, but a failure in engineering vocabulary to address the population. That their software isn’t built with the capacity or vocabulary to treat gender as anything but binary suggests that Amazon’s engineers, for whatever reason, failed to see an entire population of humans as worthy of recognition."

And to complete the trifecta this week, here's more on Bill Gates's ambitions for higher ed via John Warner at IHE: Bill Gates, Please Stay Away from Higher Education. quote "These large, seemingly philanthropic efforts undertaken by billionaires like Gates are rooted in a desire to preserve the status quo where they sit atop the social order. Rather than putting his money into the hands of education experts or directly funding schools or students, he engineers programs, which replicate his values."

And for a related fail in education this week: AltSchool’s out: Zuckerberg-backed startup that tried to rethink education calls it quits. quote "AltSchool wooed parents and tech investors with a vision of bringing the classroom into the digital age. Engineers and designers on staff developed software for assisting teachers, and put it to work at a group of small schools in the Bay Area and New York run by the startup. At those outposts, kids weren’t just students; they served as software testers, helping AltSchool refine its technology for sale to other schools." Specifically on the subject of students as software testers, see these concerns expressed much earlier about exploiting students as data sources from Connie Loizos at TechCrunch: AltSchool wants to change how kids learn, but fears have surfaced that it’s failing students. quote "Compounding their anger these days is AltSchool’s more recent revelation that its existing network of schools, which had grown to seven locations, is now being pared back to just four — two in California and two in New York. The move has left parents to wonder: did AltSchool entice families into its program merely to extract data from their children, then toss them aside?"

And yes, there are more items that I bookmarked... but surely that's enough for this week. Eeek. 

On an up side, thanks to Tom Woodward I learned about this data-mongering resistance tool: it opens 100 tabs in your browser designed to distort your profile. I'm not sure I want non-stop streetwear ads... but it would definitely skew my profile, which currently delivers an endless stream of ads for books (no surprise) and for, yep, CanvasLMS, ha ha, as if I am in the market for an LMS. More at TrackThis.link.


And the graphic this week also comes from Tom at Twitter:


Plus XKCD on predictive modeling........


No docks at midnight... but I'll see you here again next week. And if you have #datamongering items to share at Twitter, use the hashtag and we can connect.



Saturday, June 22, 2019

Data Mongering (10): Don't fence me in!

This is my tenth round-up; you can see them all here: Data-Mongering Round-Ups. I've been out of town most of this week and not on Twitter much, but there's still plenty to report on... of course.

I'll start out with something lovely and uplifting: Maren Deepwell shared both the slides and the text of her ETUG keynote from Kamloops this week: You are more than a data point. Weaving a tapestry of hope and humanity. The presentation covers much more than just data, but she has some great remarks about data and its dangers as you can guess already from the title. quote: "As a woman I learn each and every day about the limits of technology and the reality of privilege, prejudice and power. Whether it’s inherent algorithmic biases, gender data gaps in the datasets used to train AIs or mobile phones that aren’t designed to fit in my ‘smaller than the average man’s’ palms, all of these examples and countless others highlight how important it is to question how technology works, to interrogate what’s behind the dashboards and predictive models, the disruptive technology that is hailed as the next big thing."


And here is a project, both hopeful and useful, that I learned about this week: tosdr.org offers annotated terms-of-service. You can follow them at Twitter also: @tosdr.


And for something less hopeful, an item from IHE: GPS to Track Student Attendance. This Cal Poly San Luis Obispo professor requires his students to check in using an app he created, which accesses their phones' GPS data: quote "Once students enter this radius, a geofence, they push a button on the app noting that they’ve arrived for class." Geofencing, from the app's website:


Rather than gathering data on his students (and, I suppose, docking their grades based on attendance?), it seems to me this professor could instead be asking his students why they do, or don't, show up for class. Geofencing and GPS data are not telling him what he needs to know in order to improve the class, but getting feedback about the class, both from students who attend and those who choose not to attend, could actually be helpful. And for a great piece on feedback from students, see this piece in Edutopia: Can 360-Degree Feedback Empower Students and Teachers? quote "perhaps the most important effect of this collaboration is the relationship building and personal understanding between teachers and students. Those strengthened bonds offer a stronger foundation of cultural sensitivity across the community" — the complete opposite of the way that surveillance technology undermines mutual trust between students and teachers.
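For anyone curious about the mechanics, a geofence check like the one described above is technically trivial: a great-circle distance compared against a radius, and that is essentially the entirety of the "data" being collected. Here is a minimal sketch; the coordinates, radius, and function names are my own illustrative assumptions, not the professor's actual app.

```python
# Hypothetical sketch of a geofence attendance check: is the phone's
# reported GPS position within some radius of the classroom?
# The coordinates, radius, and names here are made up for illustration;
# this is not the professor's actual app.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

CLASSROOM = (35.3050, -120.6625)  # hypothetical lecture-hall coordinates
RADIUS_M = 75                     # hypothetical geofence radius, in meters

def inside_geofence(phone_lat, phone_lon):
    """True if the reported position falls inside the geofence."""
    return haversine_m(phone_lat, phone_lon, *CLASSROOM) <= RADIUS_M

# All the app really records is a boolean like this, plus a timestamp:
print(inside_geofence(35.3052, -120.6623))  # True: counted as "arrived"
```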

This next piece comes from social work, but there is lots here for educators to ponder: Stuck On Algorithms by Sean Erreger. He notes the importance of the right to contest algorithm errors: quote "Also important to social workers should be the Right To Contest. That if one of these common blindspots are found, there is means to reconcile this. Is there enough transparency in the algorithm to fix “the problem. This is important when thinking about empowering the individuals and families we serve."

So too for the students we serve, and also for ourselves if, indeed, our schools are going to start evaluating our work by surveilling us and using analytics. On that subject, here's an IHE piece from a couple of years ago: Refusing to Be Evaluated by a Formula, and more from David Hughes of Rutgers here: Academic Analytics: Action Requested.

Meanwhile, you can find out more about the "right to contest" and other AI pitfalls in this great graphic from MIT Media Lab: AI Blindspot.



Also not about education directly, but with profound (and frightening) implications for education is this ACLU report: The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy (download report from the link) quote "Analyzing video is going to become just as cheap as collecting it. While no company or government agency will hire the armies of expensive and distractible humans that would be required to monitor all the video now being collected, AI agents — which are cheap and scalable — will be available to perform the same tasks. And that will usher in something entirely new in the history of humanity: a society where everyone’s public movements and behavior are subject to constant and comprehensive evaluation and judgment by agents of authority — in short, a society where everyone is watched."


In particular, this report shows why we need to hear from LMS companies about limits to the data they will collect, limits to the data they will keep, and limits to the ways they will use that data. We cannot let those limits be (re)defined by the ever cheaper technology of surveillance and analysis; just because they can afford to gather and analyze the data does not mean that they should. See, for example, the gung-ho big data argument by Vince Kellen at Educause, 21st-Century Analytics: New Technologies and New Rules, insisting that cheap technology in and of itself justifies collecting all the data: quote "We try to bring in all the data that we can find in any given stream, whether we think we will use the data or not." I disagree; just because the data can be collected does not mean that it should be collected! And on the need for setting those limits, a hopeful counterpoint from New York state: Legislation to suspend facial recognition in schools passes state Assembly

Finally, on the unintended consequences of too much data, I learned a new word from this article: orthosomnia, which is perfectionism about sleep induced by sleep-tracking apps: That Sleep Tracker Could Make Your Insomnia Worse by Karen Zraick and Sarah Mervosh (NYTimes). quote "Sleep specialists caution that these apps and devices may provide inaccurate data and can even exacerbate symptoms of insomnia. Fiddling with your phone in bed, after all, is bad sleep hygiene. And for some, worrying about sleep goals can make bedtime anxiety even worse. There’s a name for an unhealthy obsession with achieving perfect sleep: orthosomnia."

Perfectionism is already a huge problem in education; we don't need to feed that problem with big data, especially superficial and inaccurate data.

And for a closing graphic this week, here's a reminder about Maha Bali's event tomorrow, Monday, June 24: The other side of student empowerment in a digital world #FOEcast. I'll be traveling on Monday, but there's lots to explore in the blog post too; see the post for links and lots to read and ponder.


Sunday, June 16, 2019

Data Mongering (9): Domains and More

Today's round-up is a bit different: I haven't been keeping up with Twitter this week so much except for #Domains19 and the aftermath, so I've just got a couple of Twitter items to share... but I also have some items to share from Domains19, since surveillance was indeed a theme of the conference, so scroll on down for that. As for the round-ups: this is my ninth; you can see previous round-ups here: Data-Mongering Round-Ups. And I'm using #datamongering as a hashtag at Twitter; if others want to start using that hashtag to connect and share, that would be super!

From the Twitterverse...

My favorite item from Twitter this week was this very helpful blog post from Matt Crosslin (@grandeped): So What Do You Want From Learning Analytics? The whole post is a great read; here are the topic headings:
Mandatory training for all LA researchers in the history of educational research, learning theory, educational psychology, learning science, and curriculum & instruction. / Mandatory training for all LA researchers in structural inequalities and the role of tech and algorithms in creating and enforcing those inequalities. / Require all LA research projects to include instructional designers, learning theorists, educational psychologists, actual instructors, real students, people trained in dealing with structural inequalities, etc as part of the research team from the very beginning. / Be honest about the limitations and bias of LA. / Commit to creating realistic practical applications for instructors and students. / Make protecting privacy your guiding principle. Period. / Openness. 


And here's an event coming up on June 24 with Maha Bali and Bryan Alexander; I'm going to be traveling that day so I can't join in the live session, but I hope I can find some time to annotate. Links and lots of great stuff to read and explore at Maha's post about the event: The other side of student empowerment in a digital world #FOEcast.


And now....

DOMAINS 2019

Tim is uploading all the Domains19 presentations (screens and audio) to YouTube, so keep an eye on Reclaim Hosting's YouTube channel for more as they arrive. The great folks at Reclaim Hosting did a fantastic job with every aspect of this event. Surveillance, privacy, and data ownership were some of the main themes of the conference; they were the focus of Chris Gilliard and sava saheli singh's keynote on Monday, and also of Martin Hawksey's keynote on Tuesday. Those keynotes are not up at YouTube yet, but they will be soon. The Domains19 Schedule will also have links presentation by presentation; you will see that many of the presentations are relevant to surveillance, datamongering, privacy, etc. For now, I've embedded the brilliant films from sava's Screening Surveillance project, along with some of the Domains presentations that are already up and running at YouTube (thank you for sharing all that out, Tim!).

Model Employee
(and also a Q&A session about this video)









I'll add on to this list below as more of the presentations relevant to datamongering come online at Reclaim's YouTube:





Sunday, June 9, 2019

Data Mongering (8): Surveilling Students' Social Media

This is my eighth round-up; you can see them all here: Data-Mongering Round-Ups. And despite the usual bad news (see below), today is a good day: after writing this post, I'll be heading down to Durham for Domains19, where surveillance is one of the themes — all kinds of good stuff will be going on! You can see the Schedule here, follow the #Domains19 hashtag, and join in with Virtually Connecting too. The Tuesday VC includes Chris Gilliard and Tim Maughan, both of whom have shown up in previous data-mongering round-ups here at this blog. I am excited about getting to meet them in person!


And now... time for the data-mongering:

An important item this week was the Northumbria-Civitas plan for mental health services based on surveilling students: Northumbria University to lead transformation in how the Higher Education sector identifies mental health issues in students. Their commercial partner in this surveillance project: Civitas Learning. It's all about scale of course: quote  "Dr. Mark Milliron, Chief Learning Officer and Co-Founder of Civitas Learning said: “We help Higher Education institutions make the most of their learning data so that they know what is working for their students and can better personalise and scale student supports." Personalise here means the opposite of what it used to mean: impersonal automation instead of person-to-person care and support. Meanwhile... ka-ching goes the cash register as Civitas will have all that student data to use to build the algorithm products that they can then market to other schools who want to "scale" (automate) student support services.

Coverage also in the Telegraph newspaper: Universities to trawl through students’ social media to look for suicide risk. quote "The university has been running a project for the past two years where a team monitor students’ library use, lecture attendance and academic performance. They use this information to “nudge” students when their engagement drops off. Under the new OfS-backed scheme, the data collected on each student would extend to monitoring social media posts, conversations they have with individual members of staff and information held by their accommodation provider." So, as if the other monitoring were not bad enough, now it will include social media... and on surveilling without student consent, see Adrian Short.


Lots of good commentary at Twitter from Donna Lanclos, among others:


More on student surveillance by Jim Shultz at the New York Times: Spying on Children Won’t Keep Them Safe. quote "I have a 16-year-old daughter, and like every parent in the United States today, I worry about her safety when she’s in school. But here in Western New York’s Lockport City School District, those fears have led to a wasteful and dangerous experiment. This week the district’s eight public schools began testing a system called Aegis, which includes facial recognition technology, that could eventually be used to track and map student movements in our schools. How that happened is a cautionary tale for other schools across the country."

In contrast, here's an article about investing in people, not in surveillance and algorithms: With growing calls for more mental health services, states tackle school counselor caseloads by Linda Jacobson at Education Dive. quote "Research shows California schools are now relying more on counselors in order to improve outcomes for students in areas such as attendance and graduation. A report released last year points to how districts have used the flexibility under a revised funding formula to hire counselors and social workers to serve low-income students, English learners, and foster youth.” In other words: human support, not surveillance and bots.

An item from earlier this year that I just noticed this week: Aiha Nguyen and Alexandra Mateescu writing at Data and Society: Explainer: Algorithmic Management in the Workplace (PDF link). Not directly about education but obviously very relevant as we see more and more algorithms deployed in education: quote "The authors outline existing research on the ways that algorithmic management is manifesting across various labor industries, shifting workplace power dynamics, and putting workers at a disadvantage. It can enable increased surveillance and control while removing transparency."


And here's a piece about the standardized testing industry and student guinea pigs by Valerie Strauss at the Washington Post: Millions of kids take standardized tests simply to help testing companies make better tests. (Really.) Like all the other humans whose labor is required behind the scenes for the "magic" to work, these students are being made to build the data system, and it's uncompensated labor, of course.

Plus more on that human labor to make the machines go: The AI gig economy is coming for you by Karen Hao at MIT Technology Review. This is an interview with Mary Gray, co-author with Siddharth Suri of Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. quote "Human workers don’t just label the data that makes AI work. Sometimes human workers are the artificial intelligence. Behind Facebook’s content-moderating AI are thousands of content moderators; behind Amazon Alexa is a global team of transcribers; and behind Google Duplex are sometimes very human callers mimicking the AI that mimics humans. Artificial intelligence doesn’t run on magic pixie dust. It runs on invisible laborers who train algorithms relentlessly until they’ve automated their own jobs away."

My comment at Twitter:



There continues to be important discussion about Knowledge Unlatched, like this post from Punctum Books by Eileen Joy: The Enclosure of Scholarly Infrastructures, Open Access Books & the Necessity of Community ... and here's the latest Elsevier debacle via Colleen Cressman: quote "Elsevier's new authoring platform, Elsa, has deeply troubling terms of service. Whereas most tools that enable user-created content slap a restrictive (to the user and end users) license on the content, Elsevier says users fork over their rights under (C)."


And another screenshot from Colleen that is Elsa-specific:


On the fighting back front, here's something wonderful from Chris Friend and #DHSI19: Balancing Issues of Critical Digital Pedagogy, which contains an Ethics section, including a page on LMS Surveillance. quote "Subverting Surveillance. In critically assessing who de facto benefits from the surveillance in Learning Management Systems and in what ways, while also considering who is thought to benefit from surveillance, we can create architectures that promote a culture of consent by using digital platforms that liberate rather than monitor, surveill and assess."


And for the it's-not-data-it's-marcomm files: What 10,000 Steps Will Really Get You by Amanda Mull at the Atlantic. Not that walking isn't good for you... but 10,000 is a marcomm thing, not a data thing. quote "I-Min Lee, a professor of epidemiology, began looking into the step rule because she was curious about where it came from. “It turns out the original basis for this 10,000-step guideline was really a marketing strategy,” she explains. “In 1965, a Japanese company was selling pedometers, and they gave it a name that, in Japanese, means ‘the 10,000-step meter.’ Lee believes that name was chosen for the product because the character for “10,000” looks sort of like a man walking. As far as she knows, the actual health merits of that number have never been validated by research."


And for more medical marcomm and also data-mongering, check out the write-up about 23andMe in Forbes: Live Long And Prosper: How Anne Wojcicki’s 23andMe Will Mine Its Giant DNA Database For Health And Wealth. Plus a nightmare article from Harvard Business Review: How Bots Will Change the Doctor-Patient Relationship by David A. Asch, Sean Nicholson and Marc L. Berger. And teachers are presumably the bank tellers of education who will be replaced by ATMs.

Finally, for this week's graphic here's a gif from Twitter: you can try to nudge your dog to eat more slowly with an intervention... but the dog is still going to do their own thing! Now when I read about algorithms that nudge people this way or that, I am going to think about this dog. Go, Dog!