
June 30, 2019

Data Mongering (11): A TurnItIn-Amazon-Gates Trifecta

This is my eleventh round-up; you can see them all here: Data-Mongering Round-Ups. As always, no shortage of items to report on!

I want to start with an announcement for Canvas users: at InstructureCon, there will be an AMA-style panel with engineering leadership from Instructure, and you can submit questions in advance here. I submitted a question about data-mining, and also one about search (yep, they mine our data but we cannot search our own course content; details). So, chime in at the Canvas Community in advance; if you'll be at InstCon, the panel itself is Thursday, July 11, 4:20-5:00 PM.


And now, this week in datamongering:

An important new blog post from Ben Williamson on TurnItIn: Automating mistrust. I see TurnItIn as an ominous harbinger of an approach now spreading throughout the related world of the LMS, so this is an important read for all educators, not just those of us who teach writing. quote "Turnitin is also reshaping relationships between universities and students. Students are treated by default as potential essay cheats by its plagiarism detection algorithm. [...] Turnitin’s continued profitability depends on manufacturing and maintaining mistrust between students and academic staff, while also foregrounding its automated algorithm over teachers’ professional expertise." Ben's post links in turn to pieces by Jesse Stommel, John Warner, Lucas Introna, and others, and he also discusses an aspect of TurnItIn operations that I find especially troubling: the WriteCheck service, which allows students to TurnItIn-proof their work before they submit it, for a steep fee of course. The student who first alerted me to the existence of WriteCheck dubbed it "Write-Me-A-Check" ($8 per paper, discounts for repeat users).

Plus more about TurnItIn in the news this week at CampusTechnology: Turnitin Partnership Adds Plagiarism Checking to College Admissions. In response to that, an excellent comment from Susan Blum:


Susan would know; she is the author of My Word!: Plagiarism and College Culture (the Kindle is just $7.99, people!). Table of contents: 1 A Question of Judgment / 2 Intertextuality, Authorship, and Plagiarism / 3 Observing the Performance Self / 4 Growing Up in the College Bubble / 5 No Magic Bullet.

Meanwhile, this piece from Anya Kamenetz at NPR has a theme that is really relevant to the question of (mis)trust: instead of monitoring, we need to be mentoring! At Your Wits' End With A Screen-Obsessed Kid? Read This. quote "Heitner advises that families like this one need to switch from monitoring to mentoring. Policing their kids' device use isn't working. They need to understand why their kids are using devices and what their kids get out of those devices so they can help the kids shift their habits." (Devorah Heitner is the author of Screenwise: Helping Kids Thrive (and Survive) in Their Digital World.) This same advice applies IMO to teachers: if students are not writing well, policing with TurnItIn is not going to give us the information we need to do better. Instead, we need to understand why students write well, or not, and what we can do to create more meaningful writing/learning experiences.

And now, moving on from TurnItIn this week to... Amazon. There is a great piece by Will Oremus at OneZero: Amazon Is Watching. quote "Imagine Ring surveillance cameras on cars and delivery drones, Ring baby monitors in nurseries, and Amazon Echo devices everywhere from schools to hotels to hospitals. Now imagine that all these Alexa-powered speakers and displays can recognize your voice and analyze your speech patterns to tell when you’re angry, sick, or considering a purchase. A 2015 patent filing reported last week by the Telegraph described a system that Amazon called “surveillance as a service,” which seems like an apt term for many of the products it’s already selling." 


Amazon has yet to make its big play for education; will it be Alexa in schools everywhere...? (More on EchoDot for kids, plus a lawsuit over Amazon child surveillance.) And don't forget the drones: With Amazon’s New Drone Patent, The Company’s Relationship With Surveillance Is About To Get Even More Complicated.

And on Amazon Rekognition, see this important piece: Amazon's Facial Analysis Program Is Building A Dystopic Future For Trans And Nonbinary People by Anna Merlan and Dhruv Mehrotra at Jezebel. This is a long and detailed article, with both big-picture information and also results of a specific Rekognition experiment. quote "Rekognition misgendered 100% of explicitly nonbinary individuals in the Broadly dataset. This isn’t because of bad training data or a technical oversight, but a failure in engineering vocabulary to address the population. That their software isn’t built with the capacity or vocabulary to treat gender as anything but binary suggests that Amazon’s engineers, for whatever reason, failed to see an entire population of humans as worthy of recognition."

And to complete the trifecta this week, here's more on Bill Gates's ambitions for higher ed via John Warner at IHE: Bill Gates, Please Stay Away from Higher Education. quote "These large, seemingly philanthropic efforts undertaken by billionaires like Gates are rooted in a desire to preserve the status quo where they sit atop the social order. Rather than putting his money into the hands of education experts or directly funding schools or students, he engineers programs, which replicate his values."

And for a related fail in education this week: AltSchool’s out: Zuckerberg-backed startup that tried to rethink education calls it quits. quote "AltSchool wooed parents and tech investors with a vision of bringing the classroom into the digital age. Engineers and designers on staff developed software for assisting teachers, and put it to work at a group of small schools in the Bay Area and New York run by the startup. At those outposts, kids weren’t just students; they served as software testers, helping AltSchool refine its technology for sale to other schools." Specifically on the subject of students as software testers, see the concerns about exploiting students as data sources that Connie Loizos raised much earlier at TechCrunch: AltSchool wants to change how kids learn, but fears have surfaced that it’s failing students. quote "Compounding their anger these days is AltSchool’s more recent revelation that its existing network of schools, which had grown to seven locations, is now being pared back to just four — two in California and two in New York. The move has left parents to wonder: did AltSchool entice families into its program merely to extract data from their children, then toss them aside?"

And yes, there are more items that I bookmarked... but surely that's enough for this week. Eeek. 

On the up side, thanks to Tom Woodward I learned about this data-mongering resistance tool: it opens 100 tabs in your browser designed to distort your profile. I'm not sure I want non-stop streetwear ads... but it would definitely skew my profile, which currently delivers an endless stream of ads for books (no surprise) and for, yep, CanvasLMS, ha ha, as if I am in the market for an LMS. More at TrackThis.link.
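I haven't looked at how Track This is built, but the underlying trick is presumably simple: a page that opens a batch of profile-skewing URLs. A minimal sketch of that idea in javascript, with invented URLs (the real tool curates themed lists):

<script type="text/javascript">
// Hypothetical tab-flood sketch; the URLs below are made up.
// Most browsers will ask permission before opening this many tabs.
var decoys = [
  "https://example.com/streetwear",
  "https://example.com/sneaker-drops",
  "https://example.com/designer-hoodies"
];
decoys.forEach(function (url) {
  window.open(url, "_blank");
});
</script>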


And the graphic this week also comes from Tom at Twitter:


Plus XKCD on predictive modeling...


No docks at midnight... but I'll see you here again next week. And if you have #datamongering items to share at Twitter, use the hashtag and we can connect.



June 22, 2019

Data Mongering (10): Don't fence me in!

This is my tenth round-up; you can see them all here: Data-Mongering Round-Ups. I've been out of town most of this week and not on Twitter much, but there's still plenty to report on... of course.

I'll start out with something lovely and uplifting: Maren Deepwell shared both the slides and the text of her ETUG keynote from Kamloops this week: You are more than a data point. Weaving a tapestry of hope and humanity. The presentation covers much more than just data, but she has some great remarks about data and its dangers as you can guess already from the title. quote: "As a woman I learn each and every day about the limits of technology and the reality of privilege, prejudice and power. Whether it’s inherent algorithmic biases, gender data gaps in the datasets used to train AIs or mobile phones that aren’t designed to fit in my ‘smaller than the average man’s’ palms, all of these examples and countless others highlight how important it is to question how technology works, to interrogate what’s behind the dashboards and predictive models, the disruptive technology that is hailed as the next big thing."


And here is a project, both hopeful and useful, that I learned about this week: tosdr.org offers annotated terms-of-service. You can follow them at Twitter also: @tosdr.


And for something less hopeful, an item from IHE: GPS to Track Student Attendance. This Cal Poly San Luis Obispo professor requires his students to check in using an app he created, which accesses their phones' GPS data: quote "Once students enter this radius, a geofence, they push a button on the app noting that they’ve arrived for class." Geofencing, from the app's website:


Instead of gathering data on his students (and, I suppose, docking their grades based on attendance?), it seems to me this professor could be asking his students why they do, or don't, show up for class. Geofencing and GPS data are not telling him what he needs to know in order to improve the class, but getting feedback about the class, both from students who attend and from those who choose not to attend, could actually be helpful. And for a great piece on feedback from students, see this piece in Edutopia: Can 360-Degree Feedback Empower Students and Teachers? quote "perhaps the most important effect of this collaboration is the relationship building and personal understanding between teachers and students. Those strengthened bonds offer a stronger foundation of cultural sensitivity across the community"; that is the complete opposite of the way that surveillance technology undermines mutual trust between students and teachers.
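A technical footnote: there is nothing sophisticated going on in an app like this; a geofence check is just a distance calculation against the phone's reported position. Here is a minimal sketch of the idea in javascript (my own illustration with invented coordinates, certainly not the professor's actual code):

<script type="text/javascript">
// Hypothetical geofence check: is the phone within 100 meters of the
// classroom? The classroom coordinates below are invented.
function distanceMeters(lat1, lon1, lat2, lon2) {
  var R = 6371000; // Earth's radius in meters
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a)); // haversine distance
}
var classroom = { lat: 35.3050, lon: -120.6625 }; // invented coordinates
navigator.geolocation.getCurrentPosition(function (pos) {
  var d = distanceMeters(pos.coords.latitude, pos.coords.longitude,
                         classroom.lat, classroom.lon);
  console.log(d <= 100 ? "inside the geofence" : "outside the geofence");
});
</script>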

This next piece comes from social work, but there is lots here for educators to ponder: Stuck On Algorithms by Sean Erreger. He notes the importance of the right to contest algorithm errors: quote "Also important to social workers should be the Right To Contest. That if one of these common blindspots are found, there is means to reconcile this. Is there enough transparency in the algorithm to fix “the problem. This is important when thinking about empowering the individuals and families we serve."

So too for the students we serve, and also for ourselves if, indeed, our schools are going to start evaluating our work by surveilling us and using analytics. On that subject, here's an IHE piece from a couple years ago: Refusing to Be Evaluated by a Formula, and more from David Hughes of Rutgers here: Academic Analytics: Action Requested.

Meanwhile, you can find out more about "right to contest" and other AI pitfalls in this great graphic from MIT Media Lab: AI Blindspot.



Also not about education directly, but with profound (and frightening) implications for education, is this ACLU report: The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy (download the report from the link). quote "Analyzing video is going to become just as cheap as collecting it. While no company or government agency will hire the armies of expensive and distractible humans that would be required to monitor all the video now being collected, AI agents — which are cheap and scalable — will be available to perform the same tasks. And that will usher in something entirely new in the history of humanity: a society where everyone’s public movements and behavior are subject to constant and comprehensive evaluation and judgment by agents of authority — in short, a society where everyone is watched."


In particular, this report shows why we need to hear from LMS companies about limits to the data they will collect, limits to the data they will keep, and limits to the ways they will use that data. We cannot let those limits be (re)defined by the ever-cheaper technology of surveillance and analysis; just because they can afford to gather and analyze the data does not mean that they should. See, for example, the gung-ho big data argument by Vince Kellen at Educause, 21st-Century Analytics: New Technologies and New Rules, insisting that cheap technology in and of itself justifies collecting all the data: quote "We try to bring in all the data that we can find in any given stream, whether we think we will use the data or not." I disagree; just because the data can be collected does not mean that it should be collected! And on the need for setting those limits, a hopeful counterpoint from New York state: Legislation to suspend facial recognition in schools passes state Assembly.

Finally, on the unintended consequences of too much data, I learned a new word from this article: orthosomnia, which is perfectionism about sleep induced by sleep-tracking apps: That Sleep Tracker Could Make Your Insomnia Worse by Karen Zraick and Sarah Mervosh (NYTimes). quote "Sleep specialists caution that these apps and devices may provide inaccurate data and can even exacerbate symptoms of insomnia. Fiddling with your phone in bed, after all, is bad sleep hygiene. And for some, worrying about sleep goals can make bedtime anxiety even worse. There’s a name for an unhealthy obsession with achieving perfect sleep: orthosomnia."

Perfectionism is already a huge problem in education; we don't need to feed that problem with big data, especially superficial and inaccurate data.

And for a closing graphic this week, here's a reminder about Maha Bali's event tomorrow, Monday, June 24: The other side of student empowerment in a digital world #FOEcast. I'll be traveling on Monday, but there's lots to explore in the blog post too; see the post for links and lots to read and ponder.


June 16, 2019

Data Mongering (9): Domains and More

Today's round-up is a bit different: I haven't been keeping up with Twitter so much this week, except for #Domains19 and its aftermath, so I've just got a couple of Twitter items to share... but I also have some items from Domains19 itself, since surveillance was indeed a theme of the conference; scroll on down for that. As for the round-ups, this is my ninth; you can see the previous ones here: Data-Mongering Round-Ups. And I'm using #datamongering as a hashtag at Twitter; if others want to start using that hashtag to connect and share, that would be super!

From the Twitterverse...

My favorite item from Twitter this week was this very helpful blog post from Matt Crosslin (@grandeped): So What Do You Want From Learning Analytics? The whole post is a great read; here are the topic headings:
Mandatory training for all LA researchers in the history of educational research, learning theory, educational psychology, learning science, and curriculum & instruction. / Mandatory training for all LA researchers in structural inequalities and the role of tech and algorithms in creating and enforcing those inequalities. / Require all LA research projects to include instructional designers, learning theorists, educational psychologists, actual instructors, real students, people trained in dealing with structural inequalities, etc as part of the research team from the very beginning. / Be honest about the limitations and bias of LA. / Commit to creating realistic practical applications for instructors and students. / Make protecting privacy your guiding principle. Period. / Openness. 


And here's an event coming up on June 24 with Maha Bali and Bryan Alexander; I'm going to be traveling that day so I can't join in the live session, but I hope I can find some time to annotate. Links and lots of great stuff to read and explore at Maha's post about the event: The other side of student empowerment in a digital world #FOEcast.


And now....

DOMAINS 2019

Tim is uploading all the Domains19 presentations (screens and audio) to YouTube, so keep an eye on Reclaim Hosting's YouTube channel for more as they arrive. The great folks at Reclaim Hosting did a fantastic job with every aspect of this event. Surveillance, privacy, and data ownership were among the main themes of the conference: those themes were the focus of Chris Gilliard and sava saheli singh's keynote on Monday, and also of Martin Hawksey's keynote on Tuesday. Those keynotes are not up at YouTube yet, but they will be soon. The Domains19 Schedule will also have links presentation by presentation; you will see that many of the presentations are relevant to surveillance, datamongering, privacy, etc. For now, I've embedded the brilliant films from sava's Screening Surveillance project, along with some of the Domains presentations that are already up and running at YouTube (thank you for sharing all that out, Tim!).

Model Employee
(and also a Q&A session about this video)









I'll add on to this list below as more of the presentations relevant to datamongering come online at Reclaim's YouTube:





June 9, 2019

Data Mongering (8): Surveilling Students' Social Media

This is my eighth round-up; you can see them all here: Data-Mongering Round-Ups. And despite the usual bad news (see below), today is a good day: after writing this post, I'll be heading down to Durham for Domains19, where surveillance is one of the themes — all kinds of good stuff will be going on! You can see the Schedule here, follow the #Domains19 hashtag, and join in with Virtually Connecting too. The Tuesday VC includes Chris Gilliard and Tim Maughan, both of whom have shown up in previous data-mongering round-ups here at this blog. I am excited about getting to meet them in person!


And now... time for the data-mongering:

An important item this week was the Northumbria-Civitas plan for mental health services based on surveilling students: Northumbria University to lead transformation in how the Higher Education sector identifies mental health issues in students. Their commercial partner in this surveillance project: Civitas Learning. It's all about scale, of course: quote "Dr. Mark Milliron, Chief Learning Officer and Co-Founder of Civitas Learning said: “We help Higher Education institutions make the most of their learning data so that they know what is working for their students and can better personalise and scale student supports." Personalise here means the opposite of what it used to mean: impersonal automation instead of person-to-person care and support. Meanwhile... ka-ching goes the cash register, as Civitas will have all that student data to use in building the algorithmic products it can then market to other schools that want to "scale" (automate) student support services.

Coverage also in the Telegraph newspaper: Universities to trawl through students’ social media to look for suicide risk. quote "The university has been running a project for the past two years where a team monitor students’ library use, lecture attendance and academic performance. They use this information to “nudge” students when their engagement drops off. Under the new OfS-backed scheme, the data collected on each student would extend to monitoring social media posts, conversations they have with individual members of staff and information held by their accommodation provider." So, as if the other monitoring were not bad enough, now it will include social media... and on surveilling without student consent, see Adrian Short.


Lots of good commentary at Twitter from Donna Lanclos, among others:


More on student surveillance by Jim Shultz at the New York Times: Spying on Children Won’t Keep Them Safe. quote "I have a 16-year-old daughter, and like every parent in the United States today, I worry about her safety when she’s in school. But here in Western New York’s Lockport City School District, those fears have led to a wasteful and dangerous experiment. This week the district’s eight public schools began testing a system called Aegis, which includes facial recognition technology, that could eventually be used to track and map student movements in our schools. How that happened is a cautionary tale for other schools across the country."

In contrast, here's an article about investing in people, not in surveillance and algorithms: With growing calls for more mental health services, states tackle school counselor caseloads by Linda Jacobson at Education Dive. quote "Research shows California schools are now relying more on counselors in order to improve outcomes for students in areas such as attendance and graduation. A report released last year points to how districts have used the flexibility under a revised funding formula to hire counselors and social workers to serve low-income students, English learners, and foster youth." In other words: human support, not surveillance and bots.

An item from earlier this year that I just noticed this week: Aiha Nguyen and Alexandra Mateescu writing at Data and Society: Explainer: Algorithmic Management in the Workplace (PDF link). Not directly about education, but obviously very relevant as we see more and more algorithms deployed in education: quote "The authors outline existing research on the ways that algorithmic management is manifesting across various labor industries, shifting workplace power dynamics, and putting workers at a disadvantage. It can enable increased surveillance and control while removing transparency."


And here's a piece about the standardized testing industry and student guinea pigs by Valerie Strauss at the Washington Post: Millions of kids take standardized tests simply to help testing companies make better tests. (Really.) Like all the other humans whose labor is required behind the scenes for the "magic" to work, these students are being made to build the data system, and it's uncompensated labor, of course.

Plus more on that human labor to make the machines go: The AI gig economy is coming for you by Karen Hao at MIT Technology Review. This is an interview with Mary Gray, co-author with Siddharth Suri of Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. quote "Human workers don’t just label the data that makes AI work. Sometimes human workers are the artificial intelligence. Behind Facebook’s content-moderating AI are thousands of content moderators; behind Amazon Alexa is a global team of transcribers; and behind Google Duplex are sometimes very human callers mimicking the AI that mimics humans. Artificial intelligence doesn’t run on magic pixie dust. It runs on invisible laborers who train algorithms relentlessly until they’ve automated their own jobs away."

My comment at Twitter:



There continues to be important discussion about Knowledge Unlatched, like this post from Punctum Books by Eileen Joy: The Enclosure of Scholarly Infrastructures, Open Access Books & the Necessity of Community ... and here's the latest Elsevier debacle via Colleen Cressman: quote "Elsevier's new authoring platform, Elsa, has deeply troubling terms of service. Whereas most tools that enable user-created content slap a restrictive (to the user and end users) license on the content, Elsevier says users fork over their rights under (C)."


And another screenshot from Colleen that is Elsa-specific:


On the fighting-back front, here's something wonderful from Chris Friend and #DHSI19: Balancing Issues of Critical Digital Pedagogy, which contains an Ethics section, including a page on LMS Surveillance. quote "Subverting Surveillance. In critically assessing who de facto benefits from the surveillance in Learning Management Systems and in what ways, while also considering who is thought to benefit from surveillance, we can create architectures that promote a culture of consent by using digital platforms that liberate rather than monitor, surveill and assess."


And for the it's-not-data-it's-marcomm files: What 10,000 Steps Will Really Get You by Amanda Mull at the Atlantic. Not that walking isn't good for you... but 10,000 is a marcomm thing, not a data thing. quote "I-Min Lee, a professor of epidemiology, began looking into the step rule because she was curious about where it came from. “It turns out the original basis for this 10,000-step guideline was really a marketing strategy,” she explains. “In 1965, a Japanese company was selling pedometers, and they gave it a name that, in Japanese, means ‘the 10,000-step meter.’ Lee believes that name was chosen for the product because the character for “10,000” looks sort of like a man walking. As far as she knows, the actual health merits of that number have never been validated by research."


And for more medical marcomm and also data-mongering, check out the write-up about 23andMe in Forbes: Live Long And Prosper: How Anne Wojcicki’s 23andMe Will Mine Its Giant DNA Database For Health And Wealth. Plus a nightmare article from Harvard Business Review: How Bots Will Change the Doctor-Patient Relationship by David A. Asch, Sean Nicholson and Marc L. Berger. And teachers are presumably the bank tellers of education who will be replaced by ATMs.

Finally, for this week's graphic here's a gif from Twitter: you can try to nudge your dog to eat more slowly with an intervention... but the dog is still going to do their own thing! Now when I read about algorithms that nudge people this way or that, I am going to think about this dog. Go, Dog!




June 8, 2019

Tarot Card Javascript

For the upcoming Domains conference (whoo-hoo! and yes, this is for you, Autumm...), I built a new randomizing widget using the Rider-Waite-Smith tarot deck drawn by the remarkable Pamela Colman Smith.





To create the widget, I snagged the card images at Wikipedia, along with the link to the Wikipedia article for each card. Below each image, I've added a link back to this blog post. I've created two versions of the script: one that is 300 pixels wide (the width of the card images at Wikipedia), and a smaller 200-pixel version suitable for blog sidebars. You can see the 300-pixel version below, and the 200-pixel version in the sidebar of this blog.

If you want to use the script yourself, you can copy-and-paste the information below. 

300 Pixels:

<script src="https://widgets.lauragibbs.net/domains/tarot/tarot300.js" type="text/javascript"></script>

200 Pixels:

<script src="https://widgets.lauragibbs.net/domains/tarot/tarot200.js" type="text/javascript"></script>

You can also use the script in Canvas LMS or other javascript-hostile environments by pasting in this iframe version; that way the script is running outside of Canvas, which makes Canvas happy:

<iframe src="https://widgets.lauragibbs.net/canvas/tarot.html" width="350" height="800"></iframe>
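In case you're wondering what a wrapper page like that involves: tarot.html can be nothing more than a bare HTML page on my domain that loads the same script, so Canvas just displays the finished page through the iframe. A sketch of the general shape (not the actual file):

<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Tarot</title></head>
<body>
<!-- The script runs here, on my own domain, outside of Canvas,
     so Canvas has no script tags to strip out. -->
<script src="https://widgets.lauragibbs.net/domains/tarot/tarot300.js" type="text/javascript"></script>
</body>
</html>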

Behold: a randomizing Tarot Deck inside a Canvas LMS Page.


You can see the spreadsheet I used to generate the script here; I converted the HTML in the spreadsheet to javascript with RotateContent.com. Detailed instructions here: Turn HTML Tables into Javascripts.
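If you're curious what the generated javascript looks like, the pattern is simple: each table row becomes one HTML string, and the script writes one of them at random into the page. A rough sketch of that pattern (my shorthand reconstruction with placeholder links and filenames, not the actual generated code):

<script type="text/javascript">
// Randomizer pattern: one HTML string per card; pick one at random.
// Links and filenames here are illustrative, and the list is truncated.
var cards = [
  '<a href="https://en.wikipedia.org/wiki/The_Fool"><img src="fool.jpg" width="300"></a>',
  '<a href="https://en.wikipedia.org/wiki/The_Magician_(tarot_card)"><img src="magician.jpg" width="300"></a>'
  // ...and so on for all 78 cards
];
document.write(cards[Math.floor(Math.random() * cards.length)]);
</script>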

* * *

Appendix: Reversed cards included

People sometimes have strong feelings about whether to play with reversed cards or not, so I've created a separate script that includes the reversed cards:

300 Pixels:

<script src="https://widgets.lauragibbs.net/domains/tarot/tarot300reversed.js" type="text/javascript"></script>

200 Pixels:

<script src="https://widgets.lauragibbs.net/domains/tarot/tarot200reversed.js" type="text/javascript"></script>
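If you're building your own deck and want reversals without doubling every row of your table, one simple alternative is to flip the card image with CSS half the time. A sketch of that approach (not necessarily how my scripts handle it):

<script type="text/javascript">
// Hypothetical approach: reverse a card 50% of the time by rotating
// the image 180 degrees. The image URL is a placeholder.
var flip = Math.random() < 0.5;
document.write('<img src="card.jpg" width="300"' +
  (flip ? ' style="transform: rotate(180deg);"' : '') + '>');
</script>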




June 4, 2019

End-of-Semester Evals: Grading and Creativity


I've copied and pasted this old post from my Canvas Community blog in order to reference it here. Originally published December 2018.


Hi everybody! Since we just got our end-of-semester evaluations today, I wanted to write up a post about that. Every semester, I go through the comments back from the students; I absolutely loathe the number part of the evaluation (they rank faculty against each other with percentiles, which I think is awful, even worse than ABCDF grading)... but I really value getting comments back from students.

How I Use the Comments. Admittedly, the evaluations can be kind of overwhelming since students' comments can be all over the place, so what I've been doing for a couple of years now is to read the comments and then focus in on two issues that are most important to me: grading and creativity. I do not do any grading in my classes and instead ask the students to do their own grading (more about that here), so it's important to me to know that this unusual system is okay with them. I also choose to build the courses around the students' own creative work, which is definitely not what students expect, so it's important for me to know that this is a good choice on my part. If I got feedback from students that they do not like the grading approach and/or that they do not like the creative writing, then I would redesign my classes.

But as I learn from the evaluations, semester after semester, the students report high satisfaction with the approach to the grading and they also report that the creative aspects of the classes are what they like most -- and that continued to be the case this semester. You can see the collected student comments here:


These two dimensions of the class are actually interconnected: one of the main reasons why I do not do any grading (lots of feedback, LOTS of feedback -- but no grading) is in order to free up a sense of creative experimentation. For more about promoting creativity, here's a presentation on creativity I did for Can*Innovate this fall.

Anyway, having this focus for analyzing the feedback from students every semester has been really helpful for me, and I am so grateful that Michelle Pacansky-Brock suggested to me several years ago the idea of harvesting comments from students like this as testimony to the un-grading approach. Because that was so successful, I decided to start doing the comments about creativity also.

Cool Option at OU! And here's something really cool that is available at my school (University of Oklahoma) because of our online evaluation system: if I ever want to add another search term to analyze every semester, the evaluation system we use allows me to get a PDF file that contains ALL the student comments for ALL the courses I've taught since back in 2010, which is when our evaluation system went online. It's a trick you can use with the dropdown menu. The default display is for the current semester only, but if you scroll all the way up to the top of the dropdown you will see "all semesters" and then you can click on "Export to PDF" to get a total compendium of all courses for all semesters since Fall 2010:


In my case, it generates a PDF that is 203 pages long! The PDF contains all the evaluations from all the courses for the past 19 semesters. So, a gigantic report like that is too long to read (although I guess you could read it... that would be a weird experience, ha ha), but it is perfect for searching on a text string.


You can use that same trick to get all courses for a single semester as a single PDF also. That's what I now do each semester: I generate the PDF for the current semester, use Control-A to select all, and then copy everything into a text document. Then, I search on grad* and creat* to find the new comments on grading and creativity, which I copy-and-paste into my ongoing collections.
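And if you ever want to automate that search step, it's a one-liner in most languages. Here's a minimal javascript (Node) sketch, assuming you've saved the copied comments as comments.txt:

// Hypothetical sketch: print every line of comments.txt that matches
// grad* or creat* (grading, grades, creative, creativity, etc.).
var fs = require("fs");
fs.readFileSync("comments.txt", "utf8")
  .split("\n")
  .filter(function (line) { return /(grad|creat)/i.test(line); })
  .forEach(function (line) { console.log(line); });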

So, if you are starting a collection from scratch, you can use the "all semesters" option to get going, and then just update your collection each semester with new comments, getting a picture over time of the topics that are important to you as a teacher. I really hate the numbers (and, yes, ugh, the number charts are in the PDF also), but the student comments are the most important information I have for strategizing about course design, and I really appreciate how easy it is for me to get a searchable text of those student comments in order to look for patterns in the feedback from semester to semester.

Happy End-of-Semester, everybody!!!



June 2, 2019

Data Mongering (7): Say it with a nudge...

This is my seventh round-up; you can see them all here: Data-Mongering Round-Ups. I'm also annotating as I go in Diigo, so you can see the Diigo files here: Data-Mongering articles. (The editorial comments there in Diigo are just copied-and-pasted from the blog posts.)

I want to start with two really important items from last week, one at EdSurge and one at EdWeek.


Inside a Student’s Hunt for His Own Learning Data by Sydney Johnson at EdSurge (podcast with transcript). You might remember Zak Vescera's reporting on Canvas at UBC and Bryan Short's story there (Canvas is tracking your data. What is UBC doing with it?), and now here at EdSurge is a detailed interview with Bryan, very much worth reading. Bryan wanted to opt out, but the Canvas system provides no accommodations of any kind, and that is my concern also. If a student refuses to use TurnItIn (as I think they should!), that's easy to accommodate; the teacher just has to read the paper on their own without machine assistance. But with the LMS, it's hard to come up with a do-it-yourself opt-out. Here's what happened to Bryan: quote "I decided not to opt into Canvas because I was unhappy with the way that my data was being collected. I was unhappy with the way that I had to go about accessing it. I was proposing the creation of a bill of rights around student data, a policy at the university [for using student data]. So, I didn’t opt into the use of it and it really caused quite a bit of tension. It put a huge burden on the instructors who relied upon this technology to conduct their courses because they would have to email me things separately rather than just blasting stuff out to a class. I couldn’t participate in discussions that were taking place online through the learning management system. Ultimately, I think it probably hurt my grades in certain circumstances." ... I hope that Instructure will make an effort to develop some opt-out options, a need that will become even more urgent as the company expands into AI and machine learning and partners with other ed-tech companies in the business of surveilling students. Based on a conversation I had with some folks at Instructure a couple weeks ago, I am hoping for something to appear at the Instructure and/or Canvas blog about data and opt-outs (fingers crossed).

And to learn more about the student-surveillance business, here is the article from EdWeek: Schools Are Deploying Massive Digital Surveillance Systems. The Results Are Alarming by Benjamin Herold. There are lots of important issues in this article, and in particular I want to highlight the use of sentiment analysis: quote "When Securly launched in 2013, its lone offering was a web filter to block students’ access to obscene and harmful content. A year later, though, Securly also began offering “sentiment analysis” of students’ social media posts, looking for signs they might be victims of cyberbullying or self-harm. In 2016, the company expanded that analysis to students’ school email accounts, monitoring all messages sent over district networks. It also created an “emotionally intelligent” app that sends parents weekly reports and automated push notifications detailing their children’s internet searches and browsing histories." 


Just speaking for myself as a teacher, I am very uncomfortable with the idea of a company mining my students' schoolwork for the purposes of sentiment analysis, and it seems almost inevitable (?) that, as LMS companies get into the AI/ML business, they will start mining the discussion boards for sentiment data to use in their predictive algorithms. This would transform the discussion boards from mere boredom into something far more sinister IMO.

To see what happens when you add facial recognition into the mix, Yujie Xue's Camera Above the Classroom is a must-read (I keep sharing this article again and again because it is one of the best I've read, and I really appreciate the inclusion of student voices). And for more on research and tech development in China, check out UnConstrained College Students and Duke MTMC via the MegaPixels Project. From the Duke MTMC story: quote "Since its publication in 2016, more than twice as many research citations originated in China as in the United States. Among these citations were papers linked to the Chinese military and several of the companies known to provide Chinese authorities with the oppressive surveillance technology used to monitor millions of Uighur Muslims." (Thanks to @tanbob for these two items!)

And via @readywriting here's an item from a couple years ago: Hiding from artificial intelligence in the age of total surveillance by Victoria Zavyalova. It's about make-up designed by Grigory Bakunov to thwart facial recognition algorithms, although I suppose in the fast-paced game of cat-and-mouse (where we are the mice, naturally), the cats have probably got a work-around already...?


Meanwhile, on the related topic of OER-mongering: last week, I shared some of the negative reaction to Knowledge Unlatched, and here is another item for that file: The Open Research Library: Centralisation without Openness by Marcel Knöchelmann writing at the London School of Economics Impact blog. And for some thoughts about developments at Lumen, see: The Business of Fettering OER by Billy Meinke-Lau.


But don't despair: here is a really informative and encouraging post from Erin Glass at HASTAC: Ten weird tricks for resisting surveillance capitalism in and through the classroom.


And an image for this week is supplied by none other than the Twitter algorithm itself! Someone DMed me this hilarious promoted tweet that showed up in their stream, caused (presumably) by my tweet about nudges: it's all about the new behaviorism. In the spirit of B. F. Skinner's conditioned pigeons, along with all the good dogs out there, just "say it with a nudge!" (For more about ed-tech and nudges, see Predictive Analytics: Nudging, Shoving, and Smacking Behaviors in Higher Education by Kevin Desouza and Kendra Smith at Educause.)




Magic 8-Ball Demo



Magic 8-Ball


This is an example for my Randomizer presentation at Domains19. Find out more: Domains.LauraGibbs.net. I'm excited to have a chance to share this fun tool with others!

You can see the table with the classic 8-ball responses.

Here's the javascript; it just delivers the text. You can position it with an image if you want, like I did below.

<script src="https://widgets.lauragibbs.net/domains/magic8classic.js" type="text/javascript"></script>
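Under the hood, this is the same pattern as the tarot widget: a list of strings and a random pick. A sketch of the general idea (a reconstruction, with the response list shortened; the real script has all twenty classic answers):

<script type="text/javascript">
// 8-ball randomizer sketch: pick one classic response at random and
// write it into the page. The list is shortened here for illustration.
var answers = [
  "It is certain.",
  "Signs point to yes.",
  "Ask again later.",
  "Don't count on it.",
  "My reply is no."
];
document.write(answers[Math.floor(Math.random() * answers.length)]);
</script>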

Here's a Canvas-friendly version (see it in Canvas); it's an iframe which also includes an 8-ball image.

<iframe src="https://widgets.lauragibbs.net/canvas/magic8ball.html" width="420" height="550"></iframe>