This is part two in my wrap up posts from OLC Innovate 2018. My third and final presentation was demoing and gathering feedback on a framework and tool that Amy Collier and I have been developing over the past few months.
This idea started after a panel presentation that Amy and I were on at OpenEd 2017 (my notes from my portion of that presentation are here). The idea floating around on that panel was that institutions involved in Domain of One’s Own projects, where ownership of data is given directly to students via a domain registration process, are uniquely positioned to serve as what Amy has coined a digital sanctuary, much like the recent rise of sanctuary cities in the United States. I talked about similar ideas, mostly how, while DoOO has often been framed as the opposite of the LMS, we are now at a time when we can see its value over large, monolithic public platforms such as Facebook, Slack, and Twitter, particularly in relation to data and data privacy.
Shortly after, Amy and I realized our ideas were closely aligned, and we wanted to keep thinking about how we can educate the public about the Terms of Service of EdTech tools. As an instructor, I’ve often given strong preference to “real tools” over institution-adopted tools, which often feel artificial and clunky and lack the broader appeal of an openly networked environment. Yet, particularly in the U.S., we are now in the midst of a historical period where technology is outpacing law, and trust in a platform’s ability to protect individual data is at an all-time low.
The Facebook-Cambridge Analytica scandal, which eventually led to Congressional testimony by Mark Zuckerberg, highlighted an issue the average user rarely considers: something as simple as taking a quiz that asks for access to bits of your digital information can have unintended and severe consequences.
This scenario overlaps quite nicely with what often happens in classrooms run by good, well-intentioned instructors. I have done this exact thing myself. In one course I teach, I ask students to sign up for an account on Canva to design social media pieces. One objective of the assignment is to learn and productively use the tool; the other is to compare and contrast the web-based tool with a desktop graphic design application.
But the truth is that I’ve never read Canva’s Terms of Service through the lens of either myself or my students. I couldn’t tell you whether my students retain ownership of the work they create. I couldn’t tell you whether they can permanently delete it should they choose.
There are websites out there, like tosdr.org, that do a great job of summarizing some platforms’ terms, but only a few cover educational use cases. Additionally, none of them seem to connect directly to the Terms of Service themselves through something such as an annotation layer, which would bring full, real-time context to each specific statement, something that matters at a time when Terms are regularly updated.
So this is where Amy and I started developing our idea. What could we build that would allow people to 1.) annotate the terms of service of tools they adopt in a classroom and 2.) see an aggregated list of all current annotations? And finally, if we were going to start critically analyzing EdTech Terms of Service, what questions should we even ask?
The last question is where we started. Below are sixteen yes/no questions that make up the framework, which is currently still under development. The framework has been informally developed by surveying what others have said (such as ToS;DR and Audrey Watters’ Audrey Test) as well as by integrating current practices at specific European institutions, which we argue are ahead of the curve on this issue. The questions:
These questions are still being developed, vetted, and consolidated as necessary. A portion of our presentation was spent gathering feedback on the framework. I’m openly requesting that you comment on or annotate this post, or reach out directly if you have ideas/comments/questions.
The second portion of the project is the tool itself. We wanted to build something that would allow anybody to add to the collective analysis, building on Mike Caulfield’s concept of Choral Explanations, since we recognize that there often isn’t one definitive interpretation of a document. I built a simple form that asks the user to identify the tool, the question they are seeking to answer, and the answer itself, along with some annotation metadata such as the specific URL and the relevant text from the Terms of Service.
Once a “find” is submitted, that piece of data is aggregated onto a homepage organized by tool.
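The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: the field names (`tool`, `question`, `answer`, `url`, `excerpt`) and the sample entries are assumptions about the shape of a submitted “find,” and the URLs are placeholders.

```python
from collections import defaultdict

# Hypothetical "finds" as submitted through the form; field names and
# values are illustrative assumptions, not the tool's real schema.
finds = [
    {"tool": "Canva", "question": "Do students retain ownership of their work?",
     "answer": "yes", "url": "https://example.com/canva-terms",
     "excerpt": "(quoted passage from the Terms of Service)"},
    {"tool": "Canva", "question": "Can users permanently delete their content?",
     "answer": "unclear", "url": "https://example.com/canva-terms",
     "excerpt": "(quoted passage from the Terms of Service)"},
    {"tool": "Slack", "question": "Do students retain ownership of their work?",
     "answer": "yes", "url": "https://example.com/slack-terms",
     "excerpt": "(quoted passage from the Terms of Service)"},
]

def group_by_tool(finds):
    """Collect submitted finds into a per-tool listing for the homepage."""
    grouped = defaultdict(list)
    for find in finds:
        grouped[find["tool"]].append(find)
    return dict(grouped)

# Render a simple homepage-style summary, one heading per tool.
grouped = group_by_tool(finds)
for tool, entries in sorted(grouped.items()):
    print(f"{tool}: {len(entries)} finds")
    for entry in entries:
        print(f"  - {entry['question']} -> {entry['answer']}")
```

The point of grouping by tool rather than by question is that a visitor typically arrives asking "what do we know about tool X?", so the homepage mirrors that lookup.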
There is much left to be desired in terms of future development. For instance, once you see the process, you can see how this would integrate nicely with a tool such as hypothes.is, or perhaps more specifically the hypothes.is annotation toolkit, in a manner similar to what Jon Udell has written about and demonstrated in projects such as the Digital Polarization Initiative, a student-run project that allows students to investigate questions of truth and authority and publish their results, and Science in the Classroom, a collection of annotated research papers and accompanying teaching materials.
We received really positive feedback in the session, which was an encouraging way to end the conference. Faculty said they would like to use this in their courses both as an activity and as a resource, which was great to hear. We were also quickly able to uncover some interesting answers. For instance, did you know that TurnItIn claims fair use?
A U.S. District Court judge ruled that archiving student papers to assess the originality of newly submitted papers constitutes a fair use under the U.S. Copyright Act, provides “a substantial public benefit,” and helps protect the papers from being exploited by others. Read the summary judgment.
In its current state, the tool isn’t quite ready for primetime. I’ll end with a request: if you’re interested in exploring the tool further, whether on the collection or the development side, reach out to either myself or Amy (Twitter: @acroom and @amcollier), as we believe we are inching closer to a phase where we can soft-launch the tool at specific institutions to gather further feedback.