At least they’re not using GTA as a data source

MIT’s Media Lab wants you to help crowd-source solutions to the Trolley Problem as a decision-making data set for self-driving cars. This is exciting news, because asking the internet to solve tricky moral dilemmas using binary decision making will surely reflect our societal values accurately.

Putting aside snarky skepticism, I had the following thoughts as I went through a judging session:

  • Having to pick one of these options without any “it depends” or “I don’t want to choose” selection got uncomfortable fast.
  • After a few scenarios I started to question the definiteness of the outcomes. How is there equal certainty that plowing straight ahead through four people will kill all of them, but that swerving and colliding with a barrier will absolutely kill all passengers? Are self-driving cars not allowed to have airbags?
  • I wonder if they are storing data about how long people spend on each question and if they read the scenarios. Overall I wonder how they actually intend to use this data.
  • Best scenario I encountered on multiple trials was a self-driving car transporting two cats and a dog when its brakes fail with three pedestrians in the road in front of it. The two cats made the dog sit in the back seat.

Finally, a note on the “Results” page you get to – if you land there and find yourself bothered by some of the values it attributes to you, keep in mind that the data sample is far too small relative to the number of things that change between scenarios. I just did a run-through to see what my results looked like if, without considering any other details, I applied the principles that (1) if the choice exists to hit a barrier and harm those in the car rather than those outside the car, always take that choice, and (2) if the choice must include hitting others, maintain your course (presumably hitting the brakes) and drive predictably rather than swerving erratically. Based on an application of ONLY those rules and the scenarios I happened to be served up, I was also able to create a 100% preference for saving children and criminals and for always hitting physically fit people.
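To make concrete how a couple of attribute-blind rules can still produce strong-looking "preferences" from a handful of scenarios, here is a minimal sketch; the scenario attributes and the two-rule decision function are my own invention for illustration, not the Moral Machine's actual data model:

```python
from collections import Counter

def decide(scenario):
    # Rule 1: prefer harming those in the car (hit the barrier) over those outside.
    if scenario["barrier_option"]:
        return "hit_barrier"
    # Rule 2: otherwise drive predictably: maintain course, brake hard.
    return "stay_course"

# Hypothetical scenarios; who happens to be standing where is incidental
# to the rules, which never inspect these labels.
scenarios = [
    {"barrier_option": True,  "ahead": ["child", "criminal"], "swerve": []},
    {"barrier_option": True,  "ahead": ["child"],             "swerve": []},
    {"barrier_option": False, "ahead": ["fit adult"],         "swerve": ["doctor"]},
]

saved = Counter()
for s in scenarios:
    # Hitting the barrier spares those ahead; staying the course spares the swerve group.
    spared = s["ahead"] if decide(s) == "hit_barrier" else s["swerve"]
    saved.update(spared)

print(dict(saved))  # {'child': 2, 'criminal': 1, 'doctor': 1}
```

The tally looks like a 100% preference for saving children and criminals, and the "fit adult" is always hit, even though the decision procedure never looked at anyone's characteristics at all.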

Exploring for information

There is a series of good quotes in this article about how librarians can get students to start understanding the scholarly frame for exploring information; together, the quotes highlight the general shape of the argument being made:

That exploration is important in learning: “When small children observe and imitate, they are testing the physical world around them and coming up with their own understanding of how things work. Explicit instruction short-circuits that process.”

That various pressures prevent students from seeing library research as exploration: “They are intensely curious about what the teacher wants, if not about the topic they’re researching, and often focus on getting that boring task done as efficiently as possible. It’s not just that there’s no time for creativity, or that they think creativity is a violation of the rule that you have to quote other people in this kind of writing. It’s simply too big of a risk.”

That further, they may not know exploration is the goal, particularly given the rules-focused manner they may have been taught about scholarly citation: “If you learn how to cite a source before you’ve had any experience seeing how scholarly writing is webbed together through these not-so-hyper links, if you’ve never sought a source that you first encountered in another source, this citation business is simply a matter of compiling an ingredients list that’s required by law.”

I’ve been having a few conversations recently about the start of college as a socialization process into academic norms and expectations, and about where and how much of that is required. The theme of how we read and why we read that way has come up, and it does seem to connect nicely to this idea of understanding how we explore a scholarly body of knowledge.

It also reminds me of a goal I need to get back to for my fall offering of programming. I tell students that they may use Google searching, Stack Overflow, and the like as sources of ideas and even code for their homework, so long as they comment where any copied code comes from. But I also warn them that homework is written with an eye to what they have learned so far, whereas internet sources may draw on the entire complexity of Java, so they may actually find it harder to get code copied off the internet to function within the constraints of an assignment than to go back to the textbook and class examples and think through a solution from there. A colleague pointed out that it might help drive that point home if I created an exercise showing how the code that comes up in a search complicates rather than simplifies problem solving. I do something similar already with an exercise reinforcing the importance of reading the description of a method, not just its name. So as a small piece of this much larger puzzle, I will be spending some time looking for a simple programming problem that the internet makes much too hard, but that looking carefully at the resources already in front of you makes easy.
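Until I find the right problem, here is the flavor of the contrast I have in mind, sketched in Python for brevity even though the course is in Java; the vowel-counting problem and both solutions are invented for illustration:

```python
import re

# A top search result might offer something like this one-liner, which
# leans on regular expressions the course hasn't covered yet:
def count_vowels_internet(s):
    return len(re.findall(r"[aeiou]", s, re.IGNORECASE))

# The "textbook" version uses only the loop and if statements covered in
# class, and is easier to adapt to assignment constraints (say, "also
# report the position of each vowel"):
def count_vowels_course(s):
    count = 0
    for ch in s.lower():
        if ch in "aeiou":
            count += 1
    return count

print(count_vowels_internet("Education"))  # 5
print(count_vowels_course("Education"))    # 5
```

Both are correct, but a student who pastes the first version has acquired a dependency on machinery they can't yet modify, while the second maps directly onto what they've practiced.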

Getting to Effective Ed-Tech

This Chronicle article discussing the Jefferson Education incubator at the University of Virginia has been rolling around in my head the past couple of days as I’ve been part of a number of conversations about education, computing, and classroom technology. The problem Jefferson Education says they are setting out to solve is that the ed-tech industry is light on efficacy research for the technologies they are selling, and that “universally, everyone thinks it’s not their fault or not their problem” that the research isn’t a bigger part of the purchasing equation. So their group will study “the political, financial, and structural barriers that keep companies and their customers from conducting and using efficacy research when creating or buying ed-tech products”.

A chain of reactions that I have to this:

  • Of course technology companies sell products that they claim will make your lives better, faster, easier without any proof or research evidence. I’ve had a conversation with a vendor where I raised a question about the underlying assumptions of their product, citing relevant literature, and not surprisingly the sales team didn’t engage in the question of whether we should want to do what their technology did, instead returning the conversation as quickly as possible to how well their technology does what it does.
  • Educational institutions may suffer if their purchasing processes for ed-tech are the same as their purchasing processes for infrastructural tech. Researching the best balance between cost and quality for wifi routers for the campus is not the same process as researching the best balance between cost and quality for an LMS, not least because there are understood benchmarks for wifi routers that can be independently tested fairly easily.
  • It’s almost a hopeless project to come up with one objective evaluation of the effectiveness of a piece of ed-tech as a stand-alone artifact to purchase or not, because its effectiveness is entirely dependent on how an instructor uses it and, prior even to that, what the goals are in using that piece of technology in the first place.
  • The availability of funds to press innovative ed-tech into classrooms likely has a role in where we’ve ended up, because institutions have been financially rewarded for taking big steps quickly, with purchases of technology, I suspect, often occurring before conversations about how it will be used. This incentivizes ed-tech companies to move quickly and market the novelty of their technology rather than move judiciously and market its proven effectiveness.
  • We should all keep in mind that this experimentation is taking place in classrooms where the individual students are getting their one and only education. If ed-tech companies are following the same trends as other startup tech companies, there’s a lot of advice out there to “fail fast” on the way to innovation. But “failing fast” in ed-tech means that not only did a school spend money poorly, a group of students may have been deprived of the effective education they would otherwise have received.

Prioritizing beyond deadlines

A friend shared this article about university students struggling to read entire books on Facebook, and while there are many thought provoking things here, one quote in particular struck me:

“I would say that it is simply a case of needing to prioritise,” said Ms Francis, “do you finish a book that you probably won’t write your essay on, or do you complete the seminar work that’s due in for the next day? I know what I’d rather choose.”

Reading that sentence, I had the dual reactions of “of course” and “the fact that that’s the decision is the problem”.

I am not going to pretend that I don’t have days (or at this time of year, weeks) when I’m just working to keep my head above water with deadlines. But I also think that one of the “soft skills” that college should help develop is how to manage your time in a manner that lets you both meet immediate deadlines and allocate needed time regularly to longer-term projects. In any job, you could let every day get filled up with short-term tasks that legitimately do need to get done – meetings, emails, forms and reports, etc. But the big, interesting, meaningful projects only get done if they also get regular attention, because by their nature, they can’t be completed at the deadline, or at least they can’t be completed with any level of quality at the deadline.

This skill of balancing short-term and long-term work reminds me of a conversation I saw somewhere (though I am forgetting where) where a recent college graduate was struggling with their productivity when given larger projects to complete and was wondering how to ask their boss to assign them all their work in broken down tasks with daily deadlines. While I applaud the self-awareness to realize their productivity problem, I wonder if more practice in college at balancing the demands of tomorrow’s exam with the need to keep making progress on the book being discussed in seminar next week would have helped them with their professional productivity now.

And I connect this to a struggle I have in my own teaching. Where is the balance between breaking things down into manageable pieces with checkpoints and regular deadlines versus giving students opportunities to practice setting their own agendas and priorities? Certainly the level of the course comes into play, but even at the upper level, it is hard to stand back and watch a student not doing the work they ought to be doing to stay on track. When that happens, my first reaction is to ask if I should add more to my syllabus – more deadlines, more check-ins, more progress reports, etc. – things to make sure students are keeping pace with their work or to let me intervene if they aren’t. But if we all do that, isn’t that just adding to the problem: more deadlines and less practice with setting one’s priorities and managing the consequences if those priorities didn’t get you where you needed to go? Do we stop assigning books because students can’t find the time to read them, or do we keep assigning books because it is important that students think about what choices they could make to create the time to read them?

Beautiful Lumino City

I try out many more games than I finish – even when they’re short – from a combination of lack of attention span and lack of skill. So it stands out to me when I finish a game of more than trivial length. This weekend I played through the end of Lumino City and I can say with confidence that I would have finished this one off even without a snowstorm keeping me inside.

The major selling point of Lumino City, which by itself is enough to make it worthwhile, is the artwork. The scenes in the game are entirely handmade out of “paper, card, miniature lights and motors”. As you explore the game world, you navigate your character through scale models. The effect is beautiful, and when you remember to think about the work that must have gone into pulling it off, it’s awe-inspiring. Even if you aren’t a point-and-click puzzle game fan (or a game fan in general), I highly recommend watching the game trailer at their site to get a sense of the scale of this thing. In fact, if you’ve played the game, go back and watch the video again – I had forgotten that they literally built the whole game world as a massive model they could pan a camera around, though it makes sense when I remember how transitions between scenes take place, with one portion of the world fading out of focus as the next fades in.

As for the gameplay, I enjoyed it quite a bit as well. I found the challenge level of the puzzles about right for a casual game where I wanted to enjoy the world as much as my activities in it. None of them rush you through, and they support the story. It’s natural in this sort of game to be aware of the puzzles as a somewhat artificial barrier constructed for the purpose of stopping you from progressing, but the content of the puzzles does support the presentation of the world and the progression of the story (though this is not a story-heavy game).

I really liked how the game built in hints, embedding them in a book the character’s grandfather gives her at the start of the game. The book is not just a short manual of puzzle solutions, though; to find the hints for a puzzle, you first have to solve a math problem based on properties of the puzzle you are stuck on to compute on which of the book’s 800-some pages you’ll find that hint. I like that model of having to solve a smaller mini-puzzle to get the hint, as well as the fact that that extra bit of friction keeps you from turning to a hint page without being sure you want to, or from accidentally seeing a hint for the next puzzle when looking up the one you’re trying to find.
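A toy version of that mechanism, just to show the shape of it (I don't recall the game's actual formulas, so the arithmetic and parameters here are invented), might look like:

```python
BOOK_PAGES = 800  # roughly the size of the in-game hint book

def hint_page(dials, levers, puzzle_number):
    """Solve a small arithmetic problem about the puzzle in front of you
    to learn which page of the book holds its hint. The formula is a
    made-up stand-in for the game's per-puzzle calculations."""
    return (dials * levers + puzzle_number * 7) % BOOK_PAGES + 1

# e.g. a puzzle with 4 dials and 3 levers, twelfth puzzle in the game:
page = hint_page(dials=4, levers=3, puzzle_number=12)
print(page)  # 97
```

The nice property is that the mapping is stable (the same puzzle always sends you to the same page) but opaque enough that you can't stumble onto the next puzzle's hint by accident.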

User Tracking Apps

There’s an interesting story out there about ads that play ultrasonic sounds that permit cross-device tracking. While this is being described as detecting devices that all belong to one user, it seems possible it would sometimes detect devices all belonging to the same family – a slightly different task but also one marketers are interested in solving. It likely depends on where and how frequently these linking ultrasonic sounds are emitted.

And, as I’ve seen others note and is alluded to late in this article, the SilverPush software development kit that is largely being credited for current implementations of this technique seems an awful lot like malware:

Use of ultrasonic sounds to track users has some resemblance to badBIOS, a piece of malware that a security researcher said used inaudible sounds to bridge air-gapped computers. No one has ever proven badBIOS exists, but the use of the high-frequency sounds to track users underscores the viability of the concept. Now that SilverPush and others are using the technology, it’s probably inevitable that it will remain in use in some form. But right now, there are no easy ways for average people to know if they’re being tracked by it and to opt out if they object.

Of course, this also reminds me of an article from a few weeks ago reviewing a study of 110 popular, free smartphone apps: User data plundering by Android and iOS apps is as rampant as you suspected. If you want to feel really helpless, consider the one piece of protective advice that article is able to suggest: “One thing app users can do to safeguard their personal information, the researchers suggest, is to supply false data when possible to app requests.” I wonder if paying for apps rather than choosing free alternatives would have any positive effect.

Advice for the Returning

It is the time of year when the media – newspapers, blogs, and higher-ed-focused venues – put out articles offering advice to college freshmen. I was thinking of adding to that collection, but it struck me that there’s another audience that could use some back-to-school advice as well but seems to be largely ignored: sophomores.

It’s an interesting omission, given that missteps, meandering, or general malaise is so common in the sophomore year that there’s an entire phrase for it: the “sophomore slump”. And yet while a Google News search on “freshman advice” returns a top ten filled with tips for students starting college (8 of the 10, with the other two being tips for high school freshmen), the comparable Google News search on “sophomore advice” turns up one article of advice for students starting their second year of college, preceded by an article on avoiding the sophomore slump in one’s music career and followed by seven articles about college football and one article with advice for freshmen in high school.

And the evidence that sophomores need advice is based on more than the cliché of a sophomore slump. In talking to a group of near-graduation seniors last year about their college experiences, the students themselves identified that sophomore year can be a bit of a dead spot. Entering freshmen are greeted by ever-growing orientation programs and, at colleges like mine, entire courses their first semester dedicated to getting them up to speed at being a college student. By your junior year, you’ve progressed far enough in your major that you’ve likely built a relationship with your advisor and have moved into smaller upper-level courses with increased contact and mentoring, which continues into your senior year. But sophomore year, you’ve moved past the safety nets of freshman year and are still working on finding a stable landing in your major.

So what is a sophomore to do? My first advice is: be aware of this and try to be proactive in counteracting it. As a freshman, your advisor may have actively checked in on your progress, reminded you about deadlines, or been required to sign off on changes to your courses. As a sophomore, it’s time to take responsibility for your education yourself, but a big part of that is recognizing the times when, as a freshman, you’d have been required to talk to an advisor, and taking the initiative to request that advice yourself if you need it. Yes, you can now decide for yourself if you should switch your programming course to S/U, but that doesn’t mean you can’t ask your advisor if they have any thoughts on the decision. Bonus points if you approach this conversation in the spirit of helping you make a decision about what is right for you given your goals and pressures rather than as a request that your advisor make the decision and tell you what to do.

And, as I think about it, the rest of my advice really flows from there – sophomore year is about taking responsibility for your education while learning to take advantage of the mentoring and support around you to be as successful as possible in that venture. Think carefully about who your relationships are with and whether they are encouraging you to make the most of your college years. Surround yourself with people who press you to be your best while also supporting you – whether your friends, your faculty, your coaches, or your collaborators and colleagues. You’re probably getting more freedom to choose your living situation and who to live with; consider this question broadly, based not just on who will be fun to live with but on who will let you have fun while also being successful.

Finally, now is the time to think ahead and ask yourself: What do I want my senior year to look like? What experiences do I want to be having? You may find that some of the great opportunities you want to pursue take a bit of planning and background effort. I encourage students to look at the college catalog and read the course descriptions of the 300-level courses – there’s a lot of great content there, and if you know as a sophomore that you want to make sure you take Quantum Mechanics before you graduate, you’ll have the time to get the prerequisites worked into your schedule in advance. Do this outside your major as well – if you feel like you aren’t getting a lot out of your general education or elective courses, consider whether you could be choosing those courses more thoughtfully. This same type of planning applies to goals of doing summer research, interning with a particular company, or studying abroad.

Security/Learning Linkdump

I’ve accumulated a big collection of links this summer that are roughly related to security and/or machine learning, mostly connected to personal identification or human characteristics, that I’m intending to share with my senior students when they return to campus in a few weeks. Having just noticed quite how large the collection has grown, it seems kinder – for their sake as well as my own – to pull the links together into a semi-organized structure rather than follow my original plan of hitting send on an email filled with URLs. Taken together, it’s a nice little reading list.

How your smartphone’s battery life can be used to invade your privacy: “A little-known feature of the HTML5 specification means that websites can find out how much battery power a visitor has left on their laptop or smartphone – and now, security researchers have warned that that information can be used to track browsers online.” (see also the original research paper The leaking battery: A privacy analysis of the HTML5 Battery Status API)

Gone (Cat)Fishing: How Language Detectives Tackle Online Anonymity: “According to Dr Tim Grant in an article for The Conversation, “everything from the way someone uses capitalization or personal pronouns, to the words someone typically omits or includes, to a breakdown of average word or sentence length, can help identify the writer of even a short text like a Tweet or text message.” So it might surprise you how much of your individual writing style you leave behind for linguists to rifle through, even if you are a success at pretending to be someone different on the internet.” This article as a whole is a treasure trove of links surveying forensic linguistics.
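A minimal sketch of the kinds of surface features the quote describes, to make the idea concrete (the feature choices here are illustrative of stylometry in general, not Dr Grant's actual method):

```python
def style_features(text):
    """Compute a few crude stylometric features of a text sample."""
    words = text.split()
    # Treat ., !, ? uniformly as sentence boundaries.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "avg_word_len": sum(len(w.strip(".,!?")) for w in words) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
        "capitalized_ratio": sum(w[:1].isupper() for w in words) / len(words),
        "first_person": sum(w.lower().strip(".,!?") in ("i", "me", "my")
                            for w in words) / len(words),
    }

a = style_features("I think my plan is good. I really do.")
b = style_features("The committee has reviewed the proposal. Revisions are required.")
print(a["first_person"] > b["first_person"])  # True
```

Even these four crude numbers separate a chatty first-person writer from a formal institutional one; real forensic linguistics uses far richer feature sets, but the principle of profiling habits the writer doesn't consciously control is the same.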

Typing patterns are the latest anonymity-shattering personal identifier: Several links here discussing behavioral biometrics, its various uses, and a bit of coverage of how to avoid it (most specifically to this case, the Keyboard Privacy Chrome extension).

Face Recognition by Thermal Imaging: “The technology identifies a person from their thermal signature and matches infrared images with ordinary photos. It uses a deep neural network system to process the pictures and recognise people in bad light or darkness.” (from the first linked article)

Facial recognition technology is everywhere. It may not be legal.: “There are no federal laws that specifically govern the use of facial recognition technology. But while few people know it, and even fewer are talking about it, both Illinois and Texas have laws against using such technology to identify people without their informed consent. That means that one out of every eight Americans currently has a legal right to biometric privacy.”

Yet Another New Biometric: Brainprints: “In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.”

Personal microbiomes shown to contain unique ‘fingerprints’: “A new study shows that the microbial communities we carry in and on our bodies—known as the human microbiome—have the potential to uniquely identify individuals, much like a fingerprint. Harvard T.H. Chan School of Public Health researchers and colleagues demonstrated that personal microbiomes contain enough distinguishing features to identify an individual over time from among a research study population of hundreds of people.” An interesting possible implication is having to revisit the anonymity assumptions for biological samples.

Privacy Badger 1.0 Is Here To Stop Online Tracking!: “The new Privacy Badger 1.0 release includes many improvements, including being able to detect certain kinds of super-cookies and browser fingerprinting—some of the more subtle and problematic methods that the online tracking industry employs to follow Internet users from site to site.”

New research suggests that hackers can track subway riders through their phones: “Every subway in the world has a unique fingerprint, the researchers said, and every time a train runs between two stations, that fingerprint can be read in the accelerometer, potentially giving attackers access to crucial information. […] To make this attack a reality, the researchers propose a new attack that learns each subway’s fingerprint and then installs malware on a target’s phone that steals accelerometer readings.”
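The matching step in that attack can be sketched very simply: each station-to-station run has a characteristic motion profile, and an observed accelerometer trace is matched to the closest known run. The data, features, and distance metric below are invented for illustration; the paper's actual fingerprints are richer than two numbers per run.

```python
# Known "fingerprints" for each station-to-station run:
# [travel time in seconds, a mean-acceleration proxy].
fingerprints = {
    ("A", "B"): [92, 0.31],
    ("B", "C"): [140, 0.22],
    ("C", "D"): [65, 0.40],
}

def match_run(observed):
    """Return the known run whose fingerprint is nearest (squared
    Euclidean distance) to the observed accelerometer-derived trace."""
    def dist(fp):
        return sum((o - f) ** 2 for o, f in zip(observed, fp))
    return min(fingerprints, key=lambda run: dist(fingerprints[run]))

print(match_run([138, 0.25]))  # ('B', 'C')
```

String together a sequence of such matches and the attacker recovers the rider's route, which is why accelerometer access with no permission prompt is a meaningful privacy gap.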

Computer learning system detects emotional context in text messages: “The quantification was carried out by examining 5,000 posts on social media pages and, through statistical analysis, gearing a learning system to recognize content structure that could be identified as condescending or slang. The system was constructed to identify key words and grammatical habits that were characteristic of sentence structure implied by the content’s sentiments.”

Google Research Boosts Pedestrian Detection with GPUs: “Outside of providing web-based services (for instance, automatically tagging images or picking out semantic understanding from video) the real use cases for how GPUs will power real-time services off the web are still developing. Pedestrian detection is one of those areas where, when powered by truly accurate and real-time capabilities, could mean an entirely new wave of potential services around surveillance, traffic systems, driverless cars, and beyond.” The Google focus is on solving the problem not just in real-time but with high accuracy.

Can a New Smartphone App Predict GPA?: “We show that there are a number of important behavioral factors automatically inferred from smartphones that significantly correlate with term and cumulative GPA, including time series analysis of activity, conversational interaction, mobility, class attendance, studying, and partying. We propose a simple model based on linear regression with lasso regularization that can accurately predict cumulative GPA.” (from research paper abstract)

Analysing galaxy images with artificial intelligence: astronomers teach a machine how to ‘see’: “Mr Hocking, who led the new work, commented: “The important thing about our algorithm is that we have not told the machine what to look for in the images, but instead taught it how to ‘see’.” His supervisor and fellow team member Dr James Geach added: “A human looking at these images can intuitively pick out and instinctively classify different types of object without being given any additional information. We have taught a machine to do the same thing.””

Robots are great but where will I put all my stuff?

I was catching up on some podcasts on a recent roadtrip and listened to an interesting two-part series on vehicle automation from 99% Invisible: Episode 170: Children of the Magenta which looks at the effect of fly-by-wire and airplane flight automation on flight safety and Episode 171: Johnnycab on automotive automation.

Overall, the two episodes focus on the “automation paradox”, roughly the idea that as we automate more, we reduce our capability to deal with problems when automation fails. So, if automated cars become the norm, for the first stretch of time, essentially all drivers will still have experience driving without automation. But, after a generation of automated cars being the norm, the average driver will no longer have experience in taking control of a car if the automation fails. Within the airline industry, one proposed practice to counteract the paradox is to have pilots regularly turn off automation to maintain their manual flight skills. However, in the case of cars, that would require automated cars that still have the necessary components, like steering wheels and pedals, to enable manual driving, which is not everybody’s vision of automated cars. It’s an interesting discussion of how to design for safety and what safety goals one even has for automated vehicles.

As part of the second episode, another assumption behind automated cars was discussed which I’ve seen elsewhere, that in a world of automated cars, people would no longer own their vehicles but would simply call for and use cars on an as-needed basis: a world of robot-taxis. Various objections or resistances to this idea are discussed, but one I’ve not seen mentioned is how poorly this model would work for many families. I think about my friends with three children all of car-seat age – would they have to put in and remove car seats every time they went somewhere? Request and only be served by vehicles with three car-seats (of exactly the right combination of sizes) pre-installed? And what about all of the “stuff” that you travel with when you have children? Most parents I know have not only their diaper bag they carry with them, but a stash of backup supplies in their car – does that now get carried with you every place you go?

If your primary model of car-usage is commuting, and particularly if you live in a setting where your daily commute is more than 10-15 minutes, I can see robot-taxis replacing traditional car ownership. There are already car-share programs out there that seem fairly successful. But when automated car discussions start moving towards plans where only automated cars are on the road (so as to, say, enable narrower highway lanes to increase capacity), there are a lot more complicated barriers that would have to be overcome.

Argument for Ambiguity

I got directed to a recent piece about tolerance for ambiguity as a job requirement and a skill education should help develop through this quote from a responding blog post: “To the extent that we can provide assignments and experiences in and among classes that give students the experience of getting a little lost and finding their way back, we may be able to build some of that tolerance for ambiguity in the kind of settings Selingo discusses.”

While the original article focuses more on the idea of a “growth mind-set” and encouraging students to think of perseverance rather than innate intelligence as their most valuable asset, from a higher education perspective, I find the reflections about the value of introducing ambiguity into assignments more compelling. Another quote that echoes what I see in my students when presenting them with open-ended, and thus ambiguous, assignments: “In thinking about my own tolerance for ambiguity, I wouldn’t call it high or low. It varies, and I think the major independent variable is my own feeling of competence in the situation. When I feel like I can handle whatever the situation is likely to throw at me, ambiguity isn’t a problem. When I’m utterly lost, ambiguity can feel threatening. The key issue isn’t so much ambiguity or the lack thereof, but its possible outcome and my own sense of vulnerability.”

I see precisely this tension in students every semester, particularly those who are new to our courses and to the expectation, common throughout most of them, that they take responsibility for exploring options and refining the scope of a problem for themselves. It’s a tricky balancing act to present just enough uncertainty in assignments that students get to have this valuable experience, but not so much that the feeling of vulnerability blocks their openness to exploring. This framing of why the ambiguity is intentional, and of its role as an employment skill, is an interesting angle on explaining the assignment design to students.

On a bit of a tangent, and returning to the original article, there is one sentence that jumped out at me as odd: “As artificial intelligence increasingly makes many jobs obsolete, success in the future will belong to those able to tolerate ambiguity in their work.” I suspect the point here is that tolerance for ambiguity is one of the higher-level problem-solving skills that are hard to automate out of the work force. But from an artificial intelligence standpoint, this statement is odd because the gap between artificial intelligence and simple computer automation frequently comes down to AI being able to tolerate ambiguity and still function. This doesn’t invalidate the larger point – if the ability to function outside strict parameters is one of our tests for successful artificial intelligence, it’s no surprise that employers would like the same characteristic in their intelligent human employees. But on a technical level, this statement jumped out at me as missing some of what is exciting in AI work.