Readability of rainbow schemes

The core argument in this discussion of color schemes in maps of Hurricane Harvey rainfall makes sense to me – darkness and light have intuitive intensity meanings to us and it is a problem when a visualization violates those meanings and expects a key to do the work of remapping our understanding. But the suggestion to rework the map with an entirely different visualization technique based on a gradient of color (perhaps with a slight hue shift as well) rather than a rainbow scheme seems to miss what I, at least, find to be functional about the rainbow scheme. I’m accustomed enough to how the rainbow scheme maps to amounts of rainfall in the online weather maps I use that I now have a learned sense of what “dark green rainfall” is like as compared to “light orange rainfall”, etc. It goes back to what one thinks the purpose of the map is. In the linked article, the Washington Post version does a nicer job of showing historical rainfall data about the region. But if the main purpose of National Weather Service maps is to help people understand the weather conditions they are about to experience, with a historical map of accumulated rainfall in a case like this being an outlier use case, I feel like the rainbow scheme makes that easier for me as a user of the visualization than a version using only a light-to-dark shift in a single color.
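To make the comparison concrete, here is a minimal sketch of the two mapping styles under discussion. The bin boundaries and colors are my own illustrative choices, not the National Weather Service’s actual scheme:

    import java.awt.Color;

    public class RainfallColors {
        // Rainbow scheme: discrete hue bins whose meanings regular users of
        // these maps learn over time ("dark green rain" vs. "light orange rain").
        static Color rainbow(double inches) {
            if (inches < 1)  return new Color(0x00, 0x80, 0x00); // green
            if (inches < 4)  return new Color(0xFF, 0xFF, 0x00); // yellow
            if (inches < 10) return new Color(0xFF, 0xA5, 0x00); // orange
            if (inches < 20) return new Color(0xFF, 0x00, 0x00); // red
            return new Color(0x80, 0x00, 0x80);                  // purple
        }

        // Single-hue alternative: lightness alone carries the intensity
        // meaning, so no key or prior experience is needed to order values.
        static Color lightToDarkBlue(double inches, double maxInches) {
            float t = (float) Math.min(inches / maxInches, 1.0);
            return new Color((int) (230 - t * 200),  // red falls as rain rises
                             (int) (240 - t * 190),  // green falls too
                             (int) (255 - t * 130)); // blue stays dominant
        }
    }

The rainbow version asks the reader to learn (or look up) what each hue means; the single-hue version is self-explanatory but, at least for this experienced map reader, carries less information at a glance.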

Defining a Liberal Arts Major

Over the past few months I’ve been spending more time than usual in discussions about the value and mission of liberal arts education, coming at it from a few different directions. This seems to align with an increased number of articles in various sources (mainstream and higher-ed focused) about the value of the liberal arts. There are a lot of pieces to the challenging problem of explaining liberal arts education. One piece I keep coming back to, though, is my frustration with the phrase “liberal arts majors”, generally intended to mean arts and humanities majors.

If it were up to me, we would insist on being clear that at a liberal arts institution that truly embraces its mission, all of its majors are liberal arts majors. I understand that underlying many of these conversations is a need to defend the value of the humanities and arts, but from my own disciplinary perspective I fear that this framing lets science and technology programs off the hook for their own obligations towards a liberal arts philosophy.

If done properly, a liberal arts STEM major is not just the STEM major one would experience at any institution with some extra gen-ed distribution requirements tacked on. The program itself should reflect the interconnectedness of disciplines from across the institution and ensure that students can approach problems broadly as well as deeply. Further, taking on a liberal arts perspective ought to change how each course itself is taught; it is a mindset the instructor should adopt towards the range of interests and priorities their students might have. This might be reflected in pedagogy, in course problems and examples, or even in the scope and specifics of course content.

While certainly not an absolute, liberal arts institutions, and liberal arts programs within larger institutions, are frequently relatively small entities. This means that the faculty can work together to build a shared vision of a liberal arts education and integrate it within their own disciplinary perspectives. STEM faculty have the same obligation to provide a robust liberal arts education as their colleagues in other disciplines. Falling into informal language suggesting that some disciplines constitute the “liberal arts” while other disciplines simply coexist with them works against the unified mission that a liberal arts education ought to embody.

Finding a use for Twitter

As part of the obligatory year-end reflections, I have noticed that despite consistent good intentions, I haven’t been posting here regularly this fall. As always, I hope to remedy that as I don’t imagine ever entirely abandoning Screenshot.

However, in my absence from this space, I have been somewhat more active in another corner of the internet. After some false starts and a general sense of apathy about the service, I have found a use for Twitter that seems to be working for me, mostly as a replacement for Delicious, which I found had become cumbersome at some point a few years ago (at least for the ways I was using it).

So, tweeting under @ProfAMH, I’ve been linking to stories I want to keep track of but don’t have full weblog posts about. Most of the time, this means they’re stories I want to be able to pull up again for use in a class I’m going to teach, which means most of my tweets are labeled with hashtags like #cis105 (for my games course), #cis335 (for my security course), and so on. At my most optimistic I hope this might be interesting for students (or even alumni) and more appealing than following my very-sporadically-updated weblog. I’ve also started collecting some links under an #InterdisciplinaryCS tag as I’ve found myself involved in more projects related to interdisciplinary computing, CS in the liberal arts, and related topics over the past year.

I’m not sure if I’ll ever get on board with the social interaction aspect of Twitter, but for now this is helping me keep track of content, and if others find it interesting or helpful, that’s just a bonus. My goal for the coming year is to make consistent use of it while phasing back into the habit of posting here as well.

And so into the new year we go, with the best of intentions for positive habits of social media usage!

Next, they rise up and kill us all….

My most recent weblog post was on teaching ethics to self-driving cars, flippantly titled At least they’re not using GTA as a data source. Except….

Self-Driving Cars Can Learn a Lot by Playing Grand Theft Auto

Let’s console ourselves that “there’s little chance of a computer learning bad behavior by playing violent computer games” and instead admire the clever efficiency of allowing them to get practice navigating the complexities of realistic roads. And, in this case, it does seem that they are just extracting photo-realistic screenshots rather than having to produce authentic training data, which is a cool trick.

But the fact that the type of data you feed into a machine learning algorithm affects the type of results you get does keep rearing its head.

At least they’re not using GTA as a data source

MIT’s Media Lab wants you to help crowd-source solutions to the Trolley Problem as a decision-making data set for self-driving cars. This is exciting news, because asking the internet to solve tricky moral dilemmas using binary decision making will surely reflect our societal values accurately.

Putting aside snarky skepticism, I had the following thoughts as I went through a judging session:

  • Having to pick one of these options without any “it depends” or “I don’t want to choose” selection got uncomfortable fast.
  • After a few scenarios I started to question the definiteness of the outcomes. How is there equal certainty that plowing straight ahead through four people will kill all of them, but swerving and colliding with a barrier will absolutely kill all passengers? Are self-driving cars not allowed to have airbags?
  • I wonder if they are storing data about how long people spend on each question and whether they actually read the scenarios. Overall, I wonder how they intend to use this data.
  • Best scenario I encountered on multiple trials was a self-driving car transporting two cats and a dog when its brakes fail with three pedestrians in the road in front of it. The two cats made the dog sit in the back seat.

Finally, a note on the “Results” page you get to – if you land there and find yourself bothered by some of the values it attributes to you, keep in mind that the data sample is far too small relative to the number of things that change between scenarios. I just did a run-through to see what my results looked like if, without considering any other details, I applied the principles that (1) if the choice exists to hit a barrier and harm those in the car rather than those outside the car, always take that choice, and (2) if the choice must include hitting others, maintain your course (presumably hitting the brakes) and drive predictably rather than swerving erratically. Based on an application of ONLY those rules and the scenarios I happened to be served up, I was able to also create a 100% preference for saving children and criminals and always hitting physically fit people.
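To make concrete just how little those two rules consider, here is a minimal sketch of the decision procedure I followed. This is my own reconstruction for illustration, with hypothetical scenario fields, not anything from the Moral Machine itself:

    enum Action { STAY_COURSE, SWERVE }

    // Hypothetical scenario representation: only the barrier positions matter
    // to the two rules; every other detail (age, fitness, species, criminal
    // record) is deliberately ignored.
    class Scenario {
        boolean stayingHitsBarrier;  // does continuing straight sacrifice the passengers?
        boolean swervingHitsBarrier; // does swerving sacrifice the passengers?
    }

    class TwoRuleJudge {
        static Action choose(Scenario s) {
            // Rule 1: prefer harming those in the car over those outside it.
            if (s.stayingHitsBarrier)  return Action.STAY_COURSE;
            if (s.swervingHitsBarrier) return Action.SWERVE;
            // Rule 2: if pedestrians must be hit either way, maintain course
            // and drive predictably rather than swerving erratically.
            return Action.STAY_COURSE;
        }
    }

Since age, fitness, and criminal record appear nowhere in the decision, any preference the results page reports along those dimensions is purely an artifact of which scenarios happened to be served up.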

Exploring for information

This article about how librarians can get students to start understanding the scholarly frame for exploring information contains a series of good quotes that highlight the general shape of its argument:

That exploration is important in learning: “When small children observe and imitate, they are testing the physical world around them and coming up with their own understanding of how things work. Explicit instruction short-circuits that process.”

That various pressures prevent students from seeing library research as exploration: “They are intensely curious about what the teacher wants, if not about the topic they’re researching, and often focus on getting that boring task done as efficiently as possible. It’s not just that there’s no time for creativity, or that they think creativity is a violation of the rule that you have to quote other people in this kind of writing. It’s simply too big of a risk.”

That further, they may not know exploration is the goal, particularly given the rules-focused manner they may have been taught about scholarly citation: “If you learn how to cite a source before you’ve had any experience seeing how scholarly writing is webbed together through these not-so-hyper links, if you’ve never sought a source that you first encountered in another source, this citation business is simply a matter of compiling an ingredients list that’s required by law.”

I’ve been having a few conversations recently about the start of college as a socialization process into academic norms and expectations, and about where and how much of that is required. The theme of how we read and why we read that way has come up, and it does seem to connect nicely to this idea of understanding how we explore a scholarly body of knowledge.

It also reminds me of a goal I need to get back to for my fall offering of programming. I tell students that they may use Google searching, Stack Overflow, and the like as sources of ideas and even code for their homework, so long as they comment where any copied code comes from. But I also warn them that homework is written with an eye to what they have learned so far, whereas internet sources may draw on the entire complexity of Java, so they may actually find it harder to get code copied off the internet to function within the constraints of an assignment than to go back to the textbook and class examples and think through a solution from there. A colleague pointed out that it might help drive that point home if I create an exercise that shows how the code that comes up in a search complicates rather than simplifies problem solving. I do this already with an exercise reinforcing the importance of reading the description of a method, not just its name. So as a small piece of this much larger puzzle, I will be spending some time looking for a simple programming problem that the internet makes much too hard, but that looking carefully at the resources you already have in front of you will make easy.
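For a flavor of the kind of trap I have in mind, here is a hypothetical illustration (not the actual exercise from my course) of why the description of a method matters as much as its name:

    import java.util.ArrayList;
    import java.util.List;

    public class MethodNames {
        public static void main(String[] args) {
            String price = "1.50";
            // Looks like it should strip the decimal point, but replaceAll()
            // treats its first argument as a regular expression, and '.'
            // matches every character.
            System.out.println(price.replaceAll(".", ""));   // prints "" (empty)
            // Reading the docs reveals the regex parameter; escape the dot,
            // or use replace(), which works on literal strings.
            System.out.println(price.replaceAll("\\.", "")); // prints "150"
            System.out.println(price.replace(".", ""));      // prints "150"

            // Another classic: remove(int) removes by index, remove(Object)
            // removes by value, and a List<Integer> can hit either overload.
            List<Integer> nums = new ArrayList<>(List.of(10, 20, 30));
            nums.remove(1);                   // removes the element at index 1
            System.out.println(nums);         // [10, 30]
            nums.remove(Integer.valueOf(10)); // removes the value 10
            System.out.println(nums);         // [30]
        }
    }

Both behaviors are spelled out in the Java documentation; neither is guessable from the method name alone.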

Getting to Effective Ed-Tech

This Chronicle article discussing the Jefferson Education incubator at the University of Virginia has been rolling around in my head the past couple of days as I’ve been part of a number of conversations about education, computing, and classroom technology. The problem Jefferson Education says they are setting out to solve is that the ed-tech industry is light on efficacy research for the technologies they are selling, and “universally, everyone thinks it’s not their fault or not their problem” that the research isn’t a bigger part of the purchasing equation. So their group will study “the political, financial, and structural barriers that keep companies and their customers from conducting and using efficacy research when creating or buying ed-tech products”.

A chain of reactions that I have to this:

  • Of course technology companies sell products that they claim will make your lives better, faster, easier without any proof or research evidence. I’ve had a conversation with a vendor where I raised a question about the underlying assumptions of their product, citing relevant literature, and not surprisingly the sales team didn’t engage in the question of whether we should want to do what their technology did, instead returning the conversation as quickly as possible to how well their technology does what it does.
  • Educational institutions may suffer if their purchasing processes for ed-tech are the same as their purchasing processes for infrastructural tech. Researching the best balance between cost and quality for wifi routers for the campus is not the same process as researching the best balance between cost and quality for an LMS, not least because there are understood benchmarks for wifi routers that can be independently tested fairly easily.
  • It’s almost a hopeless project to come up with one objective evaluation of the effectiveness of a piece of ed-tech as a stand-alone artifact to purchase or not, because its effectiveness is entirely related to how an instructor uses it and, even prior to that, what the goals are in using that piece of technology in the first place.
  • The availability of funds to press innovative ed-tech into classrooms likely has a role in where we’ve ended up, because institutions have been financially rewarded for taking big steps quickly – often, I suspect, with purchases of technology occurring before conversations about how it will be used. This incentivizes ed-tech companies to move quickly and market the novelty of their technology rather than move judiciously and market its proven effectiveness.
  • We should all keep in mind that this experimentation is taking place in classrooms where the individual students are getting their one and only education. If ed-tech companies are following the same trends as other startup tech companies, there’s a lot of advice out there to “fail fast” on the way to innovation. But “failing fast” in ed-tech means that not only did a school spend money poorly, a group of students may have been deprived of the effective education they would otherwise have received.

Prioritizing beyond deadlines

A friend shared on Facebook this article about university students struggling to read entire books, and while there are many thought-provoking things here, one quote in particular struck me:

“I would say that it is simply a case of needing to prioritise,” said Ms Francis, “do you finish a book that you probably won’t write your essay on, or do you complete the seminar work that’s due in for the next day? I know what I’d rather choose.”

Reading that sentence, I had the dual reactions of “of course” and “the fact that that’s the decision is the problem”.

I am not going to pretend that I don’t have days (or at this time of year, weeks) when I’m just working to keep my head above water with deadlines. But I also think that one of the “soft skills” that college should help develop is how to manage your time in a manner that lets you both meet immediate deadlines and allocate needed time regularly to longer-term projects. In any job, you could let every day get filled up with short-term tasks that legitimately do need to get done – meetings, emails, forms and reports, etc. But the big, interesting, meaningful projects only get done if they also get regular attention, because by their nature, they can’t be completed at the deadline, or at least they can’t be completed with any level of quality at the deadline.

This skill of balancing short-term and long-term work reminds me of a conversation I saw somewhere (though I forget where) in which a recent college graduate was struggling with their productivity when given larger projects to complete and was wondering how to ask their boss to assign all their work as broken-down tasks with daily deadlines. While I applaud the self-awareness to recognize their productivity problem, I wonder if more practice in college at balancing the demands of tomorrow’s exam with the need to keep making progress on the book being discussed in seminar next week would have helped them with their professional productivity now.

And I connect this to a struggle I have in my own teaching. Where is the balance between breaking things down into manageable pieces with checkpoints and regular deadlines versus giving students opportunities to practice setting their own agendas and priorities? Certainly the level of the course comes into play, but even at the upper level, it is hard to stand back and watch a student not doing the work they ought to be doing to stay on track. When that happens, my first reaction is to ask if I should add more to my syllabus – more deadlines, more check-ins, more progress reports, etc. – things to make sure students are keeping pace with their work or to let me intervene if they aren’t. But if we all do that, aren’t we just adding to the problem of more deadlines and less practice with setting one’s priorities and managing the consequences if those priorities didn’t get you where you needed to go? Do we stop assigning books because students can’t find the time to read them, or do we keep assigning books because it is important that students think about what choices they could make to create the time to read them?

Beautiful Lumino City

I try out many more games than I finish – even when they’re short – from a combination of lack of attention span and lack of skill. So it stands out to me when I finish a game of more than trivial length. This weekend I played through the end of Lumino City and I can say with confidence that I would have finished this one off even without a snowstorm keeping me inside.

The major selling point of Lumino City, which by itself is enough to make it worthwhile, is the artwork. The scenes in the game are entirely handmade out of “paper, card, miniature lights and motors”. As you explore the game world, you navigate your character through scale models. The effect is beautiful, and when you remember to think about the work that must have gone into pulling it off, it’s awe-inspiring. Even if you aren’t a point-and-click puzzle game fan (or a game fan in general) I highly recommend watching the game trailer at their site to get a sense of the scale of this thing. In fact, if you’ve played the game, go back and watch the video again – I had forgotten that they literally built the whole game world as a massive model they could pan a camera around, though it makes sense as I remember how transitions between scenes take place, with one portion of the world fading out of focus as the next fades in.

As far as gameplay, I enjoyed it quite a bit as well. I found the challenge level of the puzzles about right for a casual game where I wanted to enjoy the world as much as my activities in it, and none of them rushes you through. It’s natural in this sort of game to be aware of the puzzles as a somewhat artificial barrier constructed to stop you from progressing, but here their content supports the presentation of the world and the progression of the story (though this is not a story-heavy game).

I really liked how the game built in hints, embedding them in a book the character’s grandfather gives her at the start of the game. The book is not just a short manual of puzzle solutions, though; to find the hints for a puzzle, you first have to solve a math problem based around properties of the puzzle you are stuck on to compute which of the 800-some pages in the manual you’ll find that hint on. I like that model of having to solve a smaller mini-puzzle to get the hint, as well as the fact that that extra bit of friction keeps you from turning to a hint page without being sure you want to or accidentally seeing a hint for the next puzzle when looking up the one you’re trying to find.

User Tracking Apps

There’s an interesting story out there about ads that play ultrasonic sounds that permit cross-device tracking. While this is being described as detecting devices that all belong to one user, it seems possible it would sometimes detect devices all belonging to the same family – a slightly different task but also one marketers are interested in solving. It likely depends on where and how frequently these linking ultrasonic sounds are emitted.
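To make the mechanism a little more concrete, here is a toy sketch of how an ad might encode a short identifier as near-ultrasonic tones. The frequencies, bit rate, and identifier size are purely my own illustrative assumptions, not SilverPush’s actual scheme:

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.LineUnavailableException;
    import javax.sound.sampled.SourceDataLine;

    // Toy beacon: transmits a hypothetical 8-bit tracking ID as frequency-shift
    // keying near the top of human hearing. All parameters are illustrative.
    public class UltrasonicBeacon {
        static final float SAMPLE_RATE = 44100f;
        static final double FREQ_ZERO = 18000.0; // tone meaning "0" (assumed)
        static final double FREQ_ONE = 19000.0;  // tone meaning "1" (assumed)
        static final double BIT_SECONDS = 0.1;   // duration of each bit

        public static void main(String[] args) throws LineUnavailableException {
            int trackingId = 0b1011_0010; // hypothetical ad/campaign identifier
            AudioFormat fmt = new AudioFormat(SAMPLE_RATE, 16, 1, true, false);
            try (SourceDataLine line = AudioSystem.getSourceDataLine(fmt)) {
                line.open(fmt);
                line.start();
                for (int bit = 7; bit >= 0; bit--) { // most significant bit first
                    double freq = ((trackingId >> bit) & 1) == 1 ? FREQ_ONE : FREQ_ZERO;
                    byte[] samples = tone(freq);
                    line.write(samples, 0, samples.length);
                }
                line.drain();
            }
        }

        // One bit's worth of a sine wave as 16-bit little-endian mono PCM.
        static byte[] tone(double freq) {
            int n = (int) (SAMPLE_RATE * BIT_SECONDS);
            byte[] buf = new byte[2 * n];
            for (int i = 0; i < n; i++) {
                short s = (short) (0.3 * Short.MAX_VALUE
                        * Math.sin(2 * Math.PI * freq * i / SAMPLE_RATE));
                buf[2 * i] = (byte) (s & 0xff);
                buf[2 * i + 1] = (byte) ((s >> 8) & 0xff);
            }
            return buf;
        }
    }

A listening app with microphone access would do the reverse: sample the audio, look for energy at the two frequencies, and reassemble the bits. Nothing about this requires the sound to be audible or the user to be aware it is happening.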

And, as I’ve seen others note and as is alluded to late in this article, the SilverPush software development kit that is largely being credited with current implementations of this technique seems an awful lot like malware:

Use of ultrasonic sounds to track users has some resemblance to badBIOS, a piece of malware that a security researcher said used inaudible sounds to bridge air-gapped computers. No one has ever proven badBIOS exists, but the use of the high-frequency sounds to track users underscores the viability of the concept. Now that SilverPush and others are using the technology, it’s probably inevitable that it will remain in use in some form. But right now, there are no easy ways for average people to know if they’re being tracked by it and to opt out if they object.

Of course, this also reminds me of an article from a few weeks ago reviewing a study of 110 popular, free smartphone apps: User data plundering by Android and iOS apps is as rampant as you suspected. If you want to feel really helpless, consider the one piece of protective advice that article is able to suggest: “One thing app users can do to safeguard their personal information, the researchers suggest, is to supply false data when possible to app requests.” I wonder if paying for apps rather than choosing free alternatives would have any positive effect.