Robots are great, but where will I put all my stuff?

I was catching up on some podcasts on a recent road trip and listened to an interesting two-part series on vehicle automation from 99% Invisible: Episode 170: Children of the Magenta, which looks at the effect of fly-by-wire and airplane flight automation on flight safety, and Episode 171: Johnnycab, on automotive automation.

Overall, the two episodes focus on the “automation paradox”, roughly the idea that as we automate more, we reduce our capability to deal with problems when automation fails. So, if automated cars become the norm, for the first stretch of time, essentially all drivers will still have experience driving without automation. But, after a generation of automated cars being the norm, the average driver will no longer have experience in taking control of a car if the automation fails. Within the airline industry, one proposed practice to counteract the paradox is to have pilots regularly turn off automation to maintain their manual flight skills. However, in the case of cars, that would require automated cars that still have the necessary components, like steering wheels and pedals, to enable manual driving, which is not everybody’s vision of automated cars. It’s an interesting discussion of how to design for safety and what safety goals one even has for automated vehicles.

The second episode also discussed another assumption behind automated cars that I’ve seen elsewhere: that in a world of automated cars, people would no longer own their vehicles but would simply call for and use cars on an as-needed basis – a world of robot-taxis. Various objections to this idea are discussed, but one I’ve not seen mentioned is how poorly this model would work for many families. I think about my friends with three children all of car-seat age – would they have to put in and remove car seats every time they went somewhere? Request and only be served by vehicles with three car seats (of exactly the right combination of sizes) pre-installed? And what about all of the “stuff” you travel with when you have children? Most parents I know not only carry a diaper bag with them but also keep a stash of backup supplies in their car – does that now get carried with you every place you go?

If your primary model of car usage is commuting, and particularly if you live in a setting where your daily commute is more than 10-15 minutes, I can see robot-taxis replacing traditional car ownership. There are already car-share programs out there that seem fairly successful. But when automated car discussions start moving towards plans where only automated cars are on the road (so as to, say, enable narrower highway lanes to increase capacity), there are a lot more complicated barriers that would have to be overcome.

Argument for Ambiguity

I was directed to a recent piece about tolerance for ambiguity as a job requirement, and as a skill education should help develop, by this quote from a responding blog post: “To the extent that we can provide assignments and experiences in and among classes that give students the experience of getting a little lost and finding their way back, we may be able to build some of that tolerance for ambiguity in the kind of settings Selingo discusses.”

While the original article focuses more on the idea of a “growth mind-set” and encouraging students to think of perseverance rather than innate intelligence as their most valuable asset, from a higher education perspective, I find the reflections about the value of introducing ambiguity into assignments more compelling. Another quote that echoes what I see in my students when presenting them with open-ended, and thus ambiguous, assignments: “In thinking about my own tolerance for ambiguity, I wouldn’t call it high or low. It varies, and I think the major independent variable is my own feeling of competence in the situation. When I feel like I can handle whatever the situation is likely to throw at me, ambiguity isn’t a problem. When I’m utterly lost, ambiguity can feel threatening. The key issue isn’t so much ambiguity or the lack thereof, but its possible outcome and my own sense of vulnerability.”

I see precisely this tension in students every semester, particularly those who are new to our courses and to the expectation, common throughout most of them, that they take responsibility for exploring options and refining the scope of a problem for themselves. It’s a tricky balancing act to present just enough uncertainty in assignments that they get to have this valuable experience, but not so much that the feeling of vulnerability blocks their openness to exploring. This framing of why the ambiguity is intentional, and of its role as an employment skill, is an interesting angle on explaining the assignment design to students.

On a bit of a tangent, and returning to the original article, there is one sentence that jumped out at me as odd: “As artificial intelligence increasingly makes many jobs obsolete, success in the future will belong to those able to tolerate ambiguity in their work.” I suspect the point here is that tolerance for ambiguity is one of the higher-level problem-solving skills that are hard to automate out of the work force. But from an artificial intelligence standpoint, the statement is odd because the gap between artificial intelligence and simple computer automation frequently comes from AI being able to tolerate ambiguity and still function. This doesn’t invalidate the larger point – if the ability to function outside strict parameters is one of our tests for successful artificial intelligence, it’s no surprise that employers would like the same characteristic in their intelligent human employees. But on a technical level, the statement jumped out at me as missing some of what is exciting in AI work.

Exercising my writing muscle

I was flipping through Spolsky’s Joel on Software today and, perhaps because I spent the morning working with our college-wide curriculum and some of our documentation of its outcomes, this passage jumped out at me:

So why don’t people write specs? It’s not to save time, because it doesn’t, and I think most coders recognize this. […] I think it’s because so many people don’t like to write. Staring at a blank screen is horribly frustrating. Personally, I overcame my fear of writing by taking a class in college that required a 3-5 page essay once a week. Writing is a muscle. The more you write, the more you’ll be able to write. If you need to write specs and you can’t, start a journal, create a weblog, take a creative writing class, or just write a nice letter to every relative and college roommate you’ve blown off for the last 4 years. Anything that involves putting words down on paper will improve your spec writing skills. [from Painless Functional Specifications Part 1]

Yes, yes, yes! There’s no indication of the content of the class Spolsky is referring to, but it wouldn’t be surprising if it were a humanities course he took to meet a gen ed requirement. This isn’t the only value of technical students taking courses outside their major, but the likely increase in writing practice is a great one. And I love this example of someone reflecting back on a benefit of a course that probably wasn’t the motivation for signing up for it, and perhaps wasn’t even recognized at the time.

It also got me thinking about a related question: why don’t people read specs? I distribute programming assignments that resemble specs, and my lab tutors have learned that one of the first steps to helping a struggling student is to get them to go back and actually read what is in the specification. If you don’t like to read, digesting a written description of what you’re being asked to do will be painful. But you can similarly say that reading (particularly reading carefully) is a muscle that requires exercise. So, if reading well requires practicing reading well, perhaps we should be cautious of trends towards assigning less reading in favor of short excerpts, web links, and videos. In addition to conveying the content within the text, that reading practice might well help students do better on assignments that on the surface don’t have anything to do with reading.

Leaving time for focus

This quote from a recent Chronicle article, Infantilized by Academe, struck me, particularly amid the chaos of the end of the academic year:

Our students are often more distracted than we are, and so inured to distraction that they are unlikely to notice it. As other commentators have argued, the process of gaining admission to selective American colleges now requires presenting an array of accomplishments so vast and varied that any reflection that might accompany them is purely incidental.

This thought resonates with recent conversations I’ve been having with students and colleagues about the amount that students try to take on, and the difficulty many students have in recognizing the real cost of doing more. Yes, you can take on more courses and activities, but you will sacrifice the depth of attention each one can receive.

It creates an advising problem for me. By nature, I encourage my students to challenge themselves. Take the hard course they are interested in. Take on research projects. But I’m also finding myself trying to figure out how to advise moderation without advising complacency. Don’t sign up for three upper-level, project-based courses in the same semester – pick the one (or maybe two) that you care about, and make sure you get everything you can out of those courses. Don’t try to complete three or four programs – particularly if you’ll find yourself completing multiple capstones in the same semester and not able to fully dedicate yourself to any of them.

Or, at least, make those choices with your eyes open about what you’ll be sacrificing by going after quantity and decide that it’s a sacrifice you’re comfortable making.

Rescue Robots in the News

This semester my intro programming students are doing a very scaled-down model of how search-and-rescue robots might very stupidly explore a space while trying to keep themselves from clumping up with each other. It’s a first programming course for most of them, so, have I mentioned that these simulated robots are very stupid?
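To give a sense of just how stupid, here is a minimal sketch of the general flavor of simulation – not the actual course assignment, and the grid size, robot count, step count, and function names are all invented for illustration. Each robot random-walks on a grid, the set of visited cells stands in for “explored” space, and a move is rejected if it would land a robot on or next to another robot, which is the whole anti-clumping strategy:

```python
import random

# A rough sketch (not the actual assignment) of "stupid" search-and-rescue
# robots: each robot random-walks on a grid, visited cells count as
# "explored", and a move is skipped if it would put the robot on or next to
# another robot, so they don't clump up.

GRID_WIDTH, GRID_HEIGHT = 20, 20             # illustration-sized world
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # the four compass directions


def too_close(pos, robots, me):
    """True if pos is on or adjacent to any robot other than robot `me`."""
    x, y = pos
    return any(i != me and abs(rx - x) <= 1 and abs(ry - y) <= 1
               for i, (rx, ry) in enumerate(robots))


def step(robots, visited):
    """Each robot attempts one random move, skipping moves that clump."""
    for i, (x, y) in enumerate(robots):
        dx, dy = random.choice(MOVES)
        nx, ny = x + dx, y + dy
        if (0 <= nx < GRID_WIDTH and 0 <= ny < GRID_HEIGHT
                and not too_close((nx, ny), robots, i)):
            robots[i] = (nx, ny)
        visited.add(robots[i])


def simulate(num_robots=4, num_steps=500):
    """Scatter some robots, let them wander, and report grid coverage."""
    robots = [(random.randrange(GRID_WIDTH), random.randrange(GRID_HEIGHT))
              for _ in range(num_robots)]
    visited = set(robots)
    for _ in range(num_steps):
        step(robots, visited)
    coverage = len(visited) / (GRID_WIDTH * GRID_HEIGHT)
    print(f"Explored {coverage:.0%} of the grid in {num_steps} steps")


if __name__ == "__main__":
    simulate()
```

Run it a few times and the coverage is slow and uneven, which is more or less the point of the exercise: random wandering plus a “don’t crowd your neighbor” rule is a long way from intelligent exploration.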

However, since I’ve been playing around with their project, I seem to be seeing interesting content about search-and-rescue robots cropping up all over the place:

Last week (on April 23rd), there was a great NASA JPL livestream of a talk on Rescue Robots focusing in particular on RoboSimian.

A prototype of a new robot assistant to guide firefighters through buildings to aid with search and rescue was demonstrated.

Sadly, a shape-transforming robot got stuck inside a Fukushima nuclear reactor while trying to investigate the state of the plant to help with decommissioning.

Where’s my plow?

The snow situation isn’t as bad here in Western PA as it is on much of the east coast, but while waiting for things to lighten up enough for me to go out and shovel, I’ve been playing around with Pittsburgh’s new snow plow tracker. The system itself is only live while snow is falling – access it through the button on the right.

I like the use of the “multiple vehicle” icon to keep things legible when zoomed out. It took me a bit of playing around to realize that if you adjust the “history display” slider at the bottom of the screen, you can see the routes the plows took from the target time until the current time, which, in effect, lets you figure out which roads have been plowed in the last, say, three hours. I’d love to see an overlay of this with the Google traffic information since it’s based on a Google map, but maybe that can come in version two.

It’s a testament to how fun the system is that I’ve lost so much time playing with it even though it doesn’t actually cover any roads I’ll plausibly drive on in a snowstorm. Very interesting to see which roads never get plowed at all.

Counting down

I am a crazy fan of advent calendars. In addition to my physical calendar of ornaments, I’ve got a collection of online calendars I’m “opening” each day as well. Here are the favorites I found this year:

Saveur Cookie Advent Calendar: A new cookie recipe each day – check out day six’s Alfajores

Erik Svedäng’s Advent Calendar: Fun little widgets to watch and, in some cases, interact with

Advent of Indies: Each day another indie game is promoted alongside a freebie to enjoy (some available only on the day the door opens)

LEGO Star Wars Game Advent Calendar: play through different levels unlocked each day to collect pieces

The Economist Daily Chart Advent Calendar: An infographic roundup from the year, with a new one scheduled for release on the 25th

Lorem ipsum ipsum ipsum lorem

“While Google translate may be incorrect in the translations of these words, it’s puzzling why these words would be translated to things such as ‘China,’ ‘NATO,’ and ‘The Free Internet,’”

There is so much to love in this Krebs on Security exploration of what happens (or, at least, what used to happen) when you feed lorem ipsum text into Google Translate. Automatic translation algorithms, data sparsity problems, covert information channels… A bizarre, must-read article.

Patchwriting and attribution

If I were teaching a writing skills course this fall, I would be tempted to assign this Language Log post about another recent plagiarism accusation just because of the side-by-side comparison of language and its discussion of “patchwriting”. It would probably surprise some students to see the degree of difference between the compared texts, and that this is a concern even though the text in question is cited elsewhere, just not for some very specific phrases. Also interesting is the analysis of the older text for whether it too used and attributed patchwriting appropriately – we’re clearly able to spot these things more easily now that we have digital texts.

Free Service Botnets

How Hackers Hid a Money-Mining Botnet in the Clouds of Amazon and Others: a couple of security researchers built a botnet out of free accounts – potentially legally, they claim – rather than from hijacked computers. As a proof of concept they tested Litecoin mining, estimating that their constructed botnet could have brought in $1750/week if left running.

While the article cites Amazon and Google’s services as examples, the following suggests an alternate source for these vulnerable accounts:

Choosing among the easy two-thirds, they targeted about 15 services that let them sign up for a free account or a free trial. The researchers won’t name those vulnerable services, to avoid helping malicious hackers follow in their footsteps. “A lot of these companies are startups trying to get as many users as quickly as possible,” says Salazar. “They’re not really thinking about defending against these kinds of attacks.”

A brief mention late in the article of companies (not Amazon or Google) turning off services or shutting down entirely because of this type of malicious use suggests this may be a real barrier to entry in the cloud computing market.