No longer active

Comments have been turned off because of spam. Questions/comments: I'm at dantasse.com

Wednesday, October 24, 2012

Memorizing Names

Say I was talking to you and I told you one fact (like my name) and you wanted to memorize it, but we kept conversing. How would you do it?

You should rehearse it at T+8 seconds, T+14, T+32, T+86, and so on: the nth rehearsal comes at 3^n + 5 seconds. Here's why:

Spaced Repetition has been around a long time. The idea is that, if you're going to practice a fact N times to remember it, you should practice it over time, not all at once. (this is called the Spacing Effect.) You should be asked to reproduce the item, not simply shown it again. ("Testing Effect") Furthermore, these rehearsals should be in increasing intervals. ("Expanded Retrieval")

About the spacing effect: this has been shown repeatedly (see pretty much any link in this post where spaced practice beats massed practice).
About the testing effect: this has been shown repeatedly too, e.g. by Carpenter and DeLosh (2005).

About expanded retrieval: this is a little less clear. Pimsleur (1967) suggested exponentially increasing intervals. Landauer and Bjork (1978) found that increasing intervals (e.g. rehearse at 1-5-9 seconds) beat equally-spaced intervals (like 5-5-5) if you're testing yourself, but neither Carpenter and DeLosh (linked above) nor Balota et al (2007) found much support for the "increasing intervals is best" argument. Indeed, Karpicke and Roediger (2007) found that increasing intervals helped short-term recall, but equally-spaced intervals helped long-term recall. However, they found that this effect may be due to the equally-spaced schedule's lack of an immediate first recall (the "1" in a 1-5-9 schedule): just delaying the first test by 5 seconds makes that first retrieval harder, and the harder retrieval helps long-term recall. So it seems like you should be able to get the best of all worlds by adopting an increasing-intervals schedule, but delaying the first review.

Another consideration is that this is the real world, not a 3-repetition study in the lab, and increasing intervals scale better. If you start practicing every 5 seconds, by the time you're at repetition 10 you'll be fed up, whereas if you go with 3-9-27-81 etc. the intervals will quickly become so infrequent that you're not bothered.

But what should the first interval be? Peterson and Peterson (1959) show that recall at 3 seconds is better than at 6, at 6 better than at 9, etc., and we want people to remember the item at the first test, so we might as well make the first interval 3 seconds. But, as Karpicke and Roediger (linked above) noted, we don't want that first test to be too easy. Helpfully, the same Peterson et al (1963) found 8 seconds to be the best first interval.

So, how about a 3-9-27-81 schedule plus a 5-second delay, giving tests at 8-14-32-86-etc? Whew! Well, whatever; increasing intervals plus a delayed first test sounds like at least a pretty good way to go.
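The whole policy is one line of arithmetic. Here's a minimal sketch in Python, as a blocking command-line toy; the function names and the quiz loop are mine, not from any of the papers above:

import time

def rehearsal_times(n_rehearsals, base=3, delay=5):
    # The nth rehearsal lands base^n + delay seconds after learning:
    # 8, 14, 32, 86, 248, ... with base=3 and delay=5.
    return [base ** n + delay for n in range(1, n_rehearsals + 1)]

def drill(fact, n_rehearsals=5):
    # Blocking toy loop: prompt for the fact at each scheduled moment.
    start = time.time()
    for t in rehearsal_times(n_rehearsals):
        time.sleep(max(0, start + t - time.time()))
        guess = input("Quick, what was it? ")
        print("yes!" if guess.strip().lower() == fact.lower() else "nope: " + fact)

drill("Dan")  # tests you at T+8, 14, 32, 86, and 248 seconds

rehearsal_times(5) gives [8, 14, 32, 86, 248], matching the schedule above.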

Hmm... but if you're not in a lab, talking to an experimenter, how will you remember to test yourself at all these intervals? Hey, what if we could do that with a system using instant, unconscious, subtle microinteractions...
Stay tuned!

Saturday, October 20, 2012

Microinteractions: the book

Since I came across Daniel Ashbrook's thesis, I've been thinking about "microinteractions": "interactions with a device that take less than four seconds to initiate and complete." I'm interested in expanding the space of possible microinteractions that people use.

Now this fellow Dan Saffer is writing an O'Reilly book with the title Microinteractions. You can currently read a draft of the first chapter. Sounds like he's using a more general definition of the term, to include lots of non-mobile interactions: the "see translation" button on Facebook, the password entry form on Twitter, calendar apps including the duration and end time when you're scheduling a thing. I like it. It leaves me wondering: is this just "everything" now? Is "microinteractions" a synonym for "details", something that of course we should focus on but nobody's going to have some big revolutionary ideas about? Or is this part of a big shift in thinking, now that we've got enough computing resources to actually make meaningful and positive microinteractions?

Incidentally, your potential microinteraction of the day: squeeze your phone in different ways as you pull it out of your pocket, depending on how you want to use it.

Thursday, October 11, 2012

UIST 2012 highlights


... in my humble opinion, based on my particular interests:

Tactile/hands/fingers:

Watches are old news. How about having 6 watch screens in a bracelet shape around your wrist? It's clunky now, but who knows. They can detect pose and interact smoothly as needed.

A ring for input with 4 sensors. Clever: recognizes IR reflection and skin deformation to tell whether you're clenching, bending, pushing, flipping, or rotating the ring. Detects position/rotation by melanin content (which varies around your finger). Wired currently (and a ring is so small it makes me wonder if the wire could be removed).

Camera/LED on your finger for always-available input. The camera is 1x1mm, 148x148 pixels ("NanEye"). It reads "fake textures": ASCII characters printed into patterns. Downsides: wired, and ~1s latency on touches. Still, cool!

IR laser line and camera mounted on your wrist to detect your finger positions. Allows finger gesture prediction, 3D manipulation, etc. It's a bulky box now, but you could imagine it shrinking. Between 2 and 9 degrees of error, which is good enough for a lot of tasks.

Wear gloves so when you're looking at a wall of stuff you can find the exact thing. Sounds like it could be useful. (the trick is finding a task where the computer knows where the right thing is, but you still have to find it with your hands.)

An addition to phone calls: you can squeeze the phone, and it vibrates against the receiver's ear. Four intensity levels, from light to sharp. This might sound a little silly, but:
- they tried it with 3 couples for a month, and they all sent at least one Pressage every single call.
- they wanted to have it available other times too, and as another channel of communication (e.g. light buzz = "I'm coming home")
Assuming they didn't just get 6 quirky people in their study, people will use this. It's super intuitive and quick, and adds a layer of richness to phone conversations. (the channel during non-phone conversation is mostly a nice bonus, and is kind of tricky for a lot of reasons.)

Other Things:

Pan-tilt projector and Kinect so things can project and interact all around the office. Quite a feat of engineering.

Our current displays have around 100ms latency on touch. You can see this yourself: draw a quick line with your finger; it lags an inch behind you. What if instead we had 1ms latency? I tried the demo, and it is much slicker. Feels like you're moving real objects. Remember when Google started targeting latency and all of a sudden Gmail became a viable non-painful application? Latency matters on tablets too.

You know image histograms? What if you could select pixels by brightness or by blueness instead of by location, and edit them all based on that? Looks fun.
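To make the idea concrete, here's a toy version in Python with numpy. This is my own sketch of selecting-by-value, not the authors' tool, and the function name and luma weights are just illustrative:

import numpy as np

def edit_by_brightness(img, lo, hi, scale=1.2):
    # Select pixels by luma (a value), not by location, then edit them all.
    luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    mask = (luma >= lo) & (luma <= hi)
    out = img.astype(np.float32)
    out[mask] *= scale  # e.g. brighten just the midtones, wherever they are
    return np.clip(out, 0, 255).astype(np.uint8)

# photo = some HxWx3 uint8 array; this would boost its midtones:
# brighter = edit_by_brightness(photo, 80, 160)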

Ever made an iOS app with Interface Builder? You specify constraints (like "this text box aligns with the center of this image") and they are automatically maintained through resizes etc. ConstraintJS looks like a way you could do this on the web, and for more than UI layouts. You can make asynchronous calls and display all their states without the pain! http://cjs.from.so
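The core trick, independent of ConstraintJS's actual API (which I'm not reproducing here), is declaring a value as a function of other values, so it stays correct when they change. A five-minute Python version of that one idea:

class Cell:
    def __init__(self, value=None, formula=None):
        self._value, self._formula = value, formula
    def get(self):
        # Constrained cells recompute on every read; a real system
        # would cache and invalidate instead of recomputing.
        return self._formula() if self._formula else self._value
    def set(self, value):
        self._value = value

image_x, image_width = Cell(0), Cell(200)
# "this text box aligns with the center of this image":
textbox_x = Cell(formula=lambda: image_x.get() + image_width.get() / 2)
print(textbox_x.get())  # 100.0
image_width.set(400)    # resize the image...
print(textbox_x.get())  # 200.0: the constraint held, no manual update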

An IDE for developing camera- (i.e. Kinect-) based applications. If I made a camera app, I think I would want this.

Instead of expensive "clickers" which you might have used in university classes, just print everyone a QR-like card and have them hold it up. Cheap webcam takes a picture of the whole class.
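Here's roughly what the webcam side could look like with plain QR codes and a reasonably recent OpenCV; the paper's actual codes and decoder surely differ, and the "student_id:answer" payload format is my invention:

from collections import Counter
import cv2

def tally_votes(frame):
    # Decode every code visible in one webcam frame and count answers.
    detector = cv2.QRCodeDetector()
    ok, payloads, _, _ = detector.detectAndDecodeMulti(frame)
    votes = {}
    if ok:
        for payload in payloads:
            if ":" in payload:
                student, answer = payload.split(":", 1)
                votes[student] = answer  # one vote per student
    return Counter(votes.values())

# frame = cv2.imread("classroom.jpg")
# tally_votes(frame)  -> e.g. Counter({'B': 14, 'A': 9, 'C': 3})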

What if the default unit on the web were a JSON object instead of a hyperlink? Sounds like web-standards stuff, where it's only as awesome as its adoption (so, good luck), but it would be really useful in a ton of ways.

MTurk is great, but people cheat. Some tasks you can design around this, but some you can't. CrowdScape lets you visualize what people are doing as they do your task, easily weed out cheaters, use that as labeled input to bootstrap a machine learning system, and, more importantly, understand what work patterns lead to a good response. (Maybe tooting my school's horn a bit, but: it won a best paper award!)

Some folks made a braille tutor out of the pressure-enabled touchpad that we got for the Student Innovation Contest. Awesome. They won 2nd place; I'd have given them first. Our ambient stress sensor was neat but did not win. Nor did it deserve to; there was a lot of great stuff.

Cool posters:
MMG armband, Shumpei Yamakura (like EMG, but resilient to sweat; not sure how they'd compare)
MISO, David Fleer, Bielefeld; point and snap at your electronics
Tongue-finding with Kinect (i.e. for rehabilitation), Masato Miyauchi
Breathwear (band to detect when you're breathing), Kanit Wongsuphasawat

Cool coffeeshops in Cambridge: Crema in Harvard Square for cappuccino and Voltage by Kendall for a fine Guatemala Buena Esperanza roasted by Barismo.

Yes! Another good conference. People asked me multiple times "So how's UIST going?" Look, of course it's fun and full of cool people doing exciting stuff!


Monday, October 1, 2012

Thinking about unconscious/micro interactions

Trying to define a research plan or story or something that I can both work on and apply for fellowships about. Right now a lot of the work I'm interested in feels related to me, but it's hard to explain to other people, which means it's not well-defined enough. In this post, I'm working on that.

All our interactions with computers/smartphones now are both intentional and slow. By "intentional" I mean that you have to think about getting out your computer (or phone), and by "slow" I mean at least on the order of seconds, if not minutes. Right now, get your phone out and check the weather forecast, and count seconds while you do it. (Just tried it; 23 seconds.) I want to break both of those constraints.

Why?

When you remove "intentional", you move from the slow brain to the fast brain. You get North Paw, Lumoback; systems that train you on a physical level. Ambient systems which help you change behavior: DriveGainBreakawayUbiFit (some slides). The Reminder Bracelet: ambient notifications. I guess it feels like, and I'm not sure how best to put this, when you do things unconsciously/unintentionally, you can learn procedural things or adapt physical movement without increased cognitive load. "Human attention is the scarce resource", and these systems give you something for free.

When you remove "slow", a lot more things become possible.
Thad Starner mentions the "two-second rule" in wearable interaction (IEEE article): people optimize their systems so they can start taking notes within two seconds of realizing the need. Daniel Ashbrook, in his thesis, defined microinteractions as interactions under four seconds, start to finish. At Google, speed was a big emphasis, and they're right: if something takes longer, people will use it less. (wish I had a good citation for this.)

Interactions are also overt; everyone can tell when you're computing. Breaking that constraint lets you interact with your computer without people knowing, which seems useful. Enrico Costanza has worked on "intimate interfaces" (EMG armband, eye-q glasses). ("intimate interfaces" is overloaded to also mean "interfaces that allow intimacy", e.g. among remote couples or family; not talking about that here.) Is this good? Detractors might argue that if there are social cues against something, they're there for a reason. Nobody wants you to be computing when you're trying to talk with them. However, two things: first, it's a tool just like anything else and should be used wisely; second, people already do these things with their phones. They get buzzed, answer texts, silence their phones, etc. But I'm not sure that I want to get rid of "overt", or at least not necessarily.

How?

Watches:
Mounting things on the wrist can cut down the action time by up to 78%. You can use round touchscreen watches. Conveniently, the Pebble watch is now somewhere in production stages, and the Metawatch is... shipping? InPulse has been around for a couple years, but is a little clunky and doesn't have the battery life.
PinchWatch might not be what you think of as a watch (besides the display); a lot of the interaction is done by pressing fingers together. 
Nenya is a ring/watch system, getting really minimal. I like it. Reminds me of Abracadabra, due to the magnetic sensing, but now the ring is just a regular-looking ring you would wear anytime.

Pockets:
You can touch your phone in your pocket, even do a Palm-Graffiti-style input (PocketTouch). More simply, for some tasks you could just hit your phone (Whack Gestures). Sami Ronkainen et al investigated this first, though they hit more false positives. (they also found taps to be more natural/accepted than gestures.)

Speech:
This seems obvious, right? But it's not. First, talking while walking around is weird. (ever play Bluetooth Or Crazy?) Second, it's not easy to get audio input into your phone in your pocket unless you're wearing a Bluetooth headset or something.
Sunil Vemuri et al's "Memory Prosthesis" was one approach, focusing on recording nearly-continuously and then searching; the search is less interesting to me, but the continuous recording is useful. Ben Wong experimented with dual-purpose speech: the user wouldn't give direct commands to a system; rather, the system would harvest information from things the user was saying. The Personal Audio Loop recorded your last 15 minutes and let you go back to search within it.