I'm trying to define a research plan, or story, or something, that I can both work on and apply for fellowships with. Right now, a lot of the work I'm interested in feels related to me, but it's hard to explain to other people, which means it's not well-defined enough. In this post, I'm working on that.
All our interactions with computers/smartphones now are both intentional and slow. By "intentional" I mean that you have to think about getting out your computer (or phone), and by "slow" I mean at least on the order of seconds, if not minutes. Right now, get your phone out and check the weather forecast, and count seconds while you do it. (Just tried it; 23 seconds.) I want to break both of those constraints.
Why?
When you remove "intentional", you move from the slow brain to the fast brain. You get North Paw and Lumoback, systems that train you on a physical level; ambient systems that help you change behavior, like DriveGain, Breakaway, and UbiFit (some slides); and the Reminder Bracelet's ambient notifications. I'm not sure how best to put this, but it feels like when you do things unconsciously/unintentionally, you can learn procedural skills or adapt physical movements without added cognitive load. "Human attention is the scarce resource", and these systems give you something for free.
When you remove "slow", a lot more things become possible.
Thad Starner mentions the "two-second rule" in wearable interaction (IEEE article): people optimize their systems so they can start taking notes within two seconds of realizing the need. Daniel Ashbrook, in his thesis, defined microinteractions as interactions under four seconds, start to finish. At Google, speed was a big emphasis, and they're right: if something takes longer, people will use it less. (I wish I had a good citation for this.)
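To make that budget concrete, here's a toy sketch (my own illustration, not from Starner's or Ashbrook's work) of timing an interaction against the four-second cutoff:

```python
import time
from contextlib import contextmanager

MICROINTERACTION_BUDGET_S = 4.0  # Ashbrook's threshold, start to finish

@contextmanager
def timed_interaction(name):
    """Time one interaction and report whether it fits the budget."""
    start = time.monotonic()
    yield
    elapsed = time.monotonic() - start
    verdict = "microinteraction" if elapsed <= MICROINTERACTION_BUDGET_S else "too slow"
    print(f"{name}: {elapsed:.1f}s ({verdict})")

# Usage: wrap the whole task, from reaching for the device to done.
with timed_interaction("check weather on phone"):
    time.sleep(0.1)  # stand-in for the actual interaction
```

The point is that the clock starts at the moment you realize the need, not at the first keystroke.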
Interactions are also overt: everyone can tell when you're computing. Breaking that constraint lets you interact with your computer without people knowing, which seems useful. Enrico Costanza has worked on "intimate interfaces" (EMG armband, eye-q glasses). ("Intimate interfaces" is overloaded to also mean "interfaces that allow intimacy", e.g. among remote couples or family; I'm not talking about that here.) Is this good? Detractors might argue that if there are social cues against something, they're there for a reason: nobody wants you to be computing when you're trying to talk with them. But two things: first, it's a tool like anything else and should be used wisely; second, people already do these things with their phones. They get buzzed, answer texts, silence their phones, etc. Still, I'm not sure I want to get rid of "overt", or at least not necessarily.
How?
Watches:
Mounting things on the wrist can cut down the action time by up to 78%. You can use round touchscreen watches. Conveniently, the Pebble watch is now somewhere in the production stages, and the Metawatch is... shipping? InPulse has been around for a couple of years, but it's a little clunky and doesn't have the battery life.
PinchWatch might not be what you think of as a watch (besides the display); a lot of the interaction is done by pressing fingers together.
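As a rough illustration of that style of input (this chord vocabulary is invented, not PinchWatch's actual one):

```python
# Hypothetical mapping of finger-press "chords" to commands. PinchWatch's
# real vocabulary is richer and also uses the wrist display for menus.
CHORDS = {
    frozenset(("thumb", "index")): "select",
    frozenset(("thumb", "middle")): "next",
    frozenset(("thumb", "ring")): "previous",
}

def handle_pinch(*fingers):
    """Map a detected finger contact to a command, or None if unmapped."""
    return CHORDS.get(frozenset(fingers))

print(handle_pinch("thumb", "middle"))  # -> "next"
```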
Nenya is a ring/watch system, getting really minimal. I like it. It reminds me of Abracadabra, due to the magnetic sensing, but now the ring is just a regular-looking ring you could wear anytime.
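My rough mental model of the sensing, with made-up readings and thresholds (not Nenya's actual pipeline): the watch's magnetometer watches the ambient field, and twisting the magnetized ring shows up as a rotation of the field vector.

```python
import math

def field_angle(x, y):
    """Angle of the magnetic field vector in the sensor's x-y plane."""
    return math.atan2(y, x)

def detect_twist(samples, threshold_rad=0.5):
    """Report a 'twist' gesture when the field angle rotates past a threshold.

    samples: list of (x, y) magnetometer readings over a short window.
    """
    angles = [field_angle(x, y) for x, y in samples]
    delta = angles[-1] - angles[0]
    # Wrap into (-pi, pi] so a rotation across the +/-pi boundary isn't huge.
    delta = (delta + math.pi) % (2 * math.pi) - math.pi
    if abs(delta) > threshold_rad:
        return "clockwise" if delta < 0 else "counterclockwise"
    return None

# Fake readings: the ring twisting rotates the field about 60 degrees.
window = [(1.0, 0.0), (0.9, 0.4), (0.7, 0.7), (0.5, 0.87)]
print(detect_twist(window))  # -> "counterclockwise"
```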
Pockets:
You can touch your phone in your pocket, even doing Palm-Graffiti-style input (PocketTouch). More simply, for some tasks you could just hit your phone (Whack Gestures). Sami Ronkainen et al. investigated this first, though they hit more false positives. (They also found taps to be more natural/accepted than gestures.)
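At its core, a whack detector is just spike detection on the accelerometer; here's a minimal sketch (thresholds invented; the actual papers do more careful filtering to cut down false positives):

```python
def detect_whacks(accel_magnitudes, spike_g=2.5, refractory=10):
    """Return sample indices where a whack-like spike occurs.

    accel_magnitudes: acceleration magnitude per sample, in g.
    spike_g: magnitude that counts as a whack (invented threshold).
    refractory: samples to ignore after a hit, so one whack isn't counted twice.
    """
    hits, cooldown = [], 0
    for i, g in enumerate(accel_magnitudes):
        if cooldown > 0:
            cooldown -= 1
        elif g >= spike_g:
            hits.append(i)
            cooldown = refractory
    return hits

# A quiet signal around 1g with two sharp spikes (a double whack).
signal = [1.0] * 20 + [3.1] + [1.0] * 15 + [2.9] + [1.0] * 20
print(detect_whacks(signal))  # -> [20, 36]
```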
Speech:
This seems obvious, right? But it's not. First, talking while walking around is weird. (Ever play Bluetooth Or Crazy?) Second, it's not easy to get audio input into your phone in your pocket unless you're wearing a Bluetooth headset or something.
Sunil Vemuri et al.'s "Memory Prosthesis" was one approach, focusing on recording nearly continuously and then searching; the search is less interesting to me, but the continuous recording is useful. Ben Wong experimented with dual-purpose speech: the user wouldn't give direct commands to a system; rather, the system would harvest information from things the user was already saying. The Personal Audio Loop recorded your last 15 minutes of audio and let you go back and search within it.
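The continuous-recording part is basically a ring buffer: keep the last N minutes, drop the oldest audio as new audio comes in. A toy sketch (the chunk sizes and API here are made up; this isn't PAL's implementation):

```python
from collections import deque

class AudioLoop:
    """Keep only the last N seconds of audio, PAL-style."""

    def __init__(self, seconds=15 * 60, chunk_s=1.0):
        self.chunk_s = chunk_s
        # A deque with maxlen silently drops the oldest chunk when full.
        self.chunks = deque(maxlen=int(seconds / chunk_s))

    def record(self, chunk):
        """Append one chunk of audio (e.g. raw PCM bytes)."""
        self.chunks.append(chunk)

    def replay(self, last_seconds):
        """Return the most recent chunks covering last_seconds of audio."""
        n = int(last_seconds / self.chunk_s)
        return list(self.chunks)[-n:]

loop = AudioLoop(seconds=15 * 60)
for t in range(20):
    loop.record(f"audio-chunk-{t}")  # stand-ins for real audio data
print(loop.replay(5))  # -> the last five 1-second chunks
```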
Comments:

Commenter: Waiting for Narya and Vilya! ;)

Me: Good name, right?

Commenter: Kevin Kelly mentioned these ideas, albeit briefly, during his closing talk at QSCON this year. The idea of an exosense is powerful and worth exploring. I think the most powerful example of the possibilities in this space would be the combination of Google Glass + EEG headsets (sensors). The automatic, non-physical, intentional, connected (internet) action would be powerful and would require training (like all other senses/abilities), but could be pretty awesome...

Me: Ooh, that would be neat. Glass + EEG gives you, at first, a heads-up display of your EEG (which is cool in itself), but maybe then it could also gather data about correlations between things you see and your EEG state.