No longer active

Comments have been turned off because of spam. Questions/comments: I'm at dantasse.com

Sunday, December 1, 2013

Professors Must Break Away From the Undergraduate Mentality

A partial response to Ph.D. Students Must Break Away From the Undergraduate Mentality.

Being an undergrad is characterized by classes. Endless, time-sucking, life-eating classes. Classes that you have to take, classes where grades matter, classes that will determine your employment after you graduate. At CMU as an undergrad, we took ~50 "units" per semester (~17 credit hours); a unit is nominally an hour of work per week, so we were "supposed" to work about 50 hours/week. We all probably worked more, because some "12-unit" classes took 20+ hours on their own. Undergrad was fun, but not sustainable. (I mean, we also lived in dorms. You can't do this forever.)

Immediately upon entering grad school (hopefully before), you'll be informed that classes, in fact, do not matter, and that you should stop optimizing for them. This is fine advice. After all, you won't be a great researcher if your main qualification is "did well in class." (Mor Harchol-Balter's excellent inside scoop on grad admissions makes this very clear: "did well in class" doesn't even get you into grad school, much less out.)

However, the faculty still forces you to take classes. You'll get minimal credit for completing them, but you have to do so. In the article above, Jason even advises, "you should do more than the bare minimum amount of work needed for your courses."

Where does this time come from? Not from your research. It's one reason your job as a grad student stretches beyond the "standard" and healthy 40 hours (which may even be too long) into 60+ hours. Remember: that's doable for a limited time, as an undergrad, when you're living on campus, and it's kind of miserable then. It's not a prolonged lifestyle. Your advisors want you to take classes, and do research, and not work too hard, and somehow this doesn't seem like a contradiction in their minds. Your advisors want you to take classes during your magic time, that extra bit of time you keep hidden in your Bag of Holding or your TARDIS. Your advisors probably had this time; after all, as Philip Guo says, "Only about 1 out of every 75 Ph.D. students from a top-tier university has what it takes to become a professor at a school like Stanford." For the rest of us, that "magic time" is our sleep or our friends/family time. It's not a healthy tradeoff.

So we have some hoops to jump through that are causing us pain. Let's rethink this. Why mandate certain classes? Why mandate classes at all? I'd like to see a shift in how classes are perceived. Instead of forcing certain classes, or a certain number of classes, let students pick which classes will actually benefit them. Make all homework assignments, papers, and projects optional. You learn more, and build your career, by actually doing research.

"But classes help you learn new skills that you'll need!" Yes, some do. And some assignments within those classes do. But some don't. Let me decide that. Let me pick which skills I want to acquire and how much of them I need. Treat me like an adult, not an undergraduate.

Friday, November 1, 2013

Obstructionist vs. Intensifying Recording, Novices vs. Experts, and Google Glass

Point: Don Norman on Google Glass. Read also I Go To A Sixth Grade Play, which is spot on. In theory, you can record everything and live totally seamlessly, instead of missing a large portion of your life. In practice, we'll keep futzing with cheap, poor imitations, destroying the experience itself to get a recording nobody will ever watch. We're obstructing the experience by recording; it is possible to intensify it instead, as a master photographer or artist does, but we are usually not doing that.

Really killer point:
Probably we've all seen a wedding reception, an event meant to be full of spontaneous expressions of joy, transformed by the photographer into a series of staged events. “Kiss the bride.” “Again, please.” “Cut a piece of the wedding cake.” “Each of you feed the other.” “All you spectators, move out of the way of the camera.” It is amazing how tolerant we have become of this manipulation of the experience: the act of recording taking precedence over the event.

Interesting side question: why do we want all these recordings? why do we cling so hard to keeping certain moments? Fairly certain this is a Deep Question. (or a question with a simple answer, but a difficult problem to solve.)

Counterpoint: Thad Starner explains it himself. Farhad Manjoo agrees. The computer can get fully out of your way, allowing you to experience and record in real time. (we've always been able to experience OR record; the AND is the real trick.)

But look at Thad's devices vs. Glass. He's got a Twiddler one-handed keyboard, he's been taking notes and pulling things up on the fly for 20 years, he is an expert at wearable computing. If Glass becomes a mainstream thing, we'll run out to the Google Store and buy it to show off to our friends tomorrow. He's an experienced photographer with a DSLR; most of us will be chumps with point-and-shoots. (or, chumps with DSLRs, pretty much the same thing.) Which means we'll have obstructionist artifacts, not intensifying ones. And they'll be on our faces!

"Not a math person" in HCI

"Math is the area where America's 'fallacy of inborn ability' is the most entrenched."

"I'm not a math person" is an unfortunate sentiment that lots of kids echo when they're grumbling through homework, explaining bad grades, or choosing majors. Other people (like the above) have pointed out why this is Not a Great Thing more eloquently than I.

HCI kind of lives between CS, Design, and Psych/Cognitive Science. It's so new that nobody's totally clear on the whole scope of the field, and indeed, maybe that's a dumb question. It's also broad: anything where humans and computers interact counts. What's submitted to UIST will be totally different from something submitted to CSCW. As a result, everyone (at least at CMU) is an HCI student, but also (and sometimes primarily) a computer scientist, a designer, or a psychologist.

Point: this influences people to stay in their "major", rather than approach problems cross-disciplinarily.

Counterpoint: well, we humans need to categorize things somehow, in order to understand, for example, what certain professors or students work on.

Point: Fine, I guess. But be very clear when you're pigeonholing people, and do it as rarely as possible.

Tuesday, October 22, 2013

New work directions: smartphone tensions

This is a question I've been interested in for a long time, and feel like maybe I'm finally assembling the tools, people, and mental energy to tackle it. To begin to attempt to tackle it.

What is it about smartphones that stresses people out, and what can we do about it?

Okay, a lot of things stress people out. Your friend's using his phone while you're talking to him. Your boss is calling you at night. Your family expects you to text them when your plane lands, and you forget. That lady in the car next to you is texting while she's driving. You keep feeling an itch to check on your Facebook. You keep feeling a literal itch, because your Facebook is buzzing you until you check it. You don't know what it is, but you feel a little scatterbrained.

A lot of issues! Ways we could approach them:
- pick a problem that is well-defined (like texting-while-driving) and develop targeted solutions to it (like SafeCell, which stops you from texting while driving).
- pick a measurable dimension to address a slightly less well-defined problem.
- just start from the top and tackle the whole thing.

I think the last is most interesting. And I guess it leads to a multi-step approach:
1. understand the problem. What are the tensions involved here? Why do people want to use their phones so much? What about this becomes problematic?
2. address the problem.

For part 1, I'm thinking interviews plus a review of log data, to get at what people are actually doing and why. For part 2, I'm thinking prototypes/probes more than fully functional tools: build apps that get at the causes of these stresses, not apps that try to change people's behavior directly.

Because the goal here is not to build another app that helps you slow down/de-stress/be more present. If we build a thing you've got to use, we've already lost. But it'd be great if we could uncover some of the underlying design guidelines that should be built into phones and apps. Tell developers something like: "infinite scrolls are technically cool, but will cause users the following stresses: ..." or "if you notify people more than once a day, they'll start to get antsy about it" or whatever. Instead of building an app to help you de-stress, make your phone not stress you in the first place.
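To make that last guideline concrete, here's a minimal sketch of a notification throttle an app could build in; the class name, the call sites, and the one-per-day budget are all my own assumptions, not anything from a real platform:

```python
import time

class NotificationThrottle:
    """Toy sketch: suppress notifications beyond a daily budget."""

    def __init__(self, max_per_day=1):
        self.max_per_day = max_per_day  # the "more than once a day" threshold above
        self.sent_today = 0
        self.day_start = time.time()

    def should_notify(self):
        # reset the budget every 24 hours
        if time.time() - self.day_start > 24 * 60 * 60:
            self.sent_today = 0
            self.day_start = time.time()
        if self.sent_today < self.max_per_day:
            self.sent_today += 1
            return True
        return False  # hold this one for a digest instead of buzzing again
```

The point isn't this particular code; it's that the stress-reducing behavior lives in the app (or the OS), not in yet another de-stressing app the user has to remember to run.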

Saturday, October 12, 2013

UIST 2013 highlights

(names are presenter or first author; of course they're all "et al". well, almost all. google scholar the paper titles to get links.)

PneUI: Pneumatically Actuated Soft Composite Materials for Shape Changing Interfaces, Lining Yao
- exploring what you can do with air-filled interfaces.
- examples: a soft material with a bubble on the back curls up as the bubble is filled, or a tower expands and contracts as air is added/removed.
- cool thing: you could have a soft phone that just morphs into a wristband.

Controlling Widgets with One Power-up Button, Daniel Spelmezan
- prox sensor + pressure sensor makes one physical button you can do six gestures with.
- it's a lot easier to put one button than a bunch of buttons on many small devices.

Haptic Feedback Design for a Virtual Button, Sunjun Kim
- a soft button that feels like a clicky mechanical keyboard button. this is aesthetically pleasing.

Transmogrification: Casual Manipulation of Visualizations, John Brosz
- select a section of a graphic, morph it into another shape (e.g. square -> trapezoid, or even square -> circle)
- snapping to paths, so you can e.g. straighten out rivers on a map
- cool thing: you can make a chain of interactive infographics that all depend on the previous one.

Visualizing Web Browsing History with Barcode Chart, Ningxia Zhang (poster)
- looking at browsing to see how often you switch, maybe.

StickEar: Making Everyday Objects Respond to Sound, Kian Peen Yeo
- little sticky gadgets that each sense sounds and can be configured to do things

uTrack: 3D Input Using Two Magnetic Sensors, Ke-Yu Chen
- you wear two magnetometers on your ring finger and one magnet on your thumb, and then you can do free-space 3D gestures. Neat!

FingerPad: Private and Subtle Interaction Using Fingertips, Liwei Chan
- similarly, you wear a sensor grid on your fingernail and a magnet on your thumbnail, and now you can draw on your fingertip. The privacy of it is kind of cool.

BitWear: A Platform for Small, Connected, Interactive Devices, Kent Lyons (poster)
- little fingernail-sized buttons with Bluetooth and LEDs that you can configure on the internet.
- I would really like to play with these.

Imaginary Reality Gaming: Ball Games without a Ball, Patrick Baudisch
- you can play basketball without the ball. QR-ish codes on people's heads let an overhead camera know who is where, and a speaker says who has the ball.
- I mean, that's cool in itself.
- even cooler: imagine real world games with power-ups!

inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation, Sean Follmer, Daniel Leithinger
- tangible table, a lot of little square things that can move up and down.
- you can make really non-"computery" controls like the ball answering machine. Or it can move things around on the table. It looks like it has a personality.

Traxion: A Tactile Interaction Device with Virtual Force Sensation, Jun Rekimoto
- little metal thing with a moving magnet, feels like it's pulling left or right. The demo was a big hit. Pretty weird that it can fool your mind like that.

There were a few cool mixed-initiative things there too. Cobi: A Community-Informed Conference Scheduling Tool, AttribIt: Content Creation with Semantic Attributes, SeeSS: Seeing What I Broke - Visualizing Change Impact of Cascading Style Sheets (CSS), A Mixed-Initiative Tool for Designing Level Progressions in Games, A Colorful Approach to Text Processing by Example. I might be making a theme out of nothing (I guess the part-computer, part-human system thing is just all of HCI), but there's something pleasingly interactive about these.

Tuesday, August 6, 2013

Something about addiction and flow

So flow is good, right?

Well, sometimes, yes, unless you're addicted to slot machines. Or online games. Or Facebook, or e-commerce sites, or really anything else unproductive.

So if you were, say, starting out on some research that could probably be pretty well defined by "make your computing experience more flowy"... how can you motivate that better so it's actually about making your life better, and not just faster? Google gets away with worshipping speed and fluidity because faster page loads means more internet use, which means more money for them, but the rest of us are not optimizing for money.

Sunday, July 7, 2013

To Save Everything, Click Here

by Evgeny Morozov. This was quite a read. It's not often that I painfully jot down 11 pages of notes in my ebook reader. I guess it's because he has strong opinions about most everything I'm interested in.

Quick overview: railing against "internet-centrism" and "solutionism." Internet-centrism is the modern trend to ascribe magic powers or moral judgments to "The Internet." (Piracy should be tolerated (instead of DRM) because The Internet! We should change government to be more like Wikipedia because The Internet!) Solutionism, a broader topic, is the tendency to see any phenomenon as a problem to be solved.

Caveats: he's kind of a jerk. I'll call him out on it explicitly later. It's really frustrating; I want to like this book more, but often he just resorts to bullying. And his caustic writing style is probably calculated just so people like me will get all fired up about his book! In that case, sir, good job. (I hope I never meet you.)

Anyway, solutionism. A good example of something that's maybe not a problem to be solved is cooking. He makes fun of all these kitchen technologies that help you cook "more perfectly" - which is to say, more accurately or more efficiently, silently assuming that accuracy or efficiency is what we want. "Here is modernity in a nutshell: we are left with possibly better food but without the joy of cooking."

Internet-centrism: people complain about Apple creating a closed ecosystem. But just why is open better? In Apple's case, for selling you some sweet apps, maybe closed is a better model. (Morozov's thoughts, not mine.) Arguments for openness often (not always) resort to "because openness! The Internet!"

Other things that Morozov says that I agree with him on, or am open to considering:
- Silicon Valley libertarianism is bad: hell, we wouldn't even have The Internet without public financing.
- "Open government" isn't a goal - or at least, that openness isn't a goal in itself. For a few reasons: 1. it's a huge pain to have to document every damn thing you do; 2. it opens the doors to the public to nitpick every last little spending decision (some of which are unintuitive but turn out well). Efficiency is great, but not necessarily our #1 goal in our government.
- Technological solutions often have unintended consequences. If you publish crime stats by neighborhood, sure, that helps home buyers/renters to find safe neighborhoods, but it also hurts sellers and therefore might make them less likely to report crimes.
- You can't rely on Yelp-style crowds for everything. (hell, you can't even rely on Yelp-style crowds for Yelp.) I don't want crowds telling me where to eat, much less what to vote.
- If you publish some metrics (like senator attendance records) then people will optimize for them. (this can be problematic. maybe one senator has more important things to do one day.)
- We shouldn't take Google search results as gospel; they can be manipulated. (Interesting question then: given that we do take them as gospel, what should we do about it?)
- "Internet-centrism is at its most destructive when it recasts genuine concerns about the mismatch between what new digital tools and solutions have to offer and the problems they are trying to solve as yet more instances of Luddite and conservative resistance."
- Complicated computer algorithms, like any other decision making tool, reflect the biases of their creators. But complicated algorithms offer a (sometimes real, sometimes fake) appearance of objectivity. (for example, for police work.) (sounds like a call for intelligibility.)
- Oh man, great stuff about SCP (situational crime prevention), the law enforcement philosophy that says you should make it impossible to commit crimes, rather than just punishing people who do. This is really interesting. For example, if I decide I'm not going to eat cookies, I want SCP-style prevention there! I want to make it impossible for me to eat cookies! He agrees: as long as you make the decision yourself, there's no problem "shifting registers" (term from Roger Brownsword) from the "moral" register (x is good or bad) to the "practicable" register (x is easy or hard) or the "prudential" register (x helps me or hurts me). So he's against "nudges". I can dig it. When we shift things out of the moral register, though, we might never even think about them again.
- Excessive quantification in the research world is a mess. People are gaming metrics, counting publications and citations too much, etc.
- Food is a good example of solutionism gone wrong. We decide fat is bad, so everyone counts grams and does all sorts of nasty tricks to call their products "low fat", only to later discover that fat's not so bad after all.
- Furthermore, Quantified-Self solutions can backfire by putting the onus back on the individual, rather than the broken system. ("why didn't you just count your calories?")
- Memory != preservation. And we shouldn't assume that we should preserve everything.
- We talk about "information nutrition" (e.g. The Economist is healthy, tabloids are junk food), but we really have no idea what we're talking about.
- You can't really design for serendipity.
- Gamification via points and badges is dumb. (okay, duh.)
- Check out Albert Hirschman's futility-perversity-jeopardy trio as a set of common reactions to new things.
- We should try to change people's behavior mostly by reflection, not by paternalistically "nudging" them into making the right decisions. I don't know, though; his example, some complicated parking meter scheme, sounds like the 5-cent credits at Whole Foods when you bring your own bags, where you then have to decide where to donate your 5 cents. I don't want to think about those 5 cents. Just let me get on with my day.
- Sure, everyone's always manipulating us. But I want to know when it's happening.

Anyway, the overall feeling I got from reading this is that maybe my earlier research thoughts are misguided. I basically heard the Weiser ubiquitous computing story ("your computers will fade into the background" etc) and thought "yes, let's do that." Maybe perfect efficiency isn't always the goal!

Some things that Morozov says that are stupid: (if he's going to bully people, I can bully him back)
- somehow make "open government" data (say, campaign finance) only appear on the original source (like the FEC website) so people can't re-publish it and selectively alter or highlight it.
- the "Pirate party" in Germany is losing support, therefore their ideas are failing and should be mocked.
- LiquidFeedback, this tool that sounds like Google Moderator, is a "solution to a problem that doesn't exist" - we don't need more feedback from, say, politicians. 
- partisanship isn't necessarily a problem
- Amazon might start automatically generating books! That are creepily optimized to be exactly what you like! Never mind that this is probably AI-complete!
- We should start having all algorithms be audited by qualified third parties. (oh my god. what constitutes an algorithm? geez.)
- "Once everyone is wearing Google's magic glasses, the costs of subjecting friends to a mini lie detector... are trivial."
- Argh, he totally doesn't understand some of the systems he's writing about and mocking. (e.g. Matthew Kay's Lullaby)
- Quantified Self people are weird and gross. (seriously, this whole chapter is just straight up bullying. it's really offensive.)
- Quantified Self people are all into some weird woo-woo shit about revealing the deeper truth of who we are, with numbers! What a bunch of misguided weirdos!
- Self tracking for health purposes makes a mess for the insurance industry, so we shouldn't track things about ourselves.
- If you like quantifying something, then you must be a Super Quantifier who wants to quantify everything!
- "Even though Bell doesn't quite put it this way..." (... puts words into his mouth.)
- Gordon Bell is a weird guy. I can just dismiss anything he does by mocking him.

Some comical excerpts from my notes as I was reading this:
"no, you numbskull."
"sigh"
"this section sounds fearmongery"
"the bullying in this section makes me wonder about the rest of this book."
"not what he said. you clown."
(on chapter title "Gamify or die") "I hate this chapter already"
(when he starts talking about extrinsic vs. intrinsic motivation) "sigh I knew this was coming"
"ad hominem, you're a jerk, etc"

Sunday, June 16, 2013

My "wearable computer" set up, June 2013

I don't (yet) have a Google Glass, but I wear computers, and it's great. Currently I have:
- Android phone (Nexus 4, if you must know)
- Pebble wristwatch
- LG Tone headphones
- Fitbit Ultra (no longer made; replaced by the Fitbit One)

The Fitbit counts my steps, and that's all. The neat thing about that is that I can compare days.
I know what it's like when my friend tells me she had an 8000-step day.
I know dancing for a couple hours is usually a good 10k steps at least.
I know that my 30k-step day in Dublin was seriously a lot of walking.
I haven't figured out why it's useful to know these things. Just cool.

The Android/Pebble combo gives me texts on my wrist. This is cool. I've received, understood, and dismissed a message on a bike. Yes, you can do this safely. I've received, understood, and dismissed a message in mid-sentence. Yes, you can do this politely. (trust me; your conversational partner will mind much less than if you get your phone out and fiddle with it.)

Is it good that you can do these things? Good question.

But the Android/Pebble/Tone combo! This is the coolest part. Connect everything and start my music app, and then I can start/stop/forward/back the music from either my Pebble or the main Tone hardware. AND I can see the title of the currently playing song on my Pebble. I can almost-fully control my music on my bike. (yes, I can do this safely.)

The fact that it used to take ~25 seconds to start listening to music (unwind headphones, plug in, etc), and it now takes 3, means I listen to a lot more music. The fact that I can see the song playing means I remember it a lot better.

Is it good that I am listening to more music? Good question. But as a Real Actual DJ, who's constantly trying to take in new music, I appreciate it.

What do I take from this? Not a lot, because it's just me. Nevertheless:
- reducing the time it takes to do something really does make me do it more.
- I have no idea if there are negative effects from, say, the fact that I rarely walk more than 5 minutes in silence anymore. I have no idea how to measure this.
- the form factor has to be good and sort of invisible, but it doesn't have to be all that invisible, if there's clear benefit. (the Tone is a new one; it's "around-the-neck" style, and noticeable but usually not problematic.)
- we have enough ways to control music.

Friday, June 7, 2013

Voice of the Coffeepot

Well, this was kind of fun:


A class project: a force sensor tells when the carafe is present/empty/full/being pressed, and a text-to-speech chip and speakers either thank you for refilling the coffee or tell you to enjoy your coffee. Enjoy your weekend!
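For flavor, here's a toy sketch of what the carafe-state logic looked like in spirit; the thresholds, the say() helper, and the exact state names are made up for illustration (the real thing ran on a microcontroller):

```python
# All thresholds are invented; a real force sensor would need calibration.
EMPTY_CARAFE = 400    # sensor reading for the carafe alone
FULL_CARAFE = 1400    # carafe plus a full pot of coffee

def carafe_state(force):
    if force < 50:
        return "absent"            # nothing on the plate
    if force > FULL_CARAFE * 1.2:
        return "pressed"           # someone is pushing down to pour
    if force < (EMPTY_CARAFE + FULL_CARAFE) / 2:
        return "empty"
    return "full"

def respond(old_state, new_state, say):
    # say() stands in for the text-to-speech chip
    if old_state != "full" and new_state == "full":
        say("Thanks for refilling the coffee!")
    elif new_state == "pressed":
        say("Enjoy your coffee!")
```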

Friday, May 31, 2013

Visualizations, video/audio, and ML for time series data: which platform?

I want interactive visualizations of a bunch of time series, plus some audio and video; I want to scrub through all of them while keeping them synced, then generate features from them and feed those into a machine learning model. It doesn't need to be a web app.

What environment do we do this in? Cross-platform choices seem to be javascript/browser, python/installed, and java/installed.

Visualization:
Javascript: d3 (gallery), cubism, Google Charts, some others like rickshaw, even processing.js. This is maybe the best reason to pick JS.
Python: matplotlib, which is mostly static visualizations (gallery), or maybe Bokeh? (see this post) Bokeh will output to an HTML5 canvas in the future (currently it renders via a Chaco plot). (a tiny sketch of the matplotlib option follows this list.)
Java: ...?
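For what it's worth, the "keep them all synced" requirement is easy in matplotlib via shared axes; here's a minimal sketch with fake data (the stream names and values are placeholders). Scrubbing and video are where it gets harder:

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake streams standing in for real sensor data, all on the same clock.
t = np.linspace(0, 60, 600)  # 60 seconds, 10 Hz
streams = {
    "accelerometer": np.sin(t) + 0.1 * np.random.randn(len(t)),
    "heart rate": 70 + 5 * np.sin(t / 10),
    "light level": np.abs(np.cos(t / 5)),
}

# sharex=True keeps the plots aligned: zooming or panning one pans them all.
fig, axes = plt.subplots(len(streams), 1, sharex=True)
for ax, (name, values) in zip(axes, streams.items()):
    ax.plot(t, values)
    ax.set_ylabel(name)
axes[-1].set_xlabel("time (s)")
plt.show()
```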

Playing audio/video:
Javascript in a browser: HTML5 video and audio?
Python/Java: beats me. Codec hell?

Machine learning:
Javascript: ...?
Python: scikit-learn (and others)
Java: Weka (I hear the API's a pain, though.)

The path forward seems to be to start building an HTML/JS app, even if it's only client side, and figure something out for the machine learning. Perhaps compile scikit-learn to JS with pyjs? Perhaps (this sounds kind of painful) just send all the features to a server and use Weka or scikit-learn there to do the real ML and send back results? But I'd welcome any input.
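The features-to-a-server option is less painful than it sounds; here's a minimal sketch of the server side, assuming Flask and scikit-learn (the /classify endpoint, the JSON shape, and the placeholder training data are all invented):

```python
from flask import Flask, request, jsonify
from sklearn.ensemble import RandomForestClassifier

app = Flask(__name__)

# Train on whatever labeled feature vectors you already have;
# this two-feature toy data is just a stand-in.
clf = RandomForestClassifier()
clf.fit([[0, 1], [1, 0], [1, 1]], ["a", "b", "b"])

@app.route("/classify", methods=["POST"])
def classify():
    features = request.get_json()["features"]  # e.g. [[0.2, 0.9], [0.7, 0.1]]
    return jsonify({"labels": [str(label) for label in clf.predict(features)]})

if __name__ == "__main__":
    app.run()
```

The HTML/JS app then just POSTs its feature matrix and draws whatever comes back, so all the "real ML" stays in Python.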

Saturday, May 25, 2013

It's about how you use it

"I'm still here: back online after a year without the internet"- in which he leaves the internet, and all's well, until it's not anymore; all his bad habits catch up with him offline too. Also, you can't just quit the internet alone. It's part of all our life together. 

Don't be a gargoyle with Google Glass. Don't be with people but not actually with them. Don't tolerate other people being gargoyles.

Is Glass an anti-distraction tool? Could be. So long as you can ruthlessly and efficiently curate what goes into it (or if the algorithms it uses are good at doing that for you). Given people's abilities with their email inboxes, I'm not sure, but I think this writer's on to something: if people have a low cost way to get into your visual field, that'll be a problem.

So what are we left with? "The Internet, or Google Glass, or whatever technology you want, isn't good or bad in itself; it depends on how people use it. (which depends partially on design decisions of the people making it.)" Xkcd is right on.

Three kinds of multitasking

I've got to start posting one thought at a time. Shorter posts more often.

Let's get our words straight: here's multitasking vs. switch-tasking; multitasking is doing a lot of related things, switch-tasking is doing a lot of unrelated things.
Maybe this points to the polychronicity puzzle: how is it that some people can prefer to do many different things, even though multitasking doesn't work? On the other hand, examples of polychronicity sound like switch-tasking, so that actually doesn't help answer that question.
A third term: continuous partial attention, or paying attention to many things at a surface level.

I guess it's like this?


Friday, May 10, 2013

Thinking about time and speed

Half formed ideas and links here.

Been reading a lot of blogs and listening to talks, going to a workshop, thinking about how we view time and how fast it goes. In Motion has a bunch of thoughts about time, in relation to travel; while a lot of the book is frustrating (what is "Deep Travel"? is it just "traveling while paying attention"? why does it need a name?), it makes me think about time as experienced vs. time on the clock.

Monochronic vs. polychronic time - first pointed out to me in two papers by Gloria Mark and Laura Dabbish. Monochronic is what we're used to: one thing at a time, time matters, be on time. Polychronic time sense is the approximate kind of time; do a couple things at once, value relationships more than the time. (but multitasking doesn't work! what's going on here?)

Contemplative Computing points me to a few things: an article about speed mentioning Google Glass cutting the picture-taking time from 12 seconds to 1 second, an interview with Linda Stone on time management vs. attention management, and a group interview where Neema Moraveji talks of (among other things) "speed bumps" put into typical tasks. We're putting speed bumps into one task and taking them out of others (and indeed, as texts-on-the-Pebble has shown, removing speed bumps is surprisingly awesome). The answer is probably not "speed is good" or "speed bumps are good", but something more subtle. What is it?

Another thing I want to jot down here: calendars are to time as maps are to space. Ponder that.

Tuesday, May 7, 2013

CHI 2013

... was, of course, great. I student-volunteered, which meant I didn't get to see a ton of talks; nevertheless, here are a few things I particularly liked:

Stories of the Smartphone in Everyday Discourse: Conflict, Tension & Instability, by Ellie Harmon and Melissa Mazmanian. They looked at the stories people tell themselves about their mobile phone use; it oscillates between the "integration" story ("get a smart phone, be a super connected techno future hero") and the "dis-integration" story ("unplug, de-stress, get back to the real world"). This causes tension and makes people uncomfortable. Interestingly, these are kind of the only stories being told, and they're overly simplistic.

Indoor weather stations: investigating a ludic approach to environmental HCI through batch prototyping, by Bill Gaver et al. They put little devices in people's homes that would playfully reflect indoor environment conditions (like slight wind, etc); people didn't find them practical or particularly fun, but still felt some attraction to them; "it's like there's a ghost in the house."

Slow Design for Meaningful Interactions, by Barbara Grosse-Hering et al. They described principles they used to design a juicer. Most interesting: it's okay to make some things slower. It can be good, in fact, as long as they're the key parts of the process. People don't mind spending a little more time juicing; they mind spending a long time cleaning the thing afterwards.

Some cool gadgets you can wear: WatchIt (watch band interaction) and NailDisplay (goes on your fingernail; I wasn't sold at first, but then it grew on me throughout the presentation).


Finally, and most entertainingly, don't use seven segment displays. See you all next year in Toronto!

Wednesday, February 20, 2013

Canabalt 8x8

You might think I spend all my time researching Really Hard Problems in Human-Computer Interaction. This is not entirely true. Sometimes I take classes, and sometimes classes have projects, and sometimes I get to make fun things like this:


(real Canabalt, which is a lot more fun and I cannot take any credit for making)
(more details on github)

Wednesday, January 9, 2013

Hypothesis Generation Systems

Imagine you're tracking something about your health. Say it's your weight. You have weight measurements from every day and you want to generate some hypotheses to test, like "my weight this month is significantly higher than last month." Or say you're tracking how many steps you take each day, and want to know some things about how that correlates with when you woke up, or any number of other data streams you've got.
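As a baseline for what "test" would even mean here, a quick sketch of the weight example using scipy; the numbers are fake, and a real tool would generate and rank many hypotheses like this automatically:

```python
import numpy as np
from scipy import stats

# Fake daily weights (kg) for two months.
rng = np.random.default_rng(0)
last_month = rng.normal(70.0, 0.5, 30)
this_month = rng.normal(70.6, 0.5, 30)

# "Is my weight this month significantly higher than last month?"
t, p = stats.ttest_ind(this_month, last_month)
print("mean difference: %.2f kg, p = %.4f"
      % (this_month.mean() - last_month.mean(), p))
```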

I'm picturing some kind of visual analysis tool. Something that would help with hypothesis generation and feature selection. What's in this space? I started with the 150 most popular tools on the Quantified Self guide, as well as all 204 tools on Ian Li's personalinformatics.org. Some things I found:

Systems that pull in data from a lot of places and generate visualizations
Notch pulls in data streams from (currently) Fitbit, Runkeeper, and Bodymedia. Then it generates some pretty basic (although definitely pretty) visualizations.
Cosm (formerly Pachube) lets you look at a bunch of graphs together (one example here). Doesn't do any statistics, though, AFAICT. Focuses on Arduino/Internet-of-Things applications.
Trackify seems nonexistent.
Fluxtream seems promising, but I don't have access yet.
Also The Locker Project (Memo to myself: they seem to be very connected with Singly, which looks like a way to easily integrate apps with lots of social and/or quantified-self applications), TheCarrot, DataDepot
These all would be much cooler if they did stats (although just visualization is useful too!)

Tools that let you track things yourself by hand (and then they generate graphs)
ChartMyself (meant for QS-type use), TallyZoo (for anything), Mycrocosm, Daytum (anything), MercuryApp (mood-ish things), Dayta, Tonic, DidThis (things you've done), rTracker, Graphomatic, DailyDiary, LifeMee (maybe? signup is broken), Limits, DailyTracker, Grafitter, your.flowingdata, lifemetric (for many people)
These all seem like not what I'm looking for. The data entry is manual, and the output is still mostly visualization.

There's Quantified Mind, which apparently shows you all the significant results in your mental-test data automatically. Only for mental-test data, but still, this is the kind of thing I'm looking for.
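In the same spirit, the brute-force version of hypothesis generation is just scanning every pair of streams for correlations and surfacing the strong ones. A toy sketch (fake data again), with the obvious caveat that testing many pairs demands a multiple-comparisons correction before you believe anything:

```python
import itertools
import numpy as np
from scipy import stats

# name -> daily values, all aligned to the same 90 days.
rng = np.random.default_rng(1)
streams = {
    "steps": rng.normal(8000, 2000, 90),
    "sleep_hrs": rng.normal(7, 1, 90),
    "weight": rng.normal(70, 0.5, 90),
}

# With k streams there are k*(k-1)/2 pairs; correct for that
# (e.g. Bonferroni) before trusting any single p-value.
for a, b in itertools.combinations(streams, 2):
    r, p = stats.pearsonr(streams[a], streams[b])
    if p < 0.05:
        print("maybe: %s vs. %s (r=%.2f, p=%.3f)" % (a, b, r, p))
```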

There are media annotation and editing tools to use as jumping-off points, if you wanted to build this. Elan is one, but it's based on the quirky Java Media Framework; Pitivi is another, but for Linux only.

Then there's the whole field of information visualization, big-data type stuff (recent startup Trifacta comes to mind), which I'm just starting to dig into.

More to come! There's a conference called IEEE VIS (which includes VAST, the visual analytics conference) that might have some interesting stuff; I'll trawl through that. Also, any ideas you have would be quite welcome.