Countless wearers of Google Glass stalked the halls of this year's Google I/O developer conference, but only a lucky few were sporting the prescription model, which makes room for lenses in a more conventional glasses frame. Among those lucky early adopters with imperfect vision was Thad Starner, a Georgia Tech professor who, in 2010, was recruited to join a top-secret project at Google's fabled X Lab. That project, as it turned out, was Glass, and Starner's role on the team as a technical lead would be a vital one.
Starner coined the term "augmented reality" in 1990 and, after more than two decades of experimenting with wearable technologies, offered us a rare perspective on where the field has been and where it's headed. We were glad, then, to grab a few moments with him at I/O and hear his take on how we got to where we are and, indeed, where he thinks we're going from here.
Starner says he's been wearing computing devices of one form or another daily for the past 20 years, a claim few others can make. Before becoming an assistant professor at Georgia Tech, he founded MIT's Wearable Computing Project, and it was in those years that he made the acquaintance of a pair of grad students named Larry Page and Sergey Brin. The group had discussions about the future of search and, given Starner's tech persuasions, how wearables might fit into it:
We talked about how it would make you more powerful if you could have web search on your eyeball ... One of the problems was simply making a search engine that was good enough that the right hit was in the first four links, versus AltaVista which was the first 14 links. That took way too long to navigate.
They went their separate ways, Starner continuing to refine his wearable prototypes while Page and Brin built themselves a little search engine. After about a decade, Starner thought that it was time to reconnect:
About 2010, I sent Sergey an email saying, "Now that you guys are doing Android and you're doing these phones, you should really take a look at the wearable computing technology that we've been working on in academia. Why don't you come out to Atlanta and I'll show this stuff to you?" Next thing I know I'm on a plane out [to Google Headquarters] to join the Glass team. They had the same kind of thoughts. The time was right. The next thing you know I'm working on it too, making the early prototypes.
The term "augmented reality" comes from Starner's earlier work, a 1990 fellowship proposal. (Fun fact: this wasn't actually Starner's preferred term. "Artificial reality" had already been used by Timothy Leary to describe a drug-induced state.) However, his concept of a life augmented by technology is rather different than the "AR" that we generally think of when describing things like the Layar browser.
Starner's term for augmented reality simply referred to "information you can use while you're doing other things." He continued: "Your point on a map is in some senses augmented reality. Knowing what restaurants are nearby is augmented reality." What we typically think of as "AR" is an extension of augmented reality called registered graphics, in which a system is fully aware of your 3D position in space plus your orientation and uses that information to virtually paint information over the landscape.
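To make that distinction concrete, here is a minimal sketch of the registered-graphics half of the equation: given the wearer's position and orientation, project a known world-space point onto the display so a label could be painted over it. The plain pinhole-camera model, function names and parameters below are our own illustration; none of them come from Glass itself.

```python
# A toy illustration of "registered graphics": project a known world-space point
# (say, a restaurant's location) onto the wearer's display, using their position
# and head orientation, so a label could be drawn over it.
import numpy as np

def head_rotation(yaw, pitch):
    """Rotation matrix built from the wearer's heading (yaw) and tilt (pitch), in radians."""
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    rot_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rot_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return rot_pitch @ rot_yaw

def project_to_screen(point_world, wearer_pos, yaw, pitch,
                      focal_px=500.0, screen_w=640, screen_h=360):
    """Return (x, y) pixel coordinates for a world point, or None if it's behind the wearer."""
    # Express the point in the wearer's head-centered coordinate frame.
    p_head = head_rotation(yaw, pitch) @ (np.asarray(point_world, float) - np.asarray(wearer_pos, float))
    if p_head[2] <= 0:  # behind the viewer -- nothing to overlay
        return None
    # Perspective divide, then shift the origin to the center of the display.
    x = focal_px * p_head[0] / p_head[2] + screen_w / 2
    y = focal_px * p_head[1] / p_head[2] + screen_h / 2
    return float(x), float(y)

# A point of interest 10 meters straight ahead lands in the middle of the display.
print(project_to_screen([0.0, 0.0, 10.0], [0.0, 0.0, 0.0], yaw=0.0, pitch=0.0))  # (320.0, 180.0)
```

Keeping such an overlay locked to the world as the wearer's head moves is what makes fully registered AR so much harder than the glanceable information Starner describes.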
Fully registered AR, for many users, is the perceived Holy Grail of wearable technologies: a fully immersive experience where virtual displays appear and disappear at will, where every friend you spot in the real world is highlighted by an icon floating over their head and where you can always find your way by following the all-knowing green arrow hovering over you. For Starner, these applications aren't nearly as compelling as a system that quickly provides information when you need it and then disappears just as quickly:
The big thing that people don't realize is that it's not about the full-field-of-view, registered AR experience. It's much better to have something you can interact with in micro-interactions. That's what Glass is all about, having these short interactions throughout the day. You're really trying to make interfaces that allow people to augment their eyes, ears and mind, but not get mired in the virtual world.
There's value in physicality, too, he continued: "There's a lot to the tangible nature of devices and their interfaces that make a lot of sense. I'm loath to give up on tangible interfaces ... Having an actual physical object that everyone is looking at and sharing is an important thing."
In this way, Glass is already in good shape on the AR front. That, though, is only one of three aspects that Starner considers crucial to wearable devices. In a 1993 article called "The Cyborgs are Coming," he laid out the other two.
The first is augmented memory, which is simply the ability to look up information previously learned, but possibly forgotten. For Starner, the primary focus has been conversation. "Having access to your education, having access to your everyday conversations on that level, so you can actually use it in face-to-face education, is invaluable," he said. "It makes professors seem smarter than they are -- which is a very big thing when you're a professor!"
He gave us a quick demonstration of a system called a Remembrance Agent, which he runs on all manner of wearable devices. It is, effectively, a massive text buffer of everything he's said or thought was worth typing down, which he can quickly and easily search using a single-handed keyboard called a Twiddler. (Despite its one-handed nature, Starner is quite proud of his ability to type 130 words per minute on it.) Using a little regex search, he can query decades of textual memories. The results aren't great, but there's potential, he said:
Everything I say goes into a text buffer and then it automatically searches my past history for things that are relevant. Most of the time it pulls up garbage, stuff that doesn't matter. But 5 percent of the time it pulls up something that's really relevant. All it takes is a one-line summary to remind me of what was so important. It's not that it's replacing my memory; it's helping me recall stuff. Computers are really good at recall and really bad at recognition. People are the other way around.
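To give a sense of the mechanics he's describing, here is a minimal sketch of a Remembrance Agent-style lookup: everything typed lands in a plain-text memory, and the current conversation buffer is matched against it to surface a handful of one-line reminders. The keyword-overlap scoring and every name below are our own illustration, not Starner's actual implementation.

```python
# A minimal Remembrance Agent-style lookup: notes accumulate in a text memory,
# and the current conversation buffer is used as a query to pull back the few
# past lines that share the most keywords with it.
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens, ignoring very short words."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3]

class RemembranceAgent:
    def __init__(self):
        self.memory = []  # list of (timestamp, one-line note) pairs

    def remember(self, timestamp, note):
        self.memory.append((timestamp, note))

    def recall(self, current_buffer, top_k=3):
        """Return the top_k past notes that share the most keywords with the current text."""
        query = Counter(tokenize(current_buffer))
        scored = []
        for timestamp, note in self.memory:
            overlap = sum((query & Counter(tokenize(note))).values())
            if overlap:
                scored.append((overlap, timestamp, note))
        scored.sort(reverse=True)
        return [(ts, note) for _, ts, note in scored[:top_k]]

agent = RemembranceAgent()
agent.remember("1995-03-02", "Talked with Larry and Sergey about web search on your eyeball")
agent.remember("2001-07-19", "Twiddler practice: hit 130 wpm on the one-handed keyboard")
print(agent.recall("meeting notes: wearable search, eyeball displays, web search ranking"))
```

Crude keyword matching like this will pull up plenty of garbage, as Starner admits of his own system, but it only needs to surface the right one-line reminder occasionally to earn its keep.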
The final, and in some ways most complicated, aspect is the development of what Starner calls "intellectual collectives." These are, effectively, social networks -- but not in the Facebook or (more appropriately) Google+ kind of way. These networks are much more focused on real-time information sharing and collaboration. In other words: they make people more productive, not less.
"You try to read your email while you're having a conversation, you lose 40 IQ points."
Starner described the process of interviewing candidates for the Glass team at Google: face-to-face conversations between a single interviewer and a single candidate, while other members of the team watched remotely and conversed with the interviewer even as he or she conversed with the candidate. It all sounds horribly distracting, but Starner said the side channel is actually perfectly intuitive:
Because it's focused on the conversation you're having, it's not distracting. You try to read your email while you're having a conversation, you lose 40 IQ points. When you actually are taking notes and doing stuff that's related to your conversation you can do it just fine.
In this way the entire team could participate in the interview at the same time without physically needing to be there, hitting the hapless interviewee with questions from multiple minds all delivered through a single mouth. An intimidating process for the recipient, perhaps, but it certainly beats the typical corporate interview procedure of bouncing between offices and getting asked the same questions over and over again.
Starner described another situation: a talk on wearable technology at the National Academy of Sciences. As ever, he was sporting some headgear, which in this case showed him a sort of chat room full of students back at Georgia Tech. The students watched a live stream from Starner and used the chat room to feed him relevant information and ask questions, and that chat was also shown to the physically present Academy members on a larger display. Eventually, the members of the Academy began conversing openly and directly with those students, none of whom were actually in the room. In this way, the collective was formed.
But could this be done on Glass? Absolutely, said Starner, though he isn't confident it will be much of a priority:
One of the academics will do it. The question is whether there's a commercial reason for it. When you make something like this that has a clear focus, has a clear use, stuff that's well-baked, stuff that's compelling -- but when that hardware gets out to all my buddies [at universities] you'll see them adapting to some very interesting uses.
And that's where a line may need to be drawn between Starner's vast experience in wearable technologies and the future of Glass and other derivative devices. In an academic setting, when you're actively researching something new and contributing to a broader project, you can get away with wearing a backpack full of circuitry while constantly adjusting a weighty pair of glasses on your nose. After all, you're doing it for science.
When it comes to the commercial world, however, to the creation of a profitable and thriving ecosystem used by average people in the average world, the standards are higher. Devices must be smaller, their interfaces must be intuitive and everything must simply work and work simply. From a researcher's point of view this is an unworkable limitation. From an engineer's point of view, this is a necessary challenge. From a consumer's point of view, this is just the way it is.
In some ways, Glass in its current form is limited compared even to the devices Starner wore years ago. The real question, of course, is whether it offers enough to finally bring wearables to the mainstream. That remains to be seen, but if it does, remember this: Thad Starner did it way before it was cool.