Google Glass Team: “Wearable Computing Will Be the Norm” Source: Steven Levy
Even though I followed Google’s I/O Conference from across the country, the event made it obvious that a company created with a strict focus on search has become an omnivorous factory of tech products both hard and soft. Google now regards its developers conference as a launch pad for a shotgun spread of announcements, almost like a CES springing from a single company. (Whatever happened to “more wood behind fewer arrows”?)
But the Google product that threatened to steal the entire show probably won’t be sold to the public until 2014. This is the prosthetic eye-based display computer called Project Glass, which is coming out of the company’s experimental unit, Google[x]. Announced last April, it was dropped into the conference in dramatic fashion: An extravagant demo hosted by Google co-founder Sergey Brin involved skydivers, stunt cyclists, and a death-defying Google+ hangout. It quickly attained legendary status.
Even before people got to sample Glass, it was popping their eyes out.
Google wouldn’t provide a date or product details for Glass’s eventual appearance as a consumer product ― and in fact made it clear that the team was still figuring out the key details of what that product would be. But Google made waves by announcing that it would take orders for a $1,500 “explorer’s version,” sold only to I/O attendees and shipped sometime early next year. Hungry to get their hands on what seemed to be groundbreaking new technology, developers lined up to put their money down.
Meanwhile, I just as hungrily bit at the opportunity to do a phone interview with two of the leaders of Glass. Google originally hired project head Babak Parviz from the University of Washington, where he was the McMorrow Innovation Associate Professor, specializing in the interface between biology and technology. (One relevant piece of work: a paper called “Augmented Reality in a Contact Lens.”)
The other Glass honcho, Steve Lee, is a long-time Google product manager specializing in location and mapping. Here is the edited conversation.
Wired: Where are you now with Glass as compared to what Google will eventually release?
Babak Parviz:    Project Glass is something that Steve and I have worked on together for a bit more than two years now. It has gone through lots of prototypes, and fortunately we’ve arrived at something that sort of works right now. It is still a prototype, but we can do more experimentation with it. We’re excited about this. This could be a radically new technology that really enables people to do things they otherwise couldn’t do. There are two broad areas we’re looking at. One is to enable people to communicate with images in new and better ways. The second is very rapid access to information.
Wired: Let’s talk about some of the product basics. For instance, I’m still not clear whether Glass is something that works with the phone in your pocket, or a stand-alone product.
Parviz: Right now it doesn’t have a cell radio; it has Wi-Fi and Bluetooth. If you’re outdoors or on the go, at least for the immediate future, if you’d like a data connection, you’ll need a phone.
Steve Lee: Eventually it’ll be a stand-alone product in its own right.
Wired:    What are the other current basics?
Parviz: We have a pretty powerful processor and a lot of memory in the device. There’s quite a bit of storage on board, so you can store images and video there, or you can just live stream them out. We have a see-through display, so it shows images and video if you like, and it’s all self-contained. It has a camera that can capture photographs or video. It has a touchpad so you can interact with the system, and it has gyroscopes, accelerometers, and compasses for making the system aware of location and direction. It has microphones for collecting sound, it has a small speaker for getting sound back to the person wearing it, and it has Wi-Fi and Bluetooth. And GPS.
This is the configuration that will most likely ship to developers, but it’s not 100 percent certain that this is the configuration we will ship to the broader consumer market.
Wired: How much does it weigh?
Lee: It’s comparable to a pair of sunglasses. You can stack three of these up and balance a scale with a smart phone.
A prototype version of the consumer Glass design. Photo: Google
Wired: What was your thinking when you embarked on the project, and how did that thinking evolve?
Parviz:    We did look at many, many different possibilities early on. One of the things we looked at was very immersive AR [Augmented Reality] environments ― how much that would allow people to do, how much it could come between you and the physical world, and how distracting that could be. Over time we found that particular picture less and less compelling. As we used the device ourselves, what became more compelling to us was a type of technology that doesn’t come between you and the physical world. So you do what you normally do, but when you want to access the technology, it’s immediately relevant ― it can help you do something, help you connect to other people with images or video, or help you get a snippet of information very quickly. So we decided that having the technology out of the way is much, much more compelling than immersive AR, at least at this time.
Wired:    So in other words, you’re moving away from “Minority Report” into doing something that’s more organic to everyday life.
Lee:    That’s correct. You see that in the video we released when we announced the project in April. That kind of information we still think is super-compelling, but in a form that’s available when you want it and generally out of the way, so it doesn’t clutter your entire field of view.
Wired:    How do people issue commands to the system ― like when to begin a video stream?
Parviz:    On the side of the device there’s a two-dimensional touchpad. We have a button that we typically use for taking pictures. There are microphones in the system, so you could have sound input to the system. We’ve experimented with that, and we’ve experimented with gyroscopes and accelerometers and compasses for different types of gesture input. Now, how this is going to turn into a consumer product, we’re still experimenting. It’s not entirely finalized yet.
Lee:    We’re also experimenting with a time-lapse feature, which takes a photo every 10 seconds. It’s the perfect example of getting technology out of your way. We believe it’ll be easier to initiate one of these live hangouts than placing a phone call today. The power of being able to share your view with other people is pretty incredible. Not just in extraordinary situations like the parachuting demo, but everyday situations like sharing moments with remote family members, or just having a richer experience in shopping where you could get feedback or advice from a spouse or partner or friend.
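To make that feature concrete, here is a toy sketch of time-lapse capture, one frame every 10 seconds. It uses OpenCV’s default webcam as a stand-in for the Glass camera, which is an assumption on our part; the Glass capture API has not been made public.

```python
# A toy sketch of the time-lapse behavior Lee describes: grab a frame
# every 10 seconds and save it to disk. The webcam is a stand-in for
# the Glass camera; Glass's real capture API is not public.
import time
import cv2

camera = cv2.VideoCapture(0)  # default camera as a stand-in
frame_count = 0
try:
    while frame_count < 6:  # one minute's worth, for the sake of the demo
        ok, frame = camera.read()
        if ok:
            cv2.imwrite(f"timelapse_{frame_count:04d}.jpg", frame)
            frame_count += 1
        time.sleep(10)  # one shot every 10 seconds
finally:
    camera.release()
```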
Google employee Ray shows off his teal Glass headset. Photo: Roberto Baldwin/Wired
Wired:    You both have been testing this extensively in your lives ― what have you discovered?
Parviz:    Two things, actually. One was about how I can communicate with the people I care about through images, so I can capture moments that otherwise I wouldn’t capture. I communicate actually a lot more with those people through images and they get the first-person point of view. The other involved search. In one of our prototypes ― I don’t know if this will be on the consumer product or not ― we had search available with an audio input, so you could touch the device and say something, and get the response back. So literally I could touch the device and ask, “What’s the capital of China?” and the response would just appear in front of my eye. It’s a magical moment. You suddenly feel you’re a lot more knowledgeable.
Now, wearing it day in day out, I have to say that this device is very experimental. It crashes a lot of times, and a lot of the features don’t work. There’s quite a bit of work that we have to do to make this a seamless, enjoyable thing for regular people to wear. But as someone who developed the technology, I’ve been pretty happy with it.
Lee:    I’m an avid cyclist, and several weeks ago I did a grueling six-hour ride around San Francisco. Obviously it’s been a design goal for a long time to make Glass lightweight and comfortable, but it really surprised me how comfortable, unobtrusive, and out of the way it really is. It didn’t become a problem or annoy me.
So then what value did I get out of wearing it? I could enjoy my ride, talk with my friends, and meet new people, and I didn’t have to think about technology the whole ride. Yet at the end [using the aforementioned time-lapse photo capture feature] I had over 1,000 images, some of which were spectacular, just really precious moments. That gave me the ability to create a very short video. No one wants to watch a six-hour video, but my friends and family enjoyed watching the 20- or 30-second video that summarized my experience.
Wired: How long did it take you to go through those images? It would be pretty grim if we spent half our lives gathering information and the other half curating it.
Lee: You raise a really good point. If a device like Glass is successful, it’s definitely going to generate a lot more content, and so tools to manage that are incredibly important.
Wired: Maybe this could be a good use for the huge machine-learning neural net that Google announced this week ― maybe you could use the Google Brain to go through your six hours and find the most interesting parts.
Lee: Yeah, agreed, but simple approaches can help a lot, like discarding blurry photos and detecting the photos that have people’s faces, or landscapes. Just by doing those basic things you can quickly reduce 1,000 photos down to 20 or 30.
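As a rough illustration of the triage Lee describes, here is a minimal sketch using OpenCV: drop blurry shots (low variance of the Laplacian, a standard sharpness proxy) and keep those where a stock face detector fires. The threshold and the Haar cascade here are illustrative assumptions, not the Glass team’s actual pipeline.

```python
# A sketch of the two cheap heuristics Lee mentions: discard blurry
# photos, keep the ones that contain faces. The blur threshold is an
# illustrative guess and would need tuning per camera and lighting.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_sharp(image, threshold=100.0):
    # Blurry images have little high-frequency detail, so the
    # variance of the Laplacian is low; sharp images score higher.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

def has_face(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def triage(paths):
    # Keep only the sharp photos that contain at least one face.
    keep = []
    for path in paths:
        image = cv2.imread(path)
        if image is not None and is_sharp(image) and has_face(image):
            keep.append(path)
    return keep
```

The point is not the specific detector but that a couple of inexpensive filters like these can plausibly cut 1,000 frames down to the 20 or 30 candidates Lee describes.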
Wired:    On that bike ride, did you get any kind of data on the thing as you were riding? Did it help give you directions or alert you to stuff that was happening, or things like that?
Lee: Let me use a different example. I often commute from Google in Mountain View to my home in San Francisco, and I was supposed to meet up with a friend when I arrived. While I was riding, he text-messaged me to say he was going to be late. I saw that on the display, and that was it. If I didn’t have Glass, I would’ve felt the phone vibrate in my jersey pocket, and pulling it out would have been awkward and unsafe. It really made a difference.
Wired: Some people who have interacted with Glass testers feel that sometimes people seem to temporarily drop out of a conversation to process something they see on the display. How does Glass avoid being something that removes us from our physical environment?
Parviz:    We’re actually very cognizant of that. One of our definite goals is not to have something that constantly distracts people ― something where every three seconds you get an email and you have to look away, and you can never engage in a real conversation. We’re going to be very, very selective in how this may interrupt you.
Lee:    We really do view this through the lens of how to improve people’s lives in society, not how we can geek out with the most technology possible. But it’s definitely true that something like this could go either way. A poor design could absolutely distract you and isolate you as a person. Good design actually keeps you more engaged in your activities in life, whether it’s lunch with someone, riding your bike, or whatever activity you do.
Parviz: We want people to be engaged with the physical world. We want to untether them from desktops and laptops. You want to have something where you don’t feel like you’re wearing technology ― where your eyes are pretty much open to the environment, your ears are open, your hands are free, but you can engage with the technology if you need to.
Wired: It also seems to me that at a certain point you realized that there’s something qualitatively different about a photo that’s taken with your hands free.
Lee: That’s true. We’ve long thought the camera’s important, but since we started using this in public and with our family and friends in real situations, not just hidden in the Google lab, we’ve truly seen the power of being hands-free. We make sharing really easy, and we have a Google+ circle for our team, so various team members were out in their real lives with their families and friends, in different situations, posting photos. It actually brought our team closer together, because we got to understand our teammates’ personal lives better, and I think it was through those photos that we saw many aha moments. We shared the one with Sebastian [Thrun, a head of the Google[x] division] where he’s playing with his son. That photo symbolizes the kind of images and moments that we’ve been able to capture.
Wired: Let’s talk about the launch process ― why the two stages?
Parviz: We’re hoping actually to create a developer community to help us evolve this technology together. In 2013, we’ll ship the developer version, to this community, and hopefully, in less than a year following that, we’ll have the consumer version out to the public. That’s our hope at the moment. We’re trying really hard to get this out the door.
Wired: Why did you pick that $1,500 price point for developers?
Parviz: We tried to set a reasonable cost that would be accessible to developers, but our target is to make the consumer version significantly cheaper than that.
Lee: But at the same time, we view this as a premium product.
Wired:    So consumers won’t be paying $1,500, but it’s not like buying a pair of sunglasses.
Lee:    It’s not going to be $49.99. It’s up to us to deliver the value of a premium product and also communicate that to people.
Wired: Since this device sits next to your brain, are there any concerns about radiation?
Parviz:    We’ve looked at it very thoroughly throughout the project, and at the moment the radiation is significantly less than a cell phone’s. When you use a cell phone, you have to communicate with a tower that’s quite far from the device, but when you use this system, you communicate only over short-range radio. We measured the radiation of the device, and it’s way below any threshold set by the standards.
Wired:    Do you think this kind of technology will eventually be as common as smart phones are now?
Lee:    Yes. It’s my expectation that in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.