Google Glass: The Next Era of Personal Computing

Recently I wrote up some initial impressions on using Google Glass, but Glass represents something so new that my impressions form and change every day. It has been very difficult to neatly arrange my thoughts for this post, as I feel Glass represents so much and is, arguably, the culmination of some of Google’s finest work yet.

It has been widely discussed that Glass isn’t yet a final product and will evolve into something more sophisticated and powerful. The more I discuss this idea with the Somo innovation team and friends around the industry the more I realise how exciting and powerful Glass will be, so I wanted to take a stab at why I think Glass has such massive potential, and look at the impact it will have on our lives.

Google are changing the way we interact with computers

Glass represents some of Google’s finest work in one product; it follows a few recent significant strategic moves by the tech giant.

In 2012 Google changed the way it indexed information; the Knowledge Graph was created to understand the relationship and connection between objects, moving away from just searching for strings of characters. This helped Google to “understand the world in the way [we] do”. Cool.

2013: Google announces Conversational Search, utilising the Knowledge Graph to the fullest. Forget the fact that Google’s speech detection is as good as, if not better than, Siri’s; the Knowledge Graph now allows Google to ‘hear’ questions you ask it and respond in a human way. Ask Chrome ‘how old is Obama?’ and it knows you are very likely asking ‘how old is Barack Obama?’. Then ask ‘how old is his wife?’ and it knows that you are really asking ‘how old is Michelle Obama?’. Subtle difference, massive implications.
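
To make the ‘his wife’ trick concrete, here is a toy sketch of contextual query resolution against a knowledge graph. The graph, entities, and resolution logic below are all invented for illustration – this is a minimal caricature of the idea, not how Google actually implements it.

```python
# Toy knowledge graph: entities and their relations/attributes (invented data).
KNOWLEDGE_GRAPH = {
    "Barack Obama": {"spouse": "Michelle Obama", "born": 1961},
    "Michelle Obama": {"born": 1964},
}

def answer(query, context):
    """Resolve the query to an entity (using conversation context for
    pronouns), then answer from the graph. Returns (answer, context)."""
    if "his wife" in query and context.get("last_entity"):
        # Follow-up question: resolve 'his wife' via the spouse relation
        # of whichever entity the conversation was last about.
        entity = KNOWLEDGE_GRAPH[context["last_entity"]]["spouse"]
    elif "Obama" in query:
        # Bare 'Obama' most likely means Barack Obama.
        entity = "Barack Obama"
    else:
        return None, context
    context["last_entity"] = entity  # remember for the next turn
    if "how old" in query:
        return 2013 - KNOWLEDGE_GRAPH[entity]["born"], context
    return None, context

ctx = {}
age, ctx = answer("how old is Obama?", ctx)      # 52 (as of 2013)
age2, ctx = answer("how old is his wife?", ctx)  # 49 -- resolved via the graph
```

Searching for strings of characters could never do this; it only works because the system models entities and the relationships between them, plus a memory of what you were just talking about.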

Again in 2013, Google launches Google Now, their predictive assistant that learns from your behaviour across the web and serves you content that you want, when you want it. On Android this is amazing. Stick with it…

Later in 2013, Glass is soft-launched. Glass has a (largely) hands-free UI that enables the user to talk to it, and it talks back – a big step towards Natural User Interfaces (NUIs). Super.

Right. Now Glass does not perfectly implement all of the above yet, but Google have released a device and technology that learns from you, serves you relevant content at the right time in a couple of unobtrusive ways, talks to you, lets you talk back, and understands you. Game. Done. Changed.


Wearable tech needs to be fashionable

Say what you want about Glass v1; some think it’s cool already, others do not. For wearable tech to be mainstream it needs to be cooler. It needs to be in the arm of some Tom Ford specs, not look like a Bluetooth headset on crack. No doubt Glass will be refined when released to consumers, but I believe that we will see ‘now with Glass technology!’ claims on Tom Ford, Burberry, Oakley etc. glasses. It will become integrated into fashionable goods. For a detailed explanation, read this brilliant article on why wearable tech needs fashion to thrive.

#IgotGlassed – My first impressions using Google Glass

We’ve just got our hands on one of two Glass units here at Somo. I’ve had a bit of a play around and thought I would jot down some initial impressions seeing as no one on the internet is talking about Glass…

How does it feel?

Weird. Low-res. High-up in my field of vision. A wee bit sparse. Feels like Google Now for your face. Using it feels ok when you get used to it, but I imagined it would fit into my routine seamlessly so I could stroll along checking out the latest NYT stories – not so: viewing content is very distracting, and I wouldn’t consider it sensible or safe while going about your daily business.

Do you like what it looks like?

What. A. Dick.

Do you? I think it looks like a computer on your face, and when the screen is on it looks like your devil-eyes glow red. However, in saying this, I am getting more and more used to seeing it around the office now. I think it looks like a great v1. It really makes this Wired article on wearable tech needing fashion to thrive make a lot more sense.

** EDIT

The ’tilt head up to switch on’ manoeuvre is both ridiculous and hilarious (when other people are doing it, not me). You can’t really use it entirely hands-free: at some point you need to touch it, which is annoying seeing as it’s on my face.

How easy is it to use? #speechdetection

It is easy. Even the new gestures are pretty simple. The only thing that isn’t that ‘easy’ to use is the speech detection – it’s as good as any of Google’s speech detection, which is pretty good, but quite a few times it picked up what I was saying incorrectly. Also, it surprised me to get generic search results when asking Glass to ‘find me a hotel’. I was served the full search result (not sure if it was PPC or not), including such inspiring copy as “Find cheap hotels, discount hotels and last minute hotel deals at LateRooms.com – theHotel Offers Experts. Book hotels & make hotel reservations online or by ...”. Not cool.

I’m using a shared Glass so have only used a few apps. NYT is pretty impressive as it seems to suit the use case of ‘give me small headline snippets of content’ and it reads them aloud to you so that you don’t focus on the screen for more than a couple of seconds.

** EDIT

The speech detection is pretty annoying, actually. It gets a lot wrong. It also reminds me why being conversational, like Siri or Chrome’s recent Conversational Search, is important. When asked “how old is Obama?” it gave the correct answer. When I asked “how old is his wife?” it didn’t even register that I was trying to talk to it.

Overall…

It’s fun, new, innovative, unique, but a little limited at the moment. This will no doubt evolve into something really useful over the next year; it just very much seems like a v1 product. If Google had given these units out for free, or very cheap, then it would be awesome and completely worth it; however, paying $1,500 for one at the moment isn’t hugely worthwhile.

Will this replace my mobile? No chance. Our phones are really sophisticated, powerful devices that offer everything from ten-times-a-day utility through to Plants vs. Zombies. My iPhone offers me far too many rich, thoroughly thought-out, reliable experiences that I fear Glass, or wearable tech sans smartphone, will take years to match. I see Glass as a very ‘top-level’ gadget – something to give you a snapshot of info – where phones will remain the device of choice for longer, more in-depth experiences.

Rapid prototyping Google Glass – by an experience designer in Google’s X team

Google are starting to talk a lot about Glass, releasing a video yesterday entitled How It Feels [through Glass].


This reminded me of a great video by Tom Chi, an experience designer in Google’s X team, explaining how quickly they created prototypes for Glass.

via Only Dead Fish


It’s tempting to think that the prototyping for a project such as Google Glass would have been a complex, lengthy process lasting months if not years, but this short, charming talk from Tom Chi (experience designer in the Google X team) gives a fascinating insight into how their process of creation was greatly accelerated through rapid prototyping. The first prototype was built in an hour using coat hangers, a tiny projector and a piece of plexiglass. Subsequent prototypes took even less time and used materials as diverse as paper, clay, modelling wire, chopsticks and hairbands. From these models they were able to glean useful insights into the social awkwardness of gesture controls, which led them to drop features that had been thought integral. As Chi says, “Doing is the best kind of thinking”. Fascinating.