My Comments On… “Google Glass could enhance the High Street experience but what else can it offer brands?”

I recently supplied a few quotes to The Drum on how Google Glass can be used by brands as a marketing tool. The full article, with commentary from the editorial team, can be read on The Drum; my quotes are below.

Joel Blackmore, senior innovation manager, Somo, described Glass as “an ultra-personal device” and claimed that it meant delivering “appropriate content is more important than ever before.” He added: “The simplest thing a brand can do to use Glass as a marketing tool is to find a way to deliver brilliant and relevant content” to users.

 

“The New York Times has already produced a good example of this with its Glass app. I’d encourage brands to start experimenting with the Glass technology to understand the best ways to deliver their content to users in an appropriate fashion.

 

“Using Glass and having content pop up in your field of vision takes some getting used to, so having unwanted advertising content would be even more disconcerting right now. Putting the right branded content onto the Glass screen is more important than advertising for brands at the moment.”

There you go, shocker! Content is more important than advertising for a brand to remain relevant to a consumer. We’ve known this for a long time; Glass just confirms it once again.

Has the Role of the Google Glass Designer Reverted Towards an Art Director and Copywriter?

Will the lack of standard options throw the digital designer?

I recently ran a Google Glass hack day where ~40 developers and designers were tasked with pioneering new creative uses for Google Glass. One of my favourite parts of the day was when I briefly caught up with the head of design and asked how his team was doing. He said that his input as a designer using the Mirror API was limited, and his design choices were often reduced to choosing an image, a few words, maybe an icon or two, and the use of colour to get his key message across. This made me question: is the role of the Glass designer more similar to that of a traditional art director and copywriter than to that of a digital designer?

Google Glass currently only lets you design content through the Mirror API, where only cards of images, copy, icons, and colours are permitted. As Glass places content directly into your field of vision, choosing the right content is essential to staying relevant and, therefore, installed. This means the choice of image, copy, and iconography is more important than ever before, and a designer’s role is now more about choosing the perfect image, writing (or working with someone to write) the perfect short-form copy, and using iconography and colour in significant, meaningful ways on the tiny real estate.

Designing for the Glass Mirror API calls for considered, concise design choices. Whilst designing for a smartphone app could be compared to writing a blog post, with room for flourish and explanation, designing for Glass is like composing the perfect tweet: say only what is relevant, with minimal media attached, to get your point across in a couple of seconds.
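To give a sense of how small that canvas is, here is a rough sketch of what pushing a single card through the Mirror API looks like. This is a minimal Python example, not production code: it assumes you already have an authorised OAuth client with the Glass timeline scope, and the image URL, copy, and function name are placeholders of my own.

```python
# Minimal sketch: insert one static card into the Glass timeline via the
# Mirror API using google-api-python-client. `http` is assumed to be an
# OAuth-authorised httplib2 object with the glass.timeline scope.
from apiclient.discovery import build


def insert_card(http):
    mirror = build('mirror', 'v1', http=http)
    card = {
        # One image and one short line of copy - roughly the whole design space.
        'html': ('<article>'
                 '<figure><img src="https://example.com/hero.jpg"></figure>'
                 '<section><p class="text-large">One short, relevant headline</p></section>'
                 '</article>'),
        'speakableText': 'One short, relevant headline',  # lets Glass read the card aloud
        'menuItems': [{'action': 'READ_ALOUD'}, {'action': 'DELETE'}],
    }
    return mirror.timeline().insert(body=card).execute()
```

That really is the whole palette: a block of simple HTML, something for Glass to read aloud, and a menu item or two. Everything else comes down to the art director’s and copywriter’s judgement.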

Google Glass: The Next Era of Personal Computing

Recently I wrote up some initial impressions on using Google Glass, but Glass represents something so new that my impressions form and change every day. It has been very difficult for me to neatly arrange my thoughts for this post as I feel Glass represents so much and is, arguably, a culmination of Google’s finest work yet.

It has been widely discussed that Glass isn’t yet a final product and will evolve into something more sophisticated and powerful. The more I discuss this idea with the Somo innovation team and friends around the industry the more I realise how exciting and powerful Glass will be, so I wanted to take a stab at why I think Glass has such massive potential, and look at the impact it will have on our lives.

Google are changing the way we interact with computers

Glass represents some of Google’s finest work in one product; it follows a few recent significant strategic moves by the tech giant.

In 2012 Google changed the way it indexed information; the Knowledge Graph was created to understand the relationship and connection between objects, moving away from just searching for strings of characters. This helped Google to “understand the world in the way [we] do”. Cool.

2013, Google announces Conversational Search, utilising the Knowledge Graph to the fullest. Forget the fact that Google’s speech detection is as good as, if not better than, Siri’s; using the Knowledge Graph now allows Google to ‘hear’ questions you are asking it and respond in a human way. Ask Chrome ‘how old is Obama?’ and it knows you are very likely asking ‘how old is Barack Obama?’. Then ask ‘how old is his wife?’ and it knows that you are really asking ‘how old is Michelle Obama?’. Subtle difference, massive implications.

Again, 2013 and Google launches Google Now, their predictive assistant that learns from your behaviour across the web and serves you content that you want, when you want it. On Android this is amazing. Stick with it…

Later in 2013, Glass is soft-launched. Glass has a largely hands-free UI that enables the user to talk to it, and it talks back – a big step towards Natural User Interfaces (NUIs). Super.

Right. Glass does not perfectly implement all of the above yet, but Google have now released a device and technology that learns from you, serves you relevant content at the right time in a couple of unobtrusive ways, talks to you, lets you talk back, and understands you. Game. Done. Changed.


Wearable tech needs to be fashionable

Say what you want about Glass v1; some think it’s cool already, others do not. For wearable tech to go mainstream it needs to be cooler. It needs to be in the arms of some Tom Ford specs, not look like a Bluetooth headset on crack. No doubt Glass will be refined when released to consumers, but I believe we will see ‘now with Glass technology!’ claims on Tom Ford, Burberry, Oakley etc. glasses. It will become integrated into fashionable goods. A brilliant article on why wearable tech needs fashion to thrive is worth reading for a detailed explanation.

Should the iPod be Laid to Rest?

From ‘What Apple Didn’t Announce at WWDC’.

The Invisible iPod: It may be about time that Apple put the iPod in its rearview mirror. Like the Apple TV, the iPod barely saw a mention at WWDC yesterday other than as a part of total iOS installations. Apple released a new iPod a couple weeks ago to zero fanfare and hardly a press release. The iPod as a featured product from Apple is likely dead.

Does it make sense to kill off the iPod at this stage? Surely Apple can’t rely solely on the iPhone, iPad, and compatible cars for their newly-announced iTunes Radio to succeed.

The newly announced iPod

#IgotGlassed – My first impressions using Google Glass

We’ve just got our hands on one of two Glass units here at Somo. I’ve had a bit of a play around and thought I would jot down some initial impressions seeing as no one on the internet is talking about Glass…

How does it feel?

Weird. Low-res. High up in my field of vision. A wee bit sparse. Feels like Google Now for your face. Using it feels ok once you get used to it, but I imagined it fitting into my routine seamlessly so I could stroll along checking out the latest NYT stories – not so: it is very distracting when viewing content, and I wouldn’t consider it sensible or safe to carry on with your daily business while using it.

Do you like what it looks like?

What. A. Dick.

Do you? I think it looks like a computer on your face, and when the screen is on it looks like your devil-eyes glow red. However, in saying this, I am getting more and more used to seeing it around the office now. I think it looks like a great v1. It really makes this Wired article on wearable tech needing fashion to thrive make a lot more sense.

** EDIT

The ’tilt head up to switch on’ manoeuvre is both ridiculous and hilarious (when other people are doing it, not me). You can’t really use it entirely hands-free: at some point you need to touch it, which is annoying seeing as it’s on my face.

How easy is it to use? #speechdetection

It is easy. Even new gestures are pretty simple. The only thing that isn’t that ‘easy’ to use is the speech detection. It’s as good as any Google speech detection, which is pretty good, but quite a few times it picked up what I was saying incorrectly. It was also a surprise to get generic search results through when asking Glass to ‘find me a hotel’. I was served with the full search result (not sure if it was PPC or not), including such inspiring copy as “Find cheap hotels, discount hotels and last minute hotel deals at LateRooms.com – the Hotel Offers Experts. Book hotels & make hotel reservations online or by ...”. Not cool.

I’m using a shared Glass so have only used a few apps. The NYT app is pretty impressive, as it suits the use case of ‘give me small headline snippets of content’ and reads them aloud to you so that you don’t focus on the screen for more than a couple of seconds.

** EDIT

The speech detection is pretty annoying actually. Lots wrong. It also reminds me why being conversational, like Siri or Chrome’s recent conversational search, is important. When asked “how old is Obama?” it told me the correct answer. When I asked “how old is his wife?” it didn’t even register that I was trying to talk to it.

Overall…

It’s fun, new, innovative, unique, but a little limited at the moment. This will no doubt evolve into something really useful over the next year; it just very much seems like a v1 product. If Google had given these units out for free, or very cheap, then it would be awesome and completely worth it. However, paying $1,500 for one at the moment isn’t hugely worthwhile.

Will this replace my mobile? No chance. Our phones are really sophisticated, powerful devices that offer everything from ten-times-a-day utility through to Plants vs. Zombies. My iPhone offers me far too many rich, thoroughly thought-out, reliable experiences that I fear Glass, or wearable tech sans smartphone, will take years to match. I see Glass as a very ‘top-level’ gadget – something to give you a snapshot of info – where phones will remain the device of choice for longer, more in-depth experiences.

What mobile means to the Xbox One

I recently contributed to a piece on the Somo blog looking at the Xbox One announcement and what it means for advertisers and users. Because it was a little rushed, both in my thinking and in the urgency of posting the piece, I think my opinion comes off a little harsh. After a little time to reflect on both my opinions and those of my co-writer Naji El-Arifi, I felt I should add some comment to my initial reaction.

The Xbox One, controller, and Kinect

On my initial opinion:

First off, let me set the record straight – I’m a huge Xbox fan. My gamerscore is over 20,000 (in your face Naji). I used to work for both EA and Codemasters so I have a keen interest in the next generation of entertainment hardware. I was counting down the days to May 21st but am left just a little unfulfilled. Here’s why…

WHERE’S MY SMARTGLASS?

SmartGlass is a brilliant yet underused feature that allows you to control your Xbox with any tablet or smartphone. SmartGlass also displays content supplementary to the big screen, giving a really great built-in second-screen experience. This integration of mobile and console opens up opportunities for mobile interactions in the connected living room. I really want to see where Microsoft have taken this, as I believe it will be an emerging space for delivering mobile experiences. Currently, non-gaming activity accounts for around 40% of time spent with an Xbox; as we use the Xbox for TV and films, there is an opportunity here for brands to deliver supplementary content and engagement.

 

I’m sure we will hear more about what Xbox are doing with SmartGlass at E3, as it was only mentioned once at the announcement event. The reason I was disappointed we didn’t hear more about SmartGlass is that I think it is the perfect second-screen experience just waiting to be used. If I were Zeebox or Monterosa or similar I would be worried. Even Shazam for TV should be worried, as this can potentially do the same job. I can see SmartGlass becoming the most used input method for the Xbox One, more than the Kinect, as people start to watch TV, movies, and internet content through their Xbox. What do PlayStation have to compete…?

The original SmartGlass hasn’t really changed since it was launched. What does the Xbox One have in store for it?

IS IT ALWAYS-ON?

Whilst the ‘always-on’ rumours are still a bit murky, being strongly encouraged to be connected to the internet means the home screen of the Xbox One will be a prime position for advertising. I remember the long-gone blades UI with no advertising (except for my Discovery Channel-sponsored Gears of War theme). Nowadays the Xbox home screen is chock-a-block with branded advertising, and we can expect even more advertising, branded apps, and sponsored content to come.

The now-infamous tweet from a Microsoft creative director that sparked the ‘always-on’ backlash

This one is relatively straightforward. Internet connection required = advertising. Combine this with Kinect recognising your face and you have some really tailored advertising and content recommendations.

MOBILE XBOX

At the moment there isn’t much news at all on the mobile Xbox platform, except for an early discussion about how Ubisoft’s ‘Watch Dogs’ will enable mobile gamers to interact with friends’ games in real time. For some reason no one is talking about this and what it means for Xbox on mobile!

I expect to see Xbox games made available across mobile platforms, and I expect to see apps that interact with Xbox content in a meaningful way. I would like to hear about how games like Plants vs. Zombies can work across mobile and console, bringing the Xbox Live gaming platform to a much wider mobile audience.

This one’s a lot bigger than I can go into now… however, in short, Xbox Live is awesome and bringing it to mobile would be a huge win for Microsoft. I highly recommend reading the Watch Dogs piece referenced above, as it talks of mobile gamers influencing friends’ Xbox gaming sessions in real time. This will be huge when released, and I can see it becoming an essential part of every AAA game – a companion mobile app that keeps you playing on mobile when you are away from your Xbox. Eventually I see this moving to social platforms also.

On Naji’s opinion:

XBOX ONE: THE ONE PLACE FOR YOUR MEDIA NEEDS

Microsoft have played up this angle for the Xbox One: it aims to be the entertainment hub of the living room. Hopefully you will be able to have all channels going through the Xbox and integrate Netflix. The perfect solution for me would be if I could search for a programme on my Xbox One and it would rifle through Netflix, Sky and Lovefilm for me; I don’t care which service I watch a programme or film on, I just want to be able to watch it.

This is interesting when we add mobile into the mix (#SmartGlass). It’s obvious that Xbox want the living room, but what happens if you replicate the Xbox home screen on the mobile? Could you watch your content through the Xbox app on your phone? Do you organise your watch lists across Netflix, Lovefilm, and iPlayer in the Xbox app?

KINECT, WE CAN SEE YOU BETTER THAN YOU THINK

The most impressive piece of technology announced was the new Kinect sensor, which pulls in 2Gbps of data and so gets a very accurate reading of your environment. It can see up to six people and even the orientation of your extremities.

The hardware has also been upgraded and it now films in 1080p, which is far superior to the previous Kinect.

These improvements mean it is now technically possible to track a user’s facial expression, so you could see someone’s reaction to a particular advert or programme – provided Microsoft allowed you to pull that data. Users’ heartbeats can also be captured by the Kinect, which could be used to great effect in games.

Overall, I think these improvements are all good, but I’m not tempted to pre-order. I’m hoping they will announce more than just exclusive game titles at the E3 event in a couple of weeks. There has to be more to come.

My first thought on this was a sarcastic ‘wow, HD Kinect. Now my Xbox can track 200 points of my body fumbling another kick in Kinect Sports’. However, I like Naji’s point that it can now track emotional responses to content. Whilst this doesn’t relate directly to mobile, it’s interesting because this idea has been around for a while for smartphone front-facing cameras. It will be interesting to see how the idea is implemented successfully and how those learnings get translated to mobile.

In hindsight it seems so obvious – the Xbox One announcement was just the ‘hello, we’re here’ from Microsoft, giving a basic overview of what they are offering for the next generation of home entertainment. Over the next 5 months we will get the full low-down, with each and every feature detailed and discussed to gain maximum buzz and coverage, followed by 12–18 months of developers finding their feet and really innovating with the Xbox One’s multimedia and mobile integration capabilities.

I’d love to hear what you think about the Xbox One announcement in the comments below, especially how you think mobile impacts the console.

Rapid prototyping Google Glass – by an experience designer in Google’s X team

Google are starting to talk a lot about Glass, releasing a video yesterday entitled How It Feels [through Glass].

 

This reminded me of a great video by Tom Chi, an experience designer in Google’s X team, explaining how quickly they created prototypes for Glass.

via Only Dead Fish

 

It’s tempting to think that the prototyping for a project such as Google Glass would have been a complex, lengthy process lasting months if not years, but this short, charming talk from Tom Chi (experience designer in the Google X team) gives a fascinating insight into how their process of creation was greatly accelerated through rapid prototyping. The first prototype was built in an hour using coat hangers, a tiny projector and a piece of plexiglass. Subsequent prototypes took even less time and used materials as diverse as paper, clay, modelling wire, chopsticks and hairbands. From these models they were able to glean useful insights into the social awkwardness of gesture controls, which led to them dropping features that had been thought integral. As Chi says, “Doing is the best kind of thinking”. Fascinating.