Will the lack of standard options throw the digital designer?
I recently ran a Google Glass hack day where ~40 developers and designers were tasked with pioneering new creative uses for Google Glass. One of my favourite parts of the day was briefly catching up with the head of design to ask how his team was doing. He told me that his input as a designer using the Mirror API was limited: his design choices were often reduced to choosing an image, a few words, maybe an icon or two, and the use of colour to get his key message across. This made me question: is the role of the Glass designer closer to a traditional art director and copywriter than to a digital designer?
Google Glass currently only lets you design content through the Mirror API, where only cards of images, copy, icons, and colours are permitted. As Glass places content directly into your field of vision, choosing the right content is essential to stay relevant and therefore installed. This means the choice of image, copy, and iconography is more important than ever before, and a designer's role is now to choose the perfect image, write (or work with someone to write) the perfect short-form copy, and use iconography and colour in significant, meaningful ways on the tiny real estate.
Designing for the Glass Mirror API calls for considered, concise design choices. Whilst designing for a smartphone app could be compared to writing a blog post, with room for flourish and explanation, designing for Glass is like composing the perfect tweet: say only what is relevant, with minimal media attached, to get your point across in a couple of seconds.
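That constraint shows up directly in the payload. A timeline card is little more than short copy, an optional image, and an action or two. As a rough sketch (field names like `text`, `html`, and `menuItems` are my recollection of the Mirror API v1 timeline item, so treat them as assumptions; the authorised HTTP call to post the card is omitted):

```python
def make_card(headline, image_url=None):
    """Build a minimal Glass timeline card: short copy, optional image, one action."""
    card = {
        "text": headline,  # keep copy tweet-short: Glass shows a few words at a glance
        "menuItems": [{"action": "READ_ALOUD"}],  # one clear action beats a long menu
    }
    if image_url:
        # Full-bleed image with the headline overlaid - the classic Glass card idiom.
        card["html"] = (
            f"<article class='photo'><img src='{image_url}' style='width:100%'>"
            f"<section><p>{headline}</p></section></article>"
        )
    return card

card = make_card("Flight BA117 boards in 20 min")
```

The entire "design" is one line of copy, an image choice, and an action, which is exactly why the role feels closer to art direction than screen design.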
Tim Dunn has just posted an interesting look back at the mobile futures piece he wrote for the UK government in 2010. At 30,000 words the original piece is a bit of a slog to get through, but it is actually a great way to learn a lot about mobile in a short time, even if some of it is no longer current.
What I like about Tim’s post is the summary of what is not covered in the 2010 piece that would likely make it into a ‘mobile futures’ piece if written today. When discussing potential topics for inclusion, Tim writes:
I welcome your ideas below the line, but as a starter for 10, perhaps we would be looking at:
- Natural interfaces – the growth of new intuitive ways to control the device, such as eye-scrolling and gesture recognition
- Socio-economic dimorphism of mobile adoption and behavior based on geography and class. Will we see 4G simply extend and deepen the rural ‘not-spots’ we already see in broadband coverage? And will open OS smartphones truly enable digital participation regardless of earnings?
- ID – I really don’t think we’ve scratched the surface of this. Smartphones should be able to carry secure and inviolable credentials such as passports and driving licences, but I don’t see much work in the field. Also, we should surely be able to scan or verify ID without the need for peripherals such as Square? Surely mobile will be able to deliver the vision of people like Dick Hardt as shown in this bravura performance from 2006
- Enterprise – at Roundarch Isobar we do huge amounts of work in the Enterprise space that would be mind-boggling to my European colleagues. But I still think there’s a long way to go in B2B and B2E, specifically with BYOD in mind. Microsoft guys such as Matt Ballantine are providing leading thinking in this space
- Location – this might seem like an old chestnut now, but the fact that mobile is, well, mobile, has not been mined to anything like its full potential. The capabilities have been very much held back by lack of physical infrastructure and lack of standardization, but payment and vouchering should now be on the up as business gears up to match consumer behavior in the converged world.
- Connected Devices – with the smartphone packing the same processing power that a mainframe could deliver not so long ago, your phone is likely to be the center of your own local cloud services before long with anything from your watch to your soccer team to (whisper it) your fridge hanging off it for processing power and network functions
Whilst all points are key, I would pull out enterprise and location in particular, and not necessarily for the obvious reasons. Whilst I believe topics such as the internet of things and connected devices are far more exciting, and will have a massive impact on how we use physical and digital products together, I can see first-hand the changes that mobile devices are driving in the enterprise world right now. Businesses are embracing BYOD and modern smartphones – the iPhone is the most used enterprise device in the US – and this has already opened up the possibilities of what can be delivered on business devices. Where does this go in a couple of years?
Location is interesting because of how it impacts content and UX. Whilst people argue about whether a mobile web site should include full site content or not, what is interesting to start to think about is how whatever content is included reacts to the environment in which it is used. How does a mobile site accessed on a 3G connection, near a car showroom, reached via specific Google search terms, react when combined with the same user's previous desktop behaviour on the same site? How does a supermarket mobile site react when it picks up chemical signals from a banana using sensors such as the Node?
Head on over to Tim’s blog to read the full article and get a load of useful links.
Valve have created an excellent way around the god-awful text input usually seen on a games console. I'm sure most of you will be aware of the pain of entering even simple text fields, such as your login details, on a PlayStation 3 or Xbox 360. This is a great bit of UI design that takes something as 'simple' as text entry back to the drawing board, creating a superior text entry method to the awful virtual qwerty keyboards we currently see on both the Xbox and the PlayStation.
Steam’s Big Picture text entry mechanic
See the new text entry method in action at 4:05 in the video below, via IGN.
Microsoft have just released a video at CES for their proof-of-concept IllumiRoom, which uses the Kinect and a rear projector to improve the user's gaming experience by lighting up the area around the TV screen and displaying contextually relevant content. As you can see in the video below, the projected content can be subtle, complementing the on-screen experience – see the falling snow at 0:48 – or more full-on and intrusive, extending the on-screen area of a game, such as the Halo example at 0:27.
At first this looks like a more sophisticated version of Philips' Ambilight concept (from 10 years ago!), something that never really caught on, but what caught my eye is Microsoft's recognition that user experience is much wider than what can be displayed on-screen. Extending content onto the area around the TV adds atmosphere and focus, and makes for a more immersive gaming experience. I have half-watched films whilst playing on my phone, browsing the internet on my laptop, or tidying the room, all of which had a detrimental effect on my view of that piece of content because I was not focussed on what a studio had carefully crafted. I love that IllumiRoom helps you focus tightly on the content you are consuming.
It’s worth remembering that what consumers experience when interacting with your product or content is only partially what is displayed on-screen.
Read more about the IllumiRoom here.
I love a bit of good mobile UI I do.
Just saw this cool example of a small tweak to the UI/IxD in the iPhone app Languages. Great job @jerols.
Check out the video, specifically from 40 seconds on, where you see a beautifully simple and well-executed idea: indenting the A–Z navigation as you scroll down, so your thumb is never in the way. There's also a really nice example of a folding side-nav in the video, as a bonus.