I was only able to make the morning session for UXCamp Dublin (20/02/16) but very glad I could. If you’re not familiar with BarCamp it’s essentially a flatter, more ad-hoc style of conference. Flatter in that it dispenses with much of the speaker / audience division (since anyone can be a speaker) and ad-hoc since much of the schedule is arrived at by consensus before kick off. Overall this UX event was very well run and the DIT Grangegorman setting was, despite the frankly terrible weather, perfect, so kudos to the organisers.
Two talks that caught my attention followed each other in the morning.
Padraig Mannion, UX Design Lead at IBM, gave the first talk of the day. Reminding us that it wasn’t long ago that designers shifted from print to digital, he predicted that over the next 5-10 years design will transition to a cognitive space. During the Machine Age, manual labour became progressively more automated, permitting production on a previously unthinkable scale. With the Digital Age, sets of complex calculations that would have taken a team of people weeks or months could be performed by a computer in seconds. Once we started representing the world in numbers, as Ada Lovelace had anticipated, the possibilities for computers became endless. In both cases tasks were outsourced to machinery (whether cog or semiconductor based) to accelerate the pace and scale of labour and innovation.
Now, heading into an age of Artificial Intelligence, where analytical and cognitive tasks will be automated by Machine Learning, we have to consider what that interaction will look like. Historically, Human Computer Interaction leaned heavily on metaphor to help users become accustomed to interacting with something that was, in the early days (60s and 70s), quite alien. The desktop metaphor employed by Douglas Engelbart and later Alan Kay acclimatised users by adopting a system of recognisable signifiers from the familiar office environment.
In much the same way, character will play a central role in setting the tone for our interactions with AI Bots, so that they feel less like HAL 9000 and more like a considerate acquaintance. It’s the reason Apple wrote those playful quips for Siri, and why getting it wrong will just feel like Clippy (1998 – 2004, RIP), the MS Office irritant.
We’re used to creating Personas for users, but how about writing them for Bots? Though still in its infancy, Slackbot adds character and tone to user onboarding. It doesn’t feel like you’re completing a form; instead, with Natural Language Processing, you’re having a conversation with a friendly, non-invasive assistant.
If we consider AI in health tech, for instance, what would a Bot’s persona look like if it was to assist an elderly diabetic with her medication? What tone would it use and how would that tone differ from the same Bot speaking with a teenager? As it grows to know the patient what would it say and what would it not?
These are all considerations visual designers make when they think about the type of end user they are designing for and how that informs colour choice or type, but how would we construct the character of an AI / machine-learned interaction? It might be similar to how a TV drama scriptwriter develops a protagonist’s interior and exterior persona and considers how she might deal with the situations in each episode: how would she react, what would she say or do, and, just as importantly, what wouldn’t she say? As a Bot learns its user’s behaviour, a clear understanding of that persona ensures the experience stays consistent.
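One way to picture this idea, as a purely hypothetical sketch (the `PERSONA` dict, the audience labels, and all the phrasing here are invented for illustration, not anything shown in the talk), is to treat the persona as an explicit, hand-written layer that sits between whatever the Bot has learned and what it actually says:

```python
# Hypothetical sketch: a persona as an explicit layer between a learned
# model and the user, keeping tone consistent as the model changes underneath.

PERSONA = {
    # The same intent, voiced differently for different audiences.
    "tone": {
        "elderly_patient": "Good morning, Mary. It's time for your insulin.",
        "teenager": "Hey! Quick heads-up: insulin time.",
    },
    # Things this character would never say, whatever the model suggests.
    "never_say": ["You forgot again", "You failed"],
}

def respond(audience: str) -> str:
    """Pick a reply in the persona's voice and check it stays in character."""
    reply = PERSONA["tone"][audience]
    if any(banned in reply for banned in PERSONA["never_say"]):
        raise ValueError("reply breaks character")
    return reply

print(respond("teenager"))
print(respond("elderly_patient"))
```

The point of the sketch is only that the "what wouldn’t she say" list is written down explicitly, so the character holds steady even as the learned behaviour behind it shifts.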
In a similar vein, Connor Upton, Senior Research Manager at Accenture, talked about ‘greybox’ optimisation algorithms, where a computer-generated solution is delivered as a suggestion rather than an immutable instruction.
In large manufacturing plants the ideal of ‘lights out manufacturing’ (as the name suggests, a fully automated plant) has long been pursued though rarely delivered. With so many variables at play, human intervention is often necessary even when efficiency improvements could be made by using a set algorithm to automate processes and schedules. Those who run the plants are often sceptical of computer-generated ‘blackbox’ optimisation for scheduling; that is, a machine outputting a schedule without visibility into how it was arrived at or any facility to adjust it. Joint cognitive systems design instead provides that visibility and presents outputs as suggestions which permit human intervention.
This is what Connor called a greybox system, where the first schedule is presented by the machine as a suggested optimal outcome based on the information available to it at the time. The key word here is suggestion, and it feeds back into Padraig’s talk on the character and tone of interaction.
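The suggestion-then-override loop might look something like the following sketch. Everything here is my own invention to illustrate the shape of the idea (the `Job` type, the shortest-job-first heuristic standing in for a real optimiser, and the `review` step are all assumptions, not anything from the talk):

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    duration: float  # hours

def suggest_schedule(jobs):
    """Machine side: propose an ordering. A trivial shortest-job-first
    heuristic stands in for a real optimisation algorithm."""
    return sorted(jobs, key=lambda j: j.duration)

def review(suggested, overrides=None):
    """Human side: the planner may move jobs before committing, so the
    machine's output stays a suggestion rather than an instruction."""
    schedule = list(suggested)
    for name, position in (overrides or {}).items():
        job = next(j for j in schedule if j.name == name)
        schedule.remove(job)
        schedule.insert(position, job)
    return schedule

jobs = [Job("anneal", 4), Job("cut", 1), Job("coat", 2)]
suggested = suggest_schedule(jobs)        # machine's proposed optimum
final = review(suggested, {"anneal": 0})  # planner moves 'anneal' first
print([j.name for j in final])            # ['anneal', 'cut', 'coat']
```

The greybox quality lives in `review`: the machine's ordering is visible and editable, and the committed schedule is whatever survives the human pass.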
Both talks were very interesting and came from perspectives I hadn’t previously considered. Very much looking forward to the next event.