I like big bots and I cannot AI
There's an awful lot of buzz at the moment about chatbots, conversational interfaces, and digital assistants like Amazon's Alexa - it seems like there is a steady march of AI into our lives, but is that a good thing?
For marketers and advertisers it may appear to be the beginning of a brave new world of opportunities to deliver messages to consumers in new and "innovative" ways, but that is a fundamentally naive approach.
In the rush to be part of this brave new world one very simple question gets overlooked - "what is the value exchange?"
It's not about a new way to get messages to consumers - it's a new way to communicate with them more naturally through the medium of conversation - to (inevitably) quote Marshall McLuhan, "The medium is the message".
The technology
It seems likely that AI could be the next big technological wave - access to computational power and large-scale data storage has been democratised, and all of the big players including Apple, Google, and Facebook are making big investments in AI.
It's pretty easy to create a basic chatbot - from a technical perspective at least - that can interact with your audience, and it's not too much work to create a basic "skill" for Alexa, but the real question from a user perspective is "why would I need / want / use that?"
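To give a sense of just how little is involved, here is a minimal sketch of the kind of Lambda-style handler that can back a custom Alexa skill, written directly against the Alexa Skills Kit request/response JSON envelope. The intent name is made up purely for illustration, and a real skill would also need an interaction model configured in the Alexa developer console.

```python
# A minimal sketch of a handler for a custom Alexa skill, built directly
# against the Alexa Skills Kit request/response JSON envelope.
# "GetValueExchangeIntent" is a hypothetical intent name for illustration.

def build_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    """Route an incoming Alexa request to a canned spoken reply."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Hello. Ask me about the value exchange.", end_session=False)
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "GetValueExchangeIntent":
        return build_response("Useful answers, delivered as a conversation.")
    return build_response("Sorry, I didn't catch that.")
```

The easy part is the plumbing above; the hard part is having something worth saying once the user asks.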
Despite the dire prognostications of the AI doomsayers and pessimistic singularitarians we are, from a consumer perspective at least, still very firmly in the early stages of popular applied (or "weak") AI - that is to say, we are seeing AI adopted in a number of different contexts, but applied to accomplish specific problem-solving or reasoning tasks.
First impressions
We are still a long way off achieving an idealised "Artificial General Intelligence" (or "strong" AI) that could successfully perform any intellectual task that a human being can achieve. In fact most consumer AI is, frankly, a bit crap.
I bought an Amazon Echo and have been tinkering with it at home to see what it can do - my conclusion is "not much, yet".
Alexa can play music from my Spotify account (but can't switch to my girlfriend's account when she's at home, despite us having both Amazon Household and Spotify Family accounts set up), read me the news (either via nicely pre-recorded audio or a slightly less well executed reading of content that isn't phonetically hinted well enough to sound natural), tell me the weather forecast, or even turn up my Hive heating system (which unfortunately turns off the existing heating schedule).
Being a technologist I can see the potential power and flexibility it could afford in the future, but at the moment it's largely a novelty with very narrow use cases - well-implemented connectivity to other products and services, plus an awful lot of cruft from everyone else who has jumped on the bandwagon with "me too" offerings.
In some ways that early limitation in capability is a good thing - it gives people the chance to experience and experiment with a new technology in a non-threatening way, without the shock and awe of something truly world-changing. This early-adopter advantage creates a level of affordance where consumers will come to actively expect a slightly poor experience, which gives the better implementations a veneer of superiority that sets them above the rest.
If you want to know how that can work, just look back at the original iPhone with its limited functionality and walled-garden approach, which had the technorati either scratching their heads in confusion or salivating with technolust - it opened up a whole new device category, a wildly successful app ecosystem, and elevated Apple's fortunes to previously untold heights. All that was just nine years ago.
The future
Chatbots and conversational interfaces as a nuanced, stateful means of communication between humans and automated systems are still in a nascent state, but progress seems to be gathering pace quite rapidly.
Computers are getting better at interpreting the nuances and peculiarities of the messy, natural language structures that we humans use. They're also getting better at understanding the different languages that we speak and accents that inflect them. They're even getting better at understanding context - it may seem like a simple thing, but the fact that you can ask Google's assistant "Who is Gia Milinovich?" and then subsequently "Who is she married to?" is much more natural than having to re-state her name, and makes the interface feel much more human.
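That contextual carry-over is really just conversational state being held between turns. The toy sketch below is nothing like Google's actual implementation - it is deliberately naive about pronoun resolution - but it shows the basic idea of remembering the last entity so a follow-up question can be resolved.

```python
# A toy illustration (not any vendor's actual implementation) of why keeping
# conversational state matters: the follow-up "Who is she married to?" only
# makes sense if the assistant remembers who "she" refers to.

class ConversationContext:
    """Remembers the most recently mentioned person across turns."""

    def __init__(self):
        self.last_person = None

    def remember(self, person):
        self.last_person = person

    def resolve(self, question):
        """Naively substitute a pronoun with the remembered entity."""
        if self.last_person:
            for pronoun in (" she ", " he ", " they "):
                question = question.replace(pronoun, f" {self.last_person} ")
        return question


ctx = ConversationContext()
ctx.remember("Gia Milinovich")
print(ctx.resolve("Who is she married to?"))
# -> Who is Gia Milinovich married to?
```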
There are some interesting ideas starting to come to the surface: in Facebook's significant investment in Messenger as a platform and some of the early integrations from the likes of Uber, in Google's new Assistant, which will appear on their new Pixel phones and in the Google Home device as it vies for supremacy with the Amazon Echo, in Siri's steadily improving capabilities, and in Microsoft's Cortana too.
Problems to overcome
At the end of the day I'm a well-paid professional geek with either a slightly over-developed sense of technological curiosity or (as seems more likely) magpie syndrome when it comes to shiny new tech. I think we're a little way off having a voice-activated digital concierge in every home (let alone that 60s sci-fi staple, the robot butler), and Alexa is a long way off adopting HAL's recalcitrant tendencies - the failings and frustrations of interacting with her are in the order of mild annoyance and rolled eyes rather than life-threatening obstinacy - but I do think we need to keep a weather eye on how AI is being used from a privacy, security, and ethical perspective.
We also need to bear in mind what happens as AI improves and gets closer to appearing human in its interactions with us - as they improve we will reduce the level of affordance that we permit them, expecting ever better performance of their duties and tolerating fewer errors or glitches, and as they move ever closer to the Uncanny Valley we will find them suddenly less human and much more troubling on an innate, emotional level.
