As the hype around generative AI begins to subside somewhat, more reasonable questions are starting to emerge.
I’ve personally lived through three tectonic shifts in the way the world works during my adult lifetime: the public recognition of the web (starting in 1992), the ascendance of social media (since 2005), and the mobile revolution (since 2007). Each one of these technology breakthroughs – popularized through entrepreneurial effort – has fundamentally changed the way most humans live. Compared to the way life was before each of these, life afterward is radically different: entire markets and industries eliminated or fundamentally transformed, political landscapes reshaped, entire job classifications invented and destroyed. Most individual lifestyles have also been radically and irreversibly altered.
So I like to think I know what a tech revolution looks like. And as the bard said: “here we go again.”
Of course I’m not alone. The mainstream media sees it, the tech world sees it, and the entrepreneurial ecosystem sees it. Great minds, unions, and governments are already acting on the correct assumption that the revolution is upon us.
But, as with most revolutions, beneath the hyperbolic perseverating, the utopian promises, and the portents of imminent doom are the rest of us: the practitioners.
Tech revolutions don’t happen just because someone imagines them. They happen because real people build real, practical solutions using them to do things better, cheaper, or faster. Without the entrepreneurial class laboring in mundane, pragmatic ways, revolutionary technologies would remain in textbooks and academic papers.
So the question on my mind is decidedly less hyperbolic than “will AI take my job?” and more down to earth: how SHOULD AI (or, if you like, the wide range of large language models and machine learning that goes under the moniker of AI) work?
Great applications for AI are easy to imagine (the science fiction community has been doing it for generations). But as those of us in the practical tech/product/business world know, concepts don’t matter; execution does.
Asking the Past to Invent the Future?
As with most innovation, humans have a very natural tendency to try to imagine the future based on the past.
Back in 1996, some very smart people imagined that the way people wanted to use web technology was the way they watched television – from the comfort of their couches. The result was WebTV, a famously incorrect product concept that repackaged the web as a passive experience based on – let’s face it – half a century’s worth of precedent.
A few years later, during the early social media era, MySpace imagined that humans wanted a personalizable page – a lot like the web’s “home” page that had grown popular in the 1990s. It was a perfectly reasonable approach, yet one rapidly eclipsed by the less aesthetic and more transactional Facebook, Twitter, and Instagram.
So it’s natural to ask in 2023, as we consider practical solutions for AI: how should AI work?
AIs Are Our Friends. Right?
Recent AI experiences like ChatGPT have captured the imagination in part because of how human it feels to interact with them. It’s become traditional (already) to brand AI products with human-like names and even presume their gender: Siri, Alexa, and Bard all come to mind. This was intentional, of course, based on the subtle conclusion that humans want to interact with AI as they do with other humans. The result was voice activation and (now that we’ve been through the mobile revolution) chat. These certainly aren’t the only form factors envisioned for AI – the Japanese are convinced that the form factor will be mannequins, while Meta is placing a bet on everyone being willing to wear Oculus headsets and interact with virtual avatars. But it’s still early days, so who knows?
That said, I’ll leave the form factor question to those more qualified (and better resourced) than myself. What’s more relevant to me and the product AIs I’m seeing on the drawing board is the presumed relationship we have with our AIs.
Relationship Analogues
Anthropomorphizing AI makes sense. Many of us get angry with Alexa when she doesn’t understand our instructions. What interests me more is the metaphor for the relationship between me and what will certainly become a plethora of AI products and solutions. I don’t imagine there will be one model that triumphs over the rest; in fact, in Elon Musk’s world of AI, I’ll probably end up interacting with many different forms depending on circumstance.
The Omnipotentate
This analogue places the AI in a relationship of superiority and command. An Omnipotentate knows more than we do and tells us what to do. I imagine the Master Control Program from the 1982 masterpiece TRON. Call me crazy, but aside from masochists and enthusiastic authoritarians, I can’t imagine this model being widely accepted.
The Mom
A kinder, gentler variant of the Omnipotentate, the Mom has superior knowledge but phrases predictive actions as suggestions, and even pushes or nags you a bit to do what is best. Imagine this AI waking you up in the morning and anticipating your needs. This feels comforting for some situations, but it still places the human in a subordinate position.
The Smartest Bot in the Room
This is the Siri, Alexa, and ChatGPT model: “ask me a question and I’ll give you an answer” … one that is likely better than you could come up with on your own. While popular, the Smartest Bot in the Room doesn’t presume to predict, advise, or suggest … unless asked. This strikes me as a disappointing shortcoming given the potential of AI: doesn’t it limit the value of a “superbrain” if it only responds to what I ask it to think about?
The Concierge
This analogue blends the entirely responsive Smart Bot with the pleasant deference of the Mom. Imagine asking your AI a question and getting an answer along with relevant suggestions you hadn’t thought about.
The Co-Pilot
In some ways this is an AI whom we think of as an equal, but over whom we still exert some control. Imagine a doctor asking an AI if it concurs with her diagnosis, or a programmer asking an AI to clean up their code. They’re skilled and helpful, but they follow our lead.
What seems to emerge as one contemplates these analogues is a question: what do we want the power dynamic behind our AI interactions to be? AI’s ability to aggregate and synthesize more data than any human will soon be largely undisputed, and interfaces that mimic real human interaction are probably inevitable. The question is: which relationship model will make us poor humans most effective?