Novelist Naomi Alderman (co-creator of Zombies, Run!, which I wrote about last month) published an article for The Guardian in January 2016 on the subject of artificial intelligence (AI) in video games. She wondered whether it would really be that great when we reach the point of full AI experiences.
Alderman’s piece, set in 2166 when you ‘turn on your 27th generation XStationPlayBone’, was the first I read after being posed a question by The Gaming Diaries as part of a Unique Blogger Award nomination: if you could meet any video game character and ask them one question, who would you meet and what would you ask? This got me thinking and resulted in a post which has gone a little off-topic…
The quest for characters who can think and act in a more human way began in 1985 with Activision’s life simulation Little Computer People. I remember playing this with my brother on our family’s Commodore 64 when we were very young and being amazed. It may not seem like much by today’s standards but at the time it was almost magical: we could enter commands and ‘talk’ to our little person, and he’d respond accordingly.
Next came the first instalment of EA’s best-known series in the genre: The Sims. This virtual soap opera provided players with a household of intelligent characters who would form relationships and develop their own personalities. It became a best-seller after its release in 2000 and then displaced Myst as the top-selling PC game in history two years later, showing that gamers wanted more from characters than simply digital bodies to shoot at.
Michael Mateas and Andrew Stern released Façade in 2005 and I’d recommend downloading it if you haven’t tried it for yourself. Players are invited to Grace and Trip’s apartment for dinner but the timing is bad: an argument has taken place and it’s up to you to smooth things over (or make them worse). Typing sentences to ‘speak’ to the couple allows you to support them through their problems, drive them further apart or even get yourself kicked out.
All three of these titles were built around AI systems which created their own stories and featured human interactions that appeared to be real. Although they were incredibly innovative, the industry sadly had no use for them. Its focus was on AI that controlled non-player characters (NPCs) to provide a certain level of challenge to the player, and there’s no need for emotion or creativity when you’re a target for a crosshair.
But the idea of AI that can think like humans and make their own creative works intrigued universities and has led to developments such as ANGELINA by Michael Cook. This system can intelligently design video games, writing rules and finding ideas and assets from the internet, and I had the opportunity to see it in action at Rezzed in 2014 where I was given a floppy disc with one of ANGELINA’s games.
Works like this are fuelling contemporary research into emotional AI. Associate Professor Mark Riedl believes we’ll soon see characters that can ‘research and learn from human stories or actions in a game world and thereby work out how to act like humans’; but is it necessary to add this level of complexity? What will it mean to have AI characters who can ‘think’ for themselves and make decisions based on data not obvious to the player?
On one hand, this could be what’s missing from the non-linear open-world games we’ve shifted towards in recent years. Instead of NPCs who are there merely to populate the environment, characters with their own agendas and motivations could add a further level of realism to a game. Imagine a world where missions don’t only come from a scripted narrative; they emerge from the thoughts and desires of the NPCs around you.
Or think about an action genre where it’s no longer always necessary to resort to violence and gunfire to resolve a situation. Instead, you could choose to enter into a negotiation with your antagonist and use skills such as empathy, charm and persuasion as your new weapons. (Don’t worry, I’m sure there’ll still be some explosions in there somewhere.)
Game designer Aaron Reed said in an article for The Guardian: “In much the same way that playing with a simulation of fire or fluid dynamics can lead you to deeper understanding or quicker insights than simply reading about them, truly interactive characters potentially let us have a more immediate and intuitive kind of relationship with them than with characters in linear stories. And that is really exciting.”
But new relationships like these come with questions and potential ethical implications. As characters become more complex and humanlike, with their own beliefs and desires, is it moral for us as players to decide their fate? Will it be harder, or even wrong, for us to cause them harm and choose whether they live or die? And if that’s the case, at what point does it stop being a game?
These are open questions which require us to look closely at ourselves and which can’t be answered easily. I’ll therefore wrap up before my head starts to explode with all the possibilities and do one final thing: thank The Gaming Diaries for their very kind award nomination last month and answer the question which kicked off this post.
I’ve not had the opportunity to play many video games over the past few weeks and when I have had the chance, I’ve mostly returned to The Elder Scrolls Online because it’s just so easy to pick up. I’d therefore like to ask my Breton character why she’s capable of running down a steep mountain without getting hurt, but completely flummoxed when she comes up against a hedge. She really needs to work on her jumping skills.
Video game lover, Later Levels blogger and SpecialEffect volunteer. Big fan of wannabe pirates and fine leather jackets.