Games with Heart

Designing interactive experiences that elicit emotion

Talin
Nov 16, 2018

Sometime in the late 1990s (I don’t recall the exact year) I presented a talk at GDC (the Game Developers Conference) on ways to design games that would evoke strong feelings in the audience. I recently stumbled across a document on my hard drive, buried many layers deep in ancient folders and archives, that gives the content of that talk. I’ve cleaned up the text, and I present it here for your perusal and enjoyment.

I would also ask that you bear in mind that this was written at a time when the industry was in its formative stages, and many of my arguments are no longer relevant or have been overtaken by subsequent events.

Computer games, for the most part, are intellectual entertainments. While there have been computer games that have moved me emotionally, they are rare; whereas movies, books, paintings, music, and other works of art can do so on a regular basis.

I feel that putting “heart” into a computer game program is a goal that is worthwhile in both a humanistic and commercial sense. It is worthwhile because it taps into a different part of the brain than the one we have been entertaining up to this point. It is a daring and unexpected thing to do; certainly few in the game community seem to be even aware of it as an issue, let alone actively working on it. I’m sure that if you asked most other developers about it, they would claim that it couldn’t be done. But I’m not so sure.

One could argue that since compassion and love are mental attributes, and since computer technology isn’t anywhere near simulating human intelligence, the goal of putting these characteristics into a piece of software would require true artificial intelligence, and thus be beyond the limits of our current-day technology. However, a painting or a song makes do with even fewer CPU cycles and kilobytes than we have to work with. And the “Eliza” experiment shows that humans can easily be fooled into thinking that an intelligence is there when in fact it is not. It shows that the secret to creating realistic characters is to let the player’s mind do most of the work. After all, the player already wants to believe in the fantasy we have created for them. We don’t have to completely simulate real human behavior; all we have to do is avoid displaying obviously non-human behavior.

(For those who don’t remember, “Eliza” was the first chat-bot, developed by Joseph Weizenbaum in the mid-1960s.)

Unfortunately, most computer games today are not even as sophisticated as Eliza when it comes to simulating personality. Eliza allowed the user to express his or her thoughts freely, then picked out a few keywords and spit out some canned replies with certain word substitutions taken from the typed input. Most modern game characters are far less responsive. Certainly they speak eloquently enough (with professional writers supplying the script), but their total behavior boils down to “Push button A to make me like you; push button B to make me hate you”.

One could argue that all it takes is a good storyteller to make an interactive movie with true feeling. One might even believe that this isn’t even a design problem, it’s merely a script writing problem. I have several arguments against this:

First, storytelling isn’t essential to an emotional work of art. It is true that many great paintings, songs and poems tell stories, but many others that are equally great do not.

Second, the way that stories evoke feelings isn’t the only way to evoke feelings. Stories allow a third person to passively observe characters and, through their actions and experiences, develop sympathy for them. However, in an interactive experience we can take a second-person approach, and allow the player to “get to know” the characters by interacting with them.

Third, computer games are not books or movies. Computer game players expect different things from a computer game than they do from a book or movie. I have seen players who wanted nothing more than to deviate from the plot of a storytelling adventure as quickly as possible, and were frustrated because they couldn’t.

There are two parts to this goal of creating games with feeling.

First, we have to present feeling to the player; in other words, we must create the illusion that the actors in the game world are living entities in their own right, with independent feelings, and evoke sympathy from the player.

Second, we want the player not just to perceive emotion, but to act with emotion as well, encouraging him or her to act with compassion towards the characters in the game world.

Emotion from the Protagonist

In most stories, the character we sympathize most strongly with is the protagonist. As with all characters, it is the protagonist’s unique character traits and quirks that make him or her an object of sympathy and affection.

However, in many computer games, we have total control over the actions of the character. The protagonist, therefore, has no character traits of their own, but only the player’s traits. Self-love aside, it’s hard to sympathize with a robot who does nothing but what you tell them to.

It seems, therefore, that in order to have a protagonist that we can empathize with, we need to put some distance between them and the player. How can we do this and still allow the player the feeling (if not the reality) of complete interactive freedom?

The design choice for many games is to feature non-interactive ‘cut scenes’ where the player has no control. During these interludes, we get to see how the protagonist behaves when the player is not controlling them. However, this sudden “mode switch” can feel awkward and artificial, and occasionally frustrating to the player who wants to be in charge. Some game theorists have argued that there is a fundamental tension between storytelling and interactivity, a tug of war over who has control — the player, or the game author.

A different approach is to allow the protagonist some leeway in interpreting the player’s commands. Possibly the protagonist does what you tell them, but with their own unique style. The player’s choices may be limited to things that the protagonist might conceivably do. Some players will find the latter case too constraining and chafe against the restrictions of the author’s intent. However, at least it does allow the protagonist to preserve their character.

A variation on the above is to have one character who represents the player, and a bunch of independently-minded “associates” who travel with the player (sometimes temporarily), who are not under direct control but who can be strongly influenced.

(Interestingly, the game Divinity: Original Sin does something much like this, in which there’s a distinction between “player characters” and “followers”, where followers have motivations and backstories that are separate from the player’s.)

Another possibility is to show the protagonist’s character in ways other than behavior. Storytellers have developed a number of techniques for externalizing a character’s inner life, such as dream sequences, foreshadowing, and reflections of the character’s personality in the environment. These could be applied to a computer game world.

An experimental approach I have flirted with is visualizing the protagonist’s emotional state, representing their personality as graphical icons which can be manipulated by the player. It allows the player to “see inside the protagonist’s head” using a mental inventory of ideas and memories. It objectifies feelings by turning them into game tokens the same way we objectify physical matter by turning it into inventory items. However, while this was interesting to me as a game play mechanic, I eventually abandoned this approach because it did not fulfill the purpose of making the player feel, and in fact had the opposite effect of turning feelings into an intellectual puzzle.

Emotion from Other Characters

A potentially easier path is to create characters other than the protagonist for the player to empathize with. Ideally, the game should also give the player characters to hate as well as love. Not just monsters who attack blindly, but cruel men who need killing badly.

In some games, the character you control isn’t always the one who the story is about.

Of course, making the player empathize with non-protagonist characters has its own challenges as well. For one thing, the characters have to act realistically, or at least they have to act non-mechanically. But at the same time, they can’t be completely random, because it’s hard to be a friend to someone who is completely unpredictable. What the characters need is a kind of “animal familiarity” — as we get to know them, we get used to an uncertain yet comfortably predictable pattern of behavior.

So how do we create these patterns of behavior, and how do we present the character’s emotions to the player?

The most straightforward way to handle it is to use completely scripted characters, in other words ones whose actions are completely determined in advance by the game designer. This is an unsatisfying solution, since a human designer only has a limited lifespan to encode all the possible behaviors of the characters. And cutting down the number of behaviors results in a game whose interactivity feels impoverished.

The next approach would be to build some kind of “emotion simulator”. Each character would be sensitive to the actions of other characters (especially the player’s), and alter the internal state of their mind based on that input. This brings us to the problem of assigning emotional significance to the various actions that the player can do.

As an example, here’s a set of emotional triggers that were developed by the “OZ” project at Carnegie-Mellon:

Happiness: Goal Success, triggered when goal has been met

Depression: Goal Failure, triggered when likelihood of success is low

Hope: Potential Goal Success, triggered when likelihood of success increases

Fear: Potential Goal Failure, triggered when likelihood of success decreases

Gratitude: Assistance in reaching Goal, triggered when likelihood of success increases, directed towards agent that caused the change

Anger: Hindrance in reaching Goal, triggered when likelihood of success decreases, directed towards agent that caused the change

When there are no unmet goals, the result could be boredom, satiation, etc.

Using a system like this would require keeping track of each actor’s goals, knowing how likely each of those goals is, and knowing when the goals have been accomplished. Actors would also need to know which external agents can influence the likelihood of a goal occurring, and would need to keep a “memory” of such influences: essentially a simplistic ‘theory of mind’.
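To make this concrete, here is a minimal sketch of how the OZ-style triggers above could be wired to goal likelihoods. All class, field, and agent names here are hypothetical illustrations, not the actual OZ project code.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    likelihood: float  # estimated probability of success, 0.0 to 1.0
    done: bool = False

@dataclass
class Actor:
    name: str
    goals: list = field(default_factory=list)
    emotions: dict = field(default_factory=dict)  # emotion -> responsible agent (or None)

    def update_likelihood(self, goal, new_likelihood, agent=None):
        """Re-estimate a goal's odds and fire the matching emotional triggers."""
        delta = new_likelihood - goal.likelihood
        goal.likelihood = new_likelihood
        if delta > 0:
            self.emotions["hope"] = None            # potential goal success
            if agent is not None:
                self.emotions["gratitude"] = agent  # credit the helper
        elif delta < 0:
            self.emotions["fear"] = None            # potential goal failure
            if agent is not None:
                self.emotions["anger"] = agent      # blame the hinderer
        if goal.likelihood < 0.1:
            self.emotions["depression"] = None      # goal failure looms

    def complete_goal(self, goal):
        goal.done = True
        self.emotions["happiness"] = None           # goal success

# Usage: the player helps, then the orcs hinder, an NPC's goal.
npc = Actor("Miller")
rescue = Goal("rescue the lovers", likelihood=0.5)
npc.goals.append(rescue)
npc.update_likelihood(rescue, 0.8, agent="player")  # hope + gratitude toward player
npc.update_likelihood(rescue, 0.05, agent="orcs")   # fear, anger at orcs, depression
```

The “memory” of influences here is simply the agent recorded alongside each directed emotion; a fuller theory of mind would track a history of such events per agent.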

An emotion simulator needs to take input of some form from the player representing verbal communication between the emoting character and the protagonist.

From a simulator point of view, it would be simplest if the player communicated directly in terms of emotions (“Be nice to this person”, “Be mean to this person”) since they could be directly input to the emotional simulator without translation. However, I suspect that would be unsatisfying to the player, especially since they probably want to say things that have both an emotional and semantic content.

Hardest would be a full natural-language parser. Textual interfaces have pretty much gone out of favor anyway (and few of the old parser-based text adventure games had true “natural language” parsers; they just recognized a few keywords).

Another method is the “sentence menu” used by The Secret of Monkey Island and other adventure games, where the player chooses what they want to say from a list of complete sentences. Unfortunately, unless the number of sentences is large, the player will be restricted in what they can say.

There is also the “noun menu / verb menu” where the character can pick a noun phrase and a verb phrase from separate menus. This has more possibilities for interactive freedom. The menus can be either textual or iconic.

Finally, there is the “context-sensitive” speech menu, like the one used in Spaceward Ho!. In this case, the user constructs a sentence, word by word, by clicking on a menu of either text or icons. Each icon chosen causes the menu to change such that only choices which are appropriate for the next word in the sentence are shown. The complete sentence, as constructed so far, is shown on another area of the screen. Although many people didn’t like the Spaceward Ho! system (especially for communicating with other human players), I think it could be improved and that it has potential worth exploring.

Once the emotions have been simulated, they need to be presented to the player somehow. The primary conduit of these emotions will be verbal communication.

I mentioned earlier the need to avoid obviously non-human behavior. One of the most common machine-like behaviors noted in game characters is utter consistency. Real humans never do the exact same thing twice, whereas a game character generally gives the exact same response to the exact same stimulus. At best, they might pick one of several random responses.

In order to avoid repetition, the character must be capable of a large variety of responses. Many of these responses can be minor variations on one another. However, the number of responses needed for convincing authenticity is probably too large to create by hand.

One approach would be to devise a way to allow subtle “tweakings” of a pre-written sentence. An author would figure out what the character should say in rough form, and then the algorithm would add minor variations to this.

Another method would be natural-language generation. In this case, each “speech event” is represented internally as an abstract idea, such as:

(event-occurred:
  (pick-up
    (actor: Justin)
    (object: Sword)
    (2nd object: Table)))

This abstract idea structure would be translated into English text, such as “Justin picked up the sword from the table” or “The sword on the table was picked up by Justin.”
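A minimal version of this translation step can be done with templates chosen at random per action, which already yields the two renderings above. The event encoding and template set here are illustrative assumptions, not a real generation system.

```python
import random

# Hypothetical templates per action; a real generator would also handle
# tense, pronouns, and agreement rather than raw string substitution.
TEMPLATES = {
    "pick-up": [
        "{actor} picked up the {object} from the {second_object}.",
        "The {object} on the {second_object} was picked up by {actor}.",
    ],
}

def generate(event):
    """Render an abstract speech event as an English sentence."""
    template = random.choice(TEMPLATES[event["action"]])
    return template.format(**event)

event = {
    "action": "pick-up",
    "actor": "Justin",
    "object": "sword",
    "second_object": "table",
}
print(generate(event))
```

Picking among templates (and among synonyms within them) is also one cheap way to get the non-repetitive “minor variations” discussed above.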

Unfortunately, natural-language generation completely breaks down in the face of digitized speech in the form of audio recordings. Most game players now expect a CD-ROM game to have speaking parts for the actors, however it is currently not possible to splice together a complete sentence out of individual digitized words and have it sound anything like natural speech. It probably is possible, however, to splice in one or two words into a sentence, if you could find voice actors who were capable of precise control of intonation.

Another conduit of emotional signals is facial expressions. Currently, most games either do not support facial expressions, or use pre-drawn bitmaps for every possible emotive state. (Note: this is no longer true.) It might be possible, however, to use “morphing” technology to create a range of expressions algorithmically. This would allow subtle variations in expression.
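The morphing idea amounts to interpolating between expression keyframes so that any intermediate intensity of an emotion can be displayed. Below is a hedged sketch using linear interpolation over named facial control points; the control-point data is hypothetical.

```python
def lerp_expression(neutral, target, t):
    """Blend two expressions; t=0 is neutral, t=1 is the full expression."""
    return {
        feature: tuple(a + (b - a) * t for a, b in zip(neutral[feature], target[feature]))
        for feature in neutral
    }

# Control points as (x, y) offsets from the neutral face.
NEUTRAL = {"mouth_l": (0.0, 0.0), "mouth_r": (0.0, 0.0), "brow": (0.0, 0.0)}
SMILE   = {"mouth_l": (-1.0, 2.0), "mouth_r": (1.0, 2.0), "brow": (0.0, 0.5)}

half_smile = lerp_expression(NEUTRAL, SMILE, 0.5)  # a subtle, in-between smile
print(half_smile["mouth_l"])  # -> (-0.5, 1.0)
```

Varying t continuously over time also gives you animated transitions between expressions for free, rather than snapping between pre-drawn bitmaps.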

Emotions can also be signaled through movement and action. For example, a character walking briskly, looking at the clouds, is probably feeling something quite different from one that is plodding along, shoulders hunched, eyes lowered. Unfortunately, animation is expensive, and we can’t afford to create two different walk cycles for everybody. We might be able to do something with the speed and direction variables, however: for example, making the character walk in a meandering path as opposed to a straight line, or walk faster or slower.

Although it may seem counter-intuitive, it is actually easier to convey feeling in a simple, cartoony or non-realistic world than one that has a high degree of visual fidelity. The reason is that viewers are much more likely to forgive errors of presentation in a lo-fi world, and it also avoids the “uncanny valley” where characters that look almost-but-not-quite-human come off as creepy.

Emotion from the Player

Encouraging the player to act using feelings will be a difficult task. Most players are conditioned by their experience with other computer games to act with ruthless calculation. So what if a few people in your Sim City or Civilization game are miserable? As long as the city is prospering and growing, who cares about a few malcontents? That’s the kind of hard choice a good leader must make, but for computer game players, there’s nothing hard about it at all.

Let’s assume, for a moment, that we had a magical subroutine that could give a moral analysis of the player’s actions. It could look at the player’s previous moves and determine if the player’s behavior was heroic or cowardly, compassionate or cruel, curious or bored, etc. Note that such a subroutine is at least theoretically possible in a micro-world simulation, with a context more limited than the real world, because the number of variables and permutations is microscopic compared to the real world.

We can’t read the player’s mind, of course — the player could be feigning kindness, for example. To a certain extent that is always going to be true; the player is never going to feel the same affection for a bunch of pixels on a screen that they do for real people (at least, we hope not)! However, if a player were “role-playing” kindness, in other words “pretending” to be kind using their own understanding of that feeling, that would be enough, I think, to create the entertainment experience we seek. What we want to avoid, however, is the player who second-guesses the game, reducing the act of kindness to a mere mechanical formula, such as “always choose the first menu option”, or “give a gold coin to everyone who looks like a beggar.”

It’s a lot easier to rigorously define evil than good. A character who goes around killing everything and rifling through the dead bodies can safely be classified as an unprincipled brute. (It would be interesting to allow the player the choice to engage in more “civilized” evil, such as exploitation or enslavement. Note that the person controlling the player might be a perfectly decent person either way.)

“Good” behavior, however, is not just the absence of evil. A truly good person is one who suffers, or risks suffering, to negate some evil. A hero is one who risks death or pain to defeat a villain.

Can the player be a hero? Heroes conquer their fear, but the player has no fear to conquer. The player isn’t afraid to fight — he likes fighting. He laughs at death (I know, I’ve seen it). He wants battle, else he wouldn’t be playing the game. And even if he didn’t want battle, he’s only risking his character’s neck because he knows he’ll be rewarded at the end. He’s not attacking that army of orcs because he’s afraid they might burn the village, he’s attacking them because they might have some loot on them. Even if we get rid of the loot, and instead reward the player by keeping track of “good” deeds (trading them for some ultimate reward later) the player knows this, and unless they are a natural role-player, they will only be risking the battle because of this hidden tally.

What happens if we get rid of the reward entirely? What if we get rid of all feedback, so that the sole reason for battling the army of orcs is the player’s own internal motivations? His only reward is seeing the smiling faces of the peasants for a moment. I don’t know if most players out there would be satisfied with this. A compromise might be to make the reward a subtle one, that develops slowly (deferred gratification), but one that will be deeply satisfying when it comes to fruition.

Another possibility is to keep the reward, but to alter the nature of the risk. Since the player can’t really be hurt (physically), what does the player have to lose? There are only these things: the time they spent playing the game, the game tokens which the player traded their time for, and the sense of fun received in the process of doing so. If the player had to work especially hard for some object or capability, they may feel an “attachment” towards that thing — they might feel very bad about losing it.

Suppose, then, that we let the player know in advance that there’s a good possibility that their character will lose some valuable capability if they attack the orcs — whether they win or not. Now, at least, the player has the possibility of being heroic. He is giving up something of real value to him (his time) to defeat an evil force.

Of course, you can’t really take anything away from the player, since many players will just restore from a saved game anytime they lose anything. One suggestion is to “bribe” the player not to restore the game by accompanying every loss with an associated “consolation prize”.

My co-worker Robert Wiggins suggests that the ability to save and load at will makes all of the actions within the game environment meaningless. However, the players seem to demand this feature — can we possibly convince the buyer of the product that restricting saving and loading is a good thing?

Here’s a possible scenario: Your character is travelling through the game world whereupon you meet a pair of young lovers. If you go up and talk to them, they will greet you. If you tell them about your travels, they will give you a useful gift, for no reason at all except the goodness of their hearts and the fact they are happy and enjoying life so much right now. If you return later, you may find them doing various things, but always together.

(The reason for the gift is twofold: first, to set up a karmic imbalance that might be subconsciously noticed by the player, so that they will later feel obligated to repay the favor; second, to reinforce the player’s endearment to the happy couple.)

Later, however, you discover that one of the lovers has been killed by an evil power, and the other one is lost in hopeless despair. Or perhaps the lover has been enchanted or taken prisoner. Either way, there is no chance that the one left behind could ever free the captive, and they are convinced that no one else can either.

If the lost love is truly dead, then the player may choose to find some way to console the one left behind. This gives us a passive, more conversational adventure.

If the lost love is not truly dead, then the player might choose to right what has been wronged. This gives us an active, more deed-based adventure.

In either case, we make it clear to the player that the costs will be high, and the rewards chancy. This frees the player to make the decision apart from all material considerations, and purely on emotional grounds.

My goal in writing this is not to give all the answers, since I don’t have them. However, I hope that the questions I have raised will be interesting, and perhaps will inspire some creative solutions.
