Useful Notes: Video Game AI
At first, in the good old days of cathode-ray tubes, 8-bit sprites, and 64 KB of RAM, good A.I. was considered to be any code that actually got the bad guys to do anything at all rather than simply stand frozen in one spot. Being able to recognize the player's existence, or to reach and attack the player without getting hung up on walls, was considered nice but not essential. However, as the complexity and realism of video games increased, so too did both the ability and the desire to make the computer-controlled opposition come across as a genuinely cunning adversary rather than a bunch of 1s and 0s carrying out pre-defined lines of code.

An important caveat to remember is that all video game intelligence is, by definition, artificial. Even the best modern game A.I. uses a variety of cheats to create the illusion of an intelligent adversary, when in fact it is a relatively limited system with a finite (and rather myopic) set of decision-making behaviors, microscopically small compared to the thinking capacity of even the most limited human player. However, good programmers will hide the cheating well, and really good programmers will create an A.I. that is genuinely capable of limited brilliance in its specific field of operation. For example, a good cheat is the "emergent behavior" of the Replica Soldiers in F.E.A.R., which gives the illusion of far more tactical complexity than they are actually designed for. A bad cheat occurs in some lesser FPS games, where enemy soldiers yell out "Give me covering fire!", "Flank him!", or "Retreat and circle back!" when they are not actually capable of doing any of these things and are merely programmed to move randomly and fire in the player's direction.

Another important caveat is that A.I. is not universal and cannot generalize concepts. A human can take what they learn from one game and apply it to another, or take knowledge from an only indirectly related subject and apply it to something else.
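The "bad cheat" of tactical-sounding voice lines backed by trivial behavior can be sketched in a few lines. This is a minimal illustrative state machine, not real shipping code; the class and method names are invented for the example. The point is how cheaply the illusion is produced: the bark implies coordination, while the actual action is chosen at random.

```python
import random

# Hypothetical sketch: an enemy that shouts tactical voice lines while
# its real behavior is a two-state machine that wanders or fires randomly.
class GruntAI:
    BARKS = ["Give me covering fire!", "Flank him!", "Retreat and circle back!"]
    ACTIONS = ["strafe left", "strafe right", "fire at player"]

    def __init__(self):
        self.state = "idle"

    def update(self, player_visible):
        if not player_visible:
            self.state = "idle"
            return "wander randomly"
        self.state = "combat"
        bark = random.choice(self.BARKS)      # the illusion of teamwork...
        action = random.choice(self.ACTIONS)  # ...and the reality of it
        return f'"{bark}" -> {action}'
```

A real squad system (such as the planner driving F.E.A.R.'s Replica Soldiers) selects actions to satisfy goals, so the barks and the behavior actually line up; the sketch above is the degenerate version the text is criticizing.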
Even if you've never played Halo or Civilization before, most people can draw upon past experience, common sense, similarities to other games, and human convention to demonstrate at least a basic level of competence. An A.I., on the other hand, must be taught how any given game works and, more importantly, how the mechanics that make that game unique work. Video game A.I. can thus be one of the most time-consuming development tasks: first the basic machinery (such as pathfinding) has to be implemented in the game engine, then the A.I. has to be taught the rules of the game, and only then can it be taught to play well. It doesn't stop there. As the power of computers has grown, games have had the potential to become much more complicated. People can grasp fairly complex and abstract notions, but designers have come to realize that while players can do this, A.I. can't (not yet, anyway, or at least not within a reasonable time frame and the budget available to game developers). Soren Johnson, lead designer on Civilization 4, noted that for any mechanic they put into the game, they had to be sure the A.I. would be able to understand and use it. Even relatively simple scheduling tasks can be an issue. One example he mentioned was amphibious assaults and sea invasions: building transports and ensuring that units arrive at the same time in the same place over sea turns out to be far harder for an A.I. than doing the same on land. Likewise, teaching an A.I. to handle Fog of War was difficult. Gaining and losing information while still inferring what you can't see and forming a strategy is easy for a human but much harder to implement in an A.I. There is also a debate over whether an A.I. should be fun or competent (though the two aren't mutually exclusive).
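The core arithmetic of the sea-invasion scheduling problem mentioned above is simple to state: if each unit's travel time to the landing zone is known, stagger departures so everyone arrives on the same turn. This is a hedged sketch with invented names, assuming fixed, known travel times; the hard part in a real 4X game is that those times keep changing as paths, threats, and production do, forcing constant replanning.

```python
def synchronized_departures(travel_times):
    """Given each unit's travel time (in turns) to a shared target,
    return how many turns each unit should wait before departing so
    that all units arrive on the same turn as the slowest one."""
    arrival_turn = max(travel_times.values())  # slowest unit sets the date
    return {unit: arrival_turn - t for unit, t in travel_times.items()}

# Illustrative fleet: the slow transport leaves at once, the rest wait.
delays = synchronized_departures({"transport_a": 6, "transport_b": 4, "escort": 2})
# -> {"transport_a": 0, "transport_b": 2, "escort": 4}
```

On land the same logic works per-unit with stable paths; at sea the A.I. must also coordinate building the transports, loading the right units, and re-deriving every travel time whenever the route changes, which is where the difficulty Johnson describes comes in.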
Should an A.I. play a role (acting as an equal player in a fair game, or pretending to be something like a monster, and thus doing things that may not lead to victory but are 'in character'), or should it play to win (trying to beat the player, even if that means acting 'out of character')? There is no single right answer; the appropriate mix varies from game to game, audience to audience, even player to player. All of this also suggests why games with heavy multiplayer components are a popular way to minimize costs: the cost of A.I. is minimal when opposing players supply the brains. This is one reason FPS games occupy a bigger niche than 4X games. FPS matches are played in short rounds, so finding online opponents is usually trivial, and the games can be designed without much care for multiplayer bots. 4X games often take many hours to complete to allow for strategic complexity; since it's hard to get a crowd of people to keep that kind of schedule, they are mostly played against bots, and the game is expected to have good ones that cheat gracefully. By forgoing the cost of designing a good multiplayer bot, an FPS can simply do more with the same amount of money. To be fair, humans in combat situations aren't usually doing any complicated reasoning either. Given the time pressure and adrenaline of combat, people generally fall back on conditioned responses and rules of thumb for snap decisions; that's why training is so important, after all. Conversely, in the context of video games, this trained human response can sometimes be less entertaining (though not necessarily less fun) and more predictable. One reviewer of Grand Theft Auto IV noted that they enjoyed the various multiplayer modes more when playing against bots, because the A.I. has fewer limitations on what it is willing to do.
They noted an instance where an A.I. fought a hectic gun battle on an I-beam suspended in mid-air, something few human players would even consider doing. Likewise, during the development of Left 4 Dead, Valve noted that although players and developers alike wanted wide-open maps, everyone would eventually settle onto one or two optimal paths anyway.