Hey guys,
During the development of People of Aionos's AI system, I ran into quite a few problems to solve, and I thought I'd share what I've learned in case anyone runs into the same kind of trouble. I will write a series of blog posts, alongside the game's updates, about the different problems I've encountered while putting together a game AI in Unity, starting with AI perception.
An AI perception system simulates the senses of a real person, usually sight and hearing, and transmits the relevant information to the decision system. Both senses need a way to detect whether something of interest, mainly the player, is in range of the NPC, but to be seen the object also has to be in the NPC's Field of View (FoV) and Line of Sight (LoS). We can solve this in Unity in three different ways: colliders, lists or a 3rd party solution.
- Colliders: With this option, you simply attach a sphere collider, set as a trigger, to every NPC and write a script that reacts in the OnTriggerStay callback (see the first sketch after this list). Unity's tutorial section has a good tutorial on it right here. This option is quick to implement and doesn't require a lot of code, but I found a problem with it while testing People of Aionos on an older computer.
- Lists: With this solution, you keep centralized arrays or lists of every object that might be of interest to an NPC (player, munitions, sounds, etc.). Once that is done, it's a simple matter of searching the content of those lists in the Update function and testing each object for distance, FoV and LoS (see the second sketch after this list). This option takes more time to develop and more code to manage than the colliders option, but it was my solution to the performance issues I faced with the colliders.
- 3rd party solutions: There are plenty of great, usable solutions on Unity's Asset Store. Most of them bundle other AI systems as well, like navigation, decision making and communication between NPCs. I will not go through the entire list of solutions and tell you which one is the best, as I have found this choice to be a matter of personal taste between code clarity, support, functionality, APIs, documentation, etc. But I will point out something to consider for any of these solutions: yes, they will do a lot of the hard work for you, but you might have to modify your code to make them work properly, and they will take time and energy to master.
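To make the collider option concrete, here is a rough sketch of what such a script can look like. It is not the actual code from People of Aionos; the class name, field names and the "Player" tag are just illustrative assumptions.

```csharp
using UnityEngine;

// Sketch of the collider-based approach: a sphere trigger on the NPC
// reports anything tagged "Player" that stays inside its radius.
[RequireComponent(typeof(SphereCollider))]
public class TriggerPerception : MonoBehaviour
{
    public float sightRadius = 10f;

    void Awake()
    {
        // Configure the attached collider as a trigger covering the sight radius.
        var sphere = GetComponent<SphereCollider>();
        sphere.isTrigger = true;
        sphere.radius = sightRadius;
    }

    void OnTriggerStay(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            // The player is in range; FoV and LoS checks would go here
            // before notifying the decision system.
            Debug.Log("Player in range of " + name);
        }
    }
}
```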
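And here is a minimal sketch of the centralized-list approach, again with illustrative names only: it assumes a simple static registry (here called PerceptionTargets) that objects of interest register themselves in, and an NPC script that scans it every frame and applies the distance, FoV and raycast LoS tests described above. The field names and the obstacle layer mask are assumptions, not the game's actual setup.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Central registry of everything an NPC might care about (player, munitions, sounds...).
public static class PerceptionTargets
{
    public static readonly List<Transform> All = new List<Transform>();
}

public class ListPerception : MonoBehaviour
{
    public float sightRange = 10f;
    public float fieldOfView = 110f;   // total cone angle, in degrees
    public LayerMask obstacleMask;     // geometry that can block line of sight

    void Update()
    {
        foreach (Transform target in PerceptionTargets.All)
        {
            Vector3 toTarget = target.position - transform.position;

            // 1. Distance check.
            if (toTarget.magnitude > sightRange) continue;

            // 2. Field of View: is the target inside the vision cone?
            if (Vector3.Angle(transform.forward, toTarget) > fieldOfView * 0.5f) continue;

            // 3. Line of Sight: does any obstacle block the ray to the target?
            if (Physics.Raycast(transform.position, toTarget.normalized,
                                toTarget.magnitude, obstacleMask)) continue;

            // Target is seen; hand it off to the decision system here.
            Debug.Log(name + " sees " + target.name);
        }
    }
}
```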
I personally toyed with all three of these options before finding the right one for me. I started with what I found in every tutorial online: colliders. Sadly, while testing People of Aionos on a weaker system than my personal computer, I found that perception colliders attached to many NPCs at the same time, 15 or more, caused huge slowdowns and performance issues. It took me days to track the problem down and discover that all of those sensory colliders close together were the cause of the performance dip. Along the way I learned a lot about optimizing an AI, which I will share with you soon in another post.
Since colliders were not a viable solution, I decided to test the centralized lists and the 3rd party options. I found the list solution easy to implement and free of performance issues, and it was satisfactory for months, that is until I needed more complex AI decision-making options. I tried a state machine, but it wasn't a realistic option, so I turned to Behavior Trees, which worked marvelously. Finally, it was while searching the Unity Asset Store for a Behavior Tree solution that I stumbled upon Rival Theory's RAIN. After some getting used to, I switched my NPCs' perception system to RAIN's, which has great performance, visualization and support.
I’ll be back next week with another post about the progress made on People of Aionos and a little bit more about its universe.