Working with emotional models in an artificial life simulation

There is a limited understanding of emotions and of the purpose of implementing them in artificial intelligence and artificial life software. Emotions are a controversial topic even in philosophical and psychological terms; however, research shows that emotions may be beneficial in adaptation processes and goal selection.

The aim of the project was to create a testbed for an artificial life simulation in which emotional models could be observed in action, as well as to provide a method of monitoring and interacting with the simulation. The importance of merging emotions and artificial life together is that it provides a method for testing emotional hypotheses; this may include the underlying mechanisms of emotions observed in nature, as well as the study of emergent emotional phenomena. Artificial emotional models can be used to improve and adapt goal management in dynamic, social and resource-limited environments, in order to improve the survival rates of agents in a simulation (Cañamero, 2001).

The emotional model implemented in this project provides four basic emotions of happiness, sadness, anger and fear; these are affected by input from sensations such as hunger, pain, and crowding. Weighted sets of corresponding feelings are computed, which may also be influenced by the model's hormone system; the hormones act as a method of capturing and storing the emotional state over time. The model calculates the strength of the emotions using the intensity of the feelings, and selects a dominant emotion from the set, which in turn is used to select the primary action and goal the agent should focus its attention on.

The simulation provides a designed environment that tries to mimic the behaviour observed between predators and herbivores in nature. The success of the emotional model is examined through survival rates and how individual agents cope in a resource-limited, shared environment. The project proved successful in highlighting that artificial emotional models may be used as a means of adding value to the goal selection process in artificial intelligence.


The following is taken from a report written for my BSc (Hons) Computer Science Project at the University of Hertfordshire. The project was based on the work in ‘Robot Learning Driven by Emotions’ (2001) and I refer to it many times throughout the paper as ‘robot learning’.

Acknowledgments

I would like to thank Dr. Thiago M. Pinto (Computational Neuroscientist) for his role as supervisor for my project, as well as my friend Timothy M. Bristow (PhD student in experimental particle physics) for the help he provided in understanding some of the cryptic mathematical symbols in this work.

I would like to thank the following users whose work helped me at some point during the project:

  • The excellent Box2D tutorials at www.iForce2d.net
  • Robert Hodgin, who created the Cinder beginner's tutorial, which helped me get started with swarms and the Cinder framework
  • Stephan Schieberl, whose code helped me with one of the biggest hurdles: getting the mouse position in 3D space
  • Bobby Anguelov, my undergraduate lecturer and artificial intelligence tutor, whose notes helped me refresh my memory on neural networks

And finally the Cinder library and Box2D developers ☺, and thanks to any addon developers that I forgot to mention (sorry).


Introduction

In a study of the problems facing the research of emotions in the artificial intelligence and artificial life fields, the researchers showed that there are three main problems with the current state of artificial emotion development: the first is a lack of scientific frameworks to work with; the second is that research results are not trustworthy; and the third is a lack of comparison between research projects (Freitas, Gudwin, Queiroz, 2005).

The lack of comparison between projects is an important factor to consider; the research done in these areas should aim to prove that artificial emotions are in fact beneficial in solving problems. In the context of this project: how can we prove that these simulated emotions add value to the simulation? Research in artificial emotions should aim to prove that an agent with artificial emotions is better adapted than one without them (Cañamero, 2001).

In a simulation, the term adaptation refers to the survival of the agent and how well suited it is for existence in the environment it inhabits. Adaptation can take the form of a short-term, rapid reaction to a stimulus, or a long-term, slower-evolving adaptation; when this occurs at the species level it is seen in terms of evolutionary adaptation.

Lola Cañamero also argues that animats are a good means of studying the link between an agent's motivation and its behaviour. Being able to categorise goals as positive or negative helps determine which actions are worth pursuing.

The motivation behind the development of this simulation is to provide insight into how the agents select their behaviours; this includes the ability to view an agent's current emotional state and the state of the neural networks that determine the priority of actions, as well as a way to monitor these changes over time. The ability to observe how an emotional model affects the simulation may provide a convenient method for testing and evaluating its effectiveness, as well as monitoring the simulation for emergent properties.

The testbed for the simulation was created using C++, the Cinder creative coding framework, and the Box2D physics engine. At the moment the testbed is stable, although quite simple, and would provide a solid platform on which more complicated simulations and emotional models could be built.

The core requirements of the project specification included creating a running simulation based on the predator and herbivore metaphor described above, providing a basic emotional model for testing, and providing a method for monitoring the emotional state of agents in the simulation. Users should also be able to modify and change the parameters of the testbed and emotional model during the simulation.

Testbed

Physics

1. Box2D

I have chosen to use the Box2D physics library in order to incorporate physics into the simulation, which should aid its realism. This helped create a map that can use water, boundaries, movable rocks, and so on. The library also provides many of the complicated collision detection features I have made use of in the simulation. The physics engine is robust and well designed with many comprehensive features, many of which may prove useful if the project is developed further. The engine can support motors, joints and ropes; the ropes can be used to create meshes, chains, webs, and flexible boundaries, or to connect multiple physics bodies together. The motors can provide a force to fixtures, joints and ropes, with control over the force, duration and torque applied.

At the moment, the movement of an agent's physical body is controlled by applying a small impulse force to its centre of mass, in the direction the agent would like to travel. It may be possible in the future to create creatures with legs, wheels, tracks or tails, with motors that may provide organic movement similar to that of animals. The engine also provides support for ray casting, which could provide a method of long-distance vision, or a sonar system that detects rays reflected back towards sensors on the agent. Box2D also has support for bullets and breakable physics bodies, which could be implemented as a form of long-range attack and provide realistic damage effects; even the environmental elements could be damaged. There are many possibilities; however, as the name suggests, the engine only supports physics in two dimensions.

It would also be exciting to see whether it is possible to evolve creatures using genetic algorithms: evolving agents with different traits and observing how they survive in the world. This would be similar to the work done by Karl Sims in his artificial life project, in which phenotype creatures are evolved in order to swim, walk, jump and move around a 3D environment (Sims, 1994). I have avoided these topics in my project, as they would add an extra layer of complexity and computational strain on the system. Throughout the project I have tried to lower the amount of processing power required, as the emotional model is already computationally intensive.

2. Collisions

debug mode

The debug draw of the physics system is shown here; you can see the circular sensor areas of each agent, and the green Box2D physics elements (rocks, water, food, world boundary).

Each agent in the simulation has a circular sensor around it, which is used to detect collisions in the physics engine. Each agent only stores a list of the other physics bodies in the vicinity of its sensor: when another entity enters the sensor it is added to a vector list the agent owns, and when an entity leaves the sensor it is removed from this list. The sensor is able to detect the type of body that has entered it, which may belong to the set {boundary, food, herbivore, predator, water}. The sensor acts as the vision of the agent, and the different types of entities within it affect the sensations the agent has at a specific point in time. Another benefit of this approach is that I can fire an event whenever a specific type of agent enters or leaves a sensor area. For instance, when a predator enters the sensor area of a herbivore agent, the herbivore can immediately get a bump in its sensations in order to increase fear.
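
To make this concrete, here is a hedged sketch of how a Box2D contact listener could maintain each agent's neighbour list; the class names, the Agent methods, and the assumption that a tag struct is stored in each fixture's user data (pre-2.4 Box2D, where user data is a void*) are my own illustrations, not the project's code.

```cpp
#include <Box2D/Box2D.h>
#include <vector>
#include <algorithm>

class Agent {
public:
    void addNeighbour(Agent* other)    { neighbours_.push_back(other); }
    void removeNeighbour(Agent* other) {
        neighbours_.erase(std::remove(neighbours_.begin(), neighbours_.end(), other),
                          neighbours_.end());
    }
private:
    std::vector<Agent*> neighbours_;   // entities currently inside this agent's sensor
};

struct FixtureTag {        // assumed to be assigned to fixtureDef.userData at creation time
    Agent* owner;
    bool   isSensor;       // true for the circular sensor fixture, false for the body fixture
};

class AgentContactListener : public b2ContactListener {
public:
    void BeginContact(b2Contact* contact) override { handle(contact, true); }
    void EndContact(b2Contact* contact) override   { handle(contact, false); }

private:
    void handle(b2Contact* contact, bool entering) {
        b2Fixture* a = contact->GetFixtureA();
        b2Fixture* b = contact->GetFixtureB();
        notify(a, b, entering);   // a's sensor may have seen b's body
        notify(b, a, entering);   // and vice versa
    }

    void notify(b2Fixture* sensorFixture, b2Fixture* otherFixture, bool entering) {
        auto* sensorTag = static_cast<FixtureTag*>(sensorFixture->GetUserData());
        auto* otherTag  = static_cast<FixtureTag*>(otherFixture->GetUserData());
        if (!sensorTag || !otherTag || !sensorTag->isSensor || otherTag->isSensor)
            return;                                   // only react when a sensor sees a body fixture
        if (entering)
            sensorTag->owner->addNeighbour(otherTag->owner);
        else
            sensorTag->owner->removeNeighbour(otherTag->owner);
    }
};
```

An event such as "predator entered a herbivore's sensor" could then be fired from addNeighbour to bump the relevant sensations.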

At each time-step of the simulation, an agent only has to iterate through this list of neighbours in order to perform the calculations for its emotional model. The ability of each agent to store a list of entities in its local area dramatically increased the speed of the simulation. The alternative would be that at each time-step every agent would have to iterate through every other agent in the world and compute the Euclidean distance that separates them. Performing multiple square root operations in order to find the distance to each agent is not ideal and would be too costly on the system.

The emotional model is already mathematically complicated, so I believe this feature was important in order to limit the work required at each step. It reduces the complexity from O(N²) to roughly O(N log N).

Before I discuss the collision detection system further, I must first explain that each physics body is made up of fixtures, which allow complex shapes to be created in the simulation. Each agent owns a physics body; this body acts as a container, which includes a fixture for the body polygon shape and a fixture for the sensor circle shape.

The collision detection and the collision category bits are separate from each other; each fixture may have its own collision settings. For example, a sensor circle may detect a boundary without crashing into it, so the sensor can pass through a boundary object, whereas an agent's body fixture may not pass through the boundary wall because its collision bits are set differently. This would also allow the testbed to handle flying or swimming objects in the simulation world.

For example, a bird may fly around and not collide with anything except a world boundary; in this state it may be seen as flying, and other entities in the world would not be able to interact with it unless they are also flying. When the bird comes down to land on the ground, its collision category bits are updated and other agents on the land may now interact with it. The same could be said for fish, which may not be able to leave a water boundary but may appear as if they are on land when close to the water's edge; an agent that is able to enter the water could update its collision bits when it does so in order to interact with the fish. For instance, a bear agent may be able to enter the water and eat a fish agent.
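
A minimal sketch of how this kind of filtering could be expressed with Box2D's category and mask bits; the category constants and the "flying vs. landed" helpers are illustrative assumptions, not the testbed's actual values.

```cpp
#include <Box2D/Box2D.h>

// One bit per entity category (illustrative values).
enum CollisionCategory : uint16 {
    CATEGORY_BOUNDARY     = 0x0001,
    CATEGORY_GROUND_AGENT = 0x0002,
    CATEGORY_FLYING_AGENT = 0x0004,
    CATEGORY_SENSOR       = 0x0008,
};

// While "flying", the bird only collides with boundaries and other fliers.
void makeFlying(b2Fixture* fixture) {
    b2Filter filter = fixture->GetFilterData();
    filter.categoryBits = CATEGORY_FLYING_AGENT;
    filter.maskBits = CATEGORY_BOUNDARY | CATEGORY_FLYING_AGENT;
    fixture->SetFilterData(filter);
}

// After landing, it becomes a ground agent that ground entities and sensors can interact with.
void makeLanded(b2Fixture* fixture) {
    b2Filter filter = fixture->GetFilterData();
    filter.categoryBits = CATEGORY_GROUND_AGENT;
    filter.maskBits = CATEGORY_BOUNDARY | CATEGORY_GROUND_AGENT | CATEGORY_SENSOR;
    fixture->SetFilterData(filter);
}
```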

As I explained earlier, the physics library is strictly a two-dimensional physics engine; however, by exploiting the collision filtering in this way it would be possible to create a pseudo-3D world, better thought of as a 2.5D world. We may only ever be able to view the world from a two-dimensional perspective, but this exploitation of the physics engine allows more complicated interactions to be used in the testbed.


Simulation

Overview

The ability to monitor the simulation in a graphical representation would allow emotional models to be observed and understood better. It will be easier to view emergent behaviour and observe shortcomings in a model if you can monitor it running across multiple map scenarios, while using a high-level overview of the system internals.

The visual overview the graphical interface provides would allow the researcher or user to grasp the internals of the complex mathematical models in a visual way; by representing the mathematical state using colours or visual cues, it becomes easier to comprehend the state of the system as a whole without too much cognitive overhead.

Camera

The camera supports zooming in and out of the world, so you may view the simulation as a whole or view smaller sections of it. It allows you to pan across the world, and to focus on a specific agent and follow it as it moves around and interacts with the world.

Maps

I also found it necessary to include the ability to create and save world maps; this is one of the main features of the testbed, although it was not specified in the project specifications.

In order to help evaluate the emotional model, I have programmed the testbed so that it can create world maps and load these maps into the physics engine. This feature provides a way to create a scenario and let the simulation run multiple times on the same map, in order to observe the variation in outcomes under different settings. The ability to run simulations multiple times with many different types of maps is an important feature for testing the robustness of the emotional model, to ensure that it is not overdesigned to solve a specific problem.

A world map can be created with many different entity types, including rocks, food, water, and map boundaries. The testbed is able to support maps with variable world sizes. The maps are saved to and loaded from an XML file, and the file may be created using the map creation function or by manually editing the XML.
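
To make the XML idea concrete, here is a hedged sketch of how a map like this could be written out using only the standard library; the element and attribute names are illustrative assumptions, not the testbed's actual schema.

```cpp
#include <fstream>
#include <string>
#include <vector>

struct Node  { float x, y; };
struct Shape { std::string type; std::vector<Node> nodes; };  // e.g. "tree", "boulder", "water"

void saveMap(const std::string& path, const std::vector<Shape>& shapes,
             float worldWidth, float worldHeight) {
    std::ofstream out(path);
    out << "<map width=\"" << worldWidth << "\" height=\"" << worldHeight << "\">\n";
    for (const auto& shape : shapes) {
        out << "  <shape type=\"" << shape.type << "\">\n";
        for (const auto& node : shape.nodes)
            out << "    <node x=\"" << node.x << "\" y=\"" << node.y << "\"/>\n";
        out << "  </shape>\n";
    }
    out << "</map>\n";
}
```

Loading would walk the same structure in reverse and hand the node lists to the physics engine.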

draw mode

Above you can see the physics shape creation function, which can be started with the key (D) for draw. Each new mouse click creates a node, and a shape can be closed by pressing the keys (T, B, W) for tree, boulder or water elements.

The map creation function has a limitation in that the nodes that create a shape must be placed in a counter-clockwise order; this is a requirement of the Box2D physics engine. I have found it difficult to reorder the nodes while still retaining the shape and position if they are placed down in a clockwise direction. The polygon shapes that can be created should have a minimum of 2 nodes and a maximum of 8 nodes.
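
For reference, one common approach (not the one used in the project) is to detect the winding with the signed area (shoelace) formula and simply reverse the vertex order when it is clockwise; a minimal sketch:

```cpp
#include <Box2D/Box2D.h>
#include <vector>
#include <algorithm>
#include <cstddef>

// Box2D expects counter-clockwise winding for polygon vertices.
void ensureCounterClockwise(std::vector<b2Vec2>& pts) {
    float signedArea = 0.0f;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        const b2Vec2& a = pts[i];
        const b2Vec2& b = pts[(i + 1) % pts.size()];
        signedArea += a.x * b.y - b.x * a.y;    // twice the signed area of the polygon
    }
    if (signedArea < 0.0f)                       // negative area => clockwise winding
        std::reverse(pts.begin(), pts.end());
}
```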

The load and save map feature does include a basic ability to record the positions and types of agents in the world, and these will be spawned again when the map is loaded. However, this feature is not developed enough yet to record an agent's health, model coefficients, emotions, or behavioural neural network values; the agents that are spawned are new agents in the default creation state. The same applies to the health values that record the amount of energy stored in the tree food sources.

Visual Metaphor

I have based the simulation on my experience of animals in the wild; there may be some disadvantage in building the project around such a metaphor. This is a concern when you compare it with how well suited the Game of Life is to emergence, precisely because it is so abstract. It may be the case that by over-designing the system in this way, the simulation becomes too highly focused on one specific problem domain, and the emotional model tested in the simulation may not be able to cope well when used in a different situation.

The visual metaphor I have used, perhaps inspired by my life in Africa, has been based on the idea of predators and herbivores. I like to think of the predators as carnivores such as lions, who may hunt in packs and feed on herbivores. For the herbivores, I like to think of elephants or buffalo: slow grazers with many available food sources, which are able to run from the predators and prefer to be in the company of others.

The food system I have created is based on the idea of trees that are spread out across the world map and provide a limited amount of food before their food source is depleted; it then takes some time for the food store to regenerate. The reasoning behind this is that it creates points of interest where the herbivores may gather to feed. If the herbivores simply ate grass, it would be much more widely available and harder to simulate: it would take a large amount of computation to implement individual pieces of grass across widespread grassland. The grass could have been rendered in the same way that I handle water in the physics world, as a single large shape covering the map, but if the sensation for detecting food were always present in the agent, it might cause unintentional ripple effects in the emotional model.

The water source is an element I have added to the map that is required by both the herbivore and predator agents; this was inspired by the way both herbivores and predators in the wild gather around the watering hole at sunrise and sunset, regardless of the dangers present. During the design of the simulation I included this element because it would provide a source of conflict, one that both agent types would have to use. I hoped this would provide a catalyst for their interaction with each other, because their food sources are quite separate from each other.

The purpose of creating the simulation around a metaphor like this is to make the simulation easy to understand, in order to help any researcher using it. The metaphor also made it easier for me to design the system, as I had a template to work with and a model to work towards that I had conceptualised in my mind.

The graphics and metaphor provide a high-level overview of the current state of the system, which makes it easier to view the effects of the emotional model's calculations while it is running. The testbed is a very specialised tool, and so the user interface, while I tried to simplify it as much as possible, still has a steep learning curve. I do not believe it would be easy to simplify the human-computer interaction component further, as exchanging information for a nicer design may hide detail that could be important to the user. However, I believe that being able to glance at the system and get an overview of its entire state will be useful. If the system were not designed around a metaphor, it would be harder to distinguish the actions and relationships taking place on the screen.

One pitfall of this approach that should be of concern is that using a metaphor like this, with a design-based approach, may lead the user to anthropomorphise the system (Cañamero, 2001); the user may misinterpret the results and the role the emotions play in the system. I do, however, think the design of the system is flexible enough that it can be adapted to handle new metaphors and simulations.

Monitoring

Lola Cañamero has stated that "no minor problem comes from the fact that research in this field is only starting, … , it is difficult to evaluate individual progress, but also to compare different systems and results. The establishment of metrics, common evaluation criteria, testbeds, and standard tasks and environments are needed …" (Cañamero, 2001).

As I worked towards solving these problems in my simulation, I found it was necessary to be able to create and save maps in the testbed. As a research tool, I realised this feature would be necessary in order to be able to compare the results and performance of emotional models against each other. I think it would be great if researchers could share maps with each other, and work together to build scenarios in which to evaluate their models and assumptions.

One suggestion for a good method for testing emergence is to create orphans and see if other parents could adopt these abandoned children. I see no reason why we could not create a map for this scenario and let different emotional models execute in order to test for emergent behaviour.

In the simulation, if you select an agent it is tagged as the "hero" agent, and you may use the camera to follow it as it moves around the world. Once an agent has been selected, you may update the values of the coefficients it contains for each emotion; these act as predetermined weights of the neural network in its emotional model. You can also press the '?' key to display an information panel of the agent's statistics, such as the dominant emotion, the neural network values, the current behaviour, its sensations, and so on. This allows variables of an individual agent to be manipulated or tweaked while the simulation is in progress.

agent sensors

An agent can be followed as it moves around the world; here the red circle represents its sensor area, which is occupied by two predator and two herbivore agents. The selected agent has hunger, thirst and crowding sensations, which correlate with its high fear and anger emotions. The dominant emotion in this case is anger, and the default behaviour the agent has selected to perform is to move forward (shown in the info panel in the bottom right corner).

Emotional Model

Artificial Emotions

Emotions play a big part in our lives, and yet their purpose is still not clear. Are emotions a trait left over from our evolutionary process, or do they serve a more important function in our cognitive processes? Darwin expanded on his theory of evolution to include the idea that emotions may be an important aspect of our evolutionary survival (Darwin, 1872). The work "From Human Emotions to Robot Emotions" states that emotions are clearly linked with our cognitive functions (Freitas, Gudwin, Queiroz, 2005).

Artificial emotions are still a largely undefined area of research, perhaps because we work with emotions in a qualitative form, as it is difficult to capture quantitative data. It is also difficult to place bounds on emotions: where is the line that separates one emotion, like joy, from a similar emotion such as happiness? We first have to consider what emotions are; open questions still concern the origin and function of emotions, and the relation between emotions and their effects (Freitas, Gudwin, Queiroz, 2005).

The development of artificial emotional systems should consider which emotions are to be included in the model. In this work I have focused on four emotions: {happiness, sadness, anger, fear}. "Robot Learning Driven by Emotions" also suggests that disgust may be another key emotion to consider. Other research shows that we should focus on eight primary emotions {joy, sadness, acceptance, anger, fear, disgust, anticipation, surprise} (Plutchik, 1980), while other researchers suggest that up to 15 basic emotions are required. And yet, when we consider the model of emotions from "Robot Learning": why does sadness represent its own emotion; could we not represent it as a negative valence of happiness? The emotional spectrum may be as varied as the colour spectrum. Cañamero explains that there is also a cultural disposition to how we view emotions: in the Western world they are primarily seen as internal states, while other cultures may categorise emotions differently according to the social situation in which they occur.

Three different types of emotion modelling exist (Cañamero, 2001). The first is a semantic-based approach, which works with emotions as qualitative data. The second is a phenomenon-based approach, which studies the link between an emotional state, the phenomenon that causes the state, and the behavioural responses. The third is a design-based approach, which tries to model the underlying mechanisms that cause emotions; these models are normally biologically inspired systems.

The work in this project falls into the third category: it is a design-based approach, which is biologically inspired.

Emotional Model

1. Overview

In "Robot Learning Driven by Emotions", a method is explained for using four basic emotions of happiness, sadness, fear and anger, which are affected by input from sensations such as hunger, pain, proximity, and so on. The emotional model uses a set of values from sensory data and computes an emotional state as the outcome. A neural network then uses the dominant emotion to determine a suitable goal for the agent to focus its attention on, and so the emotional model determines the next behavioural action the robot will take. A hormone mechanism is used for feedback and memory, which has an influence on the node weights used in the computation done by the neural network (Gadanho, Hallam, 2001).

emotion model

The emotional model as shown in the robot learning paper; here you can see the layout of the system, starting with sensations and working upwards to calculate a dominant emotion.

Emotional models can sometimes become too focused on the problem they are trying to solve. By using a simulation like this, which has both herbivore and predator agents, the emotional model has to work in two domains of animals with separate survival motivations. It has also been suggested that one of the main problems with studying emotional models is that they are rarely tested in multi-agent environments (Scheutz, 2003).

The testbed I have created would be more than capable of simulating the same scenario as described in the paper “Robot Learning Driven by Emotions”, which uses a single robot instead of many agents, and uses a light source as food.

Matthias Scheutz reports that an emotional framework should provide basic roles for emotions in artificial agents (Scheutz, 2003), including action selection based on the current emotional state of the agent, adaptation for long-term changes in behaviour, social regulation mechanisms, reflex-based reaction mechanisms, emotional mechanisms for affecting the agent's motivation towards a goal, goal management, and so on. Scheutz also explains how a biologically inspired, schema-based emotional model using a feed-forward neural network would be able to provide a method for dealing with the basic needs of hunger and fear in artificial life simulations (Scheutz, 2003).

In order to explain the emotional model and its functions, I will first note that there are four base emotions: {happiness, sadness, fear, anger}. It has been stated that these are the simplest set of emotional states required; we could, for example, also include disgust, which would be useful if the map contained poison of some kind, perhaps from a dead corpse or an infected water source (Gadanho, Hallam, 2001). The emotional model also makes use of a set of feelings: {hunger, eating, smell, thirst, drinking, humidity, pain, restlessness, crowding, threat}. Some of these form pairs, such as {hunger, eating} and {thirst, drinking}, in order to distinguish between the feeling of hunger and the act of eating to resolve it. Smell and humidity are activated when in the proximity of food or water, which is detected by the agent's sensors. The feelings of restlessness and crowding represent boredom and the number of friendly agents nearby, while threat is activated by the presence of enemy agents and pain is activated when the agent is under attack. The agent also has a health value that can be lowered during an attack, or due to a lack of food or water; when the agent's health runs out, the agent is considered killed.

emotion coefficients

The coefficient values used in the robot learning paper.

The feelings used differ from those in the robot learning paper, which are {hunger, pain, restlessness, temperature, eating, smell, warmth, proximity}. I have added thirst because the hunger element differs between the predators and the herbivores; I think having a shared thirst mechanism will drive them to interact more, as they will both be after the same resource. This is similar to the idea of predators and prey gathering near a watering hole at the end of each day, and I think it will be a useful means of creating interaction and conflict in the simulation. The temperature feeling was used because the robot had to feed itself and gain energy via a light source: warmth was related to the intensity of light the robot could sense, and its temperature would increase with high motor usage and return to zero with low motor usage. Pain in that case also referred to the act of bumping into objects, as it was the only agent in its environment.

emotion edits

The coefficient values for feelings that affect emotions (happiness, sadness, fear, anger) can be edited for each individual agent while the simulation is running.

2. Emotion Controller
2.1 Sensations

The sensations the agent feels are directly related to the feelings we previously discussed: when an action or event takes place, the relevant sensation is affected. For instance, if a predator enters the sensor area of a herbivore agent, I can directly adjust the threat and crowding levels, as they are relevant to this event. If the agent were to encounter food, the sensation affected would be smell; if the agent were eating the food, the sensation would change to eating, and hunger would finally decrease after eating for some time. The sensations of hunger and thirst are time based, so at each timestep they are decremented by a small percentage; this acts as a simple energy mechanism.

2.2 Feelings

emotion parameters

The feelings are the next step in the emotional model. They are directly linked to the sensations, but they are first affected by the hormone system, which I will discuss shortly. The equation that handles this is as follows:

emotion equations

I_f is the feeling intensity, H_f is the hormone value, and A_f represents the emotional influence; the attack and decay gains control how quickly the value rises and falls. For the full set of equations, please refer to the robot learning paper in the references.

The feeling’s intensity, which may now also have been affected by the hormone system, acts as a weighted value derived from the sensation.

The model works by calculating the emotional intensity for each emotion separately:

emotion intensity equation
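
As a simplified sketch of my understanding of this step: each emotion's intensity is a weighted combination of the feeling intensities, using that emotion's coefficients, and the dominant emotion is the one with the highest intensity. The names and the exact combination rule below are illustrative assumptions; the precise equations are in the robot learning paper.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kNumFeelings = 10;  // hunger, eating, smell, thirst, drinking, ...
constexpr std::size_t kNumEmotions = 4;   // happiness, sadness, anger, fear

using Feelings     = std::array<float, kNumFeelings>;
using Coefficients = std::array<std::array<float, kNumFeelings>, kNumEmotions>;

std::size_t dominantEmotion(const Feelings& feelings, const Coefficients& coeffs) {
    std::array<float, kNumEmotions> intensity{};
    for (std::size_t e = 0; e < kNumEmotions; ++e)
        for (std::size_t f = 0; f < kNumFeelings; ++f)
            intensity[e] += coeffs[e][f] * feelings[f];   // weighted sum of feeling intensities

    std::size_t best = 0;
    for (std::size_t e = 1; e < kNumEmotions; ++e)
        if (intensity[e] > intensity[best]) best = e;
    return best;                                          // index of the dominant emotion
}
```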

2.3 Hormones

The hormone system is activated if an emotional value is higher than the hormone activation threshold; if this is the case, a hormone value is created, which then affects the feeling as previously discussed. This allows the emotional model to learn and adapt over time, as emotions compete against each other for priority.
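
A minimal sketch of this mechanism as I have described it: when an emotion's intensity exceeds the activation threshold, a hormone value is produced and then decays gradually, capturing the emotional state over time. The threshold, decay rate, and release rule are illustrative assumptions, not the model's actual values.

```cpp
#include <array>
#include <cstddef>

struct HormoneSystem {
    float activationThreshold = 0.7f;   // illustrative threshold
    float decayRate = 0.05f;            // hormones fade over time
    std::array<float, 4> hormone{};     // one hormone level per emotion

    void update(const std::array<float, 4>& emotionIntensity) {
        for (std::size_t e = 0; e < hormone.size(); ++e) {
            if (emotionIntensity[e] > activationThreshold)
                hormone[e] += emotionIntensity[e] - activationThreshold;  // release hormone
            hormone[e] -= decayRate * hormone[e];                         // gradual decay
        }
    }
};
```

The stored hormone values are what later feed back into the feeling intensities.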

2.4 Emotions

The dominant emotion is now selected from the emotional model, and this in turn is used as the input for a feed-forward neural network. The neural network then selects the best goal for the agent to follow, and the action to take in order to achieve this goal. For example, if the dominant emotion were sadness, the agent might want to find some friendly agents for company; it would then select an exploration action in order to find these friends.

If the agent has low energy levels, the emotional state may be one of desire for food. The agent will then select the goal of finding food and will begin by searching the map. If the agent encounters a predator in its path, it may feel shock, excitement or anger, which will then affect how the agent reacts to the situation, such as with a fight, flight or pause reaction.

3. Adaptive Controller

Every behaviour that the agent might perform requires a separate neural network; the inputs of the network are the values of the set of sensations the agent currently has, plus one hidden input for the bias. This is represented as:

agent sensation input

The neural network is set up with the middle layer using random values; the weights between the input and middle layers are set to 0. The output layer is the expected outcome of the associated behaviour, and the weights between the hidden layer and the output layer are set to random values. The emotional model from "Robot Learning" does not go into any detail about the neural networks used, except for mentioning the number of neurons and that the activation function is a hyperbolic tangent. I have created the neural networks based on my previous undergraduate experience.

The paper mentions that the input neurons also include a bias. I believe this is a reference to a threshold bias, and not the same bias used for the coefficient values used for each emotion.

The threshold bias that I included, initially set to -1, is a value which has no effect on the first iteration of the neural network, but it provides an extra neuron that the back-propagation learning can manipulate, which makes the act of learning easier to handle.
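
A self-contained sketch (not the project's code) of the kind of small feed-forward network described above: sensation inputs plus a bias input of -1, one hidden layer with a hyperbolic tangent activation, input-to-hidden weights initialised to zero, and hidden-to-output weights initialised randomly. The layer sizes, weight layout, and random seed are illustrative assumptions.

```cpp
#include <vector>
#include <cmath>
#include <random>
#include <cstddef>

struct FeedForwardNet {
    std::size_t nIn, nHidden, nOut;
    std::vector<std::vector<float>> wInHidden;   // [hidden][input + 1] weights, start at 0
    std::vector<std::vector<float>> wHiddenOut;  // [output][hidden + 1] weights, random

    FeedForwardNet(std::size_t in, std::size_t hidden, std::size_t out)
        : nIn(in), nHidden(hidden), nOut(out),
          wInHidden(hidden, std::vector<float>(in + 1, 0.0f)),
          wHiddenOut(out, std::vector<float>(hidden + 1, 0.0f)) {
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> dist(-0.5f, 0.5f);
        for (auto& row : wHiddenOut)
            for (auto& w : row) w = dist(rng);   // random hidden-to-output weights
    }

    std::vector<float> forward(std::vector<float> input) const {
        input.push_back(-1.0f);                  // bias input
        std::vector<float> hidden(nHidden);
        for (std::size_t h = 0; h < nHidden; ++h) {
            float sum = 0.0f;
            for (std::size_t i = 0; i < input.size(); ++i)
                sum += wInHidden[h][i] * input[i];
            hidden[h] = std::tanh(sum);          // hyperbolic tangent activation
        }
        hidden.push_back(-1.0f);                 // bias for the output layer
        std::vector<float> output(nOut);
        for (std::size_t o = 0; o < nOut; ++o) {
            float sum = 0.0f;
            for (std::size_t h = 0; h < hidden.size(); ++h)
                sum += wHiddenOut[o][h] * hidden[h];
            output[o] = std::tanh(sum);
        }
        return output;
    }
};
```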

I have found it difficult to train the agents in the simulation, even with back-propagation enabled; I have gone into more detail on this issue in my video report. The researchers in "Robot Learning" applied the emotional model to a single robot, which made the process of teaching the neural networks with training data easier.

4. Behaviour Selection Module

The behaviour selected is the one with the highest probability; the probability of selection is calculated using a Boltzmann-Gibbs distribution. This formula is run for each behavioural action:

agent behavior selection
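
As a hedged sketch of the Boltzmann-Gibbs step: each behaviour's probability is proportional to exp(value / T), where the temperature T controls how strongly the highest-valued behaviour dominates. The variable names and the default temperature below are assumptions, not the project's values.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <iterator>
#include <cstddef>

std::vector<float> boltzmannProbabilities(const std::vector<float>& values, float temperature) {
    std::vector<float> p(values.size());
    float maxValue = *std::max_element(values.begin(), values.end());
    float sum = 0.0f;
    for (std::size_t i = 0; i < values.size(); ++i) {
        p[i] = std::exp((values[i] - maxValue) / temperature);  // shifted for numerical stability
        sum += p[i];
    }
    for (auto& x : p) x /= sum;                                  // normalise into probabilities
    return p;
}

// The behaviour with the highest probability is then selected.
std::size_t selectBehaviour(const std::vector<float>& values, float temperature = 0.1f) {
    std::vector<float> p = boltzmannProbabilities(values, temperature);
    return static_cast<std::size_t>(
        std::distance(p.begin(), std::max_element(p.begin(), p.end())));
}
```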

5. Reinforcement Learning Module

The reinforcement learning module is used to evaluate the outcome of the neural networks' decisions and reward them with a positive or negative response. The reward is then back-propagated across the neural networks so they are able to adjust and learn how to function correctly in situations with the same circumstances.

The reinforcement controller may use sensation-based reinforcement or emotion-based reinforcement.

The researchers compared these two reinforcement techniques against each other in their results; they found no significant improvement from using one over the other.

6. Control Triggering

As explained in my video report, I have not been able to get the exploration behaviour to function properly. The explore action works by selecting a new point of interest, changing the angle and direction the agent is facing.

The Adaptive Controller module is responsible for updating the behavioural state. The program fires the adaptive controller at each timestep of the simulation, so I believe that when the behaviour action is selected, a new point of interest is selected over and over again, which does not give the agent any time to move in the desired direction. The impulse force starts out slowly and increases until it reaches a maximum speed.

The researchers in "Robot Learning" faced a similar problem of deciding when to switch from one behaviour to another. This is easier in a grid world, like the Game of Life, where an event takes place instantaneously; in a simulation like this, however, actions may take different amounts of time to finish executing, which causes a problem if an action is interrupted during execution.

The researchers included a trigger mechanism which fires at random intervals; this would activate the Adaptive Controller. The trigger would normally be a change from one emotion to another, as we can assume that if the emotion has changed, it is because the circumstances the agent is in have changed its internal state.

agent behavior selection events

This diagram shows the sequence of events used to select the dominant behaviour of an agent.

Review

Performance

These results are from the testbed running on my laptop, with a 2.2 GHz Intel Core i7 processor and 16 GB of 1333 MHz DDR3 RAM. The results may vary according to the type of computer the simulation is run on.

In a simulation of 20 agents, memory utilisation is ~780 MB. This is not ideal, but I believe it may come from all the neural network values that have to be stored in the system. The CPU load moves between 23% and 96% according to the number of physics interactions taking place. If the application were able to utilise multiple threads, it would help the situation a lot. The simulation still manages to run smoothly on my system even though it is using a lot of memory; this is only 4.8% of the available RAM, so there is no noticeable lag.

The simulation speed of the physics world may also be changed. At the moment, for each main loop of the system the physics engine is called 10 times, and for each call of the physics engine there are 7 velocity calculations and 3 position calculations performed. These calculations determine the accuracy of the physics system; if the testbed causes a significant performance drop, these values could be lowered.
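
A minimal sketch of how this stepping scheme could look with the Box2D API: the world is stepped several times per main-loop iteration, and each step uses the velocity and position iteration counts described above. The loop structure and constant names are assumptions, not the project's code.

```cpp
#include <Box2D/Box2D.h>

void stepPhysics(b2World& world, float frameDeltaSeconds) {
    const int stepsPerFrame      = 10;  // physics engine called 10 times per main loop
    const int velocityIterations = 7;   // velocity solver iterations per step
    const int positionIterations = 3;   // position solver iterations per step
    const float dt = frameDeltaSeconds / stepsPerFrame;
    for (int i = 0; i < stepsPerFrame; ++i)
        world.Step(dt, velocityIterations, positionIterations);
}
```

Lowering the iteration counts trades physics accuracy for speed.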

Although I was not able to provide multithreading in the simulation, the program still manages to achieve a reliable 60 FPS while running. The simulation is able to handle 40 agents easily without any lag. The highest number of agents I was able to run smoothly was 60, with the velocity and position iterations both set to 1/10. I am able to run 80 agents in the simulation before the physics system starts to show significant lag; the application is still able to run at 60 FPS, however at this point the program is using 3.62 GB of RAM.

Interchange of Emotional Models

I have provided a method for switching between Reynolds' Boids and the emotional model. The ability to interchange the emotional models is important in order to compare the results of one model against another. It is important to show that the emotional model actually does perform better than using the Boids flocking behaviour (Reynolds, 1987), and I have decided to use Boids as the baseline against which to compare survival rates.

Although I did not provide a second or third emotional model to test against, I believe the code is structured well enough with design patterns that it will be easy to interchange models without too much interference with the testbed code. The only limitation is that at the moment the monitoring GUI is tightly coupled with the emotional model, so this will have to be updated as new emotional models are added to the system.

Design Patterns

The code that handles the creation of agents in the simulation uses a factory design pattern; I have created the class so that each agent can act as a self-contained object. The same applies to food, water and boundary objects: in order to create a new agent type in the system, you would only have to create a new class for the agent and register it with the system. The agent classes have to follow the implementation provided in the abstract agent interface. This includes methods that help the system interact with the agent; the required methods are used to update and draw the agent, as well as to let the contact listener and collision system access the class.
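
A hedged sketch of what such an abstract agent interface and factory could look like; the class and method names are illustrative, not the project's actual identifiers.

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <functional>

class b2Contact;  // forward declaration from Box2D

class Agent {
public:
    virtual ~Agent() = default;
    virtual void update(float dt) = 0;              // advance the agent's model and physics state
    virtual void draw() const = 0;                  // render the agent
    virtual void onContactBegin(b2Contact* c) = 0;  // called by the collision system
    virtual void onContactEnd(b2Contact* c) = 0;
};

class AgentFactory {
public:
    using Creator = std::function<std::unique_ptr<Agent>()>;

    void registerType(const std::string& name, Creator creator) {
        creators_[name] = std::move(creator);       // e.g. "herbivore", "predator", "food"
    }
    std::unique_ptr<Agent> create(const std::string& name) const {
        auto it = creators_.find(name);
        return it != creators_.end() ? it->second() : nullptr;
    }
private:
    std::unordered_map<std::string, Creator> creators_;
};
```

Registering a new agent type then only requires a new class implementing Agent and one registerType call.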

I have considered the use of design patterns in other areas of the code, in order to simplify the model and neural networks, but I think it might cause extra strain on the system to pass the values to the neural networks in this way; currently the values are stored in multidimensional arrays. I have also tried to keep the code base as simple and easy to follow as possible.

I have found that the well-known design patterns are not well suited to this type of application. I have actively tried to apply them in order to simplify the program design, but the custom design patterns that have been developed for game programming are more suitable.

For example, the entity design pattern consists of creating entity objects which are assigned the components that make up the entity. In the context of this simulation, each agent could be an entity in the system, assigned a physics body, neural networks, or a model information panel as components. The system in its current state creates an agent which creates a tree of objects, with its own emotional model, neural networks and its own GUI. An entity-component based framework would allow a greater level of abstraction, and allow the GUI to be less tightly coupled to the agent class.
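
A minimal sketch of the entity-component idea suggested above: an entity is simply a container of loosely coupled components (physics body, neural networks, info panel GUI, and so on). The names and component types are illustrative assumptions, not existing testbed code.

```cpp
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>
#include <utility>

struct Component { virtual ~Component() = default; };

class Entity {
public:
    template <typename T, typename... Args>
    T& addComponent(Args&&... args) {
        auto component = std::make_unique<T>(std::forward<Args>(args)...);
        T& ref = *component;
        components_[std::type_index(typeid(T))] = std::move(component);
        return ref;
    }
    template <typename T>
    T* getComponent() {
        auto it = components_.find(std::type_index(typeid(T)));
        return it != components_.end() ? static_cast<T*>(it->second.get()) : nullptr;
    }
private:
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components_;
};

// Example components an agent entity might own:
struct PhysicsBody  : Component { /* b2Body* body; */ };
struct EmotionModel : Component { /* feelings, emotions, hormones */ };
struct InfoPanelGui : Component { /* monitoring widgets */ };
```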

Challenges

The project overall was a big step out of my comfort zone, but I believe that the end result was worth the effort; I am proud of the project and would like to work on developing it further.

The main issue I faced in developing this project was mapping the mouse position in three-dimensional space. The physics world lies on a plane with {0, 0} at the bottom-left corner, and is positioned at 0 in the third dimension, which points towards the screen. The physics world is seen from a bird's eye view, and the camera is able to pan along the X and Y axes and move up and down along the Z-axis.

The problem I faced was capturing the mouse click event along the Z-axis. With the camera close to the map, it would return the correct position in 3D space, so a mouse click at {400, 500, 0} would return a mouse position of {400, 500}. However, as the camera moved away from the map, the results were distorted, so a mouse click event at {400, 500, 200} would not return the correct position when it was translated down to the map at Z = 0.

I tried creating my own method for un-projecting the mouse in 3D space; however, as I moved the mouse further away from the origin point at {0, 0}, my results would have a larger margin of error. I am still not sure what causes this problem; it may be due to the way float values are stored in memory, so that the further away from the origin I clicked, the more the float values were rounded, giving me incorrect results. I had to use code I found on the Cinder forum, which solved the problem for me, and the mouse interaction now works correctly because of it. As this is code that I personally did not write, I have noted this in the code comments.
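
For reference, a common way to un-project a mouse click onto the Z = 0 plane (the standard approach, not the forum code used in the project) is to un-project the screen point at two depths to form a ray and intersect that ray with the plane. The sketch below uses GLM types for the matrix math; the function name and parameters are my own assumptions.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 mouseToWorldOnGroundPlane(const glm::vec2& mousePx,
                                    const glm::mat4& view,
                                    const glm::mat4& projection,
                                    const glm::vec4& viewport /* x, y, width, height */) {
    // Flip Y because window coordinates usually start at the top-left.
    float winY = viewport.w - mousePx.y;

    // Un-project at the near and far depths to get two points on the pick ray.
    glm::vec3 nearPoint = glm::unProject(glm::vec3(mousePx.x, winY, 0.0f), view, projection, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(mousePx.x, winY, 1.0f), view, projection, viewport);

    // Intersect the ray (nearPoint -> farPoint) with the plane Z = 0.
    glm::vec3 dir = farPoint - nearPoint;
    float t = -nearPoint.z / dir.z;          // assumes the ray is not parallel to the plane
    return nearPoint + t * dir;
}
```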

Conclusion

It has been difficult to find a balance in the simulation that allows the agents to interact in a suitable fashion.

If the simulation becomes stagnant, or all the agents become stuck for some reason, they eventually lose their health from not eating enough, and the simulation will end once they all die off.

I have also found that I need to have a point of interest the agents can keep track of, which helps determine their rotation and the direction they will move towards. The behaviours used in "Robot Learning" include avoiding obstacles, seeking light, and wall following. In the simulation they use, it seems the food or energy light sources are always placed along a wall: once the robot detects a wall it will always move towards it and then follow it.

In my simulation, the food and water sources are situated in the open, in the middle of the world. By including a point of interest, each agent can start by moving towards a random point of interest on the map; if it detects food or water, its point of interest is updated. When using Boids, the rotation is defined by rotating towards or away from other agents in the simulation. The point of interest determines the direction an agent's body will rotate towards, like a needle on a compass; it can then move forwards or backwards towards its goal.
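
A small illustrative sketch (assumed names, Box2D 2.3-style API) of steering towards a point of interest: the agent rotates to face the point like a compass needle, and a small impulse is applied towards it through the Box2D body.

```cpp
#include <Box2D/Box2D.h>
#include <cmath>

void steerTowardsPointOfInterest(b2Body* body, const b2Vec2& pointOfInterest, float impulseStrength) {
    b2Vec2 toTarget = pointOfInterest - body->GetPosition();
    if (toTarget.LengthSquared() < 1e-6f)
        return;                                             // already at the point of interest

    float desiredAngle = std::atan2(toTarget.y, toTarget.x);
    body->SetTransform(body->GetPosition(), desiredAngle);  // rotate to face the target

    toTarget.Normalize();
    b2Vec2 impulse = impulseStrength * toTarget;
    body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);  // nudge towards the target
}
```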

Determining the point of interest, in order to find a suitable angle for the agent to move towards, is an added complexity that I am still trying to find an elegant solution for. This is relevant to the goal selection process: if agents are hungry and need to find food, which direction should they move in to search for it? If they detect other agents, should they update the point of interest according to their sensations, and by how much should the sensations affect this point of interest?

Discussion and Evaluation

A feature I think would be good to add to the project in the future is an event listener to watch for events of an emergent nature, together with a report detailing how each event was fired, at what time it took place, and under what circumstances.

Some of the more advanced features I think would be good are to incorporate a method of evolving the emotional model using a genetic algorithm, creating relationships between agents, and allowing agents of the same type to reproduce and create new generations. The offspring would have a mix of their parents' parameters, so that only the children of the most successful agents would survive. After the simulation has run a certain number of iterations, you could see how the children's optimised emotional model is shaped.

An important area of focus should be communication between agents, as emotions in humans are very often a social and shared experience: we communicate our emotions verbally or physically, and we understand these emotions in others through empathy. An example would be the ability to create shared maps of the terrain, and to allow agents to communicate and follow these shared paths. I can imagine using a large map and having agents follow these "learned" paths in order to find water, the same way elephants do in the wild. It would be interesting to see how predators react to this, whether they can learn to follow the same paths or situate themselves in sections of the map where they believe the elephant agents are likely to travel. A similar technique has been used to build agents that can shepherd flocks into pens (Bayazit, Lien, Amato, 2002). An emotional value could be associated with points on the map, which may help individuals and groups select which paths to follow towards a goal, or reinforce a path's existence on the map.

Throughout the project development, I have tried to work with an agile development methodology; this included two-week sprints of implementing features, in which I would close off the work for the university checkpoint submission.

Recording my progress each week, and working in an agile manner helped me break the large scope of work into smaller manageable tasks, and allowed me the flexibility to adapt the project as I encountered problems and explored new paths.

I found this approach worked well. One of the problems I faced is that the very detailed work on the emotional model was complicated to continue after taking a break. I found that if I left the project in an unfinished or "slightly broken" state, it would be easier to jump back into development, as I could focus on fixing the outstanding issue. This allowed me to get back into the right mindset to work on the complicated mathematical elements; otherwise the work would seem overwhelming and I would not know where to start.

I have done work with neural networks before, but after studying and implementing the emotional model in this project, I have a much greater understanding of what artificial intelligence is capable of in a practical or real world application. After researching artificial life and creative coding projects in the commercial space, I hope to be able to positively contribute back to the community.

I have gained a lot of experience using version control and maintaining a large C++ project. I have also had to learn a lot about the new C++ standards, as my previous work was in C++98 and the community is now moving towards C++11 and C++14 as the standard.

I also gained a firm understanding of the Cinder C++ framework and graphics programming in C++. I hope to work on many more creative coding applications in the future, as it is an exciting field to be a part of. This is the first large computer science project I have attempted, and I think I handled the work well; I have gained confidence in building a large system. I have a much more in-depth understanding of using neural networks for artificial intelligence, and I am more familiar with the current trending research topics.

References

  • Lola Cañamero, (2001), Emotions and Adaptation in Autonomous Agents: A Design Perspective, Cybernetics and Systems: An International Journal, 32:5, 507-529, DOI: 10.1080/01969720120250.
  • Matthias Scheutz, (2003), An Artificial Life Approach to the Study of Basic Emotions.
  • Matthias Scheutz, (2003), Useful Roles of Emotions in Artificial Agents: A Case Study from Artificial Life.
  • Aaron Sloman, (1999), How many separately evolved emotional beasties live within us?
  • Jackeline Spinola de Freitas, Ricardo R. Gudwin, João Queiroz, (2005), Emotion in Artificial Intelligence and Artificial Life Research: Facing Problems.
  • Sandra Clara Gadanho and John Hallam, (2001), Robot Learning Driven by Emotions.
  • Karl Sims, (1994), Evolving Virtual Creatures, Computer Graphics (SIGGRAPH '94 Proceedings), July 1994, pp. 15-22.
  • O. Burchan Bayazit, Jyh-Ming Lien, and Nancy M. Amato, (2002), Better Group Behaviors in Complex Environments using Global Roadmaps.
  • Craig W. Reynolds, (1987), Flocks, Herds, and Schools: A Distributed Behavioural Model.
  • Darwin, Charles, (1872), The Expression of the Emotions in Man and Animals.
  • Plutchik, R., (1980), Emotion: A Psychoevolutionary Synthesis, New York, NY: Harper & Row.