Honeybee and Robot Navigation

Some of the most interesting questions in animal biology revolve around social insects. The famous communication of honeybees was studied above all by Dr. Karl von Frisch, who received the Nobel Prize for his research, and by his student Dr. Martin Lindauer (recommended reading: Karl von Frisch, "Duftgelenkte Bienen im Dienste der Landwirtschaft und Imkerei", Vienna, 1947; Martin Lindauer, "Communication among Social Bees", Cambridge, Mass., USA, 1961 and 1971). Von Frisch showed that during their famous dance, honeybees communicate at least three pieces of information to their sisters: two concern the navigation to a rich nectar source, the third the quality of the nectar. Recent research has made it possible for the first time to communicate this information directly to the bees by having a robot simulate the dance. So, once again, technical tools brought a real breakthrough in a field that had been reserved for careful observation and description.

The problem that remains largely unsolved is how bees acquire and transform navigation information. Some suppose that bees collect visual landmarks; others invoke their magnetic sense organ. In any case, the navigational abilities of bees are amazing. To find their way home, they memorize the angle of flight relative to the position of the sun. They even compensate for the sun's movement across the sky. They measure the flight distance, taking the wind's force into account. And on top of that, during their dance in the darkness of the beehive they transpose the course angle to the sun into an angle relative to the earth's gravitational field, and the distance into a certain rhythm of the 'waggle dance'.

1. Braitenberg-vehicles

The bee dance is part of the complex communication concert that alone makes the survival of social bees possible. Today, much honeybee research revolves around the question of whether simple behavioural components can be found that, taken together, explain complex behaviour. For example, researchers showed with the help of a computer simulation that the queen bee's inner representation of the nest order, which earlier scientists such as Gerstung had assumed (F. Gerstung, "Der Bien und seine Zucht", Berlin, 1905), does not exist. With only two instructions, the virtual queen bee reproduced the complex nest order perfectly:

  1. search for empty cells in order to lay eggs
  2. stay as close as possible to the nest-center
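The effect of these two instructions can be illustrated with a toy simulation in Python (a sketch written for this text, not the study's actual code): on a one-dimensional comb, a queen that always lays in the empty cell closest to the centre produces a compact central brood nest without any global plan.

```python
# Toy illustration (not the original study's code): a queen following only
# two rules -- "lay in an empty cell" and "stay near the nest centre" --
# produces a compact central brood area with no global plan.

def simulate_queen(n_cells=21, n_eggs=9):
    comb = ["."] * n_cells          # "." = empty cell, "E" = egg
    centre = n_cells // 2
    for _ in range(n_eggs):
        # Rule 1: consider only empty cells; Rule 2: among those,
        # pick the one closest to the nest centre.
        empty = [i for i, c in enumerate(comb) if c == "."]
        target = min(empty, key=lambda i: abs(i - centre))
        comb[target] = "E"
    return "".join(comb)

print(simulate_queen())   # eggs cluster around the centre
```

The printed comb shows all eggs in one contiguous block around the middle cell, although no instruction ever mentions a "brood nest".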

There are various other simulations, all of which reduce complex animal behaviour to simple elementary behaviour components. Valentino Braitenberg of the famous Max-Planck-Institut for Biological Cybernetics, for instance, has done many studies in this direction. His "Braitenberg vehicles" are well known in both robotics and biology. Many Lego Mindstorms robots are real Braitenberg vehicles: equipped with simple sensors such as touch, light or temperature sensors and programmed accordingly, a Lego robot behaves just like a very primitive animal. Let's take an example. The following robot is able to avoid obstacles and walls. The simple reasoning is:

  1. go ahead until a bumper is touched
  2. if the right bumper is touched, turn left until it is released
  3. if the left bumper is touched, turn right until it is released
  4. then go ahead again

Of course there must be some initialization and definition, but the programming is very simple.

OBSTACLE-AVOIDER

#define(Watch_Sensor1,1)
#define(Watch_Sensor2,2)

BeginOfTask(Main)
  SetSensorType(Sensor_1,Switch_Type)
  SetSensorType(Sensor_2,Switch_Type)
  SetSensorMode(Sensor_1,Bool_Mode,0)
  SetSensorMode(Sensor_2,Bool_Mode,0)
  SetPower(Motor_A+Motor_C,con,7)
  {go ahead}
  SetFwd(Motor_A+Motor_C)
  On(Motor_A+Motor_C)
  StartTask(Watch_Sensor1)
  StartTask(Watch_Sensor2)
EndOfTask()

BeginOfTask(Watch_Sensor1)
  Loop(con,forever)
    {wait until the right bumper is touched}
    While(Senval,Sensor_1,EQ,con,0)
    EndWhile()
    {turn to the left}
    SetFwd(Motor_A)
    SetRwd(Motor_C)
    {wait until the right bumper is released}
    While(Senval,Sensor_1,EQ,con,1)
    EndWhile()
    {go ahead}
    SetFwd(Motor_A+Motor_C)
  EndLoop()
EndOfTask()

BeginOfTask(Watch_Sensor2)
  Loop(con,forever)
    {wait until the left bumper is touched}
    While(Senval,Sensor_2,EQ,con,0)
    EndWhile()
    {turn to the right}
    SetFwd(Motor_C)
    SetRwd(Motor_A)
    {wait until the left bumper is released}
    While(Senval,Sensor_2,EQ,con,1)
    EndWhile()
    {go ahead}
    SetFwd(Motor_A+Motor_C)
  EndLoop()
EndOfTask()

Another example is Paul-Nicolas' Firefighter, a simple light follower. The behaviour of such a robot might seem very complex to an extraterrestrial observer who doesn't know Braitenberg and his colleagues. Until he understood the robot's sensors and "neurons" better, he might think the vehicle has an internal representation of its environment.

Have you ever observed the interior of a living beehive? The incredible order within such apparently disordered movement leads unskilled observers to the same conclusions as the E.T. in our example. Bees have antennae that help them avoid collisions with their sisters and other obstacles in the hive. Their reactions are very similar to the robot's above.

Bees have many other sensors, such as:

A bee's brain and ganglia are made up of about 800,000 neurons. That is remarkably few for all the tasks they have to fulfil, isn't it? How the hierarchy of these tasks is set up is not yet known. Since scientists mostly analyse each I/O system separately, plenty of work remains for interdisciplinary research. Nevertheless, some general rules do appear as genuine cybernetic laws. For example:

There is an important difference between single sensors and pairs of sensors. If you try to build an obstacle avoider with only one touch sensor, you will run into difficulties: the real world has more than one dimension! When a touch event occurs, which way should the robot turn? It must fall back on trial and error and move in a more or less random direction. That means many necessary but costly actions, which is generally incompatible with energy economy.

Or try to build a light follower with only one light sensor. The robot constantly has to move the sensor in order to compare the current reading with former measurements. Equipped with two sensors, or a differential sensor, the robot can decide without moving.
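The advantage of a sensor pair can be sketched in a few lines of Python (names and threshold are illustrative, not from any particular robot): with two simultaneous readings, the comparison needs no sensor movement at all.

```python
# Sketch of the two-sensor (differential) light follower discussed above.
# Names and the deadband threshold are illustrative.

def steer(left_light, right_light, deadband=5):
    """Decide a steering action from two simultaneous light readings.

    With a pair of sensors the comparison needs no sensor movement:
    the side that sees more light is the side to turn towards.
    """
    diff = left_light - right_light
    if diff > deadband:
        return "turn left"
    if diff < -deadband:
        return "turn right"
    return "go straight"     # readings (almost) equal: source is ahead

print(steer(80, 40))   # left brighter -> "turn left"
print(steer(40, 80))   # right brighter -> "turn right"
print(steer(60, 58))   # within deadband -> "go straight"
```

A single-sensor robot would have to obtain `left_light` and `right_light` sequentially, by physically sweeping the sensor between two positions.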

2. Insect Navigation

Bees may recognize the angle to the sun from the difference in information collected by the ocelli; they need three of them because in flight three dimensions must be considered. How bees register the duration of a flight or the distance travelled is still a mystery. In their work on ants, Hartmann and Wehner suggest a simple computational model for path integration. The integrator consists of linearly arranged neurons (i). Every neuron inhibits its two neighbours and, once activated, stays activated (see the next picture, a simplified diagram after Hartmann & Wehner). When a virtual ant starts from the nest, all neurons but the first are reset. Every time the ant has travelled a certain distance ds, the activation passes to the next neuron. If the ant can register its orientation relative to an objective reference (magnetic north or the sun, for instance), it can store the two coordinates of the travelled vector as a function of the orientation (dx and dy) in two separate integrators. The current orientation can be stored in the same way in a more complex ring of neurons that also allows the deactivation of neurons.

Using this model, the virtual ant can determine its current position and thus deduce the best course home. No real ant calculates in Cartesian coordinates, of course; the insect more likely computes its position in a polar coordinate system, which is closer to its sensory possibilities.
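A numeric sketch of this path integration may make the idea concrete (the Cartesian variant, written in Python for this text; the neural chain of Hartmann & Wehner merely discretizes the same running sums in steps of ds):

```python
import math

# Numeric sketch of the Cartesian path integration described above:
# every leg of length ds at heading theta adds ds*cos(theta) to one
# integrator ("dx") and ds*sin(theta) to the other ("dy").

def integrate_path(steps):
    """steps: list of (ds, theta) pairs -- distance and heading in radians,
    relative to a fixed reference (e.g. the sun's azimuth or north).
    Returns the position (x, y) and the homeward heading."""
    x = y = 0.0
    for ds, theta in steps:
        x += ds * math.cos(theta)   # the "dx" integrator
        y += ds * math.sin(theta)   # the "dy" integrator
    home_heading = math.atan2(-y, -x)   # course pointing back to the nest
    return x, y, home_heading

# Outward trip: 3 units "east", then 4 units "north".
x, y, home = integrate_path([(3, 0.0), (4, math.pi / 2)])
print(round(x, 6), round(y, 6))       # position estimate: 3.0 4.0
print(round(math.degrees(home), 1))   # homeward course, about -126.9 deg
```

The returned home vector is simply the negation of the accumulated position, which is exactly what lets the virtual ant head straight back to the nest after an arbitrary outward path.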

3. PATHFINDER

Robot positioning is perhaps the most interesting subject in mobile robotics. We recommend reading the standard book "Where am I?" by J. Borenstein, H. R. Everett and L. Feng, University of Michigan, 1996, and the astonishing website of Loughborough University.

The ant's model gives us a powerful instrument for realizing our objective: an autonomous Lego robot able to deduce its current position from sensor readings and to find its way to a given target position while avoiding obstacles. This is a charming challenge for programmers and designers alike. So read on!

As the robot platform we opted for an improved version of the synchro-drive. The robot needs a sensor to measure its current angle relative to an initial direction (or even to an objective reference) and a sensor to determine the distance travelled. Here are the main changes to the initial design. All the touch sensors are connected to one port.

The program is made of several components:

INPUT-CONTROL

  1. read angle
  2. read distance
  3. read bumper

COMPUTING

  1. calculate actual location
  2. calculate new course

OUTPUT-CONTROL

  1. adjust direction
  2. follow course until target reached
  3. avoid obstacles

For the practical realization, at least four tasks have to run simultaneously. The RCX is capable of real multi-tasking. The four tasks are the following:

  1. READ_ANGLE
  2. WHERE_AM_I?
  3. CHECK_BUMPER
  4. MOTOR_DECISION

1. READ_ANGLE continuously measures the angle of the robot platform relative to the reference zero angle. In our design this is done by

2. WHERE_AM_I? continuously performs the distance measurement (a certain time dt must elapse between two measurements) and computes both the current position and the new course direction. This is done by:
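In outline, such a dead-reckoning update looks as follows (a Python sketch with invented names, not the original program):

```python
import math

# Illustrative sketch (names invented here, not from the original program)
# of what WHERE_AM_I must compute each cycle: integrate the distance
# travelled into a position estimate, then derive the course to the target.

def where_am_i(x, y, d, angle_deg):
    """Advance the dead-reckoned position by distance d at the given heading."""
    a = math.radians(angle_deg)
    return x + d * math.cos(a), y + d * math.sin(a)

def new_course(x, y, target_x, target_y):
    """Heading (degrees) from the current position estimate to the target."""
    return math.degrees(math.atan2(target_y - y, target_x - x))

x, y = where_am_i(0.0, 0.0, 10.0, 90.0)   # 10 units at heading 90 degrees
print(round(x, 6), round(y, 6))           # position estimate: 0.0 10.0
print(round(new_course(x, y, 10.0, 10.0), 1))   # course to target: 0.0
```

On the RCX the same arithmetic would have to be done in integers (e.g. with a small sine/cosine lookup table), since the brick has no floating-point unit; the floating-point version above only shows the principle.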

3. CHECK_BUMPER continuously checks if there is an impact on one of the bumper sensors.

4. MOTOR_DECISION continuously checks the values of the variables steer_command and drive_command (and, in a later version, also avoid_command). It then decides which instructions to send to the outputs through a separate subroutine.

This task performs the important arbitration work that controls the robot's actions without ambiguity. The best code for the task would be the following [because of LOGI2 compatibility, the first version differs in some details]:

Here is the (still incomplete) NQC code:

#define comm_none -1
#define comm_stop 0
#define comm_forward 1
#define comm_left 3
#define comm_right 4
#define comm_avoid 5

int Motor_command;
int Drive_command;
int Steer_command;
int Avoid_command;

task MOTOR_DECISION() {
    while (true) {
        if (Drive_command != comm_none)
            Motor_command = Drive_command;
        if (Steer_command != comm_none)
            Motor_command = Steer_command;
        if (Avoid_command != comm_none)
            Motor_command = Avoid_command;
        Motor_control();
    }
}

Here is some pseudo-code for the motor subroutine:

sub Motor_control() {
    if (Motor_command == comm_stop)
        stop the drive motors;
    if (Motor_command == comm_forward)
        stop steering and go ahead;
    if (Motor_command == comm_left)
        turn to the left;
    if (Motor_command == comm_right)
        turn to the right;
    if (Motor_command == comm_avoid)
        execute the obstacle-avoiding action;
}

This technique is known as Subsumption Architecture and was developed by Rodney Brooks at MIT in the late 1980s. Several robot behaviours run simultaneously; sensor inputs determine which behaviour controls the robot at any given moment, so there is a hierarchy of behaviours. The main characteristic of this style of programming is the use of control variables such as Drive_command and Steer_command and, most importantly, Motor_command. The arbitration task continuously fills Motor_command with the highest-priority instruction. This command is the argument of the motor-control subroutine, which alone executes motor control. It is important to use a subroutine at this point in the program, in order to prevent the robot from jiggling.
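Stripped of the robot specifics, the arbitration principle fits in a few lines of Python (a sketch with illustrative names): the last assignment wins, so the behaviours listed later have higher priority.

```python
# Language-neutral sketch of the arbitration described above (names are
# illustrative). Behaviours post commands; the arbiter lets the highest-
# priority active command win, mirroring the NQC MOTOR_DECISION task.

COMM_NONE = -1

def arbitrate(drive_command, steer_command, avoid_command):
    """Return the winning command. Later assignments override earlier
    ones, so the priority order is: avoid > steer > drive."""
    motor_command = COMM_NONE
    if drive_command != COMM_NONE:
        motor_command = drive_command
    if steer_command != COMM_NONE:
        motor_command = steer_command
    if avoid_command != COMM_NONE:
        motor_command = avoid_command
    return motor_command

print(arbitrate(1, COMM_NONE, COMM_NONE))   # only drive active -> 1
print(arbitrate(1, 3, COMM_NONE))           # steer overrides drive -> 3
print(arbitrate(1, 3, 5))                   # avoid overrides all -> 5
```

Because only the winning command ever reaches the motor subroutine, lower-priority behaviours can keep running undisturbed without fighting over the outputs.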

Download the Pathfinder source code for LeRobot here.

Attention: you will need the new version of LeRobot!
