Suite 302 - 40 Boteler St.
Ottawa, Ontario, Canada K1N 9C8
E-Mail: Mail(at)AdaptronInc(dot)com
Phone: 1-613-218-5588
Research - Overview
This page updated: September 24th, 2015
The Vision:
To produce general purpose intelligent robot control software
The Goal:
To develop software (named Adaptron) that allows a robot to:
  • learn,
  • think, and
  • act autonomously
Adaptron must be able to learn and think using any combination of the following robot features. The ultimate objective is to handle all the higher complexity settings.

Robot feature | Lower complexity setting | Higher complexity setting
Number of senses | 1 | Multiple
Sensors per sense | 1 | Multiple
Multiple sensors | Independent sensors | Dependent sensor array (1 dimension)
Sensor array | 1 Dimension | 2 Dimensions (not currently a requirement)
Number of sensor arrays | 1 | Multiple
Type of readings / inputs | Symbolic (Nominal) | Graduated (Ratio scale)
Number of action device types | 1 | Multiple
Number of action devices per type | 1 | Multiple
Simultaneous actions | 1 at a time per device | Multiple in parallel
Value systems (Motivation –> Behaviour) | Familiar / Unfamiliar (Intrinsic –> Exploration) | Emotional feelings (Extrinsic –> Survival)
Thinking ahead | 1 stimulus | Multiple steps
Time flow | Event driven | Clock driven / Graduated
Other constraints that have been imposed on the design of Adaptron are:
  • It is to be programmed using a deterministic algorithm; no probabilities
  • It is to be as simple as possible; Occam’s Razor applies
  • It is to start with no experience; it must learn everything; 100% grounded, embodied and enactive
  • It is to operate within a robot body that is separate from the environment
  • The robot body shall house the sensors and action devices
  • It is to start with two built-in value systems, one for exploration and the other for survival
  • It shall use reinforcement learning, driven by the value systems, for learning actions
  • It will use unsupervised learning for perception (not supervised learning)
Research - Approach
The development of the Adaptron software started with the creation of an algorithm and data structure that learn and think while meeting the lowest complexity set of robot features. The software has been, and continues to be, systematically enhanced to increase the complexity of the features it can handle. Changes are made to the algorithm and data structures so that they continue to work while still handling the lowest complexity features. As the software is expanded and different algorithms are tried, it grows in complexity and size until more features work. Then it is stripped down to just the necessary / essential software that continues to work, based on the principles and algorithms discovered. This cycle has been repeated hundreds of times and continues until the underlying processes of general purpose learning and thinking are implemented in the algorithm.
There are many test cases against which the software is run. However, even the test cases change because cognitive science, psychology, artificial intelligence and artificial life theories do not yet provide practical or precise enough answers for modelling general purpose learning and thinking.
Here are two examples of simple problems / test cases that do not have precise scientific answers. They have been solved using the research approach described above. The problems are based on a robot with the lowest complexity set of features. It has only one sense, with one sensor that detects symbolic stimuli from its environment. The symbolic stimulus values are ‘A’ and ‘B’. The robot has only one output device, which can generate only one action: ‘x’. It has no understanding of good or bad, just the ability to detect novelty and familiarity. It starts with an empty memory and no previous experiences. It can only think ahead one symbol. It has no sense of time but recognizes the sequential order of events.
1/ Given the series of inputs A A A A A etc.
     when does the robot first react with its action ‘x’ and why?
a) Is it after the first A?
    Maybe it’s surprised and wants to see what will happen if it reacts?
b) Will it be after the second A?
    Maybe because it is bored or expects the next input will be another A?
c) Or will it be after the third A?
    Maybe because it has now learned that the sequence A A is guaranteed to recur?
d) Or will it wait even longer?
         Maybe it is thinking about it? Or maybe it habituates and ignores the whole sequence
         without any action being done? If so what would cause it to act?
2/ Given the series of inputs A B A B A B A B etc.
     when does the robot first react with its action ‘x’ and why?
a) Is it after the first A?
    Maybe it’s surprised and wants to see what will happen if it reacts?
b) Or does it wait for the next stimulus, B? Why?
    Is it surprised by the change?
c) Will it react after the second A?
    Maybe because it expects the next input will be a B?
d) Or will it wait for the second B before acting?
    Does it know the next input will be an A and it will be bored?
e) Or will it be after the third A?
    Maybe because it has now learned the sequence A B is guaranteed to recur?
f) Or will it wait even longer?
         Maybe it is thinking about it? Or maybe it habituates and ignores the whole sequence
         without any action being done? If so what would cause it to act?
And do the principles you used to answer the second problem still give the same answer when applied to the first problem?
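One way to make these test cases concrete is to code a candidate policy and run both input series through it. The sketch below is only an illustration, not Adaptron’s actual answer: the hypothetical agent remembers individual stimuli and stimulus pairs, and acts the first time both the incoming stimulus and the pair it completes are already familiar (a simple boredom policy).

```python
# Illustrative test harness for the two problems above. The Agent class and
# its boredom-based policy are assumptions, not Adaptron's actual algorithm.

class Agent:
    def __init__(self):
        self.memory = set()   # remembered stimuli and stimulus pairs (like S-habits)
        self.prev = None

    def step(self, stimulus):
        """Return 'x' if the agent chooses to act on this stimulus, else None."""
        pair = (self.prev, stimulus)
        familiar = stimulus in self.memory and pair in self.memory
        self.memory.add(stimulus)
        self.memory.add(pair)
        self.prev = stimulus
        return 'x' if familiar else None

def first_reaction(inputs):
    """Return the 1-based position of the stimulus that first triggers 'x'."""
    agent = Agent()
    for i, s in enumerate(inputs, start=1):
        if agent.step(s) == 'x':
            return i
    return None

print(first_reaction("AAAAA"))      # problem 1: reacts after the third A -> 3
print(first_reaction("ABABABAB"))   # problem 2: reacts after the second B -> 4
```

Note that under this one policy the answers to the two problems differ in position but follow from the same principle, which is exactly the kind of consistency the closing question above asks about.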
Research - Specifications
Given that Adaptron must function with any configuration of robotic senses and action devices, it is necessary to specify these senses and devices clearly and precisely. It is also necessary to have a practical specification for learning and thinking. These do not always correspond to current cognitive science theory. Adaptron’s architecture and design are based on these specifications. The specifications are not complete; they are intentionally simplified to highlight only those aspects that are important for the software research. Key terms are underlined when defined. These specifications will be updated at the same time as the architecture, design and research notes.
Senses, Sensors and Stimuli
Senses are the devices that measure the environment and the state of the body. The five most commonly described human senses are the ones that measure the environment.
These are the external senses. They are:

Body part | Sense of | Properties or types of information measured
Eye | Sight (visual) | Light - colourful images, distance
Ear | Hearing (aural) | Sound - noises, frequency (pitch), direction
Skin | Touch (haptic) | Contact - pressure, temperature
Tongue | Taste (gustatory) | Chemicals in solids and liquids - flavours
Nose | Smell (olfactory) | Chemicals in the air - aromas

Additional internal senses gather information about the state of a human body.
These senses include the following:

Body part | Sense of | Properties or types of information measured
Inner Ear | Balance (vestibular) | Orientation, acceleration, roll, pitch and yaw
Muscles, joints and tendons | Position (proprioception) | Position, strain / tension
Stomach | Hunger | Building material and fuel
 | Tired, sick | Stored energy, state of healing

There are other internal mechanisms that are similar to senses in that they are sources of information used by the brain. It may be that the nervous system generates or adds these properties to stimuli. These are often referred to as feelings.

  • Novelty, familiarity, boring, unexpectedness
  • Pleasant, neutral, unpleasant, painful

There are a wide variety of possible non-biological senses that could be used in a robot. Some measure the same properties as animal senses while others measure properties that animals cannot detect. They include, but are not limited to:

Sensing Device | Properties or types of information measured
Video camera | Light, infrared (heat), ultraviolet
Radio receiver | Radio waves
Microphone | Sound, ultra-sound
Laser range finder | Distance
Chemical detectors | Aromas, chemicals in liquids
Compass | Magnetic direction
Pressure sensors | Pressure, stress, tension, acceleration
Gyroscope | Rotation - roll, pitch and yaw
Anemometer | Wind speed
Radar dish / gun | Distance, speed, and altitude
Rate gyroscope | Angular velocity

Each sense has numerous sensors that measure or detect the type of information. For example, a simple description of the ear: a series of sensors each of which detects the volume of sound at a particular frequency.
The most atomic simple stimulus or measurement is a single valued reading that has been detected by a single sensor from a particular sense at an instant in time. The sense determines the type of information, e.g. light, sound, pressure, tension etc. Sensors determine the source / location of the information, e.g. frequency (colour and pitch), the chemical, skin location etc. Sensors measure the value of the intensity, volume, or amplitude of the stimulus.
A stimulus may also consist of many simultaneous measurements and may also be a series of such stimuli over time. Simultaneous stimuli from multiple sensors of a single sense form a composite stimulus that we recognize as an object. Examples are the image of an apple that is detected by the retinal sensors and a chord on the piano that is made up of numerous simultaneous frequencies. The term object is used to mean a tangible (but not necessarily touchable) thing in the real world.
Stimuli that occur simultaneously on two or more senses are called parallel stimuli because the readings occur in parallel. An example is seeing a door move and hearing its hinges squeak at the same time. Another example is when you scratch your forehead you see, hear and feel it simultaneously.
A series of stimuli forms a stimulus that is called a sequential stimulus. An example on a single sense is the sound of a tuning fork that lasts for one or two seconds. Assuming there is only one pitch in the note this stimulus consists of a single frequency but its volume has a rapid increase when struck and a steady decay afterwards. Another more complex example is the sound of a car passing you on the street. Seeing a person's lips move and hearing the spoken words at the same time is an example of a sequential stimulus consisting of a series of parallel stimuli.
A general purpose model that can be used to represent most senses is a linear array of sensors each providing a measurement of the intensity of the sensed information.  The sensors in an ear form such a linear array. Non-biological senses can also be mapped on to such an array. For example, a laser range finder provides distance measurements for particular angles. Each angle is a particular direction relative to the robot’s body and is equivalent to a sensor position. A more sophisticated model requires a two dimensional array of sensors and more than one sense organ to produce a three dimensional effect as in the sense of sight.
Most biological sensors provide stimulus values that are graduated readings. That is, they can be placed on a continuous scale but there is a resolution that separates one value from an adjacent one on this scale. Non-biological sensors may also produce stimulus values that are discrete symbolic readings. An example of a sensor that produces symbolic readings is a robot touch sensor that has been designed to detect the type of material with which it comes in contact. It could indicate whether the material of the object is air (touching nothing), solid (wood, metal, rubber, glass, cloth etc) or liquid (water, petrol, milk, mercury etc.). All the sensors of a sense provide either graduated or symbolic readings. Thus each sense can be categorized as producing either symbolic or graduated stimuli.
Another non-biological sense may have numerous sensors that have no relation to each other. Thus the sensors are independent and no concept of scale or relative position between them can be obtained. An example might be the sense that provides the physical configuration of a robot's body parts. There may be many motors in a robot each controlling the angle of the joints of its limbs or wheels. Each electric motor would have a sensor to measure its rotational position. The sensors would produce this position as a graduated reading from one to 360 degrees. The combination of readings from these sensors is another example of a composite stimulus. So a sense may have discrete independent sensors as in the motor sensors or it may have dependent sensors in the form of a linear (graduated position) array as in our ear.
Note that senses (hearing, sight, touch etc.) are also independent of each other because they measure independent properties.
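The taxonomy described above (graduated versus symbolic readings, dependent linear arrays versus independent sensors) can be sketched as a small data model. The class and field names below are illustrative assumptions, not Adaptron’s actual data structures.

```python
# A sketch of the sense taxonomy described above. Names are illustrative.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Sense:
    name: str
    reading_type: str   # 'graduated' or 'symbolic'; all sensors of a sense agree
    dependent: bool     # True: linear array with relative positions; False: independent

    def read(self, raw: List[Union[int, str]]):
        """Package one simultaneous set of sensor readings (a composite stimulus)."""
        return {"sense": self.name, "values": list(raw)}

hearing    = Sense("hearing", reading_type="graduated", dependent=True)
motors     = Sense("motor positions", reading_type="graduated", dependent=False)
touch_belt = Sense("touch belt", reading_type="symbolic", dependent=True)

stimulus = hearing.read([0, 3, 7, 2])   # volumes at four adjacent frequencies
print(stimulus)
```

The `dependent` flag is what distinguishes the ear-like linear array from the independent motor sensors described in the previous paragraph.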
Table 1 contains numerous examples of possible senses that are based on the combinations of the two types of readings and two types of sensors.


 | Graduated readings / stimuli | Symbolic readings / stimuli
Dependent sensors | Hearing, Laser range finder | Robot touch belt (described below)
Independent sensors | Robot motor positions | Farm animal recognizer (described below)
Table 1 - Examples of possible types of senses
A robot touch belt would consist of a linear band of touch sensors around its waist. Robot touch sensors were described above as able to detect wood, metal, water etc. A farm animal recognizer may consist of multiple sensors scattered at random throughout the fields and barns on a farm. Each sensor would consist of a video camera and a computer that is programmed to recognize and identify any animal that falls within its visual range. The symbolic stimulus values might be cow, pig, chicken, human, etc.
Note that a robot does not necessarily need to have its senses and action devices physically interconnected in a single body. The farm animal sensors could be part of a robot that controls farm animals by opening and closing gates and barn doors. Another example is a robot that controls the lights at an intersection. It could employ multiple video cameras and vehicle sensors under the road.
Dependent sensors of a sense may be arranged in a circular or a linear array. In the case of hearing the sensors start at a very low frequency and end at a very high frequency, so the array is linear. However, in the case of a laser range finder each angle of direction corresponds to a sensor. With a resolution of one degree there would be 360 such sensors, each measuring a distance. Sensor number one is adjacent to sensor number 360, and thus a circular array of sensors is required. The same circular or linear property also applies to the graduated readings from a sensor. Volume readings are linear while the angle of a motor is circular.
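The linear / circular distinction can be captured in a single helper that measures how far apart two sensor positions are. The function name and the 360-sensor example are illustrative assumptions.

```python
# How far apart are two sensors in a linear vs. circular array? Illustrative only.

def sensor_distance(a, b, size, circular):
    """Number of positions separating sensors a and b in an array of `size` sensors."""
    d = abs(a - b)
    return min(d, size - d) if circular else d

# Laser range finder: 360 sensors, one per degree; sensor 1 is adjacent to sensor 360.
print(sensor_distance(1, 360, 360, circular=True))    # -> 1
# Hearing: a linear array; the lowest and highest frequency sensors are far apart.
print(sensor_distance(1, 360, 360, circular=False))   # -> 359
```

The same wrap-around rule would apply to circular graduated readings such as motor angles.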
Devices and Responses
For an animal or robot to move and have some effect on its environment it must use some form of output device that causes action. In most animals muscles are the bodily action devices. In robots the motion devices are most likely motors but can be other types of electrical devices. Other robot action devices would include speakers and lights. Responses are the output signals generated by the brain and sent to these devices to cause these actions.
The brain knows if an action has been done because it receives the kinaesthetic feedback stimuli that were produced by the sensors attached to the output devices. In the case of humans the feedback stimuli are from stretch sensors on the muscles. At the same time the brain is also getting feedback from joint sensors, tendon sensors and many other senses such as sight and balance.
It is important that all action devices have associated feedback sensors. This allows for the confirmation of the action as well as the detection of an external disturbance - one not produced as a result of a response. In a robot such feedback would indicate that a light has burnt out, a speaker is not working or a limb has been moved by an external force.
Consider a simple robot action device that allows it to move left, right, forward or backward one unit of distance at a time. The device has four commands (responses) it can be given. It would also require a feedback sensor that returns one of these four possible discrete / symbolic readings. This is called a discrete device because it is given discrete responses. A graduated device is given graduated responses, such as a change in rotational angle for a motor. Graduated devices require graduated feedback sensors.
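The discrete movement device just described can be sketched as follows. The class and command names are illustrative assumptions; the key point is that every response produces a matching symbolic feedback reading.

```python
# A sketch of the discrete movement device described above, with its feedback
# sensor. Names are illustrative assumptions.

class DiscreteDevice:
    COMMANDS = {"left", "right", "forward", "backward"}

    def __init__(self):
        self.feedback = None   # last symbolic feedback reading

    def respond(self, command):
        """Accept one of four discrete responses and return the feedback reading."""
        if command not in self.COMMANDS:
            raise ValueError(f"unknown response: {command}")
        self.feedback = command   # confirmation that the action was done
        return self.feedback

device = DiscreteDevice()
print(device.respond("forward"))   # -> forward
```

A feedback reading that differs from the command sent, or that arrives without any command, would signal the kind of external disturbance described above.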
Learning and Habits
There are three kinds of learning:
  • learning to recognize,
  • learning to do and
  • learning to think.
Learning to recognize is also called pattern recognition or perception. Learning to do is learning behaviour and in this document is called action learning. Learning to think is the process of improving one’s ability to mentally model the world based on experiences.
All things that are learned are kept in memory in the form of habits. A habit is a recording of what stimuli were experienced and what actions were done (if any).
For pattern recognition there are two types of habits:
  • parallel and
  • sequential.
Parallel habits (P-habits) record stimuli that happen simultaneously. Sequential habits (S-habits) record stimuli that happen in series. Action habits (A-habits) are used to record learned actions. Action habits are a sequential record of a trigger stimulus, a response performed and a goal stimulus.
Habits are like computer programs in that they can be performed. When an S-habit is performed it is triggered by the first stimulus in the habit and is expecting the next stimulus. When the next stimulus is perceived a sequence of two stimuli is recognized. A-habits are started when their trigger stimulus occurs. Then the response is produced and the goal stimulus is expected. Learning is the process of remembering these habits and performing them again in order to achieve a desired goal.
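The way habits are "performed" can be sketched in code. The two classes below are illustrative assumptions about how an S-habit (two stimuli in series) and an A-habit (trigger, response, goal) might be represented, not Adaptron’s actual structures.

```python
# Illustrative sketches of S-habits and A-habits as described above.

class SHabit:
    """Records two stimuli in series; recognized when both occur in order."""
    def __init__(self, first, second):
        self.first, self.second = first, second

    def recognizes(self, a, b):
        return (a, b) == (self.first, self.second)

class AHabit:
    """A sequential record: trigger stimulus -> response -> expected goal stimulus."""
    def __init__(self, trigger, response, goal):
        self.trigger, self.response, self.goal = trigger, response, goal

    def perform(self, stimulus):
        """Return the response if this habit's trigger occurred, else None."""
        return self.response if stimulus == self.trigger else None

s = SHabit("A", "B")
a = AHabit(trigger="A", response="x", goal="B")
print(s.recognizes("A", "B"))   # -> True
print(a.perform("A"))           # -> x
```

After `perform` returns the response, the habit would be left expecting its goal stimulus "B" as feedback.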
We perceive the environment as being made up of many objects. Each object is made up of different parts that are also objects. This forms a tree structure. S and P-Habits form tree structures that are used to remember the combinations of parts for later recognition purposes.  Perception is the process of learning to recognize (identify) objects from stimuli. This is synonymous with the process of converting graduated measurements into symbolic information that identifies an object. It also includes the process of learning to recognize complex objects composed of parts that are also objects. The composition of complex objects from less complex objects can occur at multiple levels in parallel and in sequence.
One of the goals of an animal or robot is to learn about its environment from the detailed stimuli that continuously flow from its senses. Once it has a memory of its environment, the goal is to notice / pay attention to any interesting things that happen. Interesting stimuli are novel / unfamiliar. But to determine the novelty of a stimulus it must be “looked up” in memory. This is done through the subconscious performance of S and P-habits. Familiar stimuli match the existing recognition habits. New habits are created for novel stimuli, and these may attract attention. Conscious stimuli are those to which attention has been paid. It is only in this conscious mode that action habits can be learned.
Attention has two modes in which it operates:
  • attracted and
  • directed.
In the attracted mode attention is attracted to the most interesting stimulus from the ones available as described above. In the directed mode we are concentrating on perceiving a specific expected goal stimulus. This mode is used for practicing and thus learning action habits. The concentration level is the level of interest in the action habit’s goal stimulus.
Action Learning
Action learning is the process of discovering and remembering the goal stimuli that result from the performance of actions / responses in given trigger situations. An action habit has been learned when it is being practiced and the goal is reached. However, if a stimulus occurs with an interest level higher than the concentration level then attention is attracted away and we are distracted from achieving the goal. If an unexpected but not distracting goal stimulus is obtained then we have learned a new action habit. Because it is important to learn an action habit when it is being practiced attention is focused on only one stimulus at a time.
Complex action habits can be made up of other action habits. There are three kinds of action habits:
  • looping,
  • sequential and
  • parallel.
A looping action habit is one that is repeated continuously such as walking or clapping. Sequential action habits are ones that chain other action habits in series such as tying a shoe lace. Parallel action habits are ones that can perform other action habits simultaneously provided they do not make conflicting use of action devices. An example is speaking and walking. However speaking and eating is hard to do simultaneously because of the conflicting use of the tongue and mouth muscles.
Learned action habits are selected for execution based on the occurrence and recognition of their trigger stimuli and the desire to achieve their goals. This must be done consciously. If a habit needs practicing then all of its action sub-habits are performed consciously. But if an action habit has been learned, then once started, it is done subconsciously. It stops when it comes to an end or fails to get any of its expected feedback stimuli.
Thinking is the mental simulation of a sequence of experiences. Thinking one step ahead is the simplest form. This is the minimum thinking necessary to decide whether to perform an action habit. The process begins with the occurrence and recognition of, and attention to, a candidate trigger stimulus. An action habit with this trigger is found in memory. The single-step thought is the recall of its goal stimulus. Based on the desirability of this thought-about stimulus, the action habit will be done subconsciously, not done, or practiced.
Thinking more than one step ahead is directed by the desirability of goal stimuli. Each goal stimulus that is thought about is matched against an action habit trigger stimulus. This continues until a desirable goal stimulus is reached. At that point the thought-about action sequence is started.
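Multi-step thinking ahead can be sketched as following goal-to-trigger links through remembered action habits until a desirable goal is reached. The habit table and desirability test below are illustrative assumptions.

```python
# A sketch of thinking more than one step ahead, as described above.
# The habit representation (trigger -> (response, goal)) is an assumption.

def think_ahead(start, habits, desirable, max_steps=10):
    """habits: dict mapping a trigger stimulus to (response, goal stimulus).
    Returns the planned sequence of responses, or None if no desirable goal is found."""
    plan, stimulus = [], start
    for _ in range(max_steps):
        if stimulus not in habits:
            return None
        response, goal = habits[stimulus]
        plan.append(response)
        if desirable(goal):
            return plan          # start the thought-about action sequence
        stimulus = goal          # match this goal against another habit's trigger
    return None

habits = {"A": ("x", "B"), "B": ("y", "C")}
print(think_ahead("A", habits, desirable=lambda s: s == "C"))   # -> ['x', 'y']
```

Each loop iteration is one mental step: recall a habit, imagine its goal, and either stop (desirable) or use the imagined goal as the next trigger.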
Research – Architecture and Design
When first programmed in BASIC, and then in PL/1, Adaptron used a linear array to store the long term memory of its experiences. However, as it has been modified to accommodate more complex requirements, its long term memory has evolved into a very restricted form of semantic network. This is described in more detail in the paper about Perceptra, the pattern recognition part of Adaptron.
In the network each node represents a class or category and all the connections are “has-a” links. Its artificial neural nodes are binons. The word “binon” is a contraction of binary neuron. Each binon represents a specialized class made up of two, more general classes. The more general classes are shared and can be reused by any number of higher-level more specific classes. In this respect a binon is very similar to a perceptual schema in schema theory or a category prototype in prototype theory. Binons have two inputs and one output. The two inputs may occur in sequence or parallel.
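The binon structure described above can be sketched as a tiny class: each node combines exactly two more general classes, which may themselves be binons and can be shared by many more specific classes. The names below are illustrative assumptions.

```python
# A sketch of a binon (binary neuron) as described above. Names are illustrative.

class Binon:
    def __init__(self, left, right, mode):
        self.left, self.right = left, right   # the two more general classes ("has-a" links)
        self.mode = mode                      # inputs occur in 'sequence' or 'parallel'

# Leaf classes can be raw symbolic stimuli; higher binons reuse lower ones.
a, b = "A", "B"
pair  = Binon(a, b, mode="sequence")      # like an S-habit: A then B
chord = Binon(pair, "C", mode="parallel") # reuses `pair` in a more specific class
print(chord.mode)   # -> parallel
```

Because `pair` is shared, any number of higher-level binons could reuse it, which is the reuse property the text compares to perceptual schemas and category prototypes.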
Adaptron uses binons to recognize objects and sequences of events. Each binon represents an S or P-habit. Binons that recognize graduated stimuli convert them into symbolic ones. Adaptron currently recognizes complex objects from a graduated or symbolic sense using a linear array of dependent sensors. It recognizes these objects as having a shape and contrast that are independent of position, size, and brightness / intensity. Additional rules are used for recognizing objects independently of their level of complexity.
For recognizing S-habits Adaptron uses a short term memory (STM). The STM builds up and holds a tree of S-habit binons before attention is paid to it.
Adaptron also dynamically builds a hierarchical action habit network integrated with the ANN. Action habits are represented by actons. The word “acton” is a contraction of action neuron. Actons are binary in that each one contains two sub-actons which can be performed in sequence, in parallel or repeated. The lowest level actons contain the responses that are sent to the action devices.
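An acton can be sketched in the same binary style: each one holds two sub-actons performed in sequence or in parallel, with the lowest level holding a raw response. This is an illustrative assumption (looping actons are omitted for brevity), not Adaptron’s actual implementation.

```python
# A sketch of actons (action neurons) as described above. Names are illustrative;
# the 'repeat' (looping) mode is omitted for brevity.

class Acton:
    def __init__(self, first, second=None, mode="sequence"):
        self.first, self.second, self.mode = first, second, mode

    def perform(self):
        """Flatten this acton into the responses sent to the action devices."""
        if self.second is None:
            return [self.first]                 # lowest level: a raw response
        if self.mode == "sequence":
            return self.first.perform() + self.second.perform()
        if self.mode == "parallel":
            return list(zip(self.first.perform(), self.second.perform()))
        raise ValueError(self.mode)

step_left  = Acton("left")
step_right = Acton("right")
walk_cycle = Acton(step_left, step_right, mode="sequence")
print(walk_cycle.perform())   # -> ['left', 'right']
```

Higher-level actons can contain `walk_cycle` itself, which is how complex action habits are built from other action habits.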
Adaptron’s long term memory is an ever changing and growing network of binons and actons.
The core of the Adaptron algorithm loops continuously through the following steps:
  1. Obtain stimuli from sensors and senses building up S and P-Habits.
  2. Execute all subconscious action habits based on perceived stimuli.
  3. Perform the action habit being practiced.
  4. Pay attention - to either the expected stimulus of the action habit being practiced or any
       distracting / unexpected stimulus.
  5. Think about the next expected stimulus based on the attended to one.
  6. Start doing any desirable action habit either in practice mode or subconsciously.
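The six steps above can be sketched as a loop skeleton. Every function body here is a placeholder assumption; only the loop structure follows the text.

```python
# Skeleton of the core Adaptron loop described above. The robot and memory
# objects are stand-ins; each called method is a placeholder assumption.

def core_loop(robot, memory, cycles=1):
    for _ in range(cycles):
        stimuli = robot.sense()                  # 1. obtain stimuli, build S and P-habits
        memory.run_subconscious_habits(stimuli)  # 2. execute subconscious action habits
        memory.perform_practiced_habit()         # 3. perform the habit being practiced
        focus = memory.pay_attention(stimuli)    # 4. expected or distracting stimulus
        expected = memory.think_ahead(focus)     # 5. think about the next expected stimulus
        memory.start_desirable_habit(expected)   # 6. start in practice mode or subconsciously

class Stub:
    """Placeholder standing in for a real robot body or memory network."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: None

core_loop(Stub(), Stub())   # runs one cycle with no-op placeholders
```

The skeleton makes the control flow explicit: perception and subconscious execution happen every cycle, while attention, thought and habit selection operate on whatever those steps produce.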
Numerous fundamental principles and algorithms have been discovered as part of the research and are used in Adaptron.
The current software uses:
  • Event based time flow. There is no sense of time based on a timer or clock.
  • An exploration value system based on novelty and boredom (unfamiliar and familiar stimuli). No research has yet been done on the use of an emotional value system based on pleasant and unpleasant stimuli.