2008

5th Feb 2008                           Bucketed values

People don’t have to explore all the possible intensities of a stimulus before it becomes permanent. For example, we soon stop reacting to every vehicle that goes by us on the road, across a wide range of speeds. However, we may still react if one goes by so fast it is a blur. What might have happened is that all the intensity values in a range (the bucket) have become permanent. An intensity reading must then fall outside the permanent ranges / buckets to attract our attention. Maybe the storage of STM values gets bucketed into LTM values, and it is these LTM-valued experiences that are used to start background recognition habits that will recognize a range of possible valued stimuli.
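A minimal sketch of the bucket idea (the class name, the integer bucketing scheme and the bucket width are my own assumptions for illustration, not part of the diary):

```python
class BucketedSense:
    """Intensity readings habituate in buckets, not one value at a time."""

    def __init__(self, bucket_width=10):
        self.bucket_width = bucket_width
        self.permanent_buckets = set()  # LTM: ranges that have become permanent

    def bucket(self, reading):
        return reading // self.bucket_width

    def habituate(self, reading):
        # The whole range around this reading becomes permanent at once.
        self.permanent_buckets.add(self.bucket(reading))

    def attracts_attention(self, reading):
        # Only a reading outside every permanent bucket is noticed.
        return self.bucket(reading) not in self.permanent_buckets


sense = BucketedSense()
sense.habituate(42)                  # an ordinary vehicle speed becomes permanent
print(sense.attracts_attention(47))  # False: same bucket, ignored
print(sense.attracts_attention(95))  # True: a blur, outside the permanent ranges
```

Habituating one reading silences its whole bucket, which is the point: attention is only paid outside the permanent ranges.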

                                                Object and or intensity

Most of the time we base our actions and decisions on the objects in our environment independently of their intensity. We eat the apple whatever colour it is, read the book no matter what size it is, and enjoy the music at a variety of volumes. But in certain situations intensity is important: e.g. the pitch of the voice in Japanese and Chinese, or colour when it determines ripeness. The rest of the time the object alone, or an object composed of a sequential or parallel structure of objects, is what matters.

                                                Generalize, then Specialize

Given an experience X triggered by an object A at a given intensity M followed by a second object B at a particular intensity N, if the object A should reoccur at a different intensity P one would expect that generalization is used first and the experience X performed. This would set up an expectation of the second object B and an intensity change of N-M. It would not be expecting the original intensity N, nor a calculated intensity P + (N-M). Thus it is expecting sequential structure. Three possibilities are:

  1. it gets the expected object B and the expected change,
  2. it gets the expected object but a different intensity change and
  3. it gets a different object.

1. It gets the expected object and intensity change – this will reduce the interest in the original experience and the new experience will be recorded but have no redo interest.

2. It gets the expected object but a different intensity change – there will be left over interest, the original experience will retain its redo interest and the new experience will be recorded with its own redo interest based on the intensity change experienced.

3. It gets a different object from that expected – this results in a whole new experience.

So when or how does it specialize such that the trigger object’s intensity is important in determining which experience to perform? Is it possible that change and interest alone are not sufficient to cause specialization? Well, with possibility 2 above we have two experiences in memory. Should object A reoccur at either intensity M or P, the appropriate experience will be performed and not the other. This is specialization. Should object A reoccur at a different intensity Q, then both experiences would be performed.
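The three possibilities can be sketched as a classification of the outcome, using the expected object B and the expected change N-M (function name and string labels are hypothetical summaries of the three cases above):

```python
def classify_outcome(expected_obj, expected_change, actual_obj, actual_change):
    if actual_obj != expected_obj:
        return "different object: whole new experience"            # possibility 3
    if actual_change == expected_change:
        return "expected object and change: no redo interest"      # possibility 1
    return "expected object, unexpected change: own redo interest" # possibility 2


# A at M=5 was followed by B at N=2, so the expected change is N - M = -3.
print(classify_outcome("B", -3, "B", -3))  # possibility 1
print(classify_outcome("B", -3, "B", 4))   # possibility 2
print(classify_outcome("B", -3, "C", 0))   # possibility 3
```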

These habits are expecting the same change and should be flagged as C-Habits. They are done using the same interest levels as other habits but continue or fail based on the change obtained. A C-Habit would not be performed if the trigger matches the intensity of any experience, because a normal S-Habit would be active instead. If no S-Habit were started, then all possible C-Habits would be started for the trigger object. Questions now arise as to what becomes permanent and which experience is used for the last tried reflexive response.
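A sketch of that activation rule, assuming each stored experience is a record of its trigger object, intensity and habit (the data layout is my assumption):

```python
def habits_to_start(trigger_obj, trigger_intensity, experiences):
    # An exact intensity match means the normal S-Habit runs instead.
    exact = [e["habit"] for e in experiences
             if e["obj"] == trigger_obj and e["intensity"] == trigger_intensity]
    if exact:
        return exact
    # Otherwise every C-Habit for the trigger object is started.
    return [e["habit"] for e in experiences if e["obj"] == trigger_obj]


experiences = [{"obj": "A", "intensity": 5, "habit": "S@M"},
               {"obj": "A", "intensity": 9, "habit": "S@P"}]
print(habits_to_start("A", 5, experiences))  # ['S@M']: exact match, S-Habit only
print(habits_to_start("A", 7, experiences))  # ['S@M', 'S@P']: all C-Habits start
```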

6th Feb 2008                          Action Possibilities

I am starting to rethink the 6th Oct 2007 idea of trying all possible actions before repeating any that were interesting. It may be a good heuristic but does not emulate the way animals behave.

                                                Exploration of the Object versus Experience

Given the performance of habits expecting the same change in scenario 1, we end up with two experiences with the same trigger object (different readings) and the same amount of change. They are both uninteresting for redo. If either trigger repeats, a reflexive response will be done to explore the environment. If the same object but a different reading occurs, there is no redo interest to obtain a change. Once again a reflexive response will be done. But is the response based on the object, or on the object and reading? That is, are we exploring the variety of intensities of the object and all the possible changes based on those trigger intensities, or are we just exploring the object? What becomes permanent: the object, or the object at a particular intensity?

Let’s consider scenario 2, in which two experiences are in memory for the same object at different intensities. A trigger of the same object at yet another intensity occurs. The two experiences are performed, each expecting its change amount. In the case of action habits this works if the action they both perform is the same. One of the habits gets the expected change. Should the interest in the other habit be reduced because we did not get its change? It was not triggered by the same intensity but by a different one.

I need a strategy such that provided there are two or more changes experienced for the same action on the same object at different intensities I keep on trying that action given the object at any intensity. The solution has to be about unexpectedness being worth repeating but expected results causing reflexive responses. With a repeating object one can have unexpected intensity with expected change but not expected intensity with unexpected change. Are we exploring the possible actions to the object independent of its intensity?

7th Feb 2008                          Generalization

To implement generalization of objects independent of their readings I need to store the last action tried and the permanence status with the object / binon, which has the difference but not the reading. The readings stay in LTM with pointers to the objects. The objects point to their first memory locations if I wish to use the pointers in LTM to navigate the habit recognition stack. But if I move the S-Habit recognition pointers to the object array I may not have to use memory pointers; I can use the object pointers. The most recent LTM location experienced could then be for the object rather than the object and reading. Also, the redo interest must be a property of the object. Then, when the binon / general / change expectation is violated, we have a special case which is stored in LTM with the particular reading and flagged as not based on the general case. Exact matches will then behave based on LTM unless they are flagged as based on generalized object behaviour. The first object encounter is assumed to behave in the general way, and unexpected results are taken to be specializations. But what if the 1st encounter is the specialization? How do we reset the object behaviour to the general case?

A useful example is based on seeing an apple (the object) of a particular colour (the reading). The actions might be squeeze and smell. The 1st apple is green, and on the 2nd encounter we are bored so we squeeze it. On the next encounter we smell it, and on subsequent encounters we retry the squeeze and smell until they hold no interest in being redone; the apple then becomes permanent. If the second encounter is with a red apple we use generalization, which assumes colour is irrelevant. So the second encounter is boring. No, this is incorrect: the 2nd encounter is interesting because of the different colour. Colour is not a good reading analogy because it is really a composite of three readings, making it an object. Shades of green, or different levels of brightness, make a better analogy. So now if the 2nd encounter is with a brighter green apple we use generalization, which assumes the brightness is irrelevant. The 2nd encounter is therefore boring and we squeeze it, even though there was a change in brightness. The next green apple of any brightness would be smelled, etc. That is, we do not explore each level of brightness with each action.

                                                Change

But surely a change in brightness with no change in object would be unexpected and retain our interest, such that we do no action, just wait for the next stimulus. Then our expectation would be for the same object and the same change in brightness. If this occurs, we are bored. It is analogous to a moving object. You need two frames to establish the motion, and then the third frame is boring. If it changes speed on the third frame you remain interested, because the change was not expected. Only when we get the expected amount of change do we become bored, and this happens even if the particular position has not been experienced before. It’s the same object and it has changed its position by the same amount. For example, the object is seen at position 47, then at 45; its speed is –2. It is expected to change position again by –2, the same speed. If it does, we are bored, even though position 43 is novel.

But if, instead, it changes position by +5, this is unexpected and we pay attention. It is now at position 50. There are two expectations, one for a –2 change and another for a +5 change. No change could also occur, and this would be boring. If either the –2 or the +5 change occurs then we are bored. Thus only new, unexpected speeds / change amounts retain our interest. When we are bored we perform an action and a particular change in position occurs. The mode of behaviour for this object has changed from observation to exploration. Let us say at position 50 the change is –2, with new position 48, thus repeating the first speed ever observed. We are expecting this, so our first reflexive action A causes no change in speed. It will have changed in position though, to 46. When a reflexive action is performed the expectation is for no change, so any change such as this –2 will be interesting and mark the action as worth repeating. This means there is no sense of continuous motion involved in Adaptron. It has no expectation that an object will continue the same change if the action has no effect on it. So should it react after the new position 48? Or does the –2 speed recall the previous –2 speed, setting up an expectation of a +5? The question is when does it react? Here is the sequence (L is for listen):

Position:        47     45     50          48
Change:                 -2     +5          -2
Redo-interest:           2      7          0?
Action:           L      L      L           ?
Expecting:        0     -2     -2 or +5?

The redo-interest here is based on a difference of reading. It’s the change in the change. This is different from previous redo-interest that is based on the object being different. The change amounts here act more like stimuli from another sense. It is expected to be the same as the last change and it is interesting if it is not. Thus the sequence should be:

Position:                   47     45     50     48     46
Change:                            -2     +5     -2     -2
Redo-interest:                      2      7      7      0
Action:                      L      L      L      L      A
Change Sense Expecting:      0     -2     +5     -2     -2
Change Habit Expecting:                          +5

The last change of –2 was uninteresting because it was expected by the change sense. A change of +5 from position 48 would also be uninteresting because it was expected by the habit. No change in position, i.e. staying at 48, should also be uninteresting due to no change. Now the question arises: is the action A being used to explore the object possibilities or the changes? I suspect we are exploring the changes, and what actions cause what changes given specific trigger changes.
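The redo-interest bookkeeping above, interest as the change in the change, can be reproduced with a short sketch (the change sense starts by expecting 0, and thereafter expects the last change to repeat):

```python
def change_sense(positions):
    changes, interests = [], []
    expected = 0                      # initially the sense expects no change
    for prev, cur in zip(positions, positions[1:]):
        change = cur - prev
        interests.append(abs(change - expected))  # unexpectedness of this change
        changes.append(change)
        expected = change             # now expect the same change to repeat
    return changes, interests


changes, interests = change_sense([47, 45, 50, 48, 46])
print(changes)    # [-2, 5, -2, -2]
print(interests)  # [2, 7, 7, 0] -- matching the redo-interest row in the table
```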

But what if now a particular position has certain properties not general to all positions? For example, speed through this position is constrained in some way. We have specialization. Now the expectations at this specific position must be based on the LTM entry, not just the object.

9th Feb 2008                          S-Habits versus C-Habits

Both S-Habits and C-Habits are sequential. It may be possible to represent S-Habits as C-Habits in which there is only one reading due to the objects involved being discrete. And aren’t S-Habits all about detecting change and unexpected change? It’s just that with S-Habits the change is either yes or no rather than a graduated value that can be determined between two similar objects that have a graduated intensity.

I’m using a different sense number to identify S-Habits and the same sense number for a change habit that involves a change in the reading for the same object. If I have a change in object in the same sense due to the object being a composite from multiple sensors then this would end up as an S-Habit. Maybe therefore change habits should use the S-Habit sense number. The two objects would just be the same and the change amount is graduated with a range of values. But what happens when an object has an intensity reading plus a position reading plus a width reading and any or all of these could change? Does each object have a number of readings? Do we then have a composite object with four values, the object ID (discrete), the intensity, position and width (all graduated)? Then we could use the principle that the biggest combination that changes attracts attention. And then an S-Habit that expects a change needs 4 change values. This then means that all possible combinations need to be explored with all the actions. But that is not reasonable. There must be a simpler exploration strategy.

                                                Exploration

In a maze animals and humans don’t try exhaustively all possible actions at all possible positions. They are more likely to repeat a past action in a new situation because it has some characteristic of a past situation in which the past action gave an interesting – unexpected result.

This only works if a common characteristic can be found between the current and past situations. In the case of one sense, one discrete sensor this is not possible. The only ways characteristics may occur are in sequences of change. But once we have two senses each with a discrete sensor or one sense with two discrete sensors we now have the possibilities of common characteristics. This would be one particular sensor and value independent of the other sensor’s value. This is generalization as described on 4th Dec. 2005 and 24th Oct 2007. Maybe I should be working on this phenomenon before change habits.
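A sketch of finding such a common characteristic, with composite stimuli represented as sensor-to-value mappings (a representation I am assuming here for illustration):

```python
def common_characteristics(current, past):
    # The shared sensor/value pairs, independent of the other sensors' values.
    return {sensor: value for sensor, value in current.items()
            if past.get(sensor) == value}


# A past situation (X, B) whose action gave an unexpected result partially
# matches the current situation (Y, B) on its B component, so the past
# action becomes a candidate to repeat here.
print(common_characteristics({"s1": "Y", "s2": "B"},
                             {"s1": "X", "s2": "B"}))  # {'s2': 'B'}
```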

10th Feb 2008                      Explore Change or Readings

In the following scenario change is a stimulus combined with a reading to form a pair. At step 4 only the change changes so attention is paid to it.

One Sense, One Sensor, Graduated Readings

Step                    1        2        3        4        5a       or 5b     or 5c     6
                        (Environment only changes if action taken)
Sense #                 In       In       In       In       In       In        In        In
Sensor Obj #            1        1        1        1        1        1         1         1
Sensor Reading          47       47       49       49       49       49        49        51
Sensor Change           0        0        2        0        0        0         0         2
Reading & Change        47,0     47,0     49,2     -,0      49,0     -,0       49,-      51,2
Obj ID                  1        1        2        3        1        3                   4
Expect next             1,47,0   1,47,0   1,49,2   1,49,0   nothing
Redo Interest           0        2        -2       ?
Action                           a                          a        a
Explore using                                               reflex   redo @2   redo @3
Common Characteristic                                                Change    Reading
(5b: because step 2 has Redo Int = 2; 5c: because step 3 has Redo Int = -2)

Objects ID              1        2        3        4
Obj ID #1               -1       -1       -1       -1
Obj ID #2               -1       -1       -1       -1
Reading & Change        47,0     49,2     -,0      51,2
First occurred          1        3        4        6

At step 5 the expected result from step 4 is received, so an action needs to be performed. Should the strategy be to explore the combination of reading and change with a reflexive response, as in 5a, or should it be to find a common characteristic and explore it, as in 5b or 5c? Step 5b uses a common change to find a trigger for the action. This is more general than step 5c, which uses a common reading. When this action is done, the used common characteristics are stored in LTM so that the action habit records the trigger that is being explored. What is stored for the goal stimulus? It could be the complete stimulus perceived. Or maybe what should be stored are those characteristics of the goal that match the expected, because this is what the action does. I think the best environment in which to play with these ideas is test case 25 – one discrete sense with 2 discrete sensors.

Another reason to use scenario 5b is that attention does not have to be changed. The expectation of attention or conscious sequence is that nothing will happen, just as sensors expect no change. So even though there is no habit being done the expectation is that the last conscious stimulus will repeat i.e. the change of 0. Attention should only change if attracted or on purpose by performing a habit. Between step 4 and 5 there is no change to attract attention and no habit is being performed. The action “a” would then be a result of a reflexive response to boredom not of finding a past experience with a common characteristic. Unless the reflexive response strategy is not to choose the next action in sequence but to find any partial past matches that have actions that were worth redoing.

                                                Action Generalization

Given that action “a” is performed, the goal stimulus will be 51, 2. What part of this will be attended to? Is the action performing the habit at step 2 expecting a 49, 2 at step 3? Or is it performing a new action habit just using step 2 as a source of actions to try? An example is that you find that banging the TV makes it start working. You didn’t want to get it working; it was just a change that you observed, an unexpected result. You find the radio is not working. Both the TV and radio have the common characteristic that they are electronic appliances. You are interested to see if you get the same unexpected result. So you bang the radio. It starts working. The same change happened. Does that mean the action habit has been proven and is no longer interesting until the desire to get an electronic appliance working occurs? 

The goal is to cause unexpected things to happen thus to obtain novelty and then reduce its interest to neutral by repeating actions. Applying this principle to the above situation it would appear that the 51 and 2 are changes from the previous stimuli of 49 and 0.

11th Feb 2008                      Exploration Strategy

The objective behind trying a past interesting action from a similar situation is to experiment. That is to see if the original action caused the change from 47 to 49 or the 0 to 2 change. It might be more helpful if we considered a rewarding situation rather than one just involving change / novelty. Let us assume the first occurrence started with composite object A, B. The action was “a” and the resulting goal stimulus was C, D. Let us assume this was given a positive redo interest due to C or D being wanted / rewarding. In the second occurrence the composite object X, B is the trigger but only the B is attended to. This results in a similarity match with the first occurrence and action “a” is tried. The expectation is that the same reward will be obtained not that the same goal stimulus will occur.

Does the first occurrence learning get updated as a result of the second occurrence? Probably not, because it was used as a source of action to try, not as an A-Habit behaviour to perform. The second occurrence forms its own A-Habit. The second occurrence trigger stimulus is -, B which is what was being attended to at the time. This records a general A-Habit independent of the 1st of the two objects involved. What if the composite X, B was attended to in the second occurrence? The partial match on the B of the first trigger should still provide action “a” as a candidate action to try. The second occurrence trigger stimulus gets recorded as “X, B” and we have another special case A-Habit.

What happens if the second action “a” does not get rewarded? This new A-Habit now records this fact, but would the first one be updated? If it is just a source for a candidate action then the answer is no. But if a third occurrence is “Y, B” in which the partial match is the B, the second occurrence would have precedence over the first due to the recency effect. Now action “a” is not a candidate and the strategy would be to try the next possible action in the series.

                                                Partial match

If only change / novelty is being considered then the exploration strategy would be to use the most recent similar / partial match, redo interesting past situation as a source of actions to try. If no partial match with redo interest can be found then select the next action in the series. The expectation would be for a positive redo interest, no particular goal stimulus. Given no expectation the actual goal stimulus that attracts attention is recorded along with its redo interest.
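A sketch of this exploration strategy, assuming memory is a list of (stimulus, action, redo interest) records, oldest first (the record layout and names are my assumptions):

```python
def choose_action(trigger, memory, action_series, last_tried):
    # Most recent first: prefer a partial match that still has redo interest.
    for stimulus, action, interest in reversed(memory):
        partial_match = any(trigger.get(k) == v for k, v in stimulus.items())
        if partial_match and interest > 0:
            return action
    # No redo-interesting partial match: next action in the series.
    return action_series[(action_series.index(last_tried) + 1) % len(action_series)]


memory = [({"s1": "X", "s2": "B"}, "a", 3)]
print(choose_action({"s1": "Y", "s2": "B"}, memory, ["a", "b"], "a"))  # 'a'
print(choose_action({"s1": "Y", "s2": "Z"}, memory, ["a", "b"], "a"))  # 'b'
```

The first call repeats the interesting past action on a partial match; the second, with nothing in common, falls back to the next action in the series.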

                                                Action Selection

Currently a next action is chosen in series, in which each trigger stimulus has its own series. An alternative strategy would be to cycle through a single series whenever a reflexive response is needed. I would not have to keep a record of the last tried action per trigger stimulus; just a single last tried action. If the one chosen happens to be the same as the last one tried for the trigger stimulus, it could be skipped and the next one tried. This would mean that Adaptron might never get around to trying all possible actions in all possible situations, i.e. an exhaustive search. But given partial matching of trigger stimuli it would try actions that have a possibility of reward, reward being novelty in the current system. A real problem, though, would be that no stimuli would ever become permanent. A modified strategy could be that as soon as the chosen action happens to be the same as the last one tried for the stimulus, the stimulus becomes permanent. However, this does not have any reasonable justification.
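The single-series alternative could look like this (class and method names are hypothetical): one global cursor over the action series, and a choice that repeats the trigger's last tried action is skipped:

```python
class ReflexiveChooser:
    def __init__(self, actions):
        self.actions = actions
        self.cursor = 0               # one cursor for the whole series
        self.last_for_trigger = {}    # last action tried per trigger stimulus

    def next_action(self, trigger):
        choice = self.actions[self.cursor % len(self.actions)]
        self.cursor += 1
        if choice == self.last_for_trigger.get(trigger):
            # Same as the last one tried for this trigger: skip to the next.
            choice = self.actions[self.cursor % len(self.actions)]
            self.cursor += 1
        self.last_for_trigger[trigger] = choice
        return choice


chooser = ReflexiveChooser(["a", "b", "c"])
print(chooser.next_action("T"))  # 'a'
print(chooser.next_action("T"))  # 'b'
```

Note that this sketch still tracks the last action per trigger in order to implement the skip; a strictly single last-tried action would drop `last_for_trigger` entirely.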

                                                Redo Interest determination

In the 10th Feb scenario the redo interest is determined as the largest value of the absolute values of the multiple changes experienced. The 47,0 stimulus followed by the 49,2 stimulus gives the two change amounts 2,2. The greatest value is 2 and is used for redo interest purposes. When going from 49,2 to 49,0 the change is 0, -2 and the redo interest should be 2.
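As a sketch, redo interest over a composite (reading, change) stimulus is then the largest absolute component-wise difference:

```python
def redo_interest(prev, cur):
    # Largest absolute value among the component changes.
    return max(abs(c - p) for p, c in zip(prev, cur))


print(redo_interest((47, 0), (49, 2)))  # changes (2, 2)  -> 2
print(redo_interest((49, 2), (49, 0)))  # changes (0, -2) -> 2
```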

15th Feb 2008                        Discrete stimuli

I have started storing discrete stimuli as graduated stimuli. Each stimulus in LTM is an instance of the same object with no change in it. Each LTM instance contains the reading, and even the changes can be stored as though they are graduated; however, when it comes to using a difference, the change is treated as change or no change.

                                                Generalization

Given the fact that we generalize before we specialize (discriminate), I should be performing actions based on the object with no reading first. But actions are recorded in LTM beside specific instances of attended-to stimuli. To get an action performed on the general object, independent of its reading, it must be done after attending to an object’s change stimulus that has a no-change value. This combination matches the primitive object. This begs the question: is the first stimulus ever attended to the result of a generalized or a specialized breadth of attention? In other words, does the 1st stimulus in LTM contain all the detail of the entire environment as perceived (specialized), or does it contain all the objects with no reading and no change (generalized)?

The same question applies when there are no expectations, as after having just done a reflexive action or having just stored an S-Habit: what should be expected? Do we pay attention to the whole scene with all its specific detail, or get a general impression? Especially if there is no change, do we keep paying attention to the same thing we were attending to before performing the action? I would say yes, because attention has to be attracted by change or purposefully changed. Similarly for an S-Habit: continue to pay attention to the sense / sensors of the last primitive stimulus in the S-Habit.

16th Feb 2008                      Cyclic Graduated Stimuli

I had been thinking that I would not have to deal with the start and end of a graduated range of values joining up in a circle until I dealt with a range of sensors for a graduated sense. An example is a band of pressure sensors going all the way around a robot at the same level from the ground. But I realized that a graduated intensity / volume / distance (range finder) sensor might need to be cyclical if it happens to read the orientation of a wheel or the direction of a compass. There is no start and end to these readings and we are dealing with only one sensor with a graduated reading.
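For such a cyclic sense the change between two readings has to be taken the shortest way around the circle. A sketch, assuming a compass-like sensor with a known period:

```python
def cyclic_change(prev, cur, period=360):
    # Signed difference taken the shortest way around the cycle.
    diff = (cur - prev) % period
    if diff > period / 2:
        diff -= period
    return diff


print(cyclic_change(350, 10))  # 20: crossing the 0 point is a small change
print(cyclic_change(10, 350))  # -20, not +340
```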

18th Feb 2008                      Thinking problem

I have just documented a test run in which thinking ahead (A to C to D) caused a past neutral-redo-interest action habit (A to C) to be performed, but the result (A to B) did not achieve the thought-about sequence. Thus the thought-about interest was not achieved and an infinite loop occurred. My first reactive solution is to mark neutral-redo-interest habits (A to B and A to C) that were being done as uninteresting, and then not do thinking on uninteresting A-Habits. But maybe a more accurate solution is to consider the thought-about sequence as an attempt to form an A-Sequence that includes an R-Habit. Thus the A-Habit put on the habit stack with concentration level – interesting, should be trying to achieve the A-C-D sequence, not just the A-C habit. And if it is successful, this is when the R-Habit gets formed.

19th Feb 2008                      Change

As I deal with making change into a stimulus I face difficult questions such as: when attention is attracted from one sense to another, the stimulus perceived is obvious, but what about the change? Should the change be “no change”, i.e. a “0”? Should the change be set to unknown, i.e. a “_”? Or should it be the amount of change that caused the attention shift in the first place, as experienced by the sensors? I have decided to try the last answer. This corresponds to the current approach, in which habits reduce this change, leaving the unexpected ones to attract attention. The amount of this unexpectedness is also used in coming up with the redo interest.

For the first ever stimulus there is no expectation and the change experienced should be as though the previous stimulus was on a different sense. This means use the change as detected by the sensors. But the sensors have no previous value to use in determining this change. This is not so important for a discrete sensor because the change is just a “1”. But for a graduated sensor the change has a value that cannot be determined.

But I have another change value I use called sequential interest. This is the interest between two stimuli that have been attended to. If they are of the same sense then this should probably equal the difference detected by the sensors on that sense. But can I use this difference when the two stimuli are from different senses?

And what about the change between a stimulus and an S-Habit comprised of stimuli from the same sense? Do I compare the last or first stimulus in the S-Habit with the next or previous stimulus to determine a change? Or do I treat the S-Habit as a totally different stimulus, as though it was from a different sense? This question gets more interesting when I realize that the C-Habit is represented as an S-Habit in which the change is the difference in values between the 1st and 2nd stimulus objects. Thus the sequence 7, 2 produces an S-Habit object (x) with a change of –5. The LTM reading is 7x. If the stimulus that precedes this is 6, the 7x has a change of +1, and the x object should really contain both the –5 and +1 changes. Now what happens if the 6 becomes permanent and a new S-Habit is formed out of the sequence? The memory then contains 6y, and y must contain a change of +1 with x as the second object.

21st Feb 2008                       S-Habits structure

I’ve been struggling with the structure of S-Habits such that they also capture change information. I have also had the problem of the two ways to interpret ABC: either A (BC) or (AB) C. A possible solution may lie in the principle I use for P-Habits, which is that stimuli at one level are combined before they are used as source information for the next level. One does not combine level-1 with level-2 stimuli. For example A, B, and C are individually at level 1, while (AB) and (BC) are at level 2. A (BC) combines a level-1 stimulus “A” with a level-2 stimulus (BC). The solution would be to combine level-1 stimuli first into level 2 and then combine these into level 3. So ABC becomes the level-3 S-Habit ((AB)(BC)) and there is only one interpretation of the sequence. But what does the sequence ABBC become? It becomes a 4th-level sequence (((AB)(BB))((BB)(BC))). To make this scheme work I will have to put past sequences up to some maximum length on the S-List and then compare them with the current sequences of the same length / level.
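The pairing scheme can be sketched as follows: each level is formed by combining overlapping pairs from the level below, so a sequence tops out in a single unambiguous S-Habit (the tuple representation is my assumption):

```python
def build_shabit(seq):
    level = list(seq)
    while len(level) > 1:
        # Combine overlapping neighbours: level n feeds level n+1.
        level = [(a, b) for a, b in zip(level, level[1:])]
    return level[0]  # the single top-level S-Habit


print(build_shabit("ABC"))
# (('A', 'B'), ('B', 'C'))  i.e. ((AB)(BC)), one interpretation only
print(build_shabit("ABBC"))
# ((('A', 'B'), ('B', 'B')), (('B', 'B'), ('B', 'C')))  i.e. (((AB)(BB))((BB)(BC)))
```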

When lines are considered that get processed in parallel the same approach should be taking place. As an example let us consider Line 1 Brightness 6 and Width 4 (L1B6W4) beside Line 2 Brightness 5 and Width 6 (L2B5W6). And a 3rd line is (L3B7W7). The object resulting from line 1 & 2 has a reading from the 1st line of B6W4 and its S-Habit object (L1L2) contains the differences of B-1 and W+2. The combination of line 2 & 3 has a reading from its 1st line of B5W6 and its S-Habit object (L2L3) contains the differences of B+2 and W+1. The 3rd level would be a reading of B6W4 from its 1st part and its S-Habit object (L1L2)(L2L3) contains differences of B-1 and W+2. This is the solution I settled on in the Recog2 software that processed lines.

Applying this to sequential recognition of the discrete stimuli ABC produces ((AB)(BC)). The AB produces a reading of “A” and an object #1 with difference of 1. The BC produces a reading of “B” and reuses the object #1 with a difference of 1. The (AB)(BC) produces a reading of “A” and an object #2 with a difference of 1 (between “A” and “B”, the readings of the two parts) and its 2 object pointers to object #1.

Applying this to sequential recognition of the graduated stimuli 246 produces ((24)(46)). The (24) produces a reading of “2” and an object #1 with a difference of 2. The (46) produces a reading of “4” and reuses object #1 with a difference of 2. The (24)(46) produces a reading of “2” and an object #2 with a difference of 2 (between 2 and 4, the readings of the two parts) whose two object pointers point to object #1.
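
The reading/difference bookkeeping for a sequence can be sketched as below. This is my illustration, assuming numeric stimulus values (discrete letters would first be mapped to ordinals); the object table keyed by difference is how I read the "reuses object #1" behaviour.

```python
def recognize(seq):
    """Record each adjacent pair as (reading of 1st part, object id, difference).
    Pairs with the same difference reuse one object, as in the 246 example."""
    objects = {}            # difference -> object id
    pairs = []
    for a, b in zip(seq, seq[1:]):
        diff = b - a
        obj = objects.setdefault(diff, len(objects) + 1)
        pairs.append((a, obj, diff))
    return pairs
```

For the sequence 2, 4, 6 this yields (2, object #1, +2) and (4, object #1, +2): the same object is reused because the difference repeats.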

Now that I have included difference as part of every stimulus it gets a little more complicated. The ABC scenario becomes A_, B_, and C_ because the changes are the same from one stimulus to the next. The 2nd-level S-Habits are (A_B_) and (B_C_). For the 246 scenario the sequence is 2_, 4_, and 6_ because the changes of +2 are the same from one stimulus to the next. But what do we do with a sequence such as 2_, _0, which comes from the input values 4, 3, 2 and 2? The first 2 has a change of –1, which is not interesting because it is the same as the previous change. The second 2 repeats the value, but its change is interesting because it has changed. The 2nd-level S-Habit is (2_, _0). The reading is 2_. The object difference though is hard to understand. It is the difference between two stimuli from two different senses: external values and internal change. If this were a P-Habit the combination would be a specific object in which the difference was not important, just which sense combination was involved. This would apply independent of order because P-Habits are simultaneous. In this sequential case the reading would change. Let us assume the difference is _, _.

What about the sequence 21, _0 coming from the external values 1, 1, 2 and 2? The 2nd-level S-Habit is (21, _0). The reading is 21. The object difference could be _-1. So the idea works: if either part is uninteresting and does not form part of the stimulus then it does not get used in the difference.

23rd Feb 2008                      Storing S-Habits

This new schema for representing S-Habits places different criteria on their creation and storage that I am finally getting my head around. Using the following information:

A, D, F are not permanent – still being explored. b, c, e, g and h are permanent and can be combined into S-Habits.

The sequences “bc”, “ce” and “gh” exist in LTM but other sequences of length 2 of these stimuli do not.

Input sequence: AbF is stored in LTM as AbF until one of A or F becomes permanent.

AbcD becomes A (bc) D.

AcgF stays as AcgF because it is the 1st occurrence of the “cg” pair and it needs to exist once before forming the S-Habit. The S-Habit will need to point to it.

AbceD becomes A (bc) (ce) D.

AbcghD becomes A (bc) (gh) D.

AbcgD presents a problem. If it is stored as A (bc) g D then the “cg” pair may never get stored in LTM and the bcg combination never recognized. If it gets stored as A (bc) c g D it might work better. Now that “cg” exists in LTM, the next AbcgD would become A (bc) (cg) D.

AbghD has a similar problem. A b g (gh) D would be the solution if the previous structure were to be used. Now that “bg” exists the next AbghD would become A (bg) (gh) D.

AhebF stays as AhebF because the “he” and “eb” sequences are new. The next time it would become A (he) (eb) F because both sequences would exist.
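
The collapsing rules in these examples can be sketched roughly as follows. This is my reading of the examples, assuming the known LTM pairs and permanent stimuli listed above; the handling of a shared middle stimulus (the c in A (bc) (ce) D) is inferred from the AbceD case. For the problematic AbcgD it produces A(bc)gD, the first of the two stores discussed above.

```python
KNOWN_PAIRS = {"bc", "ce", "gh"}     # length-2 sequences already in LTM
PERMANENT = set("bcegh")             # fully explored stimuli

def collapse(seq):
    """Collapse adjacent permanent stimuli into (xy) when the pair already
    exists in LTM; a middle stimulus that starts another known pair is
    shared between two collapsed pairs."""
    out, i = [], 0
    while i < len(seq):
        a = seq[i]
        if a in PERMANENT and i + 1 < len(seq) and a + seq[i + 1] in KNOWN_PAIRS:
            out.append("(" + a + seq[i + 1] + ")")
            # share the middle stimulus if it begins the next known pair
            if i + 2 < len(seq) and seq[i + 1] + seq[i + 2] in KNOWN_PAIRS:
                i += 1
            else:
                i += 2
            continue
        out.append(a)
        i += 1
    return "".join(out)
```

This reproduces the examples: AbcD collapses, AcgF and AbF stay as-is, and AbceD yields the overlapping A(bc)(ce)D.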

The algorithm that would perform this pattern recognition would, upon getting a permanent stimulus, set up appropriate habits expecting the known sequences in which the second stimulus was permanent. If one succeeds then the S-Habit is stored and new habits are set up based on the second permanent stimulus. If all fail then the permanent stimulus is stored and new habits are set up based on the second permanent stimulus.

Upon getting a permanent stimulus, habits would also be set up for known sequences in which the second stimulus was not permanent. These are necessary to reduce the unexpectedness between the two stimuli that might be detected by the senses.

I still need to avoid storing a permanent trigger stimulus, but only as long as it has LTM sequences that might succeed. I still need to save the context permanent stimulus in case no sequences are found. S-Habit matching can only combine two permanent stimuli if they are at the same level.

29th Feb 2008                      Sequential Recognition

I realized today that I can’t take the sequence AbcgD and store it as A (bc) g D nor as A (bc) cg D. It must be stored as AbcgD to form the first cg sequence in LTM. Only then can it be collapsed. This is because the (bc) (cg) structure is symmetric. When it gets b it is expecting c and has not stored b because it is permanent. When it gets the c it recognizes the bc, but c has expectations that do not include a next g. Thus S-Habits triggered by c with appropriate expectations must be set up. These would have bc as their context. If any succeed then the (bc) can be stored, and the (c?) is either context for the next ?-triggered S-Habit or gets stored if there are no expectations. The symmetry comes in if we have AeghD. It can’t be stored as A e (gh) D nor as A eg (gh) D. It must be stored as AeghD to form the first eg sequence in LTM. When it gets the e it starts any S-Habits that e expects, which do not include a second stimulus of g. When the unexpected g occurs it stores the e in LTM and expects g?, which includes gh. When this occurs it starts the (gh)? S-Habit. The D may be expected but, because it is not permanent, it must be stored. In conclusion a pair of collapsed stimuli, e.g. (bc) (gh), cannot be stored in LTM. The sequence (bc) (cg) must form first. Similarly the sequence AbcD cannot collapse into A (bc) D because the S-Habit triggered by c expects a D that is not permanent.

6th March 2008                    Forming S-Habits

I had the above working but then noticed that after the (gh) had no more interesting expected next stimuli I would produce a reflexive response and expect the next pair. This is because (gh) has not become permanent. But the first (gh) was not being formed until a valid 3-in-a-row existed, yet when doing reflexive responses the 3-in-a-row criterion was not necessary. I believe that I need to form pairs (S-Habits) whenever the two primitive source stimuli are permanent and not wait for the third stimulus. It is only when three permanent stimuli occur in sequence, both the pairs are permanent, and they exist one after the other that I should form a threesome.

So this means that AbcD gets stored as A (bc) D provided the bc sequence exists in the past. (bc) will then have to become permanent before threesomes can be formed. The sequence AbcgD would be stored as A(bc)gD until the (cg) sequence exists and then the threesome can be stored as A (bc)(cg) D once (cg) has become permanent. (bc) and (cg) must be stored when recognized so that actions can be taken after them until they become permanent.

15th March 2008                  Unexpected

It’s the unexpected that is really important. The sensors identify a stimulus if it is unexpected – different from the last one. The habits remove the expected leaving the unexpected sequential ones. They effectively exist at (are waiting at) every node in the recognition tree to identify a stimulus that is sequentially unexpected. If the (cg) sequence does not exist then the c starts any habits waiting for all the known next expected stimuli. The stimulus g will be unexpected. Unexpected stimuli attract attention and get stored in LTM unless they are permanent. If permanent they are not stored but all habits are started waiting for any expected stimuli. Any unexpected stimuli at this point cause the permanent stimulus to be stored. It is as though each node in the S-Habit recognition tree habitualizes after being activated expecting itself to be activated again. And when it does it ignores (reduces the interest in) the stimulus.

16th March 2008                  Algorithm for unexpectedness

Given AbcD above: “A” gets stored because it is not permanent. “b” is held and triggers its habits. “c” is expected and the pair is recognized. The pair is not permanent and is stored. D gets stored because it is not permanent.

Given AbcgD above where (bc) is not permanent: A gets stored because it is not permanent. “b” is held and triggers its habits. “c” is expected and the pair is recognized. The pair is not permanent and is stored. Habits are started for c as a trigger. “g” is permanent but not expected by c habits, so g is held and triggers its habits. Whether D is expected or not, it is not permanent and is stored. A(bc)gD is the result. If one c habit did expect g then the cg would be stored if it were not permanent. A(bc)(cg)D is the result.

Given AbcgD above where (bc) is permanent: A gets stored because it is not permanent. “b” is held and triggers its habits. “c” is expected and the pair is recognized. The pair is permanent, held, and triggers its habits. Habits are also started for c as a trigger. If one c habit expects g then the cg pair is recognized. If past experience only contains A(bc)gD then (cg) is not expected and A(bc)(cg)D gets stored. If past experience contains A(bc)(cg)D then the bc habits are expecting (cg) and this will result in ((bc)(cg)) being recognized.

16th April 2008                     Change

In recent Adaptron simulations I have been determining change and adding it to the stimulus as though it were a stimulus coming from another sensor. As a result it explores change just like any other stimulus, i.e. it tries all possible responses to a change before it declares it permanent. But the overall behaviour does not seem realistic. Do we explore change just like any other stimulus or do we just recognize it and its sequences? If we just recognize it then what purpose does it serve? Is it possible it acts as a trigger for generalization, i.e. a stimulus that occurred at the same time as another that had a response and was rewarded? Thus it could be a way of detecting similarity. Is it then possible that changes become the stimuli that are used in action sequences as feedback to allow them to continue executing? This might be true because the actions were done in the first place to obtain the expected change. It also saves the cerebellum from having to recognize the stimuli as it performs automatic / subconscious behaviour sequences (Action Sequences).

Given that “the parts of an object all cause the same amount of change” a sequence of stimuli is recognized as a starting stimulus value with an associated object that contains an amount of change between the two objects that comprise the sequence. Thus the object is the change part of the stimulus and as such can be a way of detecting similarity.

22nd April 2008                    Unexpected example

A good example of unexpectedness is the phrase “Give peace a cake of your mind”. Known phrases that it mixes up are “Give peace a chance”, “Give him a piece of your mind”, “It’s a piece of cake”.

30th April 2008                     Graduated sensors – Line recognition

I’ve been experimenting with line recognition again as I did from 26th April 2007 until 6th July 2007. But now I have created all possible objects based on any combination of stimuli. This allows for holes in the object through which one can “see” other objects. I have been working with just 8 sensors to simplify the experiments. When comparing two sequential images the kinds of changes that I need to detect are:

Change in:   New object
                     Missing object
                     Moves left or right
                     Brighter / dimmer object
                     Closer / further away object (change in size)
                     Flipped over object

I also need to address a partial object (one without all its parts) triggering the recognition of the complete object (like the eye filling in the missing part).

The different types of changes need to be sorted out so that the object with the greatest change is the one that attracts attention. On the 6th June 2007 (motion recognition) and on the 19th June 2007 I started to address this issue. Object size also plays a part in deciding which change attracts attention. A bigger object moving the same distance as, but independently of, a smaller object will attract attention. A bigger object changing its brightness the same amount as a smaller object will attract attention. If one were to consider units of brightness change as equally important as units of distance change and units of size then one could use the corner-to-corner distance of the cuboid formed by these three axes. However it would be easier to compute the volume and use it as a measure of overall change for attracting attention. Thus,

Overall change in an object = width * (position change + 1) * (intensity change + 1).

The changes need 1 added to them just in case their values are zero. This calculation would also apply to an object that disappears or is new. However here the position change would be assumed to be zero and the intensity change would determine the overall change. We could also add in a width change amount and multiply by (width change + 1).
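
The volume measure, including the optional width-change factor, is a one-liner; a minimal sketch (the parameter names are mine):

```python
def overall_change(width, position_change, intensity_change, width_change=0):
    """Overall change in an object = width * (position change + 1) *
    (intensity change + 1), optionally * (width change + 1).
    The +1s stop a zero change from zeroing the whole product."""
    return (width * (position_change + 1) * (intensity_change + 1)
            * (width_change + 1))
```

A stationary object that only dims by 1 still gets a non-zero rating, e.g. overall_change(5, 0, 1) = 10.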

1st May 2008                        No Motion – Stationary objects

In the past I have thought about, and am currently considering, a motion detection algorithm that starts by eliminating all the objects that don’t move from one frame to the next, as one would expect habituation to do. However this causes a problem when a big object made up of several parts moves but part of it is duplicated at its initial position. This sub-part appears in the same place on both frames and thus might be eliminated in the move detection. The previous solution was to override stationary parts when the bigger object was found to move. However a better algorithm would be to use the formula from yesterday on all objects that appear in both frames. Stationary objects will get a non-zero value but moving ones will hopefully get a larger value. But one could have a very wide stationary object and a slim moving one that attracts attention. Using an object width of 1 for a stationary object might solve this. This means that any object that is new or reappears in the second frame will be assigned an overall change rating.

However each object in a frame has an intensity reading and a position reading each of which can change independently. Is it good enough to use the overall change to attract attention or should both kinds of changes be dealt with independently? I.e. one type attracts attention before the other?

15th May 2008                      Line Recognition

Over the last 3 days I have been making some loose leaf notes about line recognition.

An experience has an object with 1) the position of its first part, 2) the intensity of its first part and 3) its total size. The object is made of 2 parts (objects) with 1) a relative intensity, 2) a relative size and 3) a ratio of separation to the width of the 1st part. Then if we are dealing with a lowest-level object as detected by a sensor the 2 parts are the same lowest-level object, and the 1) relative intensity is 1, 2) relative size is 1 and the separation to width ratio is 0. Maybe it would be better to use the ratio of the size of the whole to the size of the 1st part as 3) and then the value for a lowest-level object would also be 1 for this ratio.

Then the same object that appears in sequential frame 1 and frame 2 is the object that minimizes the (distance moved + 1) * (intensity change + 1) * (size change +1).
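
The frame-to-frame identification rule can be sketched directly; this is an illustration only, assuming each object is a (position, intensity, size) tuple, which is my format, not Recog2's:

```python
def frame_match(frame1, frame2):
    """For each object in frame 1, pick the frame-2 object that minimizes
    (distance moved + 1) * (intensity change + 1) * (size change + 1)."""
    def cost(a, b):
        return ((abs(b[0] - a[0]) + 1)
                * (abs(b[1] - a[1]) + 1)
                * (abs(b[2] - a[2]) + 1))
    return [min(frame2, key=lambda b: cost(a, b)) for a in frame1]
```

An object at position 3 is matched to the nearby, similar object rather than a distant, dimmer one because the product of small changes stays small.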

                                                Generalization

Can we use generalization in line recognition such that a line experience is similar to another one but not necessarily the same? Two experiences are similar if they have the same intensity or the same size or the same position. Note the use of “or” rather than “and”: using “and” gives an exact match of the experiences. Also, having 2 objects with some common properties is like having a type of object. Two objects are similar if they have the same relative intensity or the same relative size or the same ratio of whole size to size of the 1st part.

The Gestalt idea is that we group parts according to proximity, common shape / intensity and we tend to complete missing bits. Maybe we should be grouping parts according to the same intensity or the ones beside each other or at least close to each other defined by a small separation relative to overall size.

                                                The Background

Identify the background very early on as an object (or objects) by using the object–part grouping rules. Two parts form an object because they are close (beside each other or separated by 1 or 2? other objects) and they have the same common intensity (or size?): relative intensity = 1, relative size = 1 and whole size to size of 1st part = 1. Two parts separated far enough apart (or with 2 or more in-between objects) and small enough are not grouped into an object until they change (sequentially) together in intensity or position: relative intensity not = 1, relative size not = 1 and whole size to size of 1st part not = 1. A similar / same method is used to decide if 2 parts form an object in one frame as is used to decide if an object is the same between 2 frames = minimum of (distance moved + 1) * (intensity difference + 1) * (size ratio + 1) * size * (level + 1). Background objects are grouped into higher-level objects but none of their parts are grouped with other objects.

25th May 2008                      Difference or ratio

Since intensity only has one value per object, i.e. it does not span a range of intensities (whereas position is complicated because objects also have width), it should be easy to decide between using the difference or the ratio of intensities. I just realized this is not true: if we have an object made up of two parts it has an intensity range just like it has a width that is not one. Only primitive sensory readings have a width of one and an intensity range of one. An object can be made up of two parts with the same intensity, giving an intensity range of one, but it must have a width greater than one because the two parts can’t be in the same place unless it’s a primitive object. I started this paragraph trying to decide if I should be using the difference in intensity (Part1 intensity – Part2 intensity) to identify the object or the ratio of intensities (Part1 intensity / Part2 intensity). Up to this point I have just been using the term “relative intensity”. Now I think it would be easier to solve this for positions and then apply the same logic to intensity. Even though intensity may range over several orders of magnitude whereas position ranges over fewer, I feel the same principles should apply.

For a two-part object where both parts are the same intensity the experience consists of the object at a particular position and a particular size. For the object to be recognized as position and size independent it must be recognized as consisting of Part1, a separation and Part2 such that these three stay in the same size ratio to each other. This can be accomplished with two ratios. Many different possible ratios would work. Differences in size would not work since these are absolute amounts and would not scale properly for larger or smaller experiences.

For Part1 at position 1 (P1) of size 1 (S1) and Part2 at position 2 (P2) of size 2 (S2) with a separation at position S (PS) of size S (SS) the size of the whole (SW) = S1 + SS + S2. But since an experience contains 2 experiences each of which have a position and size it would be advantageous not to use the separation position or size in the ratios used. Thus the two object ratios could be SW/S1 and SW/S2 where SW = P2 - P1 + S2.

This means an experience is recorded as (1) a position, (2) a size, (3) an intensity and (4) an intensity range plus (5) the object. An object is recognized as having (1) a part1 relative size SW/S1, (2) a part2 relative size SW/S2, (3) a part1 relative intensity range RW/R1, and (4) a part2 relative intensity range RW/R2. Where RW is the intensity range of the whole object from top intensity to bottom intensity and R1, R2 are the intensity ranges covered by part1 and part2 respectively. RW = I2 - I1 + R2 where I1 and I2 are the intensities of experienced part1 and part2 respectively.
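
The four ratios can be computed directly from the two experienced parts; a minimal sketch using the definitions above (SW = P2 - P1 + S2 and RW = I2 - I1 + R2), with parameter names of my choosing:

```python
def make_object(p1, s1, i1, r1, p2, s2, i2, r2):
    """Build the object's position/size-independent description:
    relative sizes SW/S1, SW/S2 and relative intensity ranges RW/R1, RW/R2."""
    sw = p2 - p1 + s2      # size of the whole
    rw = i2 - i1 + r2      # intensity range of the whole
    return {"SW/S1": sw / s1, "SW/S2": sw / s2,
            "RW/R1": rw / r1, "RW/R2": rw / r2}
```

Two parts of size 2 at positions 2 and 6 give SW = 6, so both size ratios are 3; the same object shifted or scaled yields the same four ratios.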

                                             Common parts

To form an object from 3 or more parts the algorithm combines any two objects that share a common object. What does this do to the relative sizes and intensity ranges? It means the sum of the part sizes is greater than the whole size due to the overlap.

26th May 2008                      Grouping

The difficulty is to invent an algorithm that groups objects together appropriately. Two parts are more likely to be grouped if D*I*S*R is small. Where D is the distance between the two objects = P2 - P1, I is the difference in intensity = I2 – I1 + 1, S is the difference in size = S2 - S1 + 1, and R is the difference in intensity range = R2 – R1 + 1. One is added so that the result is never equal to zero. The 1st pass through the sensor readings would convert each reading into a primitive object. The second pass needs to look at all pairs of objects and find the ones with the lowest D*I*S*R and group / combine them.
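
The second pass can be sketched as an exhaustive search for the pair with the lowest D*I*S*R. This is an illustration assuming objects are (position, intensity, size, range) tuples, a format of my choosing:

```python
from itertools import combinations

def best_pair(objects):
    """Return the pair of objects with the smallest D*I*S*R, where
    D = position difference, and I, S, R are the intensity, size and
    intensity-range differences each with 1 added to avoid zeros."""
    def disr(a, b):
        return (abs(b[0] - a[0])
                * (abs(b[1] - a[1]) + 1)
                * (abs(b[2] - a[2]) + 1)
                * (abs(b[3] - a[3]) + 1))
    return min(combinations(objects, 2), key=lambda p: disr(*p))
```

Two adjacent primitives of the same intensity score 1 and are grouped before a distant, dimmer one.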

I have found I still need a first pass over the sensors to combine those that are adjacent with the same intensity to form primitive objects because they are equivalent to single sensor objects but just closer (expanded size). This is equivalent to combining sequential identical objects due to boredom / habituation. Then when combining these primitive objects, whether they are adjacent or not, I can make sure the 2nd part of the 1st object is the same as the 1st part of the 2nd object. But the position of the 2nd part of the 1st object is not the same as the position of the 1st part of the 2nd object. This is equivalent to recognizing sequential changes (Change-Habits stored as S-Habits). These combinations (pairs of primitive objects) can then be combined to the top level provided the 2nd part of the 1st object is the same object and position as the 1st part of the 2nd object to get all possible objects in the scene. This is equivalent to sequential combinations of S-Habits.

While trying to get the intensity range of combination of objects correct I realized that I need to also keep information about whether the intensity increased or decreased between part1 and part2. The initial attempt was to store this as the sign (-ve or +ve) of the intensity range of the whole in the object. The same is not needed for the position dimension because part1 and part2 are always sorted by position. Then I realized the increase or decrease in intensity is really a property of the experience and not the object. In the position dimension this means that an object and its left to right mirror image should be recognized as the same object.

27th May 2008                      Intensity Changes

I’ve been treating intensity (reading) exactly as though it were position, but it does not have the same characteristics. For example the intensities of the two parts of an object can be the same whereas their positions must be different (they can’t both be measured by the same sensor). If an object consisting of 2 parts of size 1 is positioned at 3 and 5 and the parts move to positions 7 and 9 they are seen as the same object; their distance apart has not changed. If 2 parts of size 1 are beside each other with intensities 5 and 7 and they change to intensities 10 and 12 they are not seen as the same object because the ratio of intensities was not maintained. They should change to 10 and 14 to be seen as the same object, just brighter. The same goes for size: the ratio must be maintained. The position scale is a linear one while the intensity and size scales are logarithmic. Like the keys on a piano, the log of the frequency is mapped onto a linear scale and then the changes can be treated like positions. Examples are:

Linear:            Visual Position, left/right
                       Place on skin
                       Rotational angle
                       Distance away
                       Time ?

Logarithmic:  Sound Frequency
                      Volume
                      EMR Frequency
                      Brightness
                      Size
                      Pressure
                      Temperature
                      Weight

Linear scales are added and subtracted to determine change while logarithmic ones are multiplied and divided. A logarithmic scale can be mapped onto a linear one and then adding and subtracting can be used.

So if the sensors represent positions such as positions of dots on a sheet of paper they are linear. An object made up of two parts at position 33 and 60 of sizes 3 and 9 respectively is the same object if it moves linearly left to position 11 and 38. It is also the same object if it doubles in size at locations 11 size 6 and location 65 size 18. Note location changes linearly but size changes in multiples, i.e. it’s logarithmic. Now consider an object made up of two groups of dots beside each other of intensity 33, range of intensity 3 and intensity 60 and range 9. If the 1st group’s intensity is reduced to 11 and range of 1 the second group must be reduced to an intensity of 20 and range of 3 for the two groups to be recognized as the same object. Note Intensity and range of intensity are logarithmic.

To implement a sense such as hearing the logarithm (base 2) of the sound frequency must be calculated and quantized. Then this value must be mapped to the linear array of sensors. The Volume is then measured and mapped directly to the Intensity / reading.
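
The piano-style mapping can be sketched as below; the base frequency (A0 = 27.5 Hz) and the 12-sensors-per-octave resolution are my assumptions for illustration:

```python
import math

def frequency_to_sensor(freq_hz, base_hz=27.5, sensors_per_octave=12):
    """Quantize log2 of a sound frequency onto a linear sensor index,
    like mapping frequencies onto piano keys."""
    return round(math.log2(freq_hz / base_hz) * sensors_per_octave)
```

Each doubling of frequency then moves the reading a fixed number of sensors, so multiplicative changes on the logarithmic scale become additive changes on the linear sensor array.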

To implement a sense such as a belt of pressure sensors or infrared range finders around the girth of a robot each sensor maps onto the sensor array as a linear value and the pressure or distance is then mapped to the reading.

Given this perspective an experience must be recorded as (1) a position, (2) a size, (3) an intensity and (4) the object. An object is recognized as having (1) a part1 relative size SW/S1, (2) a part2 relative size SW/S2, and (3) a part1 to part2 relative intensity.

28th May 2008                      Reflections

Happy Birthday is “ C C D C F E “ = 1 1 3 1 6 5, but we don’t recognize it played in the other direction because the timing and volume are different. But a pattern of 3 notes at the same timing and volume is recognized when played in the opposite order.

29th May 2008                      Distance moved

To find the simplest interpretation of the 1st and 2nd frames oooxoxoooo and oooooxoxoo in which the left X has either moved 4 places to the right or the pair of Xs have moved 2 places to the right (preferred interpretation) we need to use the square of the distance. Then one X moved 4 comes out as 16 while 2 Xs moved 2 is 2 x 4 = 8. Similarly we should be using the square of the separation and square of the intensity difference in calculating the D*I*S.

But at what level in the recognition process should the DIS formula [P2-P1]^2 * ([I2-I1]+1)^2 * ([S2-S1]+1)^2 be used? Square brackets mean take the absolute value; P1 = position of part1, I1 = intensity of part1 and S1 = size of part1.
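
The squared formula and the interpretation comparison can be sketched together; a minimal illustration with parameter names of my choosing:

```python
def dis(p1, i1, s1, p2, i2, s2):
    """Squared DIS between two parts:
    [P2-P1]^2 * ([I2-I1]+1)^2 * ([S2-S1]+1)^2."""
    return (abs(p2 - p1) ** 2
            * (abs(i2 - i1) + 1) ** 2
            * (abs(s2 - s1) + 1) ** 2)

def interpretation_cost(moves):
    """Cost of one interpretation of a frame change: the sum of the
    squared distances each object moved."""
    return sum(d * d for d in moves)
```

Squaring makes one X moving 4 cost 16 while two Xs each moving 2 cost only 8, so the pair-moved-together interpretation wins.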

                                             Levels of recognition

We have level 0 in which all adjacent sensors with the same intensity are combined into a single object that is the same as the single sensor detects except wider (closer). Level 1 in which all pairs of these lowest level objects are combined (not necessarily adjacent). Adjacent ones represent the change in intensity between the two objects along with ratio of sizes. Level 2 combines all possible pairs of level-1 objects that share a common part thus giving objects made up of three or more lowest-level objects.

If DIS is used at level 0 between the sensors then adjacent sensors always calculate to 1*1*1 and combinations of sensors using level 3 grouping also always calculate to 1*1*1. So this DIS at level 0 does not appear to be of much use. But a single dark line on a white background stands out as the object of interest until its size grows wider than half the range so the size of the line must also play a role. Let’s multiply the DIS by the size of the object at Level 0.

At level 1 in the frame 000131000 the 13 and the 31 have a DIS of 1 * (3-1+1)^2 * 1 = 9 while the two ones (1?1) have a DIS of 2^2 * 1 * 1 = 4. The 1’s have a greater affinity for each other. If the middle intensity were 2 rather than 3 then the DIS would be the same at 4. If the middle intensity stayed at 3 the two 1’s would have to be separated by a distance of 3 to give the same DIS. When we draw lines on paper to form a line drawing the intensity difference between the lines and the background is usually very large (black on white) and thus lines very far apart can still have a smaller DIS than a black line beside a white line. Remember that intensity is a logarithmic scale while distance is linear. If the intensity difference between the lines and the paper background is too low then we have difficulty grouping the lines e.g. faint yellow lines on white paper. So at level 1, DIS gives a reading of the simplest pairs.

Similarly two large background areas with the same intensity will have an 'I' of 1, a small size difference and a separation approximately equal to the size of the 1st part. Maybe D shouldn’t be the distance between the two parts but the separation + 1. This would take the size of part1 out of the D factor. Then rather than the separation in terms of distance it should be the number of objects between the two parts + 1. Thus if the separation is all one continuous intensity it would be worth less than if this distance were filled with many small objects.

At level 2 it’s hard to know what to do. The DIS of the two parts could be multiplied together. The whole idea of using the DIS is to identify the object in a single frame that is most likely to attract attention, and that shouldn’t be the background. With 2 frames it becomes easier because motion helps. Attention is generally attracted to change. We then learn it and exclude it from attracting attention such that the unexpected then attracts attention. As I pointed out on 5th March 2006 attention is paid to the biggest combination of objects that have changed. But on a single frame the change is not over time but is based on position. Small width objects have more changes per unit of distance than wide ones. Objects made up of more parts have more change than an object with fewer parts but the same size. If an object repeats in the frame we are more likely to ignore it because we have learnt it. So maybe the objective is to find the object with the most parts but the smallest size that repeats the least number of times.

For black lines on a sheet of paper the background white lines consist of one more line than the black ones. If a black line is on the border then there are an equal number of lines but the background ones are wider. If both borders have a black line on them then the possibility you are looking at white lines on a black background increases. And when the amount of black space exceeds the amount of white space you see white lines on a black background.

30th May 2008                      Discrete Values

It might be easier to work out the right algorithm using a graduated array of sensors but discrete valued readings. If the discrete values are A and B then all possible combinations that could be recognized are equivalent to the binary numbers with the number of digits equal to the number of sensors. If the discrete values are 0 through 9 then all possible combinations are equivalent to the decimal numbers with the number of digits equal to the number of sensors. Obviously we don’t want to create this number of possible objects. So we recognize combinations of values that reoccur and create objects out of these smaller parts. Level 0 recognition combines adjacent sensors with the same values. Each group has a position, size, reflection and object. The objects contain the two part objects, the sizes of the two, the whole size and the values of the two parts. In the case of level 0 the two parts are the same. In the case of level 1 the object’s whole size is greater than or equal to the sum of the part sizes; they are equal if the parts are adjacent.

For level 2 objects and higher maybe the DISs of the two parts should be added since they are distances in a multidimensional space.

The idea of the greatest number of changes in the least distance leads to the idea that the object with the greatest density of change attracts attention. This density is the number of parts in an object divided by its size. But it would also include the number of different values divided by the value range (or by the number of parts if the values are discrete). Since both of these numbers are at most one, it is easier to find the smallest of the inverse.
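A minimal sketch of this density score in its simplest form (parts divided by size), selecting via the smallest inverse as suggested; the function names and tuple layout are hypothetical:

```python
def change_density(num_parts, size):
    # density of change: parts per unit of size (at most 1 when
    # each part spans at least one sensor)
    return num_parts / size

def most_attention_grabbing(objects):
    """objects: list of (name, num_parts, size). Pick the one with
    the greatest density, i.e. the smallest inverse size / parts."""
    return min(objects, key=lambda o: o[2] / o[1])

objs = [("background", 2, 10), ("detail", 4, 5)]
print(most_attention_grabbing(objs))  # ('detail', 4, 5)
```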

31st May 2008                      Discrete readings / values

If we consider a graduated array of sensors with discrete readings and imagine the sensors detecting the objects Door, Painting (Picture) and Wall the level 0 process seems reasonable. It combines adjacent sensors and the result is the same object just with a greater width as though it was closer. Three objects, D, P, & W. The level 1 process that combines adjacent objects seems reasonable because pictures have frames which are part of them and are adjacent to them. When it combines objects that are not adjacent it should tend to combine ones of the same object more readily than ones that are of different objects. For example the walls on both sides of a door should be combined (W & W) first.

But the door as a level 0 object was found by the change in objects at its edges. Maybe the door is better defined as a thing with two edges, on the left a wall to door edge (WD) and on the right a door to wall edge (DW). Otherwise the door is not seen as an object with walls on either side until it is incorporated into a level 2 object and there the size ratios of walls to door describe it as a particular door (WWWDDW) as opposed to a type of thing (WD & DW). If a door is adjacent to a picture on one side and a wall on the other side then we have a different kind of door (PD & DW).

This approach would mean that the 1st pass through the sensor objects would detect all possible transitions / changes as objects (WP, PW, WD, DW, DP and PD). Then the 2nd pass would combine adjacent changes to form objects that have a width. These objects would not be the ones identified by the sensors but more abstract versions of them based on what they can be adjacent to. This is kind of like Lego pieces or interfaces. The interfaces define what kind of objects they are. WWWDDWW becomes the WD/DW object of width 2 (WD/DW-2). The 3rd pass would combine these objects provided they share the same interface, which they do because they are adjacent. This is equivalent to the rule that the 2nd part of the 1st object is the same as the 1st part of the second object. The resulting object has two parts of possibly different widths and thus a particular total size to width ratio. Thus it has internal structure and it has two interfaces, which are the changes at its borders. A picture beside a door with no visible wall between them is an example. WWPPDDDWWW is the combination of object #1 = WP/PD-2 and object #2 = PD/DW-3, which is the object #1/#2 ratio 2 to 3 of width 5.
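The 1st and 2nd passes above can be sketched roughly as follows. The WD/DW-2 naming follows the text; everything else (function names, tuple layouts) is an assumption:

```python
def transitions(seq):
    """1st pass: every change between adjacent readings becomes a
    transition object such as 'WD' or 'DW', at a position."""
    return [(i, seq[i] + seq[i + 1])
            for i in range(len(seq) - 1) if seq[i] != seq[i + 1]]

def interface_objects(seq):
    """2nd pass: combine adjacent transitions; the run in between
    becomes an object named by its two interfaces and its width,
    e.g. 'WD/DW-2' for the door in WWWDDWW."""
    ts = transitions(seq)
    objs = []
    for (i, left), (j, right) in zip(ts, ts[1:]):
        objs.append("%s/%s-%d" % (left, right, j - i))
    return objs

print(interface_objects("WWWDDWW"))     # ['WD/DW-2']
print(interface_objects("WWPPDDDWWW"))  # ['WP/PD-2', 'PD/DW-3']
```

The second example reproduces the picture-beside-a-door case from the text: object #1 = WP/PD-2 and object #2 = PD/DW-3.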

But to come up with all possible combinations of objects non-adjacent objects must also be considered in the 3rd pass. The most likely combinations would be ones that have common interfaces because they are the same types of object. Such as two paintings that are on either side of a door but with wall between them. This would look like WWWPPWWDWPPPWW. The WP/PW-2 would combine with the WP/PW-3 on the other side of the door. The next most likely could be ones that share a common interface but are not adjacent such as WWPWWDDPPWWPPPDDWW in which WP/PW-1 matches PW/WP-2. Or it could be where reflective interfaces match as in WP/PW-1 and WP/PD-3.

Since types of objects are made up of a left part and a right part and their ratio of size to whole shouldn’t they match their reflections? Thus another likely combination is the same type of object except a reflection; WP/PD-2 matches PD/WP-3 its reflection and a little larger. Also the picture / door object #1/#2 should be the same as a door / picture type of object as long as the ratios of size to whole are the same.

There is still a problem. What change should be used for an object found on a border? Do I need to introduce the concept of all possible borders (?P, ?D, ?W, P?, D? & W?) and then allow the “?” to match any other possible value?  If the view is circular there is no problem. The issue arises when the border object, combined with its neighbour, forms an object that could be repeated elsewhere in the same frame. Another solution is to not use the border objects in the view. But then what is a view such as WWWWPPPPP recognized as? It is recognized as the WP change object – one of the ones that are combined to form the next level of object types.

Why does the 2nd pass only combine adjacent changes? Consider it combining changes that are not adjacent. These combinations of changes would not correspond to what we normally understand as primitive object types.

Using the different types of changes to recognize types of object results in a door with pictures on both sides being recognized as different from one that has a wall on both sides. This does not seem right. Maybe it should be based on the fact that there is a change on both sides, not on what kind of change it is, as I had been previously using. A discrete reading means the object type is being provided as the value and does not have to be determined. For graduated readings the lowest level type of object is a change. The lowest level experience then would be a change object of a certain amount. This means that WWWWPPPPP is recognized as two objects W-4 and P-5 while 555577777 would only be recognized as a change of magnitude +2.

So for discrete valued sensors the level 0 strategy works to combine adjacent sensor readings into the same object just wider. The objects on either side do not aid in determining the object as identified by the sensor. For graduated sensor readings the changes are the lowest level objects and adjacent changes must be combined to identify the thing in between as an object. This in between part will have a width and an intensity that is not its absolute one as measured by the sensor but a relative one based on the changes on both sides. This also means that the border sensory readings for graduated values must be used for determining change but do not produce a recognizable object unless some form of assumption is made about their border change amount. A reasonable assumption would be to assign it the opposite of the change at its visible end. Thus 555577777 is seen as 75555777775.
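For single-digit readings, the border assumption (assign each border the opposite of the change at that border run's visible end) reduces to padding the frame with the neighbouring runs' values. A sketch under that assumption; the function name is mine:

```python
def pad_borders(seq):
    """Pad a digit-string frame so each border carries the opposite
    of the change at the border run's visible end. That works out to
    padding on the left with the second run's value and on the right
    with the second-to-last run's value."""
    runs = []
    for ch in seq:
        if not runs or runs[-1] != ch:
            runs.append(ch)
    if len(runs) < 2:
        return seq  # a uniform frame has no change to mirror
    return runs[1] + seq + runs[-2]

print(pad_borders("555577777"))  # 75555777775
```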

So getting back to discrete valued sensors, WWWDDWW is recognized at level 0 as W-3, D-2 and W-2 since the objects have already been identified by the sensors. Then at level 1 all possible combinations of pairs are created. Then at level 2 and higher, combinations are formed in which the 2nd part of the 1st is the same as the 1st part of the 2nd. Adjacent ones will be more natural since two parts that are adjacent / attached tend to change together, while ones that include a separation are less natural and less likely. However they must be formed in case they move in the second frame and are recognized as belonging together. A strategy that creates fewer long-lasting objects will only keep objects from the 1st frame that are also found in the second frame.

                                                Priming

The 1st frame could be used to just “prime the pump” so to speak. It would not be conscious. But then, how would “no change” be detected / signaled to consciousness? Maybe the 1st frame should be passed to consciousness as the simplest object rather than the entire highest level one. If there is no change the same simplest object is perceived. (See 15th Feb and 19th Feb 2008)

If dealing with graduated readings and linear values are being provided, i.e. raw measurements, then intensity ratios for changes are calculated using Intensity1 / Intensity2. If however logarithmic values are being provided then intensity ratios for changes are calculated using Intensity1 – Intensity2. When providing such logarithmic values a raw measurement of 0 cannot be provided so maybe the logarithm of the value + 1 should be provided and a raw measurement of 0 is provided as 0. Thus:

Raw value:              0          1          2          4          8
Log base 2:             -          0          1          2          3
Value provided:         0          1          2          3          4

If the internal range of values that can be stored is 0 through 7 then linear values can only be 0 through 7. However, logarithmic values could represent 0, 1, 2, 4, 8, 16, 32 and 64 using base 2. Using base 10, 0 through 7 could represent 0, 1, 10, 100, 1,000, 10,000, 100,000 and 1,000,000.
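The log-plus-one encoding from the table above might look like this, assuming base 2 and raw values that are exact powers of two; the function names are mine:

```python
import math

def encode(raw):
    """Value provided to Adaptron: 0 stays 0 (no object); otherwise
    log2(raw) + 1, so raw 1, 2, 4, 8 become 1, 2, 3, 4."""
    return 0 if raw == 0 else int(math.log2(raw)) + 1

def decode(value):
    # invert the encoding: 0 means no object, otherwise 2^(value-1)
    return 0 if value == 0 else 2 ** (value - 1)

print([encode(r) for r in (0, 1, 2, 4, 8)])  # [0, 1, 2, 3, 4]
print([decode(v) for v in range(8)])         # [0, 1, 2, 4, 8, 16, 32, 64]
```

Note that subtracting two encoded values gives the ratio of the raw values only when neither value is 0, which is the issue taken up on 4th June.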

If you wanted to be sensitive to a narrow range of values such as 1000 to 1064 then it could be represented logarithmically and divided up such that 0 = 1000, 1 = 1001 and 7 = 1064. However, Adaptron will need to know the start value = 1000 to process the values correctly.

2nd June 2008                     Types of Objects

I have implemented a version of the graduated-reading recognition that uses the change amounts on the left and right of each reading in its recognition, and have now figured out how to scale it and reflect it properly. If we use the intensity sequence 05246 and assume each of these intensities comes from adjacent sensors, we want the sequence 74251 to cause the same type of object to be recognized in the middle. That is the 524 sequence, just reflected. We also want 0 10 4 8 12 to be recognized as containing the 524 sequence. Level 0 object types are the changes experienced. For the 1st sequence they are A[0/5], B[5/2], C[2/4] and D[4/6], where the letter is the object type and the two values are the ratios of intensities. The level 1 objects are the adjacent pairs of changes with a given reading. The 5 in the 1st sequence is an experience using object type E composed of two parts A & B with a ratio of intensities [0/2]. The 2 uses object type F = B&C [5/4] and the 4 uses object type G = C&D [2/6]. When the 5 and 2 are combined, object types E & F must be combined to form object type H = E&F [0/4]. The 2 and 4 use object type I = F&G [5/6].

However this strategy has a problem in that the 24 sequence does not get recognized unless it is buffered on the sides by a 5 and a 6. Maybe the changes should not be used to identify the individual sensor intensities but only the pairs of sensors. The individual sensor readings would use an intensity ratio of 1/1.

4th June 2008                      Reading scale

While trying to figure out how to handle a zero value on a multi-order-of-magnitude (logarithmic) reading such as intensity / volume etc., the following ideas have occurred to me. Linear scales such as angular rotation (0 to 360 degrees) and distances (0 to infinity) have zero values naturally. The object so measured, such as a point in space, might change from 45 to 50. A second point would be seen as belonging to the first provided it also changed the same absolute amount, such as from 9 to 14. These two points would belong to the same object and subtraction can be used to calculate the difference between them and determine if they have moved together. Zero values can be used in these subtractions.

On the other hand logarithmic scales such as pitch (0 to many Hertz), volume (0 to many Decibels) and size (0 to many Meters) have zero values, but when the reading is zero there is no object; it cannot exist. If it does exist and is made up of parts, when it moves from 45 to 50 then any part of it will change in proportion. A part that measured 9 will change to 10, maintaining the ratio. This is where logarithms of the readings should be used to handle the multiple orders of magnitude over which the changes can take place. Logarithms can be subtracted to provide the proportions / ratios of the raw values.

But the logarithm of 0 is minus infinity and this cannot be represented easily. A possible solution is to use Log (raw measurement) +1 and then represent Log (0) as 0, since Log (1)+1 will = 0+1 = 1. The subtraction of two log+1 values will still produce the same ratio. But what does it mean to do the subtraction 1-0? The 1 represents the object existing with a reading of 1. The 0 represents the object not existing. If the raw measurement of 1 changes to 2, the log+1 changes from 1 to Log (2)+1 = 2. The raw measurement has doubled and the log+1 has increased by +1. For the Log (0) to stay in proportion based on subtraction it should change to Log (1). But this implies that where there was no object there is now an object. Thus it does not make sense to use subtraction involving a zero. But we still have to recognize a pattern that includes gaps containing no objects measured by the sensors. It would then be reasonable to recognize pairs of objects that involve a zero as unique combinations in which proportions do not apply and subtraction cannot be used. This would mean the pair of readings 0, 1 is not the same as 0, 2 or 0, 3.

Note that the size of an object is a logarithmic scale in which the size is determined by measuring the distance between two points. If these two points are the sensor positions in the array of sensors the smallest size object is 1 equivalent to the resolution of one sensor. Thus this size representation does not have a zero and proportional size of two parts can be determined using ratios / division. When an object moves in the sensory array it can move its position and retain its size, change its size retaining its internal proportions or do both.

                                                Reflections versus Negative images

I have come across an interesting problem. In a one-dimensional array of sensors a reflection is a left to right reflection of the same object. Thus the sensor readings 425 are a reflection of 524 and 01 is a reflection of 10. However, '505' is its own reflection, but it does have a negative image, 050. Then the question is; what is the relationship of the sequence 5050 to 0505? Is it a reflection or a negative image? Our brains are more likely to interpret it as a reflection since negative images occur more rarely in nature. What is the algorithm for recognizing a negative image? What is the negative image of 425? When there are black and white values only (2 values) it is easy to recognize since they swap values. I think the negative is defined as the same left to right order but the differences between intensities change sign. So 425 has changes of –2, +3. Its negative image has changes of +2 and –3. So the algorithm first checks whether a reflection is valid and only then, if it is not, does it check for a negative image.
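The proposed checks (reflection first, then negative image) could be sketched like this, with readings as digit strings as in the examples above; the function names are my own:

```python
def changes(seq):
    # signed differences between adjacent readings, e.g. 425 -> [-2, +3]
    vals = [int(c) for c in seq]
    return [b - a for a, b in zip(vals, vals[1:])]

def is_reflection(a, b):
    # left-to-right mirror of the same readings
    return b == a[::-1]

def is_negative(a, b):
    # same left-to-right order but every change flips sign
    return changes(b) == [-d for d in changes(a)]

print(is_reflection("425", "524"))    # True
print(is_negative("425", "463"))      # True: -2,+3 becomes +2,-3
print(is_reflection("5050", "0505"))  # True
```

Note that 5050 versus 0505 passes both tests, which is exactly the ambiguity raised in the text; checking the reflection first resolves it the way our brains apparently do.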

Upon implementing negative images I realized that a reflection and a negative can exist at the same time. This is equivalent to looking at a negative black and white photo from one side or the other. The possibilities are captured below; A and B are different sizes and readings.

A,B with change D             reflection ->               B,A with change –D
           negative                                                            negative
                 |                                                                        |
                V                                                                       V
A,B with change –D            reflection ->               B,A with change D

If there are three objects involved, the middle one is removed from the equation by determining the relative readings of the 1st part of the left object and the reading of the 2nd part of the right object. If the Readings are A, B, C then the two change amounts are A-B and B-C and when these two changes are added one gets A-C. Given that A, B & C are different sizes and readings and A-B is different from B-C.

A,B,C with changes D & E             reflection ->               C,B,A with change –E & –D
           negative                                                                             negative
                 |                                                                                         |
                V                                                                                        V
A,B,C with change –D & -E            reflection ->               C,B,A with change E & D

                                                DIS Formula

On 30th May I realized rather than using the volume of the change hyper-cube to calculate the DIS for motion I should be using the diagonal distance. Thus DIS becomes [P1-P2]^2 + [I1-I2]^2 + [S1-S2]^2. I no longer need the +1s because the change contributions are being added. Parts 1 and 2 in this formula can be used for two parts of a single object or can be the same object found in frame 1 and frame 2 to measure change. For a new object in frame 2 the position in frame 1 would be assumed to be the same as its position in frame 2 such that no change in position has occurred. However its I1 and S1 would be 0 because it did not exist (see Reading Scale above). For objects that exist in frame 1 and not in frame 2 the same approach should be used. We are just as surprised if an object disappears as if it appears.
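A sketch of the revised DIS, including the stated convention for an object present in only one frame; the function names are mine:

```python
def dis(p1, i1, s1, p2, i2, s2):
    """Change score: squared differences in position, intensity and
    size added together (the diagonal of the change hyper-cube)."""
    return (p1 - p2) ** 2 + (i1 - i2) ** 2 + (s1 - s2) ** 2

def dis_appeared(p2, i2, s2):
    # an object present in only one frame: same position assumed,
    # intensity and size taken as 0 in the frame where it is absent
    return dis(p2, 0, 0, p2, i2, s2)

print(dis(4, 5, 2, 8, 5, 2))   # 16: moved 4 positions, same intensity and size
print(dis_appeared(3, 2, 1))   # 5: appeared with intensity 2, size 1
```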

But we should also add in some change amount for reflections and negatives. If an object flips left to right around its midpoint such that there is no change in position should the amount of change be based on its size? Is it equivalent to its 2 parts swapping positions? If so then the change might be 2*S1 or 2*S2. Why not use change in position of part1 squared plus change in position of part2 squared instead of [P1-P2]^2. These are not equivalent when it does not flip. So use (change in position of part1 plus change in position of part2) / 2 all squared. For a negative we could use a similar formula based on the change in intensity of part1 and part2.

5th June 2008                      What Changed?

I have come up with a reasonable formula for the complexity of an object based on its parts. The lower the value the less complex it is. I have come up with a reasonable way of determining which object changed from one frame to another and giving it a change score. The lower the change score the simpler the interpretation of the change i.e. the less complex the change. But now how to determine the simplest change that attracts attention? One of the rules to consider is that the largest change will attract attention. Another is that all the parts of an object type change in unison.

6th June 2008                      Motion detection

I tried several different formulas yesterday to determine what was the simplest object that moved the most. The most promising was the max value of level / change, where change was the square of the changes in position, size and intensity added together. This morning I realized that I am not determining change at a low enough level. In Level 1 multiple sensors that read the same intensity are grouped together into objects and it is from these objects that I determine the change. However between one frame and another, the sensor readings change. Only if the sensor’s reading has changed should it be incorporated into the move detection algorithm. This eliminates the simplest interpretation that is “no change”. Change detection should be taking place concurrently with the formation of the objects at the different levels. Maybe second frame objects should be formed out of only those sensor values that have changed.

Background is usually comprised of sensor readings that don’t change and thus it may be impossible to detect the background on a single frame. Given two frames the background could be all the readings that are unchanged. Using this approach given the two 12 sensor frames 000101000000 and 000000010100 the changed sensors (X) are: 000X0X0X0X00 and it is pretty easy to pick out the objects involved based just on changed sensors. The challenge comes when the 1st and 2nd frame objects overlap. E.g. 000101000000 and 000001010000 gives 000X000X0000 because sensor 6 has not changed. The object that has moved is the 1 at position 4 in frame 1 and it has moved to position 8 in frame 2. I showed on the 29th May 2008 that the simpler interpretation was that the 2 Xs had moved. But if only changed sensors are being used to recognize objects the pair would not be used. Another example is 000101000000 and 000010100000 will produce the changes 000XXXX00000. The object that has moved is 101. It includes the background 0.
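The changed-sensor idea is easy to sketch; this reproduces the three X-masks above (unchanged sensors are shown as 0, whatever their reading, as in the text's notation):

```python
def changed_mask(frame1, frame2):
    """Mark with X every sensor whose reading differs between the
    two frames; unchanged sensors are treated as background (0)."""
    return "".join("X" if a != b else "0"
                   for a, b in zip(frame1, frame2))

print(changed_mask("000101000000", "000000010100"))  # 000X0X0X0X00
print(changed_mask("000101000000", "000001010000"))  # 000X000X0000
print(changed_mask("000101000000", "000010100000"))  # 000XXXX00000
```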

The algorithm that I documented on 21st June 2007 worked from top down with the largest objects first. It found the same object in both frames that had the smallest change. All the parts of the higher level objects were then marked “used” and they were not used in finding lower level object changes. Left over objects were either new objects or had disappeared. A bottom up approach would have to reuse already flagged as moved smaller objects that are part of larger objects that have moved.

12th June 2008                    Motion detection

I have had some success with only using the sensory data from frame 1 and frame 2 where there is a change in the sensory reading. So any adjacent sensors that do not change are not included. Effectively this is treated as background. A level 1 object is only formed from adjacent sensors if they are all different from the previous frame’s readings for these sensors. The number of unchanged sensors is called the 'same count' and it has to be zero for the sensors to form a level 1 object. And only these objects can be combined to form higher level ones. However it is not perfect.

14th June 2008                    Simplest change

I think that a strategy that tries to determine the changes that would minimise the total change score between two frames will give the result that we experience visually. That is why when I eliminate the zero change objects from the analysis I come up with a better result. I also use the minimum change interpretation when an object could have moved from a number of possible places. The challenge with determining a minimum total change is that there is a matrix of possible change scores between objects and the simplest combination must be identified. For example objects A, B and C appear in frame 1 and 2. The possible change scores are:

            In Frame 1     A         B         C
In Frame 2  A              0         5         1
            B              6         4         9
            C              3         6        12

Picking the 0 for no change in A leaves either the 4 + 12 combination or the 6 + 9 combinations. Whereas the lowest change total is the 3 + 4 + 1 combination. Picking the lowest local value does not guarantee an overall lowest score.
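The matrix example can be checked by brute force over all one-to-one pairings. This is just a verification sketch, not a proposed algorithm; for larger matrices something like the Hungarian algorithm would be needed:

```python
from itertools import permutations

# change scores: rows = frame 2 objects A, B, C; columns = frame 1
scores = [[0, 5, 1],
          [6, 4, 9],
          [3, 6, 12]]

def min_total_change(m):
    """Try every one-to-one pairing of frame 2 and frame 1 objects
    and return the pairing with the lowest total change score."""
    n = len(m)
    best = min(permutations(range(n)),
               key=lambda p: sum(m[i][p[i]] for i in range(n)))
    return best, sum(m[i][best[i]] for i in range(n))

print(min_total_change(scores))  # ((2, 1, 0), 8): A->C, B->B, C->A
```

Picking the local 0 first gives at best 0 + 9 + 6 = 15, while the exhaustive search finds 1 + 4 + 3 = 8, confirming that greedy local choices miss the overall minimum.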

The other strategy I have been using is that if a higher level object (one with more parts that stay together) is found to change then it is more likely to attract attention than a lower level object. And the previous strategy was from top down such that when a high level object was found to change all its parts were eliminated from further consideration as sources or destinations of changes.

When an object of two parts changes, both parts change the same amount. They either move the same distance, change intensity by the same amount, and / or expand by the same amount. A reflection / flip though might not follow this rule. What might be important though is that the sum (and therefore the average move per part) of the moves of the two parts is always constant for any ratio of part sizes. This is because as one part gets bigger the other gets smaller. So as a result, when considering the total minimum change of two parts, if they form an object then the whole-object change will give a minimum for the combination of parts. This makes finding the minimum in the matrix a simpler task.

However solving the matrix problem might be easier because I believe it is symmetrical down its axis. The bottom left triangle should be the same as the top right. If A changes to C then C changes to A and should give the same change score. So a bottom up strategy might be to identify all the level 1 objects that have no change. These form one big multi-part object that is effectively the background. Then identify all objects that change with a score of 1. They form the next multi-part object even though some of the parts might move in opposite directions. And do this repetitively for higher change scores until all objects have been used up.

19th June 2008                    Edges versus Objects

An edge occurs wherever there is a change of reading between adjacent sensors. An edge can move from one position to another and the ratio of the readings across the edge can change too. If an edge is the smallest recognizable thing then it is a type of edge that has a relative reading left to right, no relative size (could be set to 1) and no width to size ratio (also could be set to 1). Its experience / occurrence is of a type of edge at a position, with a reading of the left side but a size of zero.

An object as measured by a sensor or many adjacent sensors that give the same reading is a type of object made up of two edges. Its experience / occurrence has a position (maybe the centre, position equally between the two edges), a reading (presuming the reading of the sensors) and a size (the number of sensors spanned = the distance between the two edges). If the type of object is made up of the two edges then we have a problem because each edge has a relative intensity. This would result in many types of objects based on combinations of types of edges.

Can we define an edge as a thing where the relative readings are part of the experience not a property of the type of edge?  Thus there would be only one type of edge but many different experiences of it. Each experience would contain the relative reading (not the absolute readings on both sides) plus a position and a size of zero. Then when two edges are combined to form an object the two edge parts are the same.

23rd June 2008                    Edges & Objects

I think that the series of sensor readings needs to be broken up into a series of edges and objects. Note that edges are changes in reading and have no size while an object has a reading and size. Each edge has a position and a change in reading and is of type “edge”. The edge type has a size of zero. Each object has a position, a reading and a size. The object type has a variable size greater than 1. Then all possible adjacent pairs are recognized composed of either an edge and an object (EO) or an object and an edge (OE). Then these can be combined into series of either edge-object-edge (EOE) or object-edge-object (OEO) as long as they share the same common / middle object or edge respectively. The OEOs are what I have been classifying as objects previously.
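A sketch of breaking a series of readings into alternating objects and edges, assuming integer readings; the tuple layouts ("O", position, reading, size) and ("E", position, change) are hypothetical:

```python
def edges_and_objects(readings):
    """Break a series of graduated sensor readings into objects
    (position, reading, size) separated by edges (position, change
    in reading). Edges have no size; objects have no change."""
    objects, edges = [], []
    start = 0
    for i in range(1, len(readings) + 1):
        if i == len(readings) or readings[i] != readings[start]:
            objects.append(("O", start, readings[start], i - start))
            if i < len(readings):
                edges.append(("E", i, readings[i] - readings[i - 1]))
            start = i
    return objects, edges

objs, edges = edges_and_objects([5, 5, 7, 7, 7])
print(objs)   # [('O', 0, 5, 2), ('O', 2, 7, 3)]
print(edges)  # [('E', 2, 2)]
```

The EO, OE, EOE and OEO things would then be built by pairing neighbouring entries from the two interleaved lists.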

To formulate all possible combinations I also have to invent a thing called a gap. A gap has a size and position like an object but no reading. Does such an invention only appear adjacent to objects or can one have an edge then a gap then an edge?

Since all the parts of a thing change together an object is recognized as having all its parts change in unison. If an E, O, EO, OE, EOE or OEO moves left or right all the positions of their parts move the same distance. E’s and O’s both have position. If an EOE moves such that it is now adjacent to an O with a different reading then one of its Edges will not be the same and thus it will only be an OE or an EO that has moved. If they come closer, then the O parts’ sizes expand in unison but the E parts have no size to expand. If the intensity changes then O part readings change in unison and in between E parts retain their change in reading thus are the same. But the border E parts of EOEs that change their intensity relative to their surroundings have different change in readings thus only the O part is seen as the thing that has changed, new E’s have appeared. Thus a thing is seen as a series of edges and objects forming a pattern. Reflections and negatives should also be recognized as the same pattern.

The EOEs are the least intuitive things because when objects change the relative intensities at their border edges change while the internal edges don’t change. Therefore is there any use for the EOEs? It is the relative intensities of those internal edges and the relative sizes of the Os that determine the pattern / series, not the border edges.

24th June 2008                    Representing edges & objects

Since I have to be able to recognize the same thing after several types of change I need to separate the particular instance from the type of thing. The type of thing cannot have a position; this is a property of the instance. The type of thing cannot have a reading; this is a property of the instance. The type of thing cannot have a size; this is a property of the instance. Reflection and negative are also properties of the instance. The type of thing is made up of two parts and the relative intensity of the two parts is one of its properties. Also the relative size of the two parts and their relative separation are properties of the type.

Thus representing edges and objects could take the following form:

An Edge has a position, a reading, and a size of 0. It is of type edge. Type edge is made up of two parts which are terminal, with a relative intensity, a relative size of parts = 1 and a separation of 0.

An Object has a position, a non-zero size and a reading. It is of type object. Type object is made up of two parts which are terminal, with a relative intensity = 0, a relative size of parts = 1 and a relative separation of 1.

An EO or an OE is a thing which has a position, a size and a reading all of which come from the object instance. It is a type of thing made of two parts. One of the parts is an edge and the other an object. Its relative intensity is that of the edge type. Its relative size and separation is that of the object part.

An OEO is a thing whose position is at its centre, a size equal to the sum of sizes of the two objects and a reading of the left part.
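The type-versus-instance separation above could be represented along these lines. The field names and the two built-in types are my guesses at the intent, not Adaptron's actual structures:

```python
from dataclasses import dataclass

@dataclass
class ThingType:
    # properties of the type: only relations between its two parts
    left: object              # part type (None for a terminal part)
    right: object
    relative_intensity: int   # change in reading between the parts
    relative_size: int        # ratio of the two part sizes
    separation: int           # relative separation of the parts

@dataclass
class Instance:
    # properties of the particular occurrence, never of the type
    kind: ThingType
    position: int
    reading: int
    size: int
    reflected: bool = False
    negative: bool = False

# terminal parts; relative_intensity=1 for the edge is a placeholder
EDGE = ThingType(None, None, relative_intensity=1,
                 relative_size=1, separation=0)
OBJECT = ThingType(None, None, relative_intensity=0,
                   relative_size=1, separation=1)

e = Instance(EDGE, position=4, reading=5, size=0)
```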

25th June 2008                    Discrete readings

On the 31st May 2008 I realized that for discrete readings the different edges as things which are made up of two adjacent objects didn’t make sense. But maybe, if all changes between discrete readings are considered the same object (a change) then discrete readings may be just a special case of graduated readings.

30th June 2008

See the 2008 Scientific Research and Experimental Development (SR&ED) tax credit claim.

Details about the Canadian Revenue Agency’s SR&ED tax incentive program are here.

6th July 2008                        Response as change

Responses are requests for change as opposed to requests for absolute positions of states. Thus we have action words which describe changes that can be done such as push, pull, and twist. If actions are therefore change requests one can combine them with Stimuli just like we combine objects and edges in line recognition. Given R is a response and S is a stimulus we could store the pairs SR and RS as action type objects. Then RSRs are formed from an RS followed by an SR where the S is the same and SRSs are formed from an SR followed by an RS where the R is the same. Then by combining these we could end up with just one representation for RSRSR just like we have solved the stimulus sequence recognition tree structure. An RSRS is the RSR followed by an SRS where the SR is common. The RSRSR is the RSRS followed by an SRSR where the SRS is common.
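The overlap rule (combine two sequences when all but their first and last elements coincide) can be sketched generically; R1, S1 etc. are placeholder tokens, and the flat-tuple representation is mine rather than the binon tree structure:

```python
def combine(a, b):
    """Combine two response/stimulus sequences whose overlapping
    middles match, e.g. RSR + SRS -> RSRS when the shared SR is
    common to both. Returns None if they don't overlap."""
    if a[1:] == b[:-1]:
        return a + (b[-1],)
    return None

rsr = ("R1", "S1", "R2")
srs = ("S1", "R2", "S2")
rsrs = combine(rsr, srs)
print(rsrs)                  # ('R1', 'S1', 'R2', 'S2')

srsr = ("S1", "R2", "S2", "R3")
print(combine(rsrs, srsr))   # ('R1', 'S1', 'R2', 'S2', 'R3')
```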

                                                Edge, Object, and Edge

We look at cartoons and wire frame drawings and can recognize the images because the edges of the real world equivalent thing have been outlined. Effectively the edges seen in the real world have been replaced with thin lines. And a thin line is an EOE. If the inside of the object between the outer lines has been filled in then the second edge is a different colour / reading than the first edge.

12th July 2008                      Thoughts from ISAB 2008

                                                Delayed reinforcement

One explanation of how delayed reinforcement can occur is that the reinforcement only reinforces those things that were recently coincident. Thus, for some time after the coincident occurrence of two events, a reinforcement will bind them even though intermediate events have occurred.

                                                Meta rules learning

Can Adaptron learn the meta-rule that occurs when you switch repeatedly backwards and forwards between one rule and another rule?

                                                Directed Attention

Attention goes all the way down the hierarchy to control the subset of stimuli desired. Each binon consists of:

  1. Attention action for 1st stimulus,
  2. Input of 1st stimulus,
  3. (response if it is an action habit),
  4. Attention action for 2nd stimulus,
  5. Input of 2nd stimulus.

The attention action says “look for the change and for what type of stimulus”. Change is what attracted attention in the first place. The stimulus has a “where” = sense/sensor = where to look. It has a type = edge/change or object/reading, and a “what” = value / discrete reading or change amount / difference.
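An illustrative record structure for the five binon components listed above (the class and field names are my own, not from the project):

```python
class Attention:
    """An attention action: where to look, for what type, expecting what."""
    def __init__(self, where, stim_type, what):
        self.where = where          # sense / sensor = where to look
        self.stim_type = stim_type  # "edge/change" or "object/reading"
        self.what = what            # value or change amount expected

class Binon:
    """The five components of a binon from the Directed Attention entry."""
    def __init__(self, attend1, stim1, attend2, stim2, response=None):
        self.attend1 = attend1      # 1. attention action for 1st stimulus
        self.stim1 = stim1          # 2. input of 1st stimulus
        self.response = response    # 3. response, if it is an action habit
        self.attend2 = attend2      # 4. attention action for 2nd stimulus
        self.stim2 = stim2          # 5. input of 2nd stimulus

# A hypothetical binon: attend to a reading of 5, then to a change of -2.
b = Binon(Attention("eye", "object/reading", 5), 5,
          Attention("eye", "edge/change", -2), 3)
print(b.stim1, b.stim2)  # 5 3
```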

                                                Finite State Machine

An analogy to an FSM might be useful: the event = a change, the [condition] = a stimulus value (symbol), and the /action = a response that triggers an event (operation).

                                                Line moves

I could decompose a line move into a line disappearance and a new line appearing.

                                                Learnt actions

Once an orienting response (OR) is learnt it continues to work; it does not get bored and does not try other responses. What is the goal? What is the measure of “success” that stops it trying alternative responses? Is it that a change has been observed?

                                                Drives

Stimuli come from internal (body) and external (world) sources. Ones coming from internal sources alone are interpreted as being pleasant or unpleasant. The body interprets external-source stimuli that cause body pain or pleasure and produces the internal stimuli. The external stimuli then acquire an associated valence.

                                                A (not B) explanation

Piaget’s A (not B) problem, further described by Linda Smith, can be explained by subconscious execution of a learnt behaviour sequence (habit) that is triggered by a similar situation - generalization. The same body position, experimenter’s actions (except for one small detail – the side touched), arm weight, cover appearance etc. are seen as sufficiently similar to the trigger situation of a learnt sequence. As soon as it is less similar, due to a change in arm weight, posture, an interrupting event etc., the result is a new conscious behaviour sequence. The fact that when the error occurs the exact same arm trajectory is used and gaze is on location A helps confirm that this is a repeat of a learnt behaviour sequence.

                                                Homeostasis / Homeokinesis

Homeostasis is the maintenance of a particular reading while homeokinesis is the maintenance of a particular change. One type of homeokinesis is homeotaxis, which is the maintenance of a direction. A thermostat keeping a room above a certain temperature is a one-sided homeostasis situation, while keeping a room in a temperature range by turning the heating and air-conditioning on and off is a two-sided homeostasis situation. Sequential habits that maintain such a situation must always be active, recognizing stimuli and their changes that are within the limits. Action habits that turn the devices on and off must be continuously active when stimuli are outside the limits. They must have been learnt and must not get boring / be explored further as long as the actions they perform continue to get the result desired – a reading within the limits.

17th July 2008                      Change detection

I’m trying to capture all the rules for an object changing position, size and intensity. If we consider the most complex situation, a graduated multi-sensor sense with graduated readings from the sensors, we have this rule: when a thing changes, all its parts change in unison. If those parts can be objects and edges then we must understand what can happen. An object has a position (its middle point), a size / width and a reading. An edge has a position, zero width and an intensity ratio from one side to the other. So when a thing made up of parts changes:

  1. All its parts move together the same distance if there is no change in size.
  2. All its parts change size by the same ratio, except for the edges, which have no size; a new position is determined, but the parts have not all moved the same distance.
  3. All the parts change their reading together; this includes all inner edges, but outer edges may change their intensity ratios depending on what they end up beside.

It is always true, whether an object moves and/or changes size, that the sum of the positions of its two outer edges will equal its position times 2. Does it make sense to use the same idea for the reading / intensity? An object made up of two lines could have a reading that is the sum of the two lines’ readings divided by 2.
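A quick numeric check of the identity above: with outer edges at p_left and p_right, the centre is their mean, so p_left + p_right = 2 × position no matter how the object moves or resizes, and the analogous mean works for readings.

```python
def centre(p_left, p_right):
    """The object's position is the midpoint of its two outer edges."""
    return (p_left + p_right) / 2

# The identity holds for the original object...
p_left, p_right = 3.0, 9.0
assert p_left + p_right == 2 * centre(p_left, p_right)

# ...and still holds after a move and a change in size.
p_left, p_right = 10.0, 14.0
assert p_left + p_right == 2 * centre(p_left, p_right)

# The analogous idea for readings: an object made up of two lines with
# readings r1 and r2 gets the reading (r1 + r2) / 2.
r1, r2 = 4.0, 6.0
print((r1 + r2) / 2)  # 5.0
```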

18th July 2008                      Combining things with gaps

When combining things (lines and edges = objects and changes) that are not adjacent (have gaps (G) between them), what rules seem to apply?  The Level-0 things are lines (L) and edges (E).

Level-1 things are adjacent things (LE and EL). EEs and LLs should not be produced even though lines are adjacent to each other. It doesn’t make sense to create EGEs, LGEs or EGLs. But maybe LGLs are reasonable because rule 3 above says the edges that are between the lines and the gap may change their intensity ratio depending on what they end up beside. The resulting things (LE, EL and LGL) have a position and size provided by the line or pair of lines in the case of the LGL. They have a reading provided by the 1st part. The new types of things will be combinations of the primitive types. All lines are of one primitive type and there are many edge primitive types. The new types of things will have a part1 and part2 width plus total width. They will also have a relative reading provided by the edges or the two lines separated by the gap.

Level-2 things are LELs and ELEs. But they also could be LGLEs or ELGLs. Additional gaps are not introduced at this level or higher levels. Additional gaps are introduced by combining things that have gaps in them. For example, two LGLs which share a common line will produce an LGLGL at this level.

23rd July 2008                      Change attracts attention

I have been using a formula to determine the amount of change, assuming the smallest value represents the simplest interpretation and is therefore the one to use to attract attention. The formula is P^2 + S^2 + I^2 where P is the change in position, S is the change in size and I is the change in intensity. However, if there are two objects that do not change their position or size, the one that changes its intensity the most is the one that attracts attention. Similarly, if there are two objects that don’t change their intensity or size, the one that moves the most attracts attention. And if there are two objects that don’t move or change their intensity, the one that changes its size the most attracts attention. This would seem to imply the maximum value for the formula is the one to use. But another factor is the relative size of the two objects. If one is larger it attracts attention if they both move the same amount, change intensity by the same amount or change size by the same amount.

However, if one object can be perceived as changing into two alternate objects, it is the change that minimizes the formula that attracts attention. So first the best combination of changes must be selected, and then from these the one that maximizes the formula. This would mean that at each level we find the best object changes based on minimizing the formula. Then, among these changes, the new and the old, we find the greatest value. This attracts attention at this level. Do this at each level until a level is reached where no changed objects are found or just one changed object is found. If no changed objects are found, then the best change from the level below attracts attention.
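A minimal sketch of this two-stage selection (my own simplification: each object already has its candidate interpretations listed, and the level-by-level recursion is omitted):

```python
def change_score(dp, ds, di):
    """P^2 + S^2 + I^2 for changes in position, size and intensity."""
    return dp**2 + ds**2 + di**2

def attract_attention(interpretations):
    """interpretations: one list of candidate (dp, ds, di) changes per
    object. Stage 1: pick the simplest (minimum-formula) interpretation
    for each object. Stage 2: the object whose chosen change scores
    highest attracts attention."""
    chosen = [min(cands, key=lambda c: change_score(*c))
              for cands in interpretations]
    return max(chosen, key=lambda c: change_score(*c))

# Object A could have moved 1 unit or jumped 5 units; object B changed
# intensity by 3. A's simpler move is chosen, then B wins attention.
cands = [[(1, 0, 0), (5, 0, 0)], [(0, 0, 3)]]
print(attract_attention(cands))  # (0, 0, 3)
```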

Humans have a bias towards interpreting a regular array of lines changing from a positive to a negative as the lines moving left or right. People’s faces and objects in general are usually difficult to recognize in a negative black and white photo. This seems appropriate because in nature negatives don’t normally occur.

24th July 2008                      Edges

After a good deal of experimentation I have realized that any combination of edges and lines that has edges exposed is not useful. Thus I need to create Es, Ls, LGLs, LELs and any combinations where the common part is a line or has lines on its borders. The object types as before will store the pattern using relative values (changes) while the experiences will store the actual absolute values of position, size and intensity.

                                                Negatives or Reflections

I have decided to represent the pattern 1212 as a reflection of 2121 rather than a negative because of the higher probability of it being a reflection in the real world. Read 4th June 2008.

25th July 2008                      New lines

I have discovered that after identifying the changed lines and the most attractive of these, it is still possible that the combination of new lines will attract attention.

7th Aug 2008                        Change attracts attention

I’m trying to understand how to score a change in a line’s position, size and intensity such that the one with the maximum attracts attention. I have been able to find the groups of lines that move such that the least overall change occurs, but I need a different formula for the maximum change amount. This applies to new lines as well as moved lines. If each sensor reports a change in reading from frame 1 to frame 2 and I add these up for all the sensors in a line, then this would work whether the line moved to this location or appeared from nowhere. This would account for larger objects attracting attention because more sensors are involved. It would also account for a larger change in intensity attracting attention.
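The summation idea above can be sketched directly (frames as lists of sensor readings; this is my illustration, not the Recog8 code):

```python
def line_change_score(frame1, frame2, sensors):
    """Sum of absolute reading changes over the sensors a line covers.

    The same score applies whether the line moved to this location or
    newly appeared: more sensors (a larger object) or a bigger intensity
    change both raise the score."""
    return sum(abs(frame2[i] - frame1[i]) for i in sensors)

f1 = [0, 0, 0, 0, 0, 0]
f2 = [0, 7, 7, 7, 0, 0]   # a 3-sensor line of intensity 7 appears

print(line_change_score(f1, f2, [1, 2, 3]))  # 21
```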

11th Aug 2008                      Groups versus new and old

I wrote yesterday on a test run: “Maybe what I should be doing is combining all possible lines that appear in both frames up to the most complex combination, finding out what change this biggest combination produces, then looking at all the remaining new lines that are not part of the combinations, putting them together as a new combination, and whichever has the highest change attracts attention.”

In a little more detail, I would group frame-2 lines into larger combinations provided they continue to appear in frame 1. Any group not in frame 1 would not be considered for finding the maximum change that occurred. As each group reaches its maximum complexity (that is, it contains its maximum number of lines) then its change is used. It also is not used in the next level of combinations. All the lines left behind (not in the first line pairs) are new. Since they all appeared in this frame they form an object with a combined change. All the lines that disappeared from frame 1 also form an object and have a combined change. Whatever has the maximum change attracts attention: either the combination with the maximum change (not necessarily the most complex), the max-change new combination of lines, or the object area where a combination of lines disappeared.

22nd Aug 2008                    Including no-change stimuli

When combining lines into pairs in the second frame I have been including lines formed from the readings from sensors even though their values have not changed from the first frame. This is so that when 001010000 changes to 000010100 the sensor at location 5 is not ignored and the 101 appears to move. However when animated the 1 at location 3 appears to jump to location 7 as though the 1 at location 5 is stationary. I have experimented with more complicated series of lines in the Mover program that move right continuously and found that we do recognize the 101 moving. I suspect the mechanism should be different from my current Recog8 algorithm.

What I need is the objects in the first frame to form a hierarchy of recognized objects. Then they are all expecting no change in the next frame, the same strategy that I have been using in sequential habit recognition. Then as I have documented on 15th March 2007, the largest line combination that experiences a change in frame 2 is what attracts attention. This means that in frame 2 we do not form all large line combinations just because we can. We go looking for the same line combinations found in frame 1. After finding the largest ones we find the simplest change and score its change amount. All the lines left over are new and this forms another combination that gets scored. Any lines that are missing form a combination that gets scored. The highest scored change attracts attention.
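A highly simplified sketch of this frame-2 procedure, under assumptions of my own: lines are (position, size, reading) tuples, the frame-1 hierarchy is just a collection of line sets, and a combination's change score is size × |reading| summed over its lines.

```python
def analyse_frame2(known_combos, frame2_lines):
    """Recognize known combinations first, then score the leftovers.

    known_combos: line combinations recognized from frame 1 (frozensets).
    frame2_lines: the set of lines present in frame 2.
    Returns which kind of change attracts attention and its score."""
    recognized = [c for c in known_combos if c <= frame2_lines]
    covered = set().union(*recognized) if recognized else set()
    new_lines = frame2_lines - covered          # leftover, unexpected lines
    missing = {c for c in known_combos if not (c & frame2_lines)}

    def score(lines):
        return sum(size * abs(reading) for _, size, reading in lines)

    candidates = [("new", score(new_lines))]
    candidates += [("missing", score(c)) for c in missing]
    return max(candidates, key=lambda t: t[1])  # highest score wins

known = [frozenset({(2, 1, 5)}), frozenset({(8, 2, 3)})]
frame2 = {(2, 1, 5), (12, 3, 4)}                # one known line, one new
print(analyse_frame2(known, frame2))            # ('new', 12)
```

The key point mirrored here is that frame 2 is not re-combined from scratch: only what frame 1 already knows is looked for, and the residue forms the new and missing combinations that get scored.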

24th Aug 2008                      Changing lines

I have realized that the approach of using 2 frames is not like reality. I’m trying to come up with an overly engineered solution. Only sensors that have changed attract attention. Combinations of lines formed from these changes are remembered and added to the recognizable tree of line combinations.  When presented with another image (series of lines) those line combos that are recognizable (simultaneous habits) get recognized. New combos are produced from any combination of changed sensors that are not recognized. The learning is always in the changes. Any unchanging part of the scene never gets added to the recognizable line combinations. But this introduces the question what if the whole scene moves left or right? This happens so frequently in any mobile creature that it must be handled without having to recognize all possible line combinations making up the scene. Every sensor has changed. Such shifts must be recognized at the lowest individual line level and no combinations created from the shifted scene unless part of it shifted a different amount.

So my Recog8 program must use frame 1 to establish the tree of recognizable line combinations. Then it must analyse frame 2 using only the changed sensor values. It should first determine if there is a global shift, a global change in brightness or a global flip to the negative with no other internal change. Then it analyses the scene, finding all the line combinations that it already knows. This is subconscious recognition. Then it finds any new line combinations from the changed sensors that it did not know. The largest such change attracts attention unless the change is less than the concentration level. If concentrating, then either the expected combination is found in the area attended to, or what occurred instead is perceived.

25th Aug 2008                      Global scenes

Idea: The Combine routine just produces combos that contain adjacent lines in frame 1, no combos containing gaps. Then the frame 2 analysis finds combos with gaps only if the two parts change the same, i.e. they move the same amount, change their reading the same amount, or expand or contract the same amount. Thus parts with gaps are only found / remembered if they change from one frame to the next. Actually this idea should apply to all objects. I can create all the combos from any individual frame but I should only keep the ones that change from one frame to the next. Thus a combo must exist in both frames. This means that a new object appearing in frame 2 might attract attention but the object is not kept. The change attracts attention though. Is attention attracted to an area or an area plus an expected object?

If the whole scene should have a global change and no combo changes any differently, then this should be recognized as a change but not cause the entire scene to be remembered as a combo. What about when part of the scene is missing on one side and new stuff appears on the other side? Rather than a global scene change being recognized the new part should attract attention. Maybe when it comes to new parts, the complete tree of combos for that part is created but all based on adjacent parts. Thus we have large combo snapshots of the area changed. This sounds like the capability of people who have photographic memory. Then the combos with gaps are only created if they change from one frame to the next as previously suggested. This means that any sensory change causes combos to be created.

Thus frame 1 is a change from a completely blank frame. This means all sensors have changed and a tree of combo objects is created from only adjacent lines. A blank frame sensor reading will be the value –1. The sensor is not functional, providing no reading. Now one or more regions in frame 2 are different from frame 1.

26th Aug 2008                      Known objects first

A sense is always expecting there to be no change from the previous frame. If the sense is not being attended to, then given a new frame the recognition algorithm should first recognize all combinations of lines that it has in its object memory for that sense. This takes place subconsciously. It is done on all sensor readings, independent of whether they have changed or not. As this happens, each recognized object’s amount of change experienced is reduced by the change expected for that object based on any subconsciously running S-Habits. Any residual changes must have been unexpected and form a new object(s). The complete collection of these new / unexpected lines forms a new object with an appropriate accumulative change amount. The expected change for the object is the sequentially expected change.
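The residual-change idea above reduces to a subtraction per change dimension (a sketch of my own, with changes as (position, size, reading) tuples):

```python
def residual_change(observed, expected):
    """Observed change minus the change an S-Habit expected.

    Only the residual is unexpected and can attract attention."""
    return tuple(o - e for o, e in zip(observed, expected))

# An S-Habit expected the object to move 2 units; it moved 2 units and
# also brightened by 3, so only the reading change is left unexpected.
obs = (2.0, 0.0, 3.0)
exp = (2.0, 0.0, 0.0)
print(residual_change(obs, exp))  # (0.0, 0.0, 3.0)
```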

27th Aug 2008                      Unexpected Change

When I phoned the dentist this morning I expected Gail to answer. I had an “image” in my mind of the voice I was going to hear. She was on holiday so I was surprised when someone else answered. As I came into the kitchen I noticed a raspberry on the floor. I had dropped it as I had left the kitchen with my bowl of raspberries. It was unexpected in the middle of the floor. These are both examples of where the sequential expected stimulus was not perceived. And the part that was different from the expected attracted attention. It was the part of the experience that had a residual change after all other changes were recognized.

If the sense is being consciously attended to then an S-Habit is being done consciously. A particular area of the sense is being focused on. This could be the whole area of the sense or a subset of it. Similarly, a range of readings is also being attended to. And a particular object type is expected in this area, as was experienced in the S-Habit being performed. Finding the object type in the area satisfies the goal of the S-Habit. Finding a different yet known object type causes a new sequential experience to be created and remembered. The same happens for a novel object type, plus the new object type is added to the can-be-recognized tree (P-Habits).

The challenge is to devise a formula for the amount of change to be expected. Rather than have one accumulative change amount, maybe it would be better to have several change types with their amounts. Sensors have a change in reading only. An edge could have a change in position amount, a change in reading range, and a change in reflectivity. A line could have a change in position, a change in size and a change in reading. A combination of lines could have a change in position, size, reading, reflectivity and negativity. Then a summation change formula could be used to compare changes of experiences to determine the most attractive and whether it exceeded the concentration level. As lines and edges are combined to form bigger combinations, the changes of the parts are somehow “added” to give the combination’s change rating.

This would seem to imply that the sensor-detected change from one frame to the next is only used in sensor differences. But that is not true. The change in readings for lines and edges are based on the sensor-detected change. Position, size, reflection and negativity change are not based on sensor-detected change, they are determined from position and relative readings.
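The per-thing change types suggested above can be tabulated; the summation formula is left open in the entry, so equal weights over absolute amounts are assumed here purely for illustration:

```python
# Change dimensions per kind of thing, as listed in the entry.
CHANGE_TYPES = {
    "sensor":      ("reading",),
    "edge":        ("position", "reading_range", "reflectivity"),
    "line":        ("position", "size", "reading"),
    "combination": ("position", "size", "reading", "reflectivity",
                    "negativity"),
}

def total_change(kind, amounts):
    """Sum the change amounts over the dimensions this kind of thing has.

    Equal weighting is an assumption; the entry leaves the formula open."""
    dims = CHANGE_TYPES[kind]
    return sum(abs(amounts.get(d, 0.0)) for d in dims)

print(total_change("line", {"position": 2.0, "size": 0.5, "reading": 1.0}))
# 3.5
```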

28th Aug 2008                      Novel versus new lines

I can detect a novel line combination at any level because it does not exist in the object tree. Then there are new line combinations that appear in a frame at each level. These include any novel ones at this level. New lines are detected after eliminating all the simplest changes at each level. An approach might work as follows.

  • At Level 0 all lines and edges are treated as known, not novel.
  • Find all the simplest changes between frames.
  • End up with changed edges or changed lines plus some lines new to the frame or missing from it. New ones can’t be novel.
  • At Level 1 all adjacent line pairs are formed. Some of these will be known and others could be novel.
  • Find all the simplest known pair changes between frames. This will not include any novel ones.
  • End up with novel pairs, changed known pairs and either some new or missing known pairs.
  • At Level 2 all combinations of 3 lines are formed. Can’t use novel pairs in these because the 3 would be novel if a pair in it were novel. Can use both changed and new pairs because a 3some might be a simpler change if it includes a new pair. Novel pairs form edges or gaps between the recognized pairs such that combinations can’t cross the edge or gap. Some of these 3somes could be novel even though the pairs are known.
  • Find all the simplest 3some changes between frames. This will not include any novel 3somes.
  • Then form all novel 3somes by combining all adjacent novel pairs. Known pairs form edges or gaps between novel pairs. Don’t combine a novel and known pair.
  • End up with novel 3somes, changed known 3somes and either new or missing known 3somes.
  • Keep doing this at all levels until there is a maximum of 1 known combo between each novel combo and vice versa.
  • Then combine the known combos across gaps and the novel combos across gaps.
  • Then combine all known combos with gaps until there is just 1 known combo.
  • Then combine all novel combos with gaps until there is just 1 novel combo.

During this process the change amounts must be calculated and the biggest changing combo identified.
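A much-simplified sketch of the first two bullets above: form the adjacent pairs at Level 1 and partition them into known and novel against the object tree (represented here, as an assumption of mine, by a plain set of pairs):

```python
def level1_pairs(lines):
    """All adjacent line pairs, in left-to-right order."""
    return [(lines[i], lines[i + 1]) for i in range(len(lines) - 1)]

def partition(pairs, known_tree):
    """Split pairs into those already in the object tree and novel ones."""
    known = [p for p in pairs if p in known_tree]
    novel = [p for p in pairs if p not in known_tree]
    return known, novel

lines = ["L1", "L2", "L3", "L4"]
tree = {("L1", "L2"), ("L3", "L4")}
known, novel = partition(level1_pairs(lines), tree)
# known pairs: (L1,L2) and (L3,L4); the novel pair (L2,L3) sits between
# them and, per the rules above, acts as a boundary at the next level.
print(known, novel)
```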

1st Sept. 2008                       Line habituation

It is the object type that needs to have an expected change amount such that if that object type occurs again with a lesser amount of change it is ignored, and if it has a greater change it attracts attention. If one considers a tree of neurons, each with a learnt expectation, then only overflow change goes higher up the tree. From a single scene, the activation levels based on change amounts propagate up the tree, creating a particular activation pattern. These neurons are then habituated. The next scene causes changes to propagate up the tree, overflowing anywhere they are greater than the habituation expectation change level. This may activate already existing neurons or create new ones to recognize the combination of overflows. Anywhere there is no overflow there is recognition, but not enough to attract attention. The tree effectively has expectations of valid patterns and whenever the unexpected happens, it speaks up, which may or may not attract attention depending on the concentration level. Only attended-to new object types should be habituated / formed.

However, sensory neurons must have an expectation of the previous stimulus so that they always pass on any change in value. 
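A toy sketch of this overflow propagation (my own illustration): each node holds a learnt expected change, only the excess passes upward, and sensory nodes expect the previous stimulus so any change in value overflows.

```python
class Node:
    """A tree node habituated to an expected amount of change."""
    def __init__(self, expected_change=0.0):
        self.expected = expected_change

    def overflow(self, change):
        """Change beyond the habituated expectation; 0 if fully expected."""
        return max(0.0, change - self.expected)

sensor = Node(expected_change=0.0)   # sensory: any change passes on
parent = Node(expected_change=3.0)   # habituated to changes up to 3

up = sensor.overflow(5.0)            # the full 5.0 reaches the parent
print(parent.overflow(up))           # 2.0 overflows further up the tree
```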

2nd Sept. 2008                     Habituated object

I am assuming a new object type must be produced whenever its combination of lines is found. The new object type could not have moved from anywhere on its first appearance so its expected change did not exist. Thus its absolute change from 1/ the previous sensor readings or 2/ from zero size / zero reading becomes its change amount. If it gets this amount of change again we want to disregard it. That means it would have been learnt. To be learnt it must have been concentrated on. This would imply that if an object type was newly formed and it was part of a higher level newly formed object type that was concentrated on then the learning must propagate down the active object type tree. What if the newly formed object type is not concentrated on? Only the most interesting part of the scene is going to attract attention. Does it get forgotten / erased?

When an object type newly appears I think it would be better to set its change amount to 2/ the change from zero size / zero reading. Thus if it should appear new in another scene it has the same change amount. If it reappears from one scene to the next then it has an expected change in position, size, reading, reflection and negativity from any learnt S-Habit. So given two sequential scenes we have a) novel object types, b) moved object types, c) newly appearing known object types, and d) missing object types. The novel ones are brand new and have an interest based on the change from zero. They have no sequential expected change. The moved object types have a residual interest based on how much they changed versus what was expected from any S-Habits. The new known object types have a residual interest based on how much they changed versus their expected change from zero. A missing object type must attract attention to a particular area and to the new object type in that area. The amount of change caused by the missing object type would be as though it had gone to zero size and zero reading. If this were greater than the concentration level then attention would be attracted. The interest experienced in the new object type in the same area, though, would be based on it having newly appeared from zero, not on the change in reading from the missing object type.

I want the object type tree to store the expected changes from zero so that object types that are unexpected have a residual interest level. I want the S-Habits to store the expected sequential changes of position, size, etc.

19th Sept 2008                     The process

  1. Real world stimulus interrupts attention because of an unexpected part of a stimulus.
  2. Recall the unexpected part from memory.
  3. It is found and has an associated action and goal stimulus.
  4. The idea of the goal stimulus is recalled and an associated next goal stimulus is recalled.
  5. The expected goal stimulus is wanted: interesting due to novelty or desired due to reward.
  6. The action sequence is performed. It is relegated to the automatic execution / background.
  7. The goal stimulus is perceived; it comes with the reward or it has an unexpected (novel) part.
  8. If the goal stimulus matches the expected the subconscious action sequence ends without attracting attention.

[There is a gap here in my research due to developing the Learning Tree course 447 - Introduction to Modelling for Business Analysis]