Pentti Haikonen's architecture for conscious machines
Written by Trung Doan   
Thursday, 10 December 2009

By Trung Doan (doanviettrung a_t gmail dot com). 

Haikonen's contribution to the machine-consciousness endeavor is an architecture based on cognitive principles. He also developed some electronic microchips as a first step to building a machine based on that architecture.

Below, we look at how a Haikonen machine might achieve consciousness once built by examining some of its cognitive capabilities, briefly discussing the Haikonen architecture along the way.

The Haikonen machine perceives

Say the Haikonen machine's cameras are focused on a yellow ball. The cameras' pixel pattern is fed into a preprocessor circuit which produces an array of, say, 10,000 signals, each signal carried by, for example, a wire. One wire is the output from the preprocessor's "roundness" circuitry and, in this case, its signal is On. Another wire, from the "squareness" circuitry, would be Off, i.e. carrying no voltage. A group of wires is the output from the spectrum-analysis circuitry: the wire corresponding to the frequencies we humans recognise as "yellow" is On, while the "red", "blue", etc. wires are Off. There would be many other groups of wires depicting size, brightness, edges, and so on.

The machine does not internally represent the ball as a round graphic, nor a set of numbers representing diameter, color, etc., but by this signal array. Haikonen calls this a "distributed signal representation".
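A distributed signal representation can be pictured as nothing more than a labelled array of On/Off lines. The sketch below is a toy illustration under that reading; the feature names and the tiny array size are invented for the example, not taken from Haikonen's actual circuitry.

```python
# Toy sketch of a "distributed signal representation": the ball is not stored
# as a graphic or a set of measurements, but as an array of On/Off feature
# lines. Feature names and array size are illustrative only.

def make_signal_array(features_on, all_features):
    """Return a dict mapping each feature line to On (1) or Off (0)."""
    return {f: int(f in features_on) for f in all_features}

FEATURE_LINES = ["round", "square", "yellow", "red", "blue", "large", "small"]

yellow_ball = make_signal_array({"round", "yellow", "small"}, FEATURE_LINES)
print(yellow_ball)
# {'round': 1, 'square': 0, 'yellow': 1, 'red': 0, 'blue': 0, 'large': 0, 'small': 1}
```

In the real architecture each entry would be a physical wire carrying a voltage or none; the dict is only a convenient stand-in for that wire bundle.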

Suppose the machine is shown several balls of different sizes, colors, etc., one at a time, and each time its microphone hears the sound pattern we humans understand as the word "ball". Because the two patterns appear together repeatedly, the machine associates the sound pattern with the visual pattern. Making such associations is how the machine perceives.

After several different balls have been associated with that sound pattern, the machine eventually learns to associate the "ball" sound pattern with anything that is round.
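One way to see why only roundness survives is that the features common to every co-occurrence are the ones repeatedly reinforced. The sketch below models that with a simple set intersection; this is a deliberately crude stand-in for the gradual synaptic strengthening described later in the article.

```python
# Sketch of perception by association: each time the "ball" sound co-occurs
# with a visual signal array, only the visual lines active every time remain
# associated. After several differently sized and coloured balls, only
# "round" survives. The intersection is a simplification of gradual
# synaptic reinforcement.

def associate(presentations):
    """Keep the visual lines that were On in every co-occurrence."""
    common = set(presentations[0])
    for p in presentations[1:]:
        common &= set(p)
    return common

balls = [
    {"round", "yellow", "small"},
    {"round", "red", "large"},
    {"round", "blue", "small"},
]
print(associate(balls))  # {'round'}
```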


It learns, remembers, and recalls

A word of warning: this section is somewhat heavy going, but once understood it gives a fairly good picture of the machine's architecture and cognitive capabilities.

For Haikonen's associative machine, learning, memorising, and recalling are just a few different aspects of the one thing it does: making associations.

When the utterance "ball" is heard, the microphone's aural preprocessor circuit translates the sound vibrations into a signal array at its output side. Again, this is a bunch of, say, 1,000 signal lines: some wires indicate the various harmonic frequencies detected by the circuit's spectrum analyser, others the temporal pattern, and so on. This signal array is broadcast to large numbers of "neuron groups", several of which will store the pattern, using one of two methods.


The circulation method:

Some neuron groups store the "ball" utterance this way: each input line feeds straight through a collection of several dozen transistors, diodes, etc. (a Haikonen "neuron" in this "neuron group") and appears at the output side. Hence these 1,000 neurons' outputs carry the "ball" utterance's pattern of On's and Off's. But the pattern does not disappear when the preprocessor processes the next utterance: each output signal line is wired back to its own input, so the pattern circulates indefinitely until some external control signal turns it off.
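The feedback loop can be sketched as a register whose output is copied back to its input on every time step. This is a minimal software analogue of the hardware loop, with invented names; the real circuit holds the pattern continuously rather than in discrete ticks.

```python
# Sketch of the circulation method: each neuron's output is wired back to its
# own input, so a loaded pattern keeps re-asserting itself on every time step
# until an external control signal clears it. Names and the discrete-time
# model are illustrative.

class CirculatingMemory:
    def __init__(self, n_lines):
        self.state = [0] * n_lines

    def load(self, pattern):
        self.state = list(pattern)          # preprocessor writes the pattern

    def tick(self):
        self.state = list(self.state)       # feedback: output becomes next input

    def clear(self):
        self.state = [0] * len(self.state)  # external control signal

mem = CirculatingMemory(8)
mem.load([1, 0, 1, 1, 0, 0, 0, 1])
for _ in range(100):          # the pattern persists, tick after tick
    mem.tick()
print(mem.state)              # [1, 0, 1, 1, 0, 0, 0, 1]
mem.clear()
print(mem.state)              # [0, 0, 0, 0, 0, 0, 0, 0]
```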
The above method suits limited-capacity, short-term working memory, but it is expensive: each memory occupies a whole neuron group and keeps its circuits running hot. Another method, suitable for long-term and vast memory stores, is related to learning; it uses "synapses", explained below.


The synaptic method:

Apart from the above-mentioned collection of transistors and diodes, a typical neuron also has thousands of input lines, each coming from another neuron in a neuron group elsewhere. The job of these thousands of input lines is to help the neuron decide whether to pass the main input signal to the output side. Each of these thousands of "associative inputs" connects to the neuron via a transistor playing the role of a "synapse". Every time the main input signal and a particular associative input signal go On or Off at the same time, that synapse transistor is fed a higher and higher voltage. After a sufficient number of consecutive times, the synapse transistor's output permanently latches from no voltage to positive voltage. The neuron has thus learned to associate that associative input with its main input, via that input's synapse. Scores of other associative inputs, via their own synapses, can do the same. From now on, whenever the associative input on any of these latched synapses is On, the neuron will want to turn On; the more of them that are On, the more likely it is to turn On.

For example, if this neuron has 15,000 associative inputs, and 500 of them have synapses that have latched On, then with one such associative input sending an On signal to its synapse, the neuron wants to turn On, and with all 500 such associative inputs On, it will turn On (unless inhibited by some external control signal). When the neuron does turn On, it produces an output pulse just like the one on the main input signal that was used during the learning-to-associate time.

Consider a neuron group among those receiving the broadcast from the aural preprocessor. It has 1,000 neurons receiving the aural preprocessor's 1,000 output signal lines as its main inputs, and from near and far there are 15,000 associative inputs coming into each and every neuron in this neuron group, as above. Unlike many others, however, this particular neuron group receives its associative inputs from, among other sources, the visual preprocessor (which takes up 10,000 of the 15,000 associative input lines). It therefore associates the visual pattern of a ball with the aural pattern of the utterance "ball", by the relevant synapses (out of the neuron group's 15,000 * 1,000 = 15 million synapses) latching On.

From now on, whenever the above 10,000 associative inputs carry the visual representation of a ball, the neuron group turns On, producing the "ball" utterance pattern. Say this pattern consists of 1,000 lines, of which 270 particular lines are On and the other 730 are Off. It is the neuron group's relevant 270 neurons turning On, each producing a voltage pulse, while the other 730 stay Off, that produces this pattern.

Thus, by the above synaptic method, this utterance has been stored and later recalled by association.
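The learn-then-recall cycle just described can be sketched for a single neuron. The streak counter, latch threshold, and firing rule below are invented stand-ins for the transistor voltages in the article, chosen only to make the behaviour visible; they are not Haikonen's actual circuit parameters.

```python
# Sketch of the synaptic method for one neuron. A synapse strengthens each
# time its associative input agrees with the neuron's main input (both On or
# both Off), latches permanently after LATCH_AFTER consecutive agreements,
# and thereafter votes for the neuron to turn On. All thresholds are
# illustrative, not circuit values.

LATCH_AFTER = 5      # consecutive co-activations needed to latch a synapse

class Neuron:
    def __init__(self, n_assoc):
        self.streak = [0] * n_assoc       # consecutive-agreement counters
        self.latched = [False] * n_assoc  # permanently latched synapses

    def train(self, main_on, assoc):
        """One co-presentation of the main signal and the associative inputs."""
        for i, a in enumerate(assoc):
            if a == main_on:              # signals agree: feed the synapse
                self.streak[i] += 1
                if self.streak[i] >= LATCH_AFTER:
                    self.latched[i] = True
            else:
                self.streak[i] = 0        # agreement must be consecutive

    def recall(self, assoc):
        """Fire (1) when all latched synapses receive an On input."""
        total = sum(self.latched)
        if total == 0:
            return 0
        on = sum(1 for i, a in enumerate(assoc) if self.latched[i] and a)
        return int(on == total)

# One neuron whose main line is On in the "ball" utterance pattern,
# trained against a 4-line toy visual pattern for a ball.
n = Neuron(4)
ball_visual = [1, 0, 1, 1]
for _ in range(5):
    n.train(1, ball_visual)

print(n.recall(ball_visual))   # 1: the visual pattern alone evokes the output
print(n.recall([0, 0, 0, 0]))  # 0: no latched input is On, so no pulse
```

Replicating this neuron 1,000 times, one per output line, gives the neuron group of the article: the 270 neurons whose main lines were On during learning fire together on recall, reproducing the "ball" utterance pattern.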


The versatile neuron group

Can other patterns, apart from the visual representation of a ball, evoke the "ball" utterance representation from the above neuron group? Yes, they can. Say among the neuron group's 5,000 remaining associative inputs, 500 come from the preprocessor circuit of the right hand's skin sensors. If a ball was repeatedly placed on this hand while the word "ball" was spoken, this neuron group will have made the association, and a corresponding skin-sensor pattern will now activate the above 270 neurons, turning On this utterance representation.

Can this same neuron group store and recall aural output patterns other than "ball"? Yes, it can. The above 270 neurons turning On produce the "ball" utterance representation, but another 270, or 681, etc., neurons turning On will produce another pattern. Theoretically there could be 2 raised to the power of 1,000 distinct output patterns (each of the 1,000 lines being either Off or On), but the practical number will be far smaller, both to avoid interference between patterns and to avoid putting all eggs in one neuron-group basket.
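The versatility of one group can be sketched by letting several cue patterns, from different senses, map onto different stored output patterns over the same output lines. The best-match rule and the tiny toy patterns below are invented for illustration; the article's group has 1,000 neurons and 15,000 associative inputs per neuron.

```python
# Sketch of a versatile neuron group: different cue patterns (visual, touch)
# evoke different stored output patterns over the same output lines, and two
# different cues can evoke the same pattern. Toy-scale and best-match rule
# are illustrative only.

class NeuronGroup:
    def __init__(self):
        self.assocs = []   # (latched cue lines, stored output pattern) pairs

    def learn(self, cue, output):
        self.assocs.append((frozenset(cue), tuple(output)))

    def recall(self, cue):
        # the stored pattern whose latched cue lines best overlap the input wins
        best = max(self.assocs, key=lambda a: len(a[0] & set(cue)))
        return list(best[1])

g = NeuronGroup()
g.learn({"v_round"}, [1, 0, 1, 0])         # visual ball  -> "ball" utterance
g.learn({"t_ball_in_hand"}, [1, 0, 1, 0])  # touch cue    -> same utterance
g.learn({"v_edges"}, [0, 1, 1, 1])         # other cue    -> another pattern

print(g.recall({"v_round"}))        # [1, 0, 1, 0]
print(g.recall({"t_ball_in_hand"})) # [1, 0, 1, 0]
print(g.recall({"v_edges"}))        # [0, 1, 1, 1]
```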

Last Updated ( Friday, 11 December 2009 )
