This blog is for notes on how to use the Nengo spiking neuron modeling system

Friday, June 24, 2011

unconvolving

To unconvolve a vector, take the convolved vector and convolve it with the inverse of one of the vectors that went into it. The result approximates the other original vector. Here is some code:

import nef
import hrr
vocab=hrr.Vocabulary(64) # 64 dimensions, to match the 64-dimensional arrays below
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)


A=net.make_array('A',neurons=8,dimensions=1,length=64)
B=net.make_array('B',neurons=8,dimensions=1,length=64)
C=net.make_array('C',neurons=8,dimensions=1,length=64)
D=net.make_array('D',neurons=8,dimensions=1,length=64)
E=net.make_array('E',neurons=8,dimensions=1,length=64)


# convolve A and B together into C - function only works on pairs
Conv1=nef.convolution.make_convolution(net,'conv1',A,B,C,N_per_D=200,quick=True,mode='direct') # mode='direct' computes the convolution without neurons: it saves computation and produces no noise
Conv2=nef.convolution.make_convolution(net,'conv2',C,D,E,N_per_D=200,quick=True,invert_second=True) # invert_second=True inverts the second input (D), so E gets C unconvolved with D


net.add_to(world)
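The same operation can be checked outside Nengo with plain numpy. This is a sketch of the math, not the nef implementation: it assumes make_convolution computes circular convolution, and uses the standard HRR approximate inverse (keep the first element, reverse the rest), which is what invert_second applies.

```python
import numpy as np

def cconv(a, b):
    # circular convolution, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def approx_inverse(b):
    # HRR involution: keep b[0], reverse the remaining elements
    return np.concatenate(([b[0]], b[:0:-1]))

rng = np.random.RandomState(0)
D = 128
a = rng.randn(D); a /= np.linalg.norm(a)  # stand-ins for vocabulary vectors
b = rng.randn(D); b /= np.linalg.norm(b)

c = cconv(a, b)                        # bind a and b together
a_hat = cconv(c, approx_inverse(b))    # unbind: convolve with the inverse of b

print(np.dot(a_hat, a))   # high similarity: a was recovered (noisily)
print(np.dot(a_hat, b))   # low similarity to b
```

The recovered vector is noisy, which is why larger models typically pass it through a clean-up memory.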

convolution

Convolution is a way of combining vectors into a single vector that represents the combination. There is a function for this in nengo. Here is an example. If you run it, note that it combines the vectors for dog and cat but not mouse: the result is a vector that carries information about dog and cat, while mouse plays no part in it.

import nef
import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

input1=net.make_input('input1',values=vocab.parse('cat').v)
input2=net.make_input('input2',values=vocab.parse('dog').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128) # array is less costly to make than a large ensemble
B=net.make_array('B',neurons=30,dimensions=1,length=128) # creates a more localist representation
C=net.make_array('C',neurons=30,dimensions=1,length=128) # will not norm the data as well

# feed the inputs into A and B (without these connections nothing reaches C)
net.connect(input1,A)
net.connect(input2,B)


# convolve A and B together into C - function only works on pairs
Conv=nef.convolution.make_convolution(net,'conv',A,B,C,N_per_D=200,quick=True) # neurons per dimension N_per_D


net.add_to(world)
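The claim that the result involves cat and dog but not mouse can be checked with numpy. This is a sketch under the assumption that make_convolution computes circular convolution of its two inputs: the convolved vector matches the cat-dog binding but is unrelated to any binding involving mouse.

```python
import numpy as np

def cconv(a, b):
    # circular convolution via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.RandomState(0)
D = 128
# three random vectors standing in for the vocabulary items
cat, dog, mouse = (rng.randn(D) / np.sqrt(D) for _ in range(3))

c = cconv(cat, dog)   # what C should settle to

print(np.dot(c, cconv(cat, dog)))    # high: c is the cat-dog binding
print(np.dot(c, cconv(cat, mouse)))  # near zero: mouse was never bound
```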

Thursday, June 23, 2011

dot product on memory

Here is some code to get the dot product (a measure of similarity) between a memory store and a probe. The memory remembers what it has been exposed to, so the dot product should be high if anything in memory matches the probe. Try exposing the memory to two animals, then see how it reacts to an animal it has not seen before.

import nef
import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

#input1=net.make_input('input1',values=vocab.parse('cat').v)
#input2=net.make_input('input2',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=30,dimensions=1,length=128)
C=net.make_array('C',neurons=30,dimensions=1,length=128)

M=net.make_array('M',neurons=30,dimensions=2,length=128)

Dot=net.make('Dot',neurons=100,dimensions=1)



#net.connect(input1,A)
#net.connect(input2,B,weight=0.01)
net.connect(B,B)
net.connect(C,B)

net.connect(A,M,index_post=[i*2 for i in range(128)]) # index_post refers to the destination, index_pre refers to the source
net.connect(B,M,index_post=[i*2+1 for i in range(128)]) # default is to take every one

# the list comprehensions above pick out every second dimension of M, starting at 0 for A and at 1 for B


def multiply(x):
  return x[0]*x[1]

net.connect(M,Dot,func=multiply) # default is to apply the function to every ensemble in the network M

net.add_to(world)
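The idea can be sketched in numpy (the math only, not the neural version): store two animals by adding their vectors together, then probe the memory with dot products.

```python
import numpy as np

rng = np.random.RandomState(0)
D = 128
# random vectors standing in for the vocabulary items
cat, dog, mouse = (rng.randn(D) / np.sqrt(D) for _ in range(3))

memory = cat + dog   # the memory has been exposed to cat and dog

print(np.dot(memory, cat))    # high: cat matches something in memory
print(np.dot(memory, mouse))  # near zero: mouse has not been seen
```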


storing vectors

To store a vector, a net has to feed back on itself: it constantly adds to itself what it just represented. Multiple vectors can be added into this kind of net, that is, added to the vector it represents. To preserve what was stored before, a new vector cannot be added too quickly; this is controlled by the weight of the input connection. There is an interplay between that weight, the number of neurons, and the number of dimensions. The weight of the feedback loop also has an effect: if it is less than 1 the stored value decays.

Here is an example of a memory system that stores a representation of cat, mouse, and dog (use "set value" by clicking on the semantic pointer graph to change the input).

import nef

import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True) # quick=True: if you have created these neurons in the past, just re-use them


input1=net.make_input('input1',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=500,dimensions=8,length=16)

net.connect(input1,A)
net.connect(A,B,weight=0.1)
net.connect(B,B)

net.add_to(world)
#put this last, then when the network appears you know it's finished loading
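The effect of the feedback weight can be sketched with a simple discrete-time loop (plain Python, not the neural dynamics): with a feedback weight of 1 the stored value holds after the input is removed; below 1 it decays away.

```python
def simulate(feedback_weight, input_weight=0.1, steps=200):
    """Toy integrator: x gets its own previous value back, plus a weighted input."""
    x = 0.0
    trace = []
    for t in range(steps):
        u = 1.0 if t < 50 else 0.0          # present the input for the first 50 steps
        x = feedback_weight * x + input_weight * u
        trace.append(x)
    return trace

held = simulate(1.0)      # perfect feedback: holds the accumulated value
decayed = simulate(0.9)   # lossy feedback: the memory fades once input stops

print(held[-1])     # still at the value reached when input turned off
print(decayed[-1])  # close to zero
```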

computer power

Working with vectors usually requires a high-dimensional space, which requires more neurons. Creating these is computationally expensive and may require more memory (RAM) than your computer has. Here are some ways to get around that.

import nef

import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

# quick=True: if you have created these neurons in the past, just re-use them

net.add_to(world)

input1=net.make_input('input1',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=500,dimensions=8,length=16)


# use an array

net.connect(input1,A)
net.connect(A,B,weight=0.3)
net.connect(B,B)

Using an array allows you to create a group of smaller nets that act as one larger net. A computer with less memory can cope with this. In the example, A is composed of 128 groups of 30 neurons that encode 1 dimension each, B is composed of 16 groups of 500 neurons that encode 8 dimensions each.

The relationship between the number of neurons and the number of dimensions is a judgement call, but more neurons will result in a cleaner, less noisy representation.

However, doing it this way creates a more localist representation of the dimensions (e.g., A represents each dimension separately). This will reduce the amount of normalization that occurs.
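The memory saving can be sketched with some arithmetic. Assuming, as a simplification, that the dominant cost per ensemble is a neurons-by-neurons matrix used when solving for its decoders, splitting one 3840-neuron ensemble into 128 groups of 30 shrinks that storage by a factor of 128:

```python
def solver_matrix_entries(neurons_per_group, groups):
    # each group needs its own neurons-by-neurons matrix when its decoders are solved
    return groups * neurons_per_group ** 2

big   = solver_matrix_entries(128 * 30, 1)  # one 3840-neuron ensemble covering all 128 dimensions
array = solver_matrix_entries(30, 128)      # the network array: 128 groups of 30 neurons

print(big // array)  # 128
```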

vector vocabulary

A vocabulary can be set up to relate random vectors to specific symbols. This example shows a vocabulary of A, B, and C. The vector for A is fed into an input function, then into a net called A, then into a net called B. This can also be controlled in the interface: click on A to get the semantic pointer graph, which shows the activation of each vector in the vocabulary, then click on the graph and choose "set value" to change or combine the vector values. Note that the vector representation is different from the dimension-value representation (also note that the default is to show only the values for the first 5 dimensions; this can be changed by clicking on the graph). The number of dimensions needs to be the same for all of the nets involved.

import nef

import hrr
vocab=hrr.Vocabulary(32)
vocab.parse('A,B,C')

net=nef.Network('Test Network')
net.add_to(world)

input1=net.make_input('input1',values=vocab.parse('A').v)

A=net.make('A',neurons=100,dimensions=32)
B=net.make('B',neurons=100,dimensions=32)

net.connect(input1,A)
net.connect(A,B)
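What the semantic pointer graph is plotting can be sketched in numpy: a vocabulary is a set of random unit vectors, one per symbol, and each bar in the graph is the dot product between the net's decoded vector and one vocabulary item. (This is a sketch of the idea, not the hrr module's actual implementation.)

```python
import numpy as np

rng = np.random.RandomState(1)
D = 32
vocab = {}
for name in ['A', 'B', 'C']:
    v = rng.randn(D)
    vocab[name] = v / np.linalg.norm(v)  # a random unit vector for each symbol

decoded = vocab['A']  # pretend the net is currently representing A
for name, v in vocab.items():
    print(name, round(np.dot(decoded, v), 3))  # near 1 for A, near 0 for the others
```

Random high-dimensional vectors are nearly orthogonal, which is why the graph can tell the symbols apart.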


Tuesday, June 21, 2011

vectors

Nengo can represent an individual value in each dimension of a net, but it can also treat all the dimensions together as a vector. Vectors are used as symbols in Nengo, i.e., the pattern in the vector represents something specific, such as a word. The dot product measures the similarity between two vectors.

e.g.:

if you have two three-dimensional vectors (a1, a2, a3) and (b1, b2, b3),
you get the dot product by multiplying them element by element and adding up the products
so.... a1*b1 + a2*b2 + a3*b3 = dot product of a and b

To do this in nengo, the parts need to be created, then the products can be added (note that the adding happens for free, just by sending them into the same dimension of a net)

Here is an example (you can test it by making the function controls look similar or dissimilar):

import nef
net=nef.Network('Test Network')
net.add_to(world)

input1=net.make_input('input1',values=[0,0,1])
input2=net.make_input('input2',values=[1,0,0])

A=net.make('A',neurons=100,dimensions=3)
B=net.make('B',neurons=100,dimensions=3)

m1=net.make('m1',neurons=100,dimensions=2)
m2=net.make('m2',neurons=100,dimensions=2)
m3=net.make('m3',neurons=100,dimensions=2)

Dot=net.make('Dot',neurons=100,dimensions=1)

net.connect(input1,A)
net.connect(input2,B)

net.connect(A,m1,transform=[[1,0,0],[0,0,0]])
net.connect(A,m2,transform=[[0,1,0],[0,0,0]])
net.connect(A,m3,transform=[[0,0,1],[0,0,0]])

net.connect(B,m1,transform=[[0,0,0],[1,0,0]])
net.connect(B,m2,transform=[[0,0,0],[0,1,0]])
net.connect(B,m3,transform=[[0,0,0],[0,0,1]])

def multiply(x):
  return x[0]*x[1]

net.connect(m1,Dot,func=multiply)
net.connect(m2,Dot,func=multiply)
net.connect(m3,Dot,func=multiply)
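
The network above implements the dot-product formula directly; here is the same computation in plain Python/numpy for comparison, using the same values as input1 and input2.

```python
import numpy as np

a = np.array([0.0, 0.0, 1.0])  # input1
b = np.array([1.0, 0.0, 0.0])  # input2

# each of m1..m3 holds one (a_i, b_i) pair; the multiply function acts on each pair
products = [x * y for x, y in zip(a, b)]

# Dot receives all three products on its single dimension, so they add for free
dot = sum(products)

print(dot, np.dot(a, b))  # both 0.0: these two inputs are orthogonal
```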