This blog is for notes on how to use the Nengo spiking neuron modeling system

Friday, June 24, 2011

unconvolve

To unconvolve (unbind) something, take the convolved vector and convolve it with the inverse of one of the vectors that went into the convolution. This recovers an approximation of the other original vector. Here is some code:

import nef
import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)


A=net.make_array('A',neurons=8,dimensions=1,length=64)
B=net.make_array('B',neurons=8,dimensions=1,length=64)
C=net.make_array('C',neurons=8,dimensions=1,length=64)
D=net.make_array('D',neurons=8,dimensions=1,length=64)
E=net.make_array('E',neurons=8,dimensions=1,length=64)


# convolve A and B together into C - function only works on pairs
Conv1=nef.convolution.make_convolution(net,'conv1',A,B,C,N_per_D=200,quick=True,mode='direct') # mode='direct' computes the convolution without neurons - saves computing power and produces no noise
Conv2=nef.convolution.make_convolution(net,'conv2',C,D,E,N_per_D=200,quick=True,invert_second=True) # invert_second inverts D before convolving, undoing the binding


net.add_to(world)
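The math behind invert_second can be sketched in plain NumPy (NumPy stands in here for the neural computation; the dimensionality and random seed are arbitrary): circular convolution binds two vectors, and convolving the result with the involution (approximate inverse) of one of them recovers something close to the other.

```python
import numpy as np

def cconv(a, b):
    # circular convolution, computed with FFTs
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(b):
    # approximate inverse: keep the first element, reverse the rest
    return np.concatenate(([b[0]], b[1:][::-1]))

rng = np.random.default_rng(0)
d = 128
a = rng.standard_normal(d) / np.sqrt(d)  # random near-unit vectors, like
b = rng.standard_normal(d) / np.sqrt(d)  # those hrr.Vocabulary generates

c = cconv(a, b)                  # bind a and b
a_hat = cconv(c, involution(b))  # unbind b from the result

# a_hat is similar to a and nearly orthogonal to b
print(np.dot(a_hat, a), np.dot(a_hat, b))
```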

convolution

Convolution is a way of combining vectors into a single vector that represents the combination. There is a function for this in Nengo. Here is an example. If you run it, note that it combines the vectors dog and cat but not mouse; the result is a vector that highlights dog and cat but not mouse.

import nef
import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

input1=net.make_input('input1',values=vocab.parse('cat').v)
input2=net.make_input('input2',values=vocab.parse('dog').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128) # array is less costly to make than a large ensemble
B=net.make_array('B',neurons=30,dimensions=1,length=128) # creates a more localist representation
C=net.make_array('C',neurons=30,dimensions=1,length=128) # will not norm the data as well


# convolve A and B together into C - function only works on pairs
Conv=nef.convolution.make_convolution(net,'conv',A,B,C,N_per_D=200,quick=True) # neurons per dimension N_per_D


net.add_to(world)
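For reference, the circular convolution that make_convolution computes can be written out directly. This NumPy sketch (sizes arbitrary) checks the textbook sum formula against the FFT shortcut that is normally used in practice:

```python
import numpy as np

def cconv_direct(a, b):
    # textbook circular convolution: c[k] = sum_i a[i] * b[(k - i) mod n]
    n = len(a)
    return np.array([sum(a[i] * b[(k - i) % n] for i in range(n))
                     for k in range(n)])

rng = np.random.default_rng(1)
a = rng.standard_normal(8)
b = rng.standard_normal(8)

# the same result computed with FFTs, which is how it is usually implemented
c_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
print(np.allclose(cconv_direct(a, b), c_fft))  # True
```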

Thursday, June 23, 2011

dot product on memory

Here is some code to get the dot product (a measure of similarity) between a memory store and a probe. The memory remembers what it has been exposed to. The dot product will be high if anything in memory matches the probe. Try exposing the memory to two animals, then see how it reacts to an animal it has not seen before.

import nef
import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

#input1=net.make_input('input1',values=vocab.parse('cat').v)
#input2=net.make_input('input2',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=30,dimensions=1,length=128)
C=net.make_array('C',neurons=30,dimensions=1,length=128)

M=net.make_array('M',neurons=30,dimensions=2,length=128)

Dot=net.make('Dot',neurons=100,dimensions=1)



#net.connect(input1,A)
#net.connect(input2,B,weight=0.01)
net.connect(B,B)
net.connect(C,B)

net.connect(A,M,index_post=[i*2 for i in range(128)]) # index_post refers to the destination, index_pre refers to the source
net.connect(B,M,index_post=[i*2+1 for i in range(128)]) # default is to take every one

# the list comprehensions select every second dimension, starting at 0 (for A) or at 1 (for B)


def multiply(x):
  return x[0]*x[1]

net.connect(M,Dot,func=multiply) # the function is applied to each 2-D sub-ensemble of M and the results are summed into Dot

net.add_to(world)
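The idea can be checked without neurons. In this NumPy sketch (dimensions and seed arbitrary) the memory is simply the sum of the vectors it was exposed to:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 128
# stand-ins for the vocabulary vectors cat, dog, mouse
cat, dog, mouse = (rng.standard_normal(d) / np.sqrt(d) for _ in range(3))

memory = cat + dog            # what the store holds after seeing cat and dog

print(np.dot(memory, cat))    # high: cat is in memory
print(np.dot(memory, mouse))  # near zero: mouse was never seen
```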


storing vectors

To store a vector, a net has to feed back on itself; that is, it constantly adds to itself what it just represented. Multiple vectors can be added to this type of net, that is, added to the vector represented in the net. To maintain what was represented before, a new vector cannot be added too quickly; this is controlled by the weight of the input connection. There is an interplay between the weight, the number of neurons, and the number of dimensions. The weight of the feedback loop also has an effect: if it is less than 1 it will cause the stored value to decay.

Here is an example of a memory system that stores a representation of cat, mouse, and dog (use "set value" by clicking on the semantic pointer graph to change the input).

import nef

import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True) # quick=True: if you have created these neurons in the past, re-use them


input1=net.make_input('input1',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=500,dimensions=8,length=16)

net.connect(input1,A)
net.connect(A,B,weight=0.1)
net.connect(B,B)

net.add_to(world)
# put this last - when the network appears you know it has finished loading
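The effect of the feedback weight can be sketched with a toy discrete-time loop (plain Python, no neurons; the step counts and weights here are arbitrary): with a feedback weight of 1 the store holds its value after the input stops, while a weight below 1 causes the stored value to decay.

```python
# a minimal discrete-time sketch of the feedback loop (not a neural simulation):
# each step the store adds a scaled copy of its input to a scaled copy of itself
def run_memory(inputs, feedback=1.0, in_weight=0.1):
    m = 0.0
    trace = []
    for x in inputs:
        m = feedback * m + in_weight * x
        trace.append(m)
    return trace

held = run_memory([1.0] * 50 + [0.0] * 50, feedback=1.0)
leaky = run_memory([1.0] * 50 + [0.0] * 50, feedback=0.95)

print(held[-1])   # the value is held after the input stops
print(leaky[-1])  # feedback < 1 causes the stored value to decay
```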

computer power

Working with vectors usually requires a high-dimensional space, which requires many neurons. Creating these is computationally expensive and may require more memory (RAM) than your computer has. Here are some ways to get around that:

import nef

import hrr
vocab=hrr.Vocabulary(128)
vocab.parse('cat,dog,mouse')

net=nef.Network('Test Network',quick=True)

# quick=True: if you have created these neurons in the past, re-use them

net.add_to(world)

input1=net.make_input('input1',values=vocab.parse('cat').v)

A=net.make_array('A',neurons=30,dimensions=1,length=128)
B=net.make_array('B',neurons=500,dimensions=8,length=16)


# use an array

net.connect(input1,A)
net.connect(A,B,weight=0.3)
net.connect(B,B)

Using an array allows you to create a group of smaller nets that act as one larger net. A computer with less memory can cope with this. In the example, A is composed of 128 groups of 30 neurons that encode 1 dimension each, B is composed of 16 groups of 500 neurons that encode 8 dimensions each.

The relationship between the number of neurons and the number of dimensions is a judgement call, but more neurons will result in a cleaner, less noisy representation.

However, doing it this way creates a more localist representation of the dimensions (e.g., A represents each dimension separately). This will reduce the amount of normalization that occurs.

vector vocabulary

A vocabulary can be set up to relate random vectors to specific symbols. This example shows a vocabulary of A, B, and C. The vector for A is fed into an input function, then into a net called A, then into a net called B. This can also be controlled in the interface by clicking on A and opening the semantic pointer graph, which shows the activation of each vector in the vocabulary. Then click on the graph and choose "set value" to change or combine the vector values. Note that the vector representation is different from the dimension-value representation (also note that by default only the values of the first 5 dimensions are shown; this can be changed by clicking on the graph). The dimensions need to be the same for all of the nets involved.

import nef

import hrr
vocab=hrr.Vocabulary(32)
vocab.parse('A,B,C')

net=nef.Network('Test Network')
net.add_to(world)

input1=net.make_input('input1',values=vocab.parse('A').v)

A=net.make('A',neurons=100,dimensions=32)
B=net.make('B',neurons=100,dimensions=32)

net.connect(input1,A)
net.connect(A,B)
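What hrr.Vocabulary does can be approximated in NumPy (the seed is arbitrary): each symbol gets a random unit vector, and random vectors in even 32 dimensions are only weakly similar to one another, which is what the semantic pointer graph is measuring.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 32
# roughly what hrr.Vocabulary does: one random unit vector per symbol
A, B, C = (v / np.linalg.norm(v) for v in rng.standard_normal((3, d)))

# a vector is maximally similar to itself and only weakly similar to the others
print(np.dot(A, A))  # exactly 1
print(np.dot(A, B), np.dot(A, C))
```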


Tuesday, June 21, 2011

vectors

Nengo can represent an individual value in each dimension of a net, but it can also treat all the dimensions together as a vector. Vectors are used as symbols in Nengo, i.e., the pattern in the vector represents something specific, such as a word. The dot product measures the similarity between two vectors.

e.g.:

if you have two three-dimensional vectors (a1, a2, a3) and (b1, b2, b3),
you get the dot product by multiplying them element-wise and adding up the products
so.... a1*b1 + a2*b2 + a3*b3 = the dot product of a and b
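Worked through in plain Python with made-up numbers:

```python
# dot product of two 3-dimensional vectors, written out by hand
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
print(dot)  # 1*4 + 2*5 + 3*6 = 32.0
```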

To do this in Nengo the parts need to be created, then they can be added (note that the adding happens for free just by putting the vectors into the same dimensions of a net).

Here is an example (you can test it by making the function controls look similar or dissimilar):

import nef
net=nef.Network('Test Network')
net.add_to(world)

input1=net.make_input('input1',values=[0,0,1])
input2=net.make_input('input2',values=[1,0,0])

A=net.make('A',neurons=100,dimensions=3)
B=net.make('B',neurons=100,dimensions=3)

m1=net.make('m1',neurons=100,dimensions=2)
m2=net.make('m2',neurons=100,dimensions=2)
m3=net.make('m3',neurons=100,dimensions=2)

Dot=net.make('Dot',neurons=100,dimensions=1)

net.connect(input1,A)
net.connect(input2,B)

net.connect(A,m1,transform=[[1,0,0],[0,0,0]])
net.connect(A,m2,transform=[[0,1,0],[0,0,0]])
net.connect(A,m3,transform=[[0,0,1],[0,0,0]])

net.connect(B,m1,transform=[[0,0,0],[1,0,0]])
net.connect(B,m2,transform=[[0,0,0],[0,1,0]])
net.connect(B,m3,transform=[[0,0,0],[0,0,1]])

def multiply(x):
  return x[0]*x[1]

net.connect(m1,Dot,func=multiply)
net.connect(m2,Dot,func=multiply)
net.connect(m3,Dot,func=multiply)



Monday, June 20, 2011

Two functions

A net can compute more than one function; here is an example:

import nef
net=nef.Network('Test Network')
net.add_to(world)
input1=net.make_input('input1',values=[0])
input2=net.make_input('input2',values=[0])
input3=net.make_input('input3',values=[0])

A=net.make('A',neurons=100,dimensions=1)
B=net.make('B',neurons=100,dimensions=1)
F=net.make('F',neurons=100,dimensions=1)
C=net.make('C',neurons=100,dimensions=3)
D=net.make('D',neurons=100,dimensions=2)


net.connect(input1,A)
net.connect(input2,B)
net.connect(input3,F)

net.connect(A,C,transform=[[1],[0],[0]])
net.connect(B,C,transform=[[0],[1],[0]])
net.connect(F,C,transform=[[0],[0],[1]])


def multiply(x):
  return x[0]*x[1],x[0]*x[2]   # just add more functions
                               # the dimensionality of the receiving network must match

net.connect(C,D,func=multiply)

neuron range - radius

The radius sets the range of values that a net can represent; the default is 1. However, nets have difficulty representing the extremes of their range because that involves half the neurons firing all the time. Therefore, in practice the usable maximum is lower than the radius.

Strong claim

In order to compute a nonlinear function you must have a combined representation of the variables. So it takes two steps: combine, then compute. The combining can be considered equivalent to the function of the hidden layer in a neural network - see the multiplication example.

multiply and transform

Here is the multiplication example where the result is scaled into two different values by the transform matrix:

import nef
net=nef.Network('Test Network')
net.add_to(world)
input1=net.make_input('input1',values=[0])
input2=net.make_input('input2',values=[0])
A=net.make('A',neurons=100,dimensions=1)
B=net.make('B',neurons=100,dimensions=1)
C=net.make('C',neurons=100,dimensions=2)
D=net.make('D',neurons=100,dimensions=2)


net.connect(input1,A)
net.connect(input2,B)
net.connect(A,C,transform=[[1],[0]])
net.connect(B,C,transform=[[0],[1]],pstc=0.03) # can also set pstc here


def multiply(x):
  return x[0]*x[1]

net.connect(C,D,func=multiply,transform=[[.5],[1]])
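What that last connection computes can be sketched in NumPy (the value for C is made up): the 1-D result of multiply is mapped into D's two dimensions by the transform matrix.

```python
import numpy as np

def multiply(x):
    return x[0] * x[1]

# the transform from the script: one row per dimension of D
transform = np.array([[0.5], [1.0]])

c = [0.8, 0.5]                 # an example value held in C
d = transform @ [multiply(c)]  # what D ends up representing
print(d)  # [0.2 0.4]
```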

script: combining values in a function

This example takes values from two different networks and multiplies them. Any function can be computed in this way, by converting a Python function into connection weights.

import nef
net=nef.Network('Test Network')
net.add_to(world)
input1=net.make_input('input1',values=[0])
input2=net.make_input('input2',values=[0])
A=net.make('A',neurons=100,dimensions=1)
B=net.make('B',neurons=100,dimensions=1)
C=net.make('C',neurons=100,dimensions=2)
D=net.make('D',neurons=100,dimensions=1)


net.connect(input1,A)
net.connect(input2,B)


# transform - each row corresponds to a dimension of the receiving network
# the values in the brackets are the connection weights

net.connect(A,C,transform=[[1],[0]])
net.connect(B,C,transform=[[0],[1]])


# define a python function



def multiply(x):
  return x[0]*x[1]




# compute that function here

net.connect(C,D,func=multiply)

script: passing values

Here is some script from the tutorial for passing values 
 
import nef
net=nef.Network('Test Network')
net.add_to(world)
input=net.make_input('input',values=[0])
A=net.make('A',neurons=100,dimensions=1)
B=net.make('B',neurons=100,dimensions=1)
net.connect(input,A)
net.connect(A,B)
 
Here it is broken down:


# this part sets it up
 
import nef
net=nef.Network('Test Network')
net.add_to(world)
# create an input
# input is the name of the input
# the fact that it is colored suggests input is also a built-in Python name
# probably best to use something else, e.g., input1
input=net.make_input('input',values=[0])
 
# create some networks
A=net.make('A',neurons=100,dimensions=1)
B=net.make('B',neurons=100,dimensions=1)
 
# connect everything up
net.connect(input,A)
net.connect(A,B)
 

nengo script

Nengo uses Python as its scripting language. To create a script, open a text document, write the script, and save it as name.py (i.e., Python code). To run it, open the script from within the Nengo environment and it will run in the same display window used by the Nengo graphical interface. It is not currently possible to go back and forth between the graphical interface and scripts. If you want to encode a custom function into connection weights you need to use a script; the scripting language can encode any Python function as connection weights. That is, Nengo translates directly from Python code to spiking neurons.

see scripting tutorial here:

http://www.arts.uwaterloo.ca/~cnrglab/?q=node/616

Monday, June 6, 2011

create a function

  • functions can be computed by attaching an origin to an ensemble
    • dragging does not work for this (possible bug)
    • set the number of output dimensions
    • set the function
      • default is constant function
        • this would behave like a neural approximation of what an input does
      • user defined
        • define a function using all or some of the input dimensions
        • use x0, x1, x2, etc to refer to the dimensions
        • input dimensions should be set to the number of dimensions of the ensemble; this is the default, so don't change it

passing values: multiple sources

  • each input must have its own termination
  • when there are two or more inputs the vectors are summed after all of them have been through the transformation matrix

passing values: unequal dimensionality

  • fewer dimensions
    • if the input has fewer dimensions then it can use only some of the available dimensions in the ensemble
    • Or, it can put values from one of its dimensions into more than one dimension of the ensemble
    • this can be set in the transform values of the termination
  • more dimensions
    • using the same approach:
      • an input dimension could be set to 0 to ignore it
      • or two or more input dimensions can load on the same dimension in the ensemble
        • in this case the vectors are added
          • the adding takes place after the weights of the transform matrix are applied
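These rules can be sketched in NumPy (the matrices and values are made up): the transform has one row per ensemble dimension and one column per input dimension, and separate inputs are summed after their transforms are applied.

```python
import numpy as np

# a 2-D input feeding a 3-D ensemble: input dimension 0 is copied into
# ensemble dimensions 0 and 2, input dimension 1 is ignored (all-zero column)
T1 = np.array([[1, 0],
               [0, 0],
               [1, 0]])

# a second, 3-D input whose dimensions all load onto ensemble dimension 1
T2 = np.array([[0, 0, 0],
               [1, 1, 1],
               [0, 0, 0]])

x1 = np.array([2.0, 5.0])
x2 = np.array([1.0, 1.0, 1.0])

# the inputs are summed after each transform matrix is applied
y = T1 @ x1 + T2 @ x2
print(y)  # [2. 3. 2.]
```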

passing values: weighting

  • weighting values
    • values can be weighted by changing the weights in the transform matrix
      • click on inspect
      • click on the ensemble
      • click on termination
      • click on A transform (float[][])
      • double click on the transform matrix to edit it
      • note - clicking on the termination sometimes works (possible bug here)

passing values: 1 to 1

  • for templates just use last_used for now
  • create a network
  • create an input
    • an input is an unspecified source of neural activity
    • set the number of dimensions for the unspecified source of neural activity
    • set the functions
      • these are the values for each dimension of the input
      • taken together these values form a vector with n dimensions
  • create an ensemble
    • an ensemble is a group of neurons
    • set the number of dimensions
      • this specifies the dimensionality of the vectors that can be held
  • create a termination for the ensemble
    • the default is for it to have the same number of dimensions as the ensemble
    • the default is for a 1 to 1 mapping between the termination and the ensemble
    • the number of dimensions in the termination must match the number of dimensions being outputted to it
    • the number of dimensions in the termination does not need to match the number of dimensions in the ensemble and the mapping does not need to be 1 to 1 (more on this later)