#67: Hypn(0)tism

 

Rosenblatt describes multiple algorithms for training this model (“reinforcement systems”) that are clearly influenced by Hebb’s theory of learning in real neural networks. Some only strengthen the connections between activated S-Units and A-Units (“positive reinforcement”), some only weaken the connections between activated S-Units and A-Units (“negative reinforcement”), and some use a combination of both. Interestingly, Rosenblatt is able to prove that these algorithms will always (eventually) yield a solution, if such a solution exists.
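To make that convergence claim concrete, here is a minimal sketch in Python. It is my own simplification, not Rosenblatt’s exact reinforcement systems: a single threshold unit whose weights on active inputs are strengthened when it should have fired but stayed silent, and weakened when it fired by mistake. On a linearly separable toy problem such as logical AND, the loop stops making errors after a few passes, which is what the convergence result promises.

```python
def train_threshold_unit(samples, epochs=100, lr=1.0):
    """Simplified error-correction rule (a modern rendering, not Rosenblatt's
    exact procedure): strengthen the weights of active inputs when the unit
    should have fired but did not, weaken them when it fired wrongly."""
    n_inputs = len(samples[0][0])
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in samples:                      # target is 0 or 1
            fired = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if fired != target:
                errors += 1
                sign = 1 if target == 1 else -1        # strengthen or weaken
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
                b += lr * sign
        if errors == 0:                                # every sample classified
            return w, b
    return None                                        # gave up within `epochs`

# Logical AND is linearly separable, so the loop finds separating weights:
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_threshold_unit(and_samples))
```

The guarantee only says that some set of separating weights will eventually be found if one exists; it says nothing about problems where none exists, which is where Minsky and Papert’s critique comes in.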

 

Later Minsky and Papert showed in their book “Perceptrons: An Introduction to Computational Geometry” that such solutions only exist for linearly separable problems. Their famous counterexample was the XOR problem, which can’t be solved by the perceptron as imagined by Rosenblatt.
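A quick way to convince yourself of this (again just a sketch, with an arbitrarily chosen search grid) is to check that no single threshold unit y = [w1·x1 + w2·x2 + b > 0] reproduces XOR:

```python
# Brute-force check: search candidate weights (w1, w2) and bias b for a single
# threshold unit y = [w1*x1 + w2*x2 + b > 0] that reproduces XOR. The grid is
# an arbitrary finite sample of weight space; none of its points work.
xor_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def reproduces_xor(w1, w2, b):
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(target)
               for (x1, x2), target in xor_samples)

grid = [i / 2 for i in range(-20, 21)]      # candidate values from -10 to 10
found = any(reproduces_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print("separating weights found:", found)   # prints: separating weights found: False
```

The finite grid is only illustrative, but the failure holds for any weights: the four XOR cases demand b ≤ 0, w1 + b > 0, w2 + b > 0 and w1 + w2 + b ≤ 0. Adding the two middle inequalities gives w1 + w2 + 2b > 0, and since b ≤ 0 this forces w1 + w2 + b > 0, contradicting the last condition. That is the sense in which XOR is not linearly separable.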
