
Lecture 3

  • For multi-layer neural networks, the average of two good solutions may be a bad solution ⇒ we cannot use the perceptron learning procedure.
  • Here, instead of making the weights get closer to a set of good weights, we just try to make the output get closer to the target output.
  • For linear neurons, the output is \(y = w^T x\)
  • To measure the error, we use the squared error between the output y and the target t.
  • We then use the “delta-rule” for learning: \(\Delta w_i = \epsilon x_i (t-y)\)
  • We start with \(E = \frac 12 \sum\limits_n (t^n - y^n)^2\)
  • Then we differentiate to get the error derivatives for the weights, using the chain rule: \(\frac{\partial E}{\partial w_i} = \sum\limits_n \frac {\partial y^n}{\partial w_i} \frac{d E^n}{d y^n} = - \sum\limits_n x_i^n (t^n - y^n)\), since \(\frac{\partial y^n}{\partial w_i} = x_i^n\) and \(\frac{d E^n}{d y^n} = -(t^n - y^n)\) for \(E^n = \frac 12 (t^n - y^n)^2\)
  • Then we use \(\Delta w_i = - \epsilon \frac{\partial E}{\partial w_i} = \sum\limits_n \epsilon x_i^n (t^n - y^n)\) (a minimal NumPy sketch of this batch update appears after this list)
  • For a linear neuron, the error surface (as a function of the weights) is a quadratic bowl
  • For a logistic neuron, the logit is \(z = b + \sum\limits_i x_i w_i\)
  • \(\frac{\partial z}{\partial w_i} = x_i\)
  • \(\frac{\partial z}{\partial x_i} = w_i\)
  • Then \(y=\frac{1}{1+e^{-z}}\), so \(\frac{dy}{dz} = y(1-y)\)
  • So we get \(\frac{\partial y}{\partial w_i} = \frac {\partial z}{\partial w_i} \frac{d y}{d z} = x_i y (1-y)\)
  • And \(\frac{\partial E}{\partial w_i} = \sum\limits_n \frac {\partial y^n}{\partial w_i} \frac{d E^n}{d y^n} = - \sum\limits_n x_i^n y^n (1-y^n) (t^n - y^n)\) (the second sketch after this list applies this gradient)
  • Randomly perturbing one weight and then checking whether it improves performance may work, but it is very inefficient compared to backpropagation (the last sketch after this list illustrates why).
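
As a concrete illustration of the batch delta rule above, here is a minimal NumPy sketch. It is not code from the lecture: the toy inputs, the "teacher" weights used to generate the targets, the learning rate and the number of epochs are all made up for the example.

<code python>
import numpy as np

# Toy data, made up for illustration: 4 training cases, 3 input components each.
X = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.5, 1.5, 0.5]])
true_w = np.array([1.0, 0.5, 2.0])   # hypothetical "teacher" weights
t = X @ true_w                       # targets t^n
w = np.zeros(3)                      # weights of the linear neuron
eps = 0.02                           # learning rate epsilon (made-up value)

for epoch in range(500):
    y = X @ w                        # linear neuron: y^n = w^T x^n
    # Batch delta rule: delta_w_i = eps * sum_n x_i^n (t^n - y^n)
    w += eps * X.T @ (t - y)

E = 0.5 * np.sum((t - X @ w) ** 2)   # E = 1/2 sum_n (t^n - y^n)^2
print(w, E)                          # w should be close to true_w, E close to 0
</code>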
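
The logistic-neuron gradient derived above fits into the same kind of loop. This is again only a sketch with made-up data and learning rate, not lecture code; it computes \(\frac{\partial E}{\partial w_i} = - \sum\limits_n x_i^n y^n (1-y^n) (t^n - y^n)\) and takes a step against that gradient.

<code python>
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Same made-up inputs as before, with made-up binary targets.
X = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.5, 1.5, 0.5]])
t = np.array([1.0, 0.0, 1.0, 0.0])
w = np.zeros(3)
b = 0.0
eps = 0.2                            # made-up learning rate

for epoch in range(2000):
    z = X @ w + b                    # logit: z = b + sum_i x_i w_i
    y = logistic(z)                  # y = 1 / (1 + e^{-z}), so dy/dz = y (1 - y)
    # dE/dw_i = -sum_n x_i^n y^n (1 - y^n) (t^n - y^n)
    grad_w = -X.T @ (y * (1.0 - y) * (t - y))
    grad_b = -np.sum(y * (1.0 - y) * (t - y))
    w -= eps * grad_w                # move against the gradient
    b -= eps * grad_b

print(w, b, logistic(X @ w + b))     # outputs should move toward the targets
</code>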
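
For contrast, here is a sketch of the naive random-perturbation idea from the last bullet, again on made-up data: perturb one randomly chosen weight, re-evaluate the error on the whole training set, and keep the change only if the error went down. Every tested change costs a full pass over the data and tells us nothing about the other weights, which is why computing all the derivatives at once (backpropagation, once we add hidden layers) is far more efficient.

<code python>
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.5, 1.5, 0.5]])
t = X @ np.array([1.0, 0.5, 2.0])      # targets from hypothetical teacher weights
w = np.zeros(3)

def error(w):
    # Squared error E = 1/2 sum_n (t^n - y^n)^2 for a linear neuron
    y = X @ w
    return 0.5 * np.sum((t - y) ** 2)

E = error(w)
for step in range(5000):
    i = rng.integers(len(w))           # pick one weight at random
    w_new = w.copy()
    w_new[i] += rng.normal(scale=0.1)  # perturb just that weight
    E_new = error(w_new)               # a full pass over the data to test one change
    if E_new < E:                      # keep the perturbation only if the error went down
        w, E = w_new, E_new

print(w, E)
</code>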

FIXME : add the formulas for the backpropagation here (after understanding them!)
