'''ALOPEX''' (an acronym from "'''''AL'''gorithms '''O'''f '''P'''attern '''EX'''traction''") is a correlation-based machine learning algorithm first proposed by [[Evangelia Micheli-Tzanakou|Tzanakou]] and Harth in 1974.
==Principle==
In [[machine learning]], the goal is to train a system to minimize a cost function or (in the case of ALOPEX) a response function. Many training algorithms, such as [[backpropagation]], are susceptible to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences and a stochastic process to overcome this, in an attempt to reach the absolute minimum (or maximum) of the response function.
==Method==
ALOPEX, in its simplest form, is defined by the updating equation:
<math>\Delta W_{ij}(n) = \gamma \, \Delta W_{ij}(n-1) \, \Delta R(n) + r_i(n)</math>
Where:
*<math>n \geq 0</math> is the iteration or time-step.
*<math>\Delta W_{ij}(n)</math> is the difference between the current and previous value of system variable <math>W_{ij}</math> at iteration <math>n</math>.
*<math>\Delta R(n)</math> is the difference between the current and previous value of the response function <math>R</math> at iteration <math>n</math>.
*<math>\gamma</math> is the learning rate parameter (<math>\gamma < 0</math> minimizes <math>R</math>, and <math>\gamma > 0</math> maximizes <math>R</math>).
*<math>r_i(n) \sim N(0,\sigma^2)</math> is an additive Gaussian noise term.
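The update rule above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function name <code>alopex_minimize</code>, the quadratic response function, and the parameter values (<code>gamma</code>, <code>sigma</code>, <code>steps</code>) are assumptions chosen for the example.

```python
import numpy as np

def alopex_minimize(response, W0, gamma=-0.5, sigma=0.05, steps=2000, seed=0):
    """Minimal ALOPEX sketch (illustrative parameter values).

    gamma < 0 minimizes the response function; gamma > 0 would maximize it.
    sigma sets the scale of the Gaussian noise term r_i(n).
    """
    rng = np.random.default_rng(seed)
    W = np.asarray(W0, dtype=float).copy()
    dW = rng.normal(scale=sigma, size=W.shape)  # initial Delta W_ij
    R_prev = response(W)
    for _ in range(steps):
        W = W + dW
        R = response(W)
        dR = R - R_prev  # Delta R(n)
        # Update rule: Delta W_ij(n) = gamma * Delta W_ij(n-1) * Delta R(n) + r_i(n)
        dW = gamma * dW * dR + rng.normal(scale=sigma, size=W.shape)
        R_prev = R
    return W, R_prev

# Toy usage: drive a quadratic response R(W) = sum(W^2) toward its minimum at 0.
W_final, R_final = alopex_minimize(lambda W: float(np.sum(W ** 2)), [2.0, -1.5])
print(R_final)
```

Note that the final response never settles exactly at the minimum: the noise term keeps perturbing the variables, which is precisely what lets the algorithm escape local minima; in practice the noise scale is often reduced over time.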
==Discussion==
Essentially, ALOPEX changes each system variable <math>W_{ij}(n)</math> based on the product of the previous change in the variable <math>\Delta W_{ij}(n-1)</math>, the resulting change in the response function <math>\Delta R(n)</math>, and the learning rate parameter <math>\gamma</math>. Further, to find the absolute minimum (or maximum), the stochastic process <math>r_i(n)</math> (Gaussian or other) is added to stochastically "push" the algorithm out of any local minima.
==References==
*Harth, E., & Tzanakou, E. (1974). Alopex: A stochastic method for determining visual receptive fields. ''Vision Research'', '''14''':1475–1482. [http://dx.doi.org/10.1016/0042-6989(74)90024-8 Abstract from ScienceDirect]
[[Category:Classification algorithms]]
[[Category:Neural networks]]
{{compu-AI-stub}}