'''ALOPEX''' (an acronym for "'''''AL'''gorithms '''O'''f '''P'''attern '''EX'''traction''") is a correlation-based machine learning algorithm first proposed by [[Evangelia Micheli-Tzanakou|Tzanakou]] and Harth in 1974.
 
==Principle==
In [[machine learning]], the goal is to train a system to minimize a cost function or, in ALOPEX's terminology, a response function. Many training algorithms, such as [[backpropagation]], are susceptible to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences and a stochastic process to overcome this, in an attempt to reach the global minimum (or maximum) of the response function.
 
==Method==
ALOPEX, in its simplest form, is defined by the updating equation:
 
<math>\Delta W_{ij}(n) = \gamma \, \Delta W_{ij}(n-1) \, \Delta R(n) + r_i(n) </math>
 
Where:
*<math>n \geq 0</math> is the iteration or time-step.
*<math>\Delta W_{ij}(n)</math> is the difference between the current and previous value of system variable <math>W_{ij}</math> at iteration <math>n</math>.
*<math>\Delta R(n)</math> is the difference between the current and previous value of the response function <math>R</math> at iteration <math>n</math>.
*<math>\gamma</math> is the learning rate parameter (<math>\gamma < 0</math> minimizes <math>R</math>, and <math>\gamma > 0</math> maximizes <math>R</math>).
*<math>r_i(n) \sim N(0,\sigma^2)</math> is additive Gaussian noise.
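The recursion above can be sketched in a few lines of Python. This is a minimal illustration, not the original implementation; the values of <math>\gamma</math>, <math>\sigma</math>, and the iteration count are arbitrary choices, not values from the 1974 paper.

```python
import random

def alopex_step(dw, dR, gamma, noise):
    # Core update: Delta W_ij(n) = gamma * Delta W_ij(n-1) * Delta R(n) + r_i(n)
    return [gamma * di * dR + noise() for di in dw]

def alopex_minimize(R, w, gamma=-0.1, sigma=0.05, iters=1000):
    """Drive R(w) down with the ALOPEX recursion.

    gamma < 0 minimizes R; gamma > 0 would maximize it instead.
    The defaults here are illustrative only.
    """
    noise = lambda: random.gauss(0.0, sigma)
    prev_R = R(w)
    # At n = 0 there is no previous step, so start with pure noise.
    dw = [noise() for _ in w]
    for _ in range(iters):
        w = [wi + di for wi, di in zip(w, dw)]
        cur_R = R(w)
        dw = alopex_step(dw, cur_R - prev_R, gamma, noise)
        prev_R = cur_R
    return w

# Example: minimize R(w) = w1^2 + w2^2 starting from (1, -1)
w_final = alopex_minimize(lambda w: sum(wi * wi for wi in w), [1.0, -1.0])
```

Because the correlation term keeps a step's direction only when the step moved <math>R</math> the right way, the walk drifts toward the optimum, while the noise term keeps it from settling permanently in a local extremum.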
 
==Discussion==
Essentially, ALOPEX changes each system variable <math>W_{ij}(n)</math> based on the product of the previous change in the variable <math>\Delta W_{ij}(n-1)</math>, the resulting change in the response function <math>\Delta R(n)</math>, and the learning rate parameter <math>\gamma</math>. Further, to find the global minimum (or maximum), the stochastic term <math>r_i(n)</math> (Gaussian or other) is added to stochastically "push" the algorithm out of any local minima.
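The sign logic of the update can be seen in a one-variable example (values chosen purely for illustration): with <math>\gamma = -1</math>, suppose the previous step <math>\Delta W(n-1) = 0.1</math> raised the response by <math>\Delta R(n) = 0.2</math>. Then, ignoring the noise term,

<math>\Delta W(n) = \gamma \, \Delta W(n-1) \, \Delta R(n) = (-1)(0.1)(0.2) = -0.02,</math>

so the next step reverses direction. Had the same step lowered the response (<math>\Delta R(n) = -0.2</math>), the product would be <math>+0.02</math>, reinforcing the move. Thus, with <math>\gamma < 0</math>, moves that decrease <math>R</math> are repeated and moves that increase it are undone.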
 
==References==
*Harth, E., & Tzanakou, E. (1974) Alopex: A stochastic method for determining visual receptive fields. Vision Research, '''14''':1475–1482. [http://dx.doi.org/10.1016/0042-6989(74)90024-8 Abstract from ScienceDirect]
 
[[Category:Classification algorithms]]
[[Category:Neural networks]]
 
 
{{compu-AI-stub}}

Revision as of 23:32, 25 July 2013
