{{Merge to|Neural coding|discuss=Talk:Neural coding#Merger possibilities|date=July 2010}}
The '''sparse code''' is a kind of [[neural code]] in which each item is encoded by the strong activation of a relatively small set of neurons, with a different subset of the available neurons used for each item to be encoded.

Sparseness may accordingly refer to temporal sparseness (a given neuron is active during relatively few time periods) or to sparseness in an activated population of neurons; the latter may be defined within one time period as the number of activated neurons relative to the total number of neurons in the population. This appears to be a hallmark of neural computation: in contrast to traditional computers, information is massively distributed across neurons. A major result in neural coding, due to Olshausen and Field, is that sparse coding of natural images produces [[wavelet]]-like oriented filters that resemble the receptive fields of simple cells in the [[visual cortex]].
==Overview==

Given a potentially large set of input patterns, sparse coding algorithms attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
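
A minimal sketch of this idea, assuming scikit-learn's <code>DictionaryLearning</code> as the algorithm and random data standing in for real input patterns; all parameter values below are illustrative assumptions, not values from the sources cited in this article:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))       # 500 input patterns, each 64-dimensional

# Learn 100 representative patterns and a sparse code for each input pattern.
# Sizes and penalty weight are illustrative, not taken from the cited sources.
learner = DictionaryLearning(n_components=100, alpha=1.0, max_iter=50,
                             transform_algorithm="lasso_lars", random_state=0)
codes = learner.fit_transform(X)         # sparse coefficients, shape (500, 100)

# Each input is approximated by combining the learned patterns in the
# proportions given by its (mostly zero) coefficients.
X_hat = codes @ learner.components_
print("mean active coefficients per input:", np.count_nonzero(codes, axis=1).mean())
print("mean reconstruction error:", np.linalg.norm(X - X_hat, axis=1).mean())
</syntaxhighlight>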
==Linear generative model==

Most models of sparse coding are based on the linear generative model.<ref name=Rehn>{{cite journal|first1=Martin|last1=Rehn|first2=Friedrich T.|last2=Sommer|title=A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields|journal=Journal of Computational Neuroscience|year=2007|volume=22|pages=135–146|doi=10.1007/s10827-006-0003-9|url=http://redwood.berkeley.edu/fsommer/papers/rehnsommer07jcns.pdf}}</ref> In this model, the symbols are combined in a [[Linear combination|linear fashion]] to approximate the input.

More formally, given a set of ''k''-dimensional real-valued input vectors <math>\vec{\xi }\in \mathbb{R}^{k}</math>, the goal of sparse coding is to determine ''n'' ''k''-dimensional [[Basis (linear algebra)|basis vectors]] <math>\vec{b_1}, \ldots, \vec{b_n} \in \mathbb{R}^{k}</math> along with a [[Sparse vector|sparse]] ''n''-dimensional vector of weights or coefficients <math>\vec{s} \in \mathbb{R}^{n}</math> for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: <math>\vec{\xi} \approx \sum_{j=1}^{n} s_{j}\vec{b}_{j}</math>.<ref name=Lee>{{cite journal|last1=Lee|first1=Honglak|last2=Battle|first2=Alexis|last3=Raina|first3=Rajat|last4=Ng|first4=Andrew Y.|title=Efficient sparse coding algorithms|journal=Advances in Neural Information Processing Systems|year=2006|url=http://ai.stanford.edu/~hllee/nips06-sparsecoding.pdf}}</ref>
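
One common way to infer the coefficients for a fixed basis is to minimize the squared reconstruction error plus an L1 sparseness penalty, for example with iterative shrinkage-thresholding (ISTA). The sketch below assumes a random basis, an arbitrary penalty weight <code>lam</code>, and a fixed iteration count; none of these are taken from the cited papers:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
k, n = 64, 256                            # input dimension k; n basis vectors (n > k: overcomplete)
B = rng.standard_normal((k, n))           # basis vectors b_1..b_n as columns (assumed random)
B /= np.linalg.norm(B, axis=0)            # normalize each basis vector
xi = rng.standard_normal(k)               # one input vector

lam = 0.1                                 # L1 sparseness penalty weight (assumed)
eta = 1.0 / np.linalg.norm(B, 2) ** 2     # step size from the largest singular value of B
s = np.zeros(n)                           # coefficient vector s, initially all zero

# ISTA: minimize 0.5*||xi - B s||^2 + lam*||s||_1 by alternating a gradient
# step on the reconstruction error with soft-thresholding of the coefficients.
for _ in range(500):
    s = s + eta * B.T @ (xi - B @ s)
    s = np.sign(s) * np.maximum(np.abs(s) - eta * lam, 0.0)

print("active coefficients:", np.count_nonzero(s), "of", n)
print("reconstruction error:", np.linalg.norm(xi - B @ s))
</syntaxhighlight>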
The codings generated by algorithms implementing a linear generative model can be classified into codings with ''soft sparseness'' and those with ''hard sparseness''.<ref name=Rehn/> These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth [[Normal distribution|Gaussian]]-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, ''no'' or ''hardly any'' small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing.<ref name=Rehn/>
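
The two regimes are easy to see numerically: an L2 (ridge) penalty on the coefficients yields soft sparseness, shrinking coefficients toward zero without making them exactly zero, while an L1 (lasso) penalty yields hard sparseness, with most coefficients exactly zero. A minimal sketch with assumed, illustrative parameter values:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
k, n = 64, 128
B = rng.standard_normal((k, n))           # basis vectors as columns (assumed random)
B /= np.linalg.norm(B, axis=0)
xi = B[:, :5] @ rng.standard_normal(5)    # input built from only 5 basis vectors

# L2 penalty (ridge): soft sparseness -- coefficients shrink but rarely reach zero.
soft = Ridge(alpha=1.0, fit_intercept=False).fit(B, xi).coef_
# L1 penalty (lasso): hard sparseness -- most coefficients are exactly zero.
hard = Lasso(alpha=0.01, fit_intercept=False).fit(B, xi).coef_

print("ridge zero coefficients:", int(np.sum(np.abs(soft) < 1e-10)), "of", n)
print("lasso zero coefficients:", int(np.sum(np.abs(hard) < 1e-10)), "of", n)
</syntaxhighlight>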
Another measure of coding is whether it is ''critically complete'' or ''overcomplete''. If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is ''overcomplete''. Overcomplete codings smoothly interpolate between input vectors and are robust under input noise.<ref name=Olshausen>{{cite journal|first1=Bruno A.|last1=Olshausen|first2=David J.|last2=Field|title=Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?|journal=Vision Research|year=1997|volume=37|number=23|pages=3311–3325|url=http://www.chaos.gwdg.de/~michael/CNS_course_2004/papers_max/OlshausenField1997.pdf}}</ref> The human primary [[visual cortex]] is estimated to be overcomplete by a factor of 500, so that, for example, a 14 × 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons.<ref name=Rehn/>
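
The robustness to input noise can be illustrated on toy data: a signal built from a few basis vectors of an overcomplete dictionary is corrupted with noise, sparse-coded, and reconstructed. The dictionary, noise level, and penalty weight below are illustrative assumptions rather than values from the cited sources; in typical runs the reconstruction lies closer to the clean signal than the noisy input does, because coefficients for basis vectors that merely fit the noise are suppressed by the sparseness penalty:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
k, n = 64, 256                                 # overcomplete by a factor of 4 (assumed)
B = rng.standard_normal((k, n))
B /= np.linalg.norm(B, axis=0)

atoms = rng.choice(n, size=5, replace=False)
clean = B[:, atoms] @ rng.standard_normal(5)   # signal: sparse combination of 5 basis vectors
noisy = clean + 0.1 * rng.standard_normal(k)   # add input noise (assumed level)

# Sparse-code the noisy input, then reconstruct from the inferred coefficients.
s = Lasso(alpha=0.01, fit_intercept=False).fit(B, noisy).coef_
denoised = B @ s

print("distance of noisy input to clean signal:   ", np.linalg.norm(noisy - clean))
print("distance of reconstruction to clean signal:", np.linalg.norm(denoised - clean))
</syntaxhighlight>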
==See also==

* [[Rate coding]]
* [[Independent-spike coding]]
* [[Correlation coding]]
* [[Population coding]]
* [[Grandmother cell]]
* [[Non-negative matrix factorization]]
==References==

{{reflist}}
==Bibliography==

* Földiák P, Endres D. [http://www.scholarpedia.org/article/Sparse_Coding Sparse coding]. [[Scholarpedia]], 3(1):2984, 2008.
* Dayan P, Abbott LF. ''Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems''. Cambridge, Massachusetts: The MIT Press; 2001. ISBN 0-262-04199-5.
* Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W. ''Spikes: Exploring the Neural Code''. Cambridge, Massachusetts: The MIT Press; 1999. ISBN 0-262-68108-0.
* Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. ''Nature'', 381(6583):607–609, June 1996.

[[Category:Neural coding]]