{{Machine learning bar}}
In [[machine learning]] and [[statistics]], '''classification''' is the problem of identifying to which of a set of [[categorical data|categories]] (sub-populations) a new [[observation]] belongs, on the basis of a [[training set]] of data containing observations (or instances) whose category membership is known.  The individual observations are analyzed into a set of quantifiable properties, known variously as [[explanatory variables]], ''features'', etc.  These properties may variously be [[categorical data|categorical]] (e.g. "A", "B", "AB" or "O", for [[blood type]]), [[ordinal data|ordinal]] (e.g. "large", "medium" or "small"), [[integer|integer-valued]] (e.g. the number of occurrences of a particular word in an [[email]]) or [[real number|real-valued]] (e.g. a measurement of [[blood pressure]]).  Some [[algorithm]]s work only in terms of discrete data and require that real-valued or integer-valued data be ''discretized'' into groups (e.g. less than 5, between 5 and 10, or greater than 10).  An example would be assigning a given email to the "spam" or "non-spam" class, or assigning a diagnosis to a given patient on the basis of observed characteristics of the patient (gender, blood pressure, presence or absence of certain symptoms, etc.).
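
A minimal sketch of such discretization (the thresholds 5 and 10 are taken from the illustration above; the function is purely hypothetical, not part of any standard library):

<syntaxhighlight lang="python">
# Minimal sketch: discretizing a real-valued feature into ordinal groups.
# The thresholds (5 and 10) mirror the example in the text and are arbitrary.

def discretize(value):
    """Map a real-valued measurement to one of three ordinal groups."""
    if value < 5:
        return "less than 5"
    elif value <= 10:
        return "between 5 and 10"
    else:
        return "greater than 10"

print([discretize(v) for v in [3.2, 7.0, 12.5]])
# ['less than 5', 'between 5 and 10', 'greater than 10']
</syntaxhighlight>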
 
An algorithm that implements classification, especially in a concrete implementation, is known as a '''[[Pattern_recognition|classifier]]'''.  The term "classifier" sometimes also refers to the mathematical [[function (mathematics)|function]], implemented by a classification algorithm, that maps input data to a category.
 
In the terminology of machine learning, classification is considered an instance of [[supervised learning]], i.e. learning where a training set of correctly identified observations is available.  The corresponding [[unsupervised learning|unsupervised]] procedure is known as ''clustering'' or [[cluster analysis]], and involves grouping data into categories based on some measure of inherent similarity (e.g. the [[distance]] between instances, considered as vectors in a multi-dimensional [[vector space]]).
 
Terminology across fields is quite varied. In [[statistics]], where classification is often done with [[logistic regression]] or a similar procedure, the properties of observations are termed [[explanatory variable]]s (or [[independent variable]]s, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the [[dependent variable]].  In machine learning, the observations are often known as ''instances'', the explanatory variables are termed ''features'' (grouped into a [[feature vector]]), and the possible categories to be predicted are ''classes''.  There is also some argument over whether classification methods that do not involve a [[statistical model]] can be considered "statistical".  Other fields may use different terminology: e.g. in [[community ecology]], the term "classification" normally refers to [[cluster analysis]], i.e. a type of [[unsupervised learning]], rather than the supervised learning described in this article.
 
==Relation to other problems==
Classification and clustering are examples of the more general problem of [[pattern recognition]], which is the assignment of some sort of output value to a given input value.  Other examples are [[regression analysis|regression]], which assigns a real-valued output to each input; [[sequence labeling]], which assigns a class to each member of a sequence of values (for example, [[part of speech tagging]], which assigns a [[part of speech]] to each word in an input sentence); [[parsing]], which assigns a [[parse tree]] to an input sentence, describing the [[syntactic structure]] of the sentence; etc.
 
A common subclass of classification is '''probabilistic classification'''.  Algorithms of this nature use [[statistical inference]] to find the best class for a given instance.  Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a [[probability]] of the instance being a member of each of the possible classes.  The best class is then normally selected as the one with the highest probability.  Such an algorithm has several advantages over non-probabilistic classifiers:
*It can output a confidence value associated with its choice (in general, a classifier that can do this is known as a ''confidence-weighted classifier'').
*Correspondingly, it can ''abstain'' when its confidence of choosing any particular output is too low (see the sketch after this list).
*Because of the probabilities which are generated, probabilistic classifiers can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of ''error propagation''.
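
A minimal sketch of the abstention behaviour (the probabilities and the 0.7 threshold are illustrative assumptions, not standard values):

<syntaxhighlight lang="python">
# Minimal sketch: a probabilistic classifier abstains when its top
# class probability falls below a confidence threshold (0.7 here is
# an arbitrary choice, not a standard value).

def classify_or_abstain(class_probs, threshold=0.7):
    """class_probs: dict mapping class label -> probability."""
    best_class = max(class_probs, key=class_probs.get)
    if class_probs[best_class] < threshold:
        return None  # abstain: no class is confident enough
    return best_class

print(classify_or_abstain({"spam": 0.55, "non-spam": 0.45}))  # None (abstain)
print(classify_or_abstain({"spam": 0.92, "non-spam": 0.08}))  # 'spam'
</syntaxhighlight>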
 
==Frequentist procedures==
 
Early work on statistical classification was undertaken by Fisher,<ref>[[R. A. Fisher|Fisher R.A.]] (1936) "The use of multiple measurements in taxonomic problems", ''Annals of Eugenics'', 7, 179&ndash;188</ref><ref>Fisher R.A. (1938) "The statistical utilization of multiple measurements", ''Annals of Eugenics'', 8, 376&ndash;386</ref> in the context of two-group problems, leading to [[Fisher's linear discriminant]] function as the rule for assigning a group to a new observation.<ref name=G1977>Gnanadesikan, R. (1977) ''Methods for Statistical Data Analysis of Multivariate Observations'', Wiley. ISBN 0-471-30845-5 (p. 83&ndash;86)</ref> This early work assumed that data-values within each of the two groups had a [[multivariate normal distribution]]. The extension of this same context to more than two groups has also been considered, with a restriction imposed that the classification rule should be [[linear]].<ref name=G1977/><ref>[[C. R. Rao|Rao, C.R.]] (1952) ''Advanced Statistical Methods in Multivariate Analysis'', Wiley. (Section 9c)</ref> Later work for the multivariate normal distribution allowed the classifier to be [[nonlinear]]:<ref>[[T. W. Anderson|Anderson, T.W.]] (1958) ''An Introduction to Multivariate Statistical Analysis'', Wiley.</ref> several classification rules can be derived based on slightly different adjustments of the [[Mahalanobis distance]], with a new observation being assigned to the group whose centre has the lowest adjusted distance from the observation.
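
A minimal sketch of such a Mahalanobis-distance rule, assuming a common covariance matrix for the groups (all numbers below are invented for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of classification by Mahalanobis distance: assign a
# new observation to the group whose centre is closest under the
# (shared) inverse covariance.  Data below are invented for illustration.

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

group_means = {"A": np.array([0.0, 0.0]), "B": np.array([3.0, 3.0])}
cov = np.array([[1.0, 0.3], [0.3, 1.0]])   # assumed common covariance
cov_inv = np.linalg.inv(cov)

x_new = np.array([1.0, 0.5])
assigned = min(group_means, key=lambda g: mahalanobis(x_new, group_means[g], cov_inv))
print(assigned)  # 'A'
</syntaxhighlight>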
 
==Bayesian procedures==
 
Unlike frequentist procedures, Bayesian classification procedures provide a natural way of taking into account any available information about the relative sizes of the sub-populations associated with the different groups within the overall population.<ref>Binder, D.A. (1978) "Bayesian cluster analysis", ''[[Biometrika]]'', 65, 31&ndash;38.</ref> Bayesian procedures tend to be computationally expensive and, in the days before [[Markov chain Monte Carlo]] computations were developed, approximations for Bayesian clustering rules were devised.<ref>Binder, D.A. (1981) "Approximations to Bayesian clustering rules", ''[[Biometrika]]'', 68, 275&ndash;285.</ref>
 
Some Bayesian procedures involve the calculation of  [[class membership probabilities|group membership probabilities]]: these can be viewed as providing a more informative outcome of a data analysis than a simple attribution of a single group-label to each new observation.
 
==Binary and multiclass classification==
Classification can be thought of as two separate problems: [[binary classification]] and [[multiclass classification]]. In binary classification, the better-understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes.<ref>Har-Peled, S., Roth, D., Zimak, D. (2003) "Constraint Classification for Multiclass Classification and Ranking." In: Becker, B., Thrun, S., Obermayer, K. (Eds) ''Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference'', MIT Press. ISBN 0-262-02550-7</ref> Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers.
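
A minimal sketch of one such combination strategy, one-versus-rest, in which one binary scorer is obtained per class and the class whose scorer responds most strongly is predicted (the scorers below are invented stand-ins for trained binary classifiers):

<syntaxhighlight lang="python">
# Minimal sketch of one-versus-rest multiclass classification: one
# binary scorer per class, pick the class whose scorer responds most
# strongly.  The scorers below are invented stand-ins for trained
# binary classifiers.

def one_vs_rest_predict(x, binary_scorers):
    """binary_scorers: dict mapping class label -> score function."""
    return max(binary_scorers, key=lambda c: binary_scorers[c](x))

scorers = {
    "setosa":     lambda x: -x[0] + 2.0,   # hypothetical trained scorers
    "versicolor": lambda x:  x[0] - 1.0,
    "virginica":  lambda x:  x[0] - 3.0,
}
print(one_vs_rest_predict([2.5], scorers))  # 'versicolor'
</syntaxhighlight>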
 
== Feature vectors ==
Most algorithms describe an individual instance whose category is to be predicted using a [[feature vector]] of individual, measurable properties of the instance.  Each property is termed a [[feature (pattern recognition)|feature]], also known in statistics as an [[explanatory variable]] (or [[independent variable]], although in general different features may or may not be [[statistically independent]]).  Features may variously be [[binary data|binary]] ("male" or "female"); [[categorical data|categorical]] (e.g. "A", "B", "AB" or "O", for [[blood type]]); [[ordinal data|ordinal]] (e.g. "large", "medium" or "small"); [[integer|integer-valued]] (e.g. the number of occurrences of a particular word in an email); or [[real number|real-valued]] (e.g. a measurement of blood pressure).  If the instance is an image, the feature values might correspond to the pixels of an image; if the instance is a piece of text, the feature values might be occurrence frequencies of different words.  Some algorithms work only in terms of discrete data and require that real-valued or integer-valued data be ''discretized'' into groups (e.g. less than 5, between 5 and 10, or greater than 10).
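
For instance, a minimal sketch of turning a piece of text into a feature vector of word-occurrence counts (the vocabulary is an arbitrary illustrative choice):

<syntaxhighlight lang="python">
from collections import Counter

# Minimal sketch: turning a piece of text into a feature vector of
# word-occurrence counts over a fixed vocabulary (vocabulary chosen
# arbitrarily for illustration).

vocabulary = ["free", "meeting", "offer", "report"]

def text_to_feature_vector(text):
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

print(text_to_feature_vector("Free offer free gift"))     # [2, 0, 1, 0]
print(text_to_feature_vector("Meeting report attached"))  # [0, 1, 0, 1]
</syntaxhighlight>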
 
The [[vector space]] associated with these vectors is often called the ''[[feature space]]''. In order to reduce the dimensionality of the feature space, a number of [[dimensionality reduction]] techniques can be employed.
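
As one example, [[principal component analysis]] is a common dimensionality-reduction technique; a minimal sketch that projects centred feature vectors onto the directions of greatest variance (the data matrix is invented for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Minimal PCA sketch for dimensionality reduction: project centred
# feature vectors onto the top-k eigenvectors of the covariance matrix.
# The data matrix X (rows = instances) is invented for illustration.

def pca_reduce(X, k):
    Xc = X - X.mean(axis=0)                      # centre the data
    cov = np.cov(Xc, rowvar=False)               # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top                              # reduced coordinates

X = np.array([[2.0, 0.1, 1.0], [1.0, 0.2, 0.9],
              [3.0, 0.1, 1.1], [2.5, 0.3, 1.0]])
print(pca_reduce(X, 2).shape)  # (4, 2)
</syntaxhighlight>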
 
== Linear classifiers ==
A large number of [[algorithm]]s for classification can be phrased in terms of a [[linear function]] that assigns a score to each possible category ''k'' by [[linear combination|combining]] the feature vector of an instance with a vector of weights, using a [[dot product]].  The predicted category is the one with the highest score.  This type of score function is known as a [[linear predictor function]] and has the following general form:
 
:<math>\operatorname{score}(\mathbf{X}_i,k) = \boldsymbol\beta_k \cdot \mathbf{X}_i,</math>
 
where '''X'''<sub>''i''</sub> is the feature vector for instance ''i'', '''&beta;'''<sub>''k''</sub> is the vector of weights corresponding to category ''k'', and score('''X'''<sub>''i''</sub>, ''k'') is the score associated with assigning instance ''i'' to category ''k''. In [[discrete choice]] theory, where instances represent people and categories represent choices, the score is considered the [[utility]] associated with person ''i'' choosing category ''k''.
 
Algorithms with this basic setup are known as [[linear classifier]]s.  What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted.
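
A minimal sketch of prediction with such a linear predictor function (the weight vectors here are invented; obtaining them is precisely the training problem that distinguishes the algorithms listed below):

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of a linear classifier at prediction time:
# score(X_i, k) = beta_k . X_i, and the predicted category is the
# argmax over k.  Weight vectors below are invented for illustration.

weights = {                       # beta_k for each category k
    "spam":     np.array([ 1.2, -0.5,  0.3]),
    "non-spam": np.array([-0.7,  0.9,  0.1]),
}

def predict(x):
    scores = {k: float(beta @ x) for k, beta in weights.items()}
    return max(scores, key=scores.get)

x_i = np.array([1.0, 0.2, 3.0])   # feature vector for instance i
print(predict(x_i))  # 'spam'
</syntaxhighlight>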
 
Examples of such algorithms are
*[[Logistic regression]] and [[multinomial logit]]
*[[Probit regression]]
*The [[perceptron]] algorithm
*[[Support vector machine]]s
*[[Linear discriminant analysis]].
 
== Algorithms ==
{{prose|date=May 2012}}
The most widely used classifiers are the [[neural network]] (multi-layer perceptron), [[support vector machines]], [[k-nearest neighbor algorithm|k-nearest neighbours]], Gaussian mixture model, Gaussian, [[naive Bayes]], [[decision tree]] and [[radial basis function|RBF]] classifiers.{{cn|date=May 2012}}
 
Examples of classification algorithms include:
* [[Linear classifier]]s
** [[Fisher's linear discriminant]]
** [[Logistic regression]]
** [[Naive Bayes classifier]]
** [[Perceptron]]
*[[Support vector machine]]s
**[[Least squares support vector machine]]s
* [[Quadratic classifier]]s
* [[Variable kernel density estimation#Use for statistical classification|Kernel estimation]]
** [[k-nearest neighbor algorithm|k-nearest neighbor]]
* [[Boosting (meta-algorithm)]]
* [[Decision tree learning|Decision tree]]s
** [[Random forest]]s
* [[Artificial neural networks|Neural network]]s
* [[Gene expression programming|Gene Expression Programming]]
* [[Bayesian network]]s
* [[Hidden Markov model]]s
* [[Learning vector quantization]]
 
== Evaluation ==
Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by the [[No free lunch in search and optimization|no-free-lunch theorem]]). Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.
 
[[Precision and recall]] are popular metrics for evaluating the quality of a classification system. More recently, [[receiver operating characteristic]] (ROC) curves have been used to evaluate the tradeoff between the true- and false-positive rates of classification algorithms.
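
A minimal sketch of computing precision and recall from true and predicted binary labels (the labels are invented for illustration):

<syntaxhighlight lang="python">
# Minimal sketch: precision and recall computed from predicted and
# true binary labels (1 = positive class).  Labels are invented.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
</syntaxhighlight>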
 
As a performance metric, the [[uncertainty coefficient]] has the advantage over simple [[accuracy]] in that it is not affected by the relative sizes of the different classes.
<ref name="Mills2010">
{{Cite journal
| author = Peter Mills
| title = Efficient statistical classification of satellite measurements
| journal = International Journal of Remote Sensing
| doi= 10.1080/01431161.2010.507795
| year = 2011
}}</ref>
Further, it will not penalize an algorithm for simply ''rearranging'' the classes.
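
A minimal sketch of the uncertainty coefficient, computed here as the fraction of the class entropy explained by the predictions, U(C|P) = I(C;P) / H(C), where C are the true classes and P the predicted classes (labels invented; note the value is unchanged if the predicted labels are merely renamed):

<syntaxhighlight lang="python">
import math
from collections import Counter

# Minimal sketch of the uncertainty coefficient U(C|P) = I(C;P) / H(C):
# the fraction of the class entropy explained by the predictions.
# Labels are invented; renaming the predicted labels leaves the value
# unchanged, so mere rearrangement of classes is not penalized.

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def uncertainty_coefficient(true, pred):
    h_c = entropy(true)
    h_p = entropy(pred)
    h_cp = entropy(list(zip(true, pred)))   # joint entropy H(C, P)
    mutual_info = h_c + h_p - h_cp          # I(C; P)
    return mutual_info / h_c

true = ["a", "a", "b", "b", "a", "b"]
pred = ["a", "a", "b", "a", "a", "b"]
print(round(uncertainty_coefficient(true, pred), 3))  # 0.459
</syntaxhighlight>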
 
==Application domains==
Classification has many applications. In some of these, it is employed as a [[data mining]] procedure, while in others more detailed statistical modeling is undertaken.
 
* [[Computer vision]]
** [[Medical imaging]] and medical image analysis
** [[Optical character recognition]]
** [[Video tracking]]
* [[Drug discovery]] and [[Drug development|development]]
** [[Toxicogenomics]]
** [[Quantitative structure-activity relationship]]
* [[Geostatistics]]
* [[Speech recognition]]
* [[Handwriting recognition]]
* [[Biometric]] identification
*[[Biological classification]]
* [[Statistical natural language processing]]
* [[Document classification]]
* Internet [[search engines]]
* [[Credit scoring]]
* [[Pattern recognition]]
 
{{More footnotes|date=January 2010}}
 
== See also ==
* [[Class membership probabilities]]
* [[Classification rule]]
* [[Binary classification]]
* [[Compound term processing]]
* [[Data mining]]
* [[Fuzzy logic]]
* [[Data warehouse]]
* [[Information retrieval]]
* [[Artificial intelligence]]
* [[Machine learning]]
* [[Pattern recognition]]
 
==References==
{{Reflist}}
 
==External links==
* [http://blog.peltarion.com/2006/07/10/classifier-showdown/ Classifier showdown] A practical comparison of classification algorithms.
* [http://cmp.felk.cvut.cz/cmp/software/stprtool/ Statistical Pattern Recognition Toolbox for Matlab].
* [http://sites.google.com/site/tooldiag/ TOOLDIAG Pattern recognition toolbox].
* [http://libagf.sourceforge.net Statistical classification software] based on [[adaptive kernel density estimation]].
* [https://pal.sri.com/Plone/framework/Components/learning-methods/classification-suite-jw PAL Classification suite] written in Java.
* [http://www.math.le.ac.uk/people/ag153/homepage/KNN/KNN3.html  kNN and Potential energy] (Applet), [[University of Leicester]]
 
 
{{Portal|Statistics}}
{{Statistics|analysis||state=expanded}}
 
{{DEFAULTSORT:Statistical Classification}}
[[Category:Machine learning]]
[[Category:Classification algorithms|*]]
[[Category:Statistical classification| ]]
 
[[fa:شناساگر (ریاضیات)]]
