In [[statistical modeling]] (especially [[process modeling]]), polynomial functions and rational functions are sometimes used as an empirical technique for [[curve fitting]].
 
==Polynomial function models==
{{Main|polynomial interpolation}}
A [[polynomial function]] is one that has the form
 
:<math>
y = a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{2}x^{2} + a_{1}x + a_{0}
</math>
 
where ''n'' is a non-negative [[integer]] that defines the degree of the polynomial. A polynomial of degree 0 is simply a [[constant function]]; a polynomial of degree 1 is a [[linear function|line]]; one of degree 2 is a [[quadratic function|quadratic]]; one of degree 3 is a [[cubic function|cubic]]; and so on.
 
Historically, polynomial models are among the most frequently used empirical models for [[curve fitting]].
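
As a simple illustration of polynomial curve fitting, the following sketch fits a quadratic by [[Ordinary least squares|linear least squares]]; it assumes NumPy is available and uses purely illustrative data.
<syntaxhighlight lang="python">
import numpy as np

# Purely illustrative data
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 1.9, 3.2, 4.8, 7.1, 9.9, 13.2])

# Fit a degree-2 polynomial y = a2*x^2 + a1*x + a0 by least squares
coeffs = np.polyfit(x, y, deg=2)        # highest-degree coefficient first
fitted = np.polyval(coeffs, x)          # evaluate the fitted polynomial

print(coeffs)
print(np.sum((y - fitted) ** 2))        # residual sum of squares
</syntaxhighlight>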
 
===Advantages===
These models are popular for the following reasons.
#Polynomial models have a simple form.
#Polynomial models have well known and understood properties.
#Polynomial models have moderate flexibility of shapes.
#Polynomial models are a closed family. [[translation (geometry)|Changes of location]] and [[scaling (geometry)|scale]] in the raw data result in a polynomial model being mapped to a polynomial model. That is, polynomial models are not dependent on the underlying [[metric (mathematics)|metric]].
#Polynomial models are computationally easy to use.
 
===Disadvantages===
However, polynomial models also have the following limitations.
#Polynomial models have poor [[interpolation|interpolatory]] properties. High-degree polynomials are notorious for [[Runge's phenomenon|oscillations between exact-fit values]] (see the sketch after this list).
#Polynomial models have poor [[extrapolation|extrapolatory]] properties. Polynomials may provide good fits within the range of data, but they will frequently deteriorate rapidly outside the range of the data.
#Polynomial models have poor [[asymptote|asymptotic]] properties. By their nature, polynomials have a finite response for finite ''x'' values and have an infinite response if and only if the ''x'' value is infinite. Thus polynomials may not model asymptotic phenomena very well.
#While no procedure is immune to the [[bias of an estimator|bias]]-[[variance]] tradeoff, polynomial models exhibit a particularly poor tradeoff between shape and degree. In order to model data with a complicated structure, the degree of the model must be high, indicating that the associated number of [[statistical parameter|parameter]]s to be [[estimation theory|estimated]] will also be high. This can result in highly unstable models.
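
The oscillation noted above can be demonstrated on Runge's function. The following is a minimal sketch, assuming NumPy, that interpolates it with a degree-10 polynomial through equally spaced points and measures the error.
<syntaxhighlight lang="python">
import numpy as np

def runge(x):
    """Runge's function, 1 / (1 + 25 x^2), on [-1, 1]."""
    return 1.0 / (1.0 + 25.0 * x ** 2)

# Interpolate with a degree-10 polynomial through 11 equally spaced nodes
nodes = np.linspace(-1.0, 1.0, 11)
coeffs = np.polyfit(nodes, runge(nodes), deg=10)

# The error near the ends of the interval is large, even though the
# polynomial matches the function exactly at the nodes themselves
grid = np.linspace(-1.0, 1.0, 201)
print(np.max(np.abs(np.polyval(coeffs, grid) - runge(grid))))
</syntaxhighlight>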
 
When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit.
 
==Rational function models==
A [[rational function]] is simply the ratio of two polynomial functions.
:<math>
y = \frac{a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots + a_{2}x^{2} + a_{1}x + a_{0}} {b_{m}x^{m} + b_{m-1}x^{m-1} + \cdots + b_{2}x^{2} + b_{1}x + b_{0}}
</math>
where ''n'' is a non-negative integer that defines the degree of the numerator and ''m'' is a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. A rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant).
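
For illustration, a quadratic/cubic rational function of this form can be written out directly. The sketch below assumes NumPy, fixes the denominator's constant term at 1, and uses arbitrary example coefficients.
<syntaxhighlight lang="python">
import numpy as np

def quadratic_cubic(x, a0, a1, a2, b1, b2, b3):
    """Quadratic/cubic rational function model: degree-2 numerator,
    degree-3 denominator, with the denominator's constant term set to 1."""
    numerator = a0 + a1 * x + a2 * x ** 2
    denominator = 1.0 + b1 * x + b2 * x ** 2 + b3 * x ** 3
    return numerator / denominator

# Evaluate on a grid with arbitrary example coefficients
x = np.linspace(0.0, 5.0, 11)
y = quadratic_cubic(x, a0=1.0, a1=0.5, a2=-0.2, b1=0.3, b2=0.1, b3=0.05)
print(y)
</syntaxhighlight>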
 
===Advantages===
Rational function models have the following advantages:
#Rational function models have a moderately simple form.
#Rational function models are a closed family. As with polynomial models, this means that rational function models are not dependent on the underlying metric.
#Rational function models can take on an extremely wide range of shapes, accommodating a much wider range of shapes than does the polynomial family.
#Rational function models have better interpolatory properties than polynomial models. Rational functions are typically smoother and less oscillatory than polynomial models.
#Rational functions have excellent extrapolatory powers. Rational functions can typically be tailored to model the function not only within the domain of the data, but also so as to be in agreement with theoretical/asymptotic behavior outside the domain of interest.
#Rational function models have excellent asymptotic properties. Rational functions can be either finite or infinite for finite ''x'' values, and either finite or infinite for infinite ''x'' values. Thus, a wide range of asymptotic behavior can easily be built into a rational function model.
#Rational function models can often be used to model complicated structure with a fairly low degree in both the numerator and denominator. This in turn means that fewer coefficients will be required compared to the polynomial model.
#Rational function models are moderately easy to handle computationally. Although they are [[nonlinear regression|nonlinear models]], rational function models are particularly easy nonlinear models to fit.
 
===Disadvantages===
Rational function models have the following disadvantages:
#The properties of the rational function family are not as well known to engineers and scientists as are those of the polynomial family. The literature on the rational function family is also more limited. Because the properties of the family are often not well understood, it can be difficult to answer the following modeling question: ''Given that data has a certain shape, what values should be chosen for the degree of the numerator and the degree of the denominator?''
#Unconstrained rational function fitting can, at times, result in undesired vertical [[asymptote]]s due to roots in the denominator polynomial. The range of ''x'' values affected by the function "blowing up" may be quite narrow, but such asymptotes, when they occur, are a nuisance for local interpolation in the neighborhood of the asymptote point. These asymptotes are easy to detect by a simple plot of the fitted function over the range of the data, or numerically, as in the sketch after this list. These nuisance asymptotes occur occasionally and unpredictably, but practitioners argue that the gain in flexibility of shapes is well worth the chance that they may occur, and that such asymptotes should not discourage choosing rational function models for empirical modeling.
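
Besides plotting, the denominator of the fitted model can be checked directly for real roots inside the data range. A minimal sketch, assuming NumPy and a denominator whose constant term is fixed at 1:
<syntaxhighlight lang="python">
import numpy as np

def asymptotes_in_range(b, x_min, x_max):
    """Real roots of the denominator 1 + b[0]*x + b[1]*x**2 + ... that lie
    in [x_min, x_max]; each one marks a vertical asymptote of the model."""
    coeffs = list(reversed([1.0] + list(b)))   # np.roots wants highest degree first
    roots = np.roots(coeffs)
    real = roots[np.isclose(roots.imag, 0.0)].real
    return real[(real >= x_min) & (real <= x_max)]

# Example: the denominator 1 - 1.5*x + 0.5*x**2 vanishes at x = 1 and x = 2
print(asymptotes_in_range([-1.5, 0.5], x_min=0.0, x_max=3.0))
</syntaxhighlight>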
 
One common difficulty in fitting nonlinear models is finding adequate starting values. A major advantage of rational function models is the ability to compute starting values using a [[Ordinary least squares|linear least squares]] fit. To do this, ''p'' points are chosen from the data set, with ''p'' denoting the number of parameters in the rational model. For example, given the linear/quadratic model
:<math>
y=\frac{A_0 + A_1x} {1 + B_1x + B_2x^{2}}
</math>
one would need to select four representative points, and perform a linear fit on the model
:<math>
y = A_0 + A_1x + \cdots + A_{p_n}x^{p_n} - B_1xy - \cdots - B_{p_d}x^{p_d}y
</math>
Here, ''p<sub>n</sub>'' and ''p<sub>d</sub>'' are the degrees of the numerator and denominator, respectively, and the ''x'' and ''y'' values come from the chosen subset of points, not from the full data set. The estimated coefficients from this linear fit are used as the starting values for fitting the nonlinear model to the full data set.
 
Note: This type of fit, with the response variable appearing on both sides of the equation, should only be used to obtain starting values for the nonlinear fit. The statistical properties of fits like this are not well understood.
 
The subset of points should be selected over the range of the data. It is not critical which points are selected, although obvious outliers should be avoided.
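
A sketch of this two-stage procedure for the linear/quadratic model above, assuming NumPy and SciPy and using purely illustrative data and subset points:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(x, a0, a1, b1, b2):
    """Linear/quadratic rational function model."""
    return (a0 + a1 * x) / (1.0 + b1 * x + b2 * x ** 2)

# Purely illustrative data
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([1.0, 1.3, 1.5, 1.55, 1.5, 1.4, 1.3, 1.2, 1.1])

# Step 1: choose p = 4 representative points spread over the range of the data
xs, ys = x[[0, 3, 5, 8]], y[[0, 3, 5, 8]]

# Step 2: linear least squares on  y = A0 + A1*x - B1*x*y - B2*x^2*y
design = np.column_stack([np.ones_like(xs), xs, -xs * ys, -xs ** 2 * ys])
start, *_ = np.linalg.lstsq(design, ys, rcond=None)

# Step 3: nonlinear fit to the full data set, starting from the linear estimates
params, _ = curve_fit(linear_quadratic, x, y, p0=start)
print(params)
</syntaxhighlight>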
 
==See also==
* [[Response surface methodology]]
 
 
==External links==
*[http://www.itl.nist.gov/div898/handbook/pmd/section6/pmd642.htm Rational Function Models]
 
{{Least squares and regression analysis}}
{{Statistics}}
{{NIST-PD}}
 
[[Category:Regression analysis]]
[[Category:Interpolation]]
[[Category:Statistical ratios]]
