In [[type theory]] and [[functional programming]], '''Hindley–Milner''' ('''HM''') (also known as '''Damas–Milner''' or '''Damas–Hindley–Milner''') is a classical [[type system]] for the [[lambda calculus]] with [[parametric polymorphism]], first described by [[J. Roger Hindley]]<ref>R. Hindley, (1969) "The Principal Type-Scheme of an Object in Combinatory Logic", ''Transactions of the American Mathematical Society'', Vol. 146, pp. 29–60 [http://www.jstor.org/stable/1995158]</ref> and later rediscovered by [[Robin Milner]].<ref>Milner, (1978) "A Theory of Type Polymorphism in Programming". ''Journal of Computer and System Science (JCSS)'' 17, pp. 348–374 [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.5276&rep=rep1&type=pdf]</ref> Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.<ref>Luis Damas (1985): ''Type Assignment in Programming Languages''. PhD thesis, University of Edinburgh (CST-33-85)</ref><ref name="Damas">Damas, Milner (1982), "Principal type-schemes for functional programs". ''9th Symposium on Principles of programming languages (POPL'82)'' pp. 207–212, ACM: [http://web.cs.wpi.edu/~cs4536/c12/milner-damas_principal_types.pdf]</ref>


Among the properties making HM so outstanding are its completeness and its ability to deduce the [[principal type|most general type]] of a given program without the need for any [[type annotation]]s or other hints supplied by the programmer. '''Algorithm W''' is a fast algorithm, performing [[type inference]] in almost [[linear time]] with respect to the size of the source, making it practically usable to type large programs.<ref group="note">Hindley–Milner is [[DEXPTIME]]-complete. However, non-linear behaviour only manifests itself on pathological inputs, so the complexity-theoretic proofs by {{harv|Mairson|1990}} and {{harv|Kfoury|Tiuryn|Urzyczyn|1990}} came as a surprise to the research community. When the depth of nested let-bindings is bounded&mdash;as is the case in realistic programs&mdash;Hindley&ndash;Milner type inference becomes polynomial.</ref> HM is preferably used for [[functional language]]s. It was first implemented as part of the type system of the programming language [[ML (programming language)|ML]]. Since then, HM has been extended in various ways, most notably by [[Bounded types|constrained types]] as used in [[Haskell (programming language)|Haskell]].
 
== Introduction ==
In organizing their original paper, Damas and Milner<ref name="Damas"/> clearly separated two very different tasks: one is to describe which types an expression can have, the other is to present an algorithm that actually computes a type. Keeping both aspects apart allows one to focus separately on the logic (i.e. the meaning) behind the algorithm, and also establishes a benchmark against which the algorithm's properties can be measured.
 
How expressions and types fit together is described by means of a [[deductive system]]. Like any [[proof system]], it allows different ways to come to a conclusion, and since one and the same expression might well have different types, dissimilar conclusions about an expression are possible. In contrast, the type inference method itself ([[#Algorithm W|Algorithm W]]) is defined as a deterministic step-by-step procedure, leaving no choice about what to do next. Thus, decisions not present in the logic must have been made in constructing the algorithm; these demand a closer look and justification, but would perhaps remain non-obvious without the above differentiation.
 
== Syntax ==
{| class=infobox
|align=center style="background:#e0e0ff"|'''Expressions'''
|-
| <math>
  \begin{array}{lrll}
  e & =    & x                                  & \textrm{variable}\\
    & \vert & e\ e                                & \textrm{application}\\
    & \vert & \lambda\ x\ .\ e                    & \textrm{abstraction} \\
    & \vert & \mathtt{let}\ x = e\ \mathtt{in}\ e \\
  \end{array}
  </math>
|-
|align=center style="background:#e0e0ff"|'''Types'''
|-
| <math>
  \begin{array}{llrll}
  \textrm{mono} & \tau  &=    & \alpha                    & \ \textrm{variable} \\
                    &        &\vert &  D\ \tau\dots\tau        & \ \textrm{application} \\
  \textrm{poly} & \sigma &=    & \tau                                          \\
                    &        &\vert& \forall\ \alpha\ .\ \sigma & \ \textrm{quantifier}\\
  \\
  \end{array}
  </math>
|}
Logic and algorithm share the notions of "expression" and "type", whose form is made precise by the [[syntax]].
 
The expressions to be typed are exactly those of the [[lambda calculus]], enhanced by a let-expression.
These are shown in the table to the right.
For readers unfamiliar with the lambda calculus, here is a brief explanation:
The application <math>e_1 e_2</math> represents
applying the function  <math>e_1 </math> to the argument  <math>e_2</math>, often written <math>e_1(e_2)</math>.
The abstraction <math>\lambda\ x\ .\ e  </math> represents an [[anonymous function]] that maps the input <math>x</math> to the output <math> e  </math>.  This is also called a function literal, common in most contemporary programming languages, and sometimes written as <math>\mathtt{function}\,(x)\ \mathtt{return}\ e\ \mathtt{end}</math>.
The let expression <math>\mathtt{let}\ x = e_1\ \mathtt{in}\ e_2</math> represents the result of substituting every occurrence of <math>x </math> in <math>e_2 </math> with <math>e_1 </math>.
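
To make the syntax concrete, the expression grammar can be encoded as an ordinary algebraic data type. The following Haskell sketch is purely illustrative; the constructor names are chosen here and do not stem from the literature.

<syntaxhighlight lang="haskell">
-- Illustrative encoding of the expression syntax (constructor names are our own).
data Expr
  = EVar String            -- variable            x
  | EApp Expr Expr         -- application          e e
  | EAbs String Expr       -- abstraction          \x . e
  | ELet String Expr Expr  -- let x = e in e
  deriving Show
</syntaxhighlight>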
 
Types as a whole are split into two groups, called mono- and polytypes.<ref group="note">Polytypes are called "type schemes" in the original article.</ref>
 
=== Monotypes ===
 
Monotypes <math>\tau</math> are syntactically represented as [[Term (logic)|terms]].  A monotype  always designates a particular type, in the sense that it is equal only to itself and different from all others.
 
Examples of monotypes include type constants like <math>\mathtt{int}</math> or <math>\mathtt{string}</math>, and parametric
types like <math>\mathtt{Map\ (Set\ string)\ int}</math>.  These types are examples of ''applications'' of type functions, for example, from the set
<math> \{ \mathtt{Map^2,\ Set^1,\ string^0,\ int^0} \} </math>,
where the superscript indicates the number of type parameters.  The complete set of type functions <math>D</math> is arbitrary in HM, except that it ''must'' contain at least <math>\rightarrow^2</math>, the type of functions.  It is often written in infix notation for convenience.  For example, a function mapping integers to strings has type <math>\mathtt{int}\rightarrow \mathtt{string}</math>.
<ref group="note">The parametric types <math>D\ \tau\dots\tau</math> were not present in the original paper on HM and are not needed to present the method. None of the inference rules below treats them specially or even mentions them. The same holds for the non-parametric "primitive types" in said paper. All the machinery for polymorphic type inference can be defined without them. They have been included here for the sake of examples, but also because the nature of HM is all about parametric types. This comes from the function type <math>\tau\rightarrow\tau</math>, hard-wired in the inference rules below, which already has two parameters and is presented here as only a special case.</ref>
 
Type variables are monotypes. Standing alone, a type variable <math>\alpha</math> is meant to be as concrete as <math>\mathtt{int}</math> or <math>\beta</math>, and clearly different from both. Type variables occurring as monotypes behave as if they were type constants whose identity is unknown. Correspondingly, a function typed <math>\alpha\rightarrow\alpha</math> only maps values of the particular type <math>\alpha</math> to itself. Such a function can only be applied to values having type <math>\alpha</math> and to no others.
 
=== Polytypes ===
 
A function with polytype <math>\forall\alpha.\alpha\rightarrow\alpha</math>, by contrast, can map a value of ''any'' type to itself, and the [[identity function]] is a value of this type. As another example, <math>\forall\alpha.(\mathtt{Set}\ \alpha)\rightarrow \mathtt{int}</math> is the type of a function mapping all finite sets to integers; the function counting the members of a set is a value of this type. Note that quantifiers can only appear at the top level: a type <math>\forall\alpha.\alpha\rightarrow\forall\alpha.\alpha</math>, for instance, is excluded by the syntax of types. Also, monotypes are included in the polytypes, so a type has the general form <math>\forall\alpha_1\dots\forall\alpha_n.\tau</math>.
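
Both groups of types can likewise be written down as data types. The following Haskell sketch, continuing the illustrative encoding above (all names are assumptions made here), represents monotypes as terms and polytypes as a list of quantified variables together with a monotype.

<syntaxhighlight lang="haskell">
-- Monotypes: type variables and applications of type functions D tau ... tau.
data Type
  = TVar String            -- type variable alpha
  | TCon String [Type]     -- type function application, e.g. TCon "int" []
  deriving (Eq, Show)

-- Polytypes (type schemes): forall a1 ... an . tau, with monotypes as the n = 0 case.
data Scheme = Forall [String] Type
  deriving Show

-- The function type ->, written infix in the text, is just a binary type function.
fn :: Type -> Type -> Type
fn a b = TCon "->" [a, b]

-- Example: the polytype of the identity function, forall a . a -> a.
idScheme :: Scheme
idScheme = Forall ["a"] (fn (TVar "a") (TVar "a"))
</syntaxhighlight>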
 
=== Free type variables ===
{| class=infobox
|align=center style="background:#e0e0ff"|'''Free Type Variables'''
|-
| <math>
\begin{array}{ll}
\text{free}(\ \alpha\ ) &=\ \left\{\alpha\right\}\\
\text{free}(\ D\ \tau_1\dots\tau_n\ ) &=\ \bigcup\limits_{i=1}^n{\text{free}(\ \tau_i\ )} \\
\text{free}(\ \forall\ \alpha\ .\ \sigma\ ) &=\ \text{free}(\ \sigma\ )\  -\  \left\{\alpha\right\}\\
\end{array}
</math>
|}
 
In a type <math>\forall\alpha_1\dots\forall\alpha_n.\tau</math>, the symbol <math>\forall</math> is the quantifier binding the type variables <math>\alpha_i</math> in the monotype <math>\tau</math>. The variables <math>\alpha_i</math> are called ''quantified''; any occurrence of a quantified type variable in <math>\tau</math> is called ''bound'', and all unbound type variables in <math>\tau</math> are called ''free''. As in the [[Lambda_calculus#Free_variables|lambda calculus]], the notion of [[Free variables and bound variables|free and bound variables]] is essential for understanding the meaning of types.
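
The equations in the side box translate directly into a function on the encoding sketched above (again only an illustration; <code>Data.Set</code> is used for the variable sets).

<syntaxhighlight lang="haskell">
import qualified Data.Set as Set
import Data.Set (Set)

-- Free type variables of mono- and polytypes, following the table above.
freeType :: Type -> Set String
freeType (TVar a)    = Set.singleton a
freeType (TCon _ ts) = Set.unions (map freeType ts)

freeScheme :: Scheme -> Set String
freeScheme (Forall as t) = freeType t `Set.difference` Set.fromList as
</syntaxhighlight>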
 
This is perhaps the hardest part of HM, possibly because polytypes containing free variables are not represented in programming languages like [[Haskell (programming language)|Haskell]]. Likewise, one does not have clauses with free variables in [[Prolog]]. In particular, developers experienced with both languages who actually know all the prerequisites of HM are likely to overlook this point. In Haskell, for example, all type variables implicitly occur quantified, i.e. a Haskell type
<code>a -> a</code> means <math>\forall\alpha.\alpha\rightarrow\alpha</math> here. Because a type like <math>\alpha\rightarrow\alpha</math>, though it may practically occur in a Haskell program, cannot be expressed there, it can easily be confused with its quantified version.
 
So which functions can have a type like <math>\forall\beta.\beta\rightarrow\alpha</math>, i.e. a mixture of both bound and free type variables, and what could the free type variable <math>\alpha</math> therein mean?
 
{| class=infobox
|align=center style="background:#e0e0ff"|'''Example 1'''
|-
| <math>
\begin{array}{l}
\textbf{let}\ \mathit{bar}\ [\forall\alpha.\forall\beta.\alpha\rightarrow(\beta\rightarrow\alpha)] = \lambda\ x.\\
\quad\textbf{let}\ \mathit{foo}\ [\forall\beta.\beta\rightarrow\alpha] = \lambda\ y.x\\
\quad\textbf{in}\ \mathit{foo}\\
\textbf{in}\ \mathit{bar}
\end{array}
</math>
|}
Consider <math>\mathit{foo}</math> in Example 1, with type annotations in brackets.
Its parameter <math>y</math> is not used in the body, but the variable <math>x</math> bound in the outer context of <math>\mathit{foo}</math> surely is.
As a consequence, <math>\mathit{foo}</math> accepts every value as argument, while returning a value bound outside of it, and with it that value's type. <math>\mathit{bar}</math>, by contrast, has type <math>\forall\alpha.\forall\beta.\alpha\rightarrow(\beta\rightarrow\alpha)</math>, in which all occurring type variables are bound.
Evaluating, for instance, <math>\mathit{bar}\ 1</math> results in a function of type <math>\forall\beta.\beta\rightarrow\ \mathit{int}</math>, perfectly reflecting that foo's monotype <math>\alpha</math> in <math>\forall\beta.\beta\rightarrow\alpha</math> has been refined by this call.
 
In this example, the free monotype variable <math>\alpha</math> in foo's type becomes meaningful by being quantified in the outer scope, namely in bar's type.
That is, in the context of the example, the same type variable <math>\alpha</math> appears both bound and free in different types. As a consequence, without knowing the context, a free type variable cannot be interpreted as anything more than an unknown monotype. Turning the statement around: in general, a typing is not meaningful without a context.
 
=== Context and typing ===
 
{| class=infobox
|align=center style="background:#e0e0ff"|'''Syntax'''
|-
| <math>
\begin{array}{llrl}
  \text{Context}    & \Gamma & = & \epsilon\ \mathtt{(empty)}\\
                    &        & \vert& \Gamma,\ x : \sigma\\
  \text{Typing}      &        & = & \Gamma \vdash e : \sigma\\
\\
\end{array}
</math>
|-
|align=center style="background:#e0e0ff"|'''Free Type Variables'''
|-
|align=center|<math>
\begin{array}{ll}
\text{free}(\ \Gamma\ ) &=\ \bigcup\limits_{x:\sigma \in \Gamma}\text{free}(\ \sigma\ )
\end{array}
</math>
|}
 
To bring the so far disjoint parts of the syntax, expressions and types, together meaningfully, a third part is needed: the context. Syntactically, it is a list of pairs <math>x:\sigma</math>, called
[[Assignment (mathematical logic)|assignments]] or [[:wikt:assumption|assumptions]], stating for each value variable <math>x_i</math>
therein a type <math>\sigma_i</math>. All three parts combined give a ''typing judgment'' of the form <math>\Gamma\ \vdash\ e:\sigma</math>,
stating that under assumptions <math>\Gamma</math>, the expression <math>e</math> has type <math>\sigma</math>.
 
Now, having the complete syntax at hand, one can finally make a meaningful statement about the type of <math>\mathit{foo}</math> in example 1 above,
namely <math>x:\alpha \vdash \lambda\ y.x : \forall\beta.\beta\rightarrow\alpha</math>. Contrary to the earlier formulations, the monotype
variable <math>\alpha</math> no longer appears unbound, i.e. meaningless, but bound in the context as the type of the value variable
<math>x</math>. Whether a type variable is bound or free in the context evidently plays a significant
role for a type as part of a typing, so <math>\text{free}(\ \Gamma\ )</math> is made precise in the side box.
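
In the running Haskell sketch, a context is simply an assignment list, and its free type variables are the union over all assumptions (the names are, again, our own).

<syntaxhighlight lang="haskell">
-- A context: a list of assumptions x : sigma.
type Context = [(String, Scheme)]

-- free(Gamma) as defined in the side box.
freeCtx :: Context -> Set String
freeCtx gamma = Set.unions [ freeScheme sigma | (_, sigma) <- gamma ]
</syntaxhighlight>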
 
=== Note on expressiveness ===
 
The expression syntax might appear far too inexpressive to readers unfamiliar with the [[lambda calculus]].  Because the examples given below may reinforce this misconception, a few notes on why HM is not dealing with a toy language might be helpful. As a central result in research on [[computability]], the expression syntax defined above (even excluding the let-variant) is able to express any computable function. Moreover, other programming-language constructs can be translated fairly directly into expressions of the lambda calculus. Therefore this simple expression syntax is used as a model for programming languages in research. A method known to work well for the lambda calculus can be extended to all, or at least many, other syntactical constructions of a particular programming language using such syntactic translations.
 
As an example, the additional expression variant <math>\textbf{let}\ x = e_1\ \textbf{in}\ e_2</math> can be transformed to <math>(\lambda x.e_2)\ e_1</math> when <math>x</math> is not free in <math>e_1</math> (which excludes, e.g., <math>\textbf{let}\ x = x\ \textbf{in}\ x</math>).
It is added to the expression syntax in HM only to support generalization during type inference, not because the syntax lacks computational strength.
Thus HM deals with inference of types in programs in general, and the various functional languages using this method demonstrate how well a result formulated only for the syntax of the lambda calculus can be extended to syntactically richer languages.
 
Contrary to the impression that the expressions might be too inexpressive for practical application, they are actually far too expressive to be meaningfully typed at all. This is a consequence of the corresponding [[decision problem]] being [[undecidable problem|undecidable]] for anything as expressive as the expressions of the lambda calculus. Consequently, computing typings is a hopeless venture in general. Depending on the nature of the type system, it will either never terminate or otherwise refuse to work.
 
HM belongs to the latter group of type systems.  A collapse of the type system then presents itself as the more subtle situation that suddenly only one and the same type is yielded for the expressions of interest. This is not a fault in HM, but inherent in the problem of typing itself, and it can easily be created within any strongly typed programming language, e.g. by coding an evaluator (the [[universal function]]) for the "too simple" expressions. One then has a single concrete type that represents the universal data type, as in untyped languages. The type system of the host programming language is then collapsed and can no longer differentiate between the various types of values handed to or produced by the evaluator. In this context, it still delivers or checks types, but always yields the same type, just as if the type system were no longer present at all.{{or|date=January 2013}}
 
== Polymorphic type order ==
 
While the equality of monotypes is purely syntactical, polytypes offer a richer structure by being related to other types through a specialization relation <math>\sigma \sqsubseteq \sigma'</math> expressing that <math>\sigma'</math> is more special than <math>\sigma</math>.
 
When applied to a value, a polymorphic function has to change its shape, specializing to deal with this particular type of value. During this process, it also changes its type to match that of the parameter. If, for instance, the identity function of type <math>\forall\alpha.\alpha\rightarrow\alpha</math> is to be applied to a number of type <math>int</math>, the two simply cannot work together as they stand, because all the types are different and nothing fits. What is needed is a function of type <math>int\rightarrow int</math>. Thus, during application, the polymorphic identity is specialized to a monomorphic version of itself. In terms of the specialization relation, one writes <math>\forall\alpha.\alpha\rightarrow\alpha \sqsubseteq\ int\rightarrow int</math>.
 
Now the shape shifting of polymorphic values is not fully arbitrary, but rather limited by their pristine polytype. Following what has happened in the example, one could paraphrase the rule of specialization by saying that a polymorphic type <math>\forall\alpha.\tau</math> is specialized by consistently replacing each occurrence of <math>\alpha</math> in <math>\tau</math> by some type and dropping the quantifier. While this rule works well for any monotype used as replacement, it fails when a polytype, say <math>\forall\beta.\beta</math>, is tried as a replacement, resulting in the non-syntactical type <math>\forall\beta.\beta\rightarrow\forall\beta.\beta</math>.
But that is not the only problem. Even if a type with nested quantifiers were allowed in the syntax, the result of the substitution would no longer preserve the property of the pristine type that both the parameter and the result of the function have the same type. They would now be only seemingly equal, because both subtypes have become independent of each other, allowing the parameter and the result to be specialized with different types, resulting in, e.g., <math>string\rightarrow Set\ int</math>, hardly the right job for an identity function.
 
The syntactic restriction allowing quantification only at the top level is imposed to prevent generalization while specializing. Instead of <math>\forall\beta.\beta\rightarrow\forall\beta.\beta</math>, the more special type <math>\forall\beta.\beta\rightarrow\beta</math> must be produced in this case.
 
One could undo the former specialization by specializing on some value of type <math>\forall\alpha.\alpha</math> again. In terms of the relation,
one obtains <math>\forall\alpha.\alpha\rightarrow\alpha \sqsubseteq \forall\beta.\beta\rightarrow\beta \sqsubseteq\forall\alpha.\alpha\rightarrow\alpha</math> as a summary, meaning that syntactically different polytypes are equal up to renaming of their quantified variables.
 
{| class=infobox
|align=center style="background:#e0e0ff"|'''Specialization Rule'''
|-
| <math>\displaystyle\frac{\tau' = \left[\alpha_i := \tau_i\right] \tau \quad \beta_i \not\in \textrm{free}(\forall \alpha_1...\forall\alpha_n . \tau)}{\forall \alpha_1...\forall\alpha_n . \tau \sqsubseteq \forall \beta_1...\forall\beta_m . \tau'}</math>
|}
Now focusing only on the question whether one type is more special than another, and no longer on what the specialized type is used for, one can summarize specialization as in the box above. Paraphrasing it, a type <math>\forall\alpha_1\dots\forall\alpha_n.\tau</math> is specialized by consistently replacing the quantified variables <math>\alpha_i</math> by arbitrary monotypes <math>\tau_i</math>, obtaining a monotype <math>\tau'</math>. Finally, type variables in <math>\tau'</math> not occurring free in the pristine type can optionally be quantified.
 
Thus the specialization rule makes sure that no free variable, i.e. monotype, in the pristine type unintentionally becomes bound by a quantifier, while an originally quantified variable can be replaced with anything, even with types introducing new quantified or unquantified type variables.
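
The substitution at the heart of the rule can be sketched as follows, again in the illustrative Haskell encoding; the optional re-quantification of the result is omitted here.

<syntaxhighlight lang="haskell">
import qualified Data.Map as Map

-- A substitution maps type variable names to monotypes.
type Subst = Map.Map String Type

-- Apply a substitution to a monotype.
substType :: Subst -> Type -> Type
substType s t@(TVar a)  = Map.findWithDefault t a s
substType s (TCon d ts) = TCon d (map (substType s) ts)

-- Specialize a polytype: replace its quantified variables by the given
-- monotypes and drop the quantifier.
specialize :: Scheme -> [Type] -> Type
specialize (Forall as t) taus = substType (Map.fromList (zip as taus)) t
</syntaxhighlight>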
 
Starting with a polytype <math>\forall\alpha.\alpha</math>, a specialization could either replace the body by another quantified variable, which is really just a renaming, or by some type constant (including the function type), whose parameters, if any, may in turn be filled with monotypes or quantified type variables. Once a quantified variable has been replaced by a type application, this specialization cannot be undone through another substitution, as it could for quantified variables; the type application is there to stay. Only if it contains another quantified type variable can the specialization continue by replacing that one further.
 
So specialization introduces no further equivalence on polytypes besides the already known renaming: polytypes are syntactically equal up to renaming of their quantified variables. With this equality, the relation <math>\sqsubseteq</math> is reflexive, antisymmetric and transitive, and hence an [[partial order|order]].
 
== Deductive system ==
{| class=infobox
|align=center style="background:#e0e0ff"|'''The Syntax of Rules'''
|-
| <math>
\begin{array}{lrl}
  \text{Predicate}  & =      &\sigma\sqsubseteq\sigma'\\
                    & \vert\ &\alpha\not\in free(\Gamma)\\
                    & \vert\ &x:\alpha\in \Gamma\\
\\
  \text{Judgment}  & =      &\text{Typing}\\
  \text{Premise}    & =      &\text{Judgment}\ \vert\ \text{Predicate}\\
  \text{Conclusion} & =      &\text{Judgment}\\
\\
  \text{Rule}      & =      &\displaystyle\frac{\textrm{Premise}\ \dots}{\textrm{Conclusion}}\quad [\mathtt{Name}]
\end{array}
</math>
|}
 
The syntax of HM is carried forward to the syntax of the [[Rule of inference|inference rules]] that form the body of the [[formal system]], by using the typings as [[Judgment (mathematical logic)|judgments]]. Each of the rules defines what conclusion may be drawn from what premises. In addition to judgments, some of the extra conditions introduced above may be used as premises, too.
 
A proof using the rules is a sequence of judgments such that all premises are listed before a conclusion. See Examples 2 and 3 below for a possible format of proofs. From left to right, each line shows the conclusion, the <math>[\mathtt{Name}]</math> of the rule applied, and the premises, either by referring to an earlier line (number) if the premise is a judgment or by making the predicate explicit.
 
=== Typing rules ===
{{see also|Type rules}}
{| class=infobox
|align=center style="background:#e0e0ff"|'''Declarative Rule System'''
|-
| <math>
\begin{array}{cl}
\displaystyle\frac{x:\sigma \in \Gamma}{\Gamma \vdash x:\sigma}&[\mathtt{Var}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\tau \rightarrow \tau' \quad\quad \Gamma \vdash e_1 : \tau }{\Gamma \vdash e_0\ e_1 : \tau'}&[\mathtt{App}]\\ \\
\displaystyle\frac{\Gamma,\;x:\tau\vdash e:\tau'}{\Gamma \vdash \lambda\ x\ .\ e : \tau \rightarrow \tau'}&[\mathtt{Abs}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\sigma \quad\quad \Gamma,\,x:\sigma \vdash e_1:\tau}{\Gamma \vdash \mathtt{let}\ x = e_0\ \mathtt{in}\ e_1 : \tau} &[\mathtt{Let}]\\ \\ \\
\displaystyle\frac{\Gamma \vdash e:\sigma' \quad \sigma' \sqsubseteq \sigma}{\Gamma \vdash e:\sigma}&[\mathtt{Inst}]\\ \\
\displaystyle\frac{\Gamma \vdash e:\sigma \quad \alpha \notin \text{free}(\Gamma)}{\Gamma \vdash e:\forall\ \alpha\ .\ \sigma}&[\mathtt{Gen}]\\ \\
\end{array}</math>
|}
 
The side box shows the deduction rules of the HM type system. One can roughly divide them into two groups:
 
The first four rules <math>[\mathtt{Var}]</math> (variable or function access), <math>[\mathtt{App}]</math> (''application'', i.e. function call with one parameter), <math>[\mathtt{Abs}]</math> (''abstraction'', i.e. function declaration) and <math>[\mathtt{Let}]</math> (variable declaration) are centered around the syntax, presenting one rule for each of the expression forms. Their meaning is fairly obvious at first glance, as they decompose each expression, prove their sub-expressions and finally combine the individual types found in the premises into the type in the conclusion.
 
The second group is formed by the remaining two rules <math>[\mathtt{Inst}]</math> and <math>[\mathtt{Gen}]</math>.
They handle specialization and generalization of types. While the rule <math>[\mathtt{Inst}]</math> should be clear from the section on specialization above, <math>[\mathtt{Gen}]</math> complements it, working in the opposite direction. It allows generalization, i.e. quantification of monotype variables that are not bound in the context. The necessity of the restriction <math>\alpha \not\in free(\ \Gamma\ )</math> was introduced in the section on [[#Free type variables|free type variables]].
The following two examples exercise the rule system in action.
 
'''Example 2''': A proof for <math>\Gamma \vdash id(n):int</math> where <math>\Gamma = id:\forall \alpha . \alpha\rightarrow\alpha,\ n:int</math>,
could be written
 
: <math>\begin{array}{llll}
1:&\Gamma \vdash id : \forall\alpha.\alpha \rightarrow \alpha  &[\mathtt{Var}]& (id : \forall\alpha.\alpha \rightarrow \alpha \in \Gamma) \\
2:&\Gamma \vdash id : int \rightarrow int & [\mathtt{Inst}]&(1),\ (\forall\alpha.\alpha \rightarrow \alpha \sqsubseteq int\rightarrow int)\\
3:&\Gamma \vdash n : int&[\mathtt{Var}]&(n : int \in \Gamma)\\
4:&\Gamma \vdash id(n) : int&[\mathtt{App}]& (2),\ (3)\\
\end{array}
</math>
 
'''Example 3''': To demonstrate generalization,
<math>\vdash\ \textbf{let}\, id = \lambda x . x\ \textbf{in}\ id\, :\, \forall\alpha.\alpha\rightarrow\alpha</math>
is shown below:
 
: <math>
\begin{array}{llll}
1: & x:\alpha \vdash x : \alpha & [\mathtt{Var}] & (x:\alpha \in \left\{x:\alpha\right\})\\
2: & \vdash \lambda x.x : \alpha\rightarrow\alpha & [\mathtt{Abs}] & (1)\\
3: & \vdash \lambda x.x : \forall \alpha.\alpha\rightarrow\alpha & [\mathtt{Gen}] & (2),\ (\alpha \not\in free(\epsilon))\\
4: & id:\forall \alpha.\alpha\rightarrow\alpha \vdash id : \forall \alpha.\alpha\rightarrow\alpha & [\mathtt{Var}] & (id:\forall \alpha.\alpha\rightarrow\alpha \in \left\{id : \forall \alpha.\alpha\rightarrow\alpha\right\})\\
5: & \vdash \textbf{let}\, id = \lambda x . x\ \textbf{in}\  id\, :\,\forall\alpha.\alpha\rightarrow\alpha  & [\mathtt{Let}] & (3),\ (4)\\
\end{array}
</math>
 
=== Principal type ===
As mentioned in the [[#Introduction|introduction]], the rules allow one to deduce different types for one and the same expression. See, for instance, Example 2, steps 1 and 2, and Example 3, steps 2 and 3, where the same expression receives different typings. Clearly, the different results are not unrelated, but connected by the [[#Polymorphic type order|type order]]. It is an important property of the rule system and this order that, whenever more than one type can be deduced for an expression, there is among them (modulo [[alpha-renaming]] of the [[type variable]]s) a unique most general type, in the sense that all others are specializations of it. Though the rule system must allow specialized types to be derived, a type inference algorithm should deliver this most general, or principal, type as its result.
 
=== Let-polymorphism ===
 
Not immediately visible, the rule set encodes a regulation of the circumstances under which a type may be generalized or not, by a slightly varying use of mono- and polytypes in the rules <math>[\mathtt{Abs}]</math> and <math>[\mathtt{Let}]</math>.
 
In rule <math>[\mathtt{Abs}]</math>, the value variable of the parameter of the function <math>\lambda x.e</math> is added to the context with a monomorphic type through the premise <math>\Gamma,\ x:\tau \vdash e:\tau'</math>, while in the rule <math>[\mathtt{Let}]</math>, the variable enters the environment in polymorphic form <math>\Gamma,\ x:\sigma \vdash e_1:\tau</math>. Though in both cases the presence of <math>x</math> in the context prevents the use of the generalization rule for any monotype variable in the assignment, this regulation forces the parameter <math>x</math> of a <math>\lambda</math>-expression to remain monomorphic, while in a let-expression the variable may already be introduced polymorphically, making specializations possible.
 
As a consequence of this regulation, ''no'' type can be inferred for <math>\lambda f.(f\, \textrm{true}, f\, \textrm{0})</math>,
since the parameter <math>f</math> is in a monomorphic position, while <math>\textbf{let}\ f = \lambda x . x\, \textbf{in}\, (f\, \textrm{true}, f\, \textrm{0})</math> yields the type <math>(bool, int)</math>, because <math>f</math> has been introduced in a let-expression and is therefore treated polymorphically.
Note that this behaviour is in strong contrast to the usual definition <math>\textbf{let}\ x = e_1\ \textbf{in}\ e_2\ ::= (\lambda\ x.e_2)\ e_1</math>, and is the reason why the let-expression appears in the syntax at all. This distinction is called '''let-polymorphism''' or '''let generalization''' and is a conception owed to HM.
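
The distinction can be observed directly in Haskell, whose type system extends HM. The first definition below is accepted because <code>f</code> is let-bound and generalized, while the commented-out variant is rejected because the lambda-bound <code>f</code> stays monomorphic.

<syntaxhighlight lang="haskell">
-- Let-polymorphism: the let-bound identity may be used at two different types.
pair :: (Bool, Int)
pair = let f x = x in (f True, f (0 :: Int))

-- Rejected by the type checker: the parameter f is in a monomorphic position.
-- bad = (\f -> (f True, f (0 :: Int))) (\x -> x)
</syntaxhighlight>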
 
== Towards an algorithm ==
 
Now that the deduction system of HM is at hand, one could present an algorithm and validate it with respect to the rules.
Alternatively, it might be possible to derive it by taking a closer look at how the rules interact and how proofs are
formed. This is done in the remainder of this article, focusing on the possible decisions one can make while proving a typing.
 
=== Degrees of freedom choosing the rules ===
 
Isolating the points in a proof where no decision is possible at all,
the first group of rules, centered around the syntax, leaves no choice: to
each syntactical form corresponds a unique typing rule, which determines
a part of the proof, while between the conclusion and the premises of these
fixed parts, chains of <math>[\mathtt{Inst}]</math> and <math>[\mathtt{Gen}]</math>
may occur. Such a chain may also exist between the conclusion of the
proof and the rule for the topmost expression. All proofs must have
the shape so sketched.
 
Because the only choice in a proof with respect to rule selection is the
<math>[\mathtt{Inst}]</math> and <math>[\mathtt{Gen}]</math> chains, the
form of the proof suggests the question whether it can be made more precise
where these chains are needed. This is in fact possible and leads to a
variant of the rule system with no such rules.
 
=== Syntax-directed rule system ===
{| class=infobox
|align=center style="background:#e0e0ff"|'''Syntactical Rule System'''
|-
| <math>
\begin{array}{cl}
\displaystyle\frac{x:\sigma \in \Gamma \quad \sigma \sqsubseteq \tau}{\Gamma \vdash x:\tau}&[\mathtt{Var}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\tau \rightarrow \tau' \quad\quad \Gamma \vdash e_1 : \tau }{\Gamma \vdash e_0\ e_1 : \tau'}&[\mathtt{App}]\\ \\
\displaystyle\frac{\Gamma,\;x:\tau\vdash e:\tau'}{\Gamma \vdash \lambda\ x\ .\ e : \tau \rightarrow \tau'}&[\mathtt{Abs}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\tau \quad\quad \Gamma,\,x:\bar{\Gamma}(\tau) \vdash e_1:\tau'}{\Gamma \vdash \mathtt{let}\ x = e_0\ \mathtt{in}\ e_1 :  \tau'}&[\mathtt{Let}]
\end{array}
</math>
|-
|align=center style="background:#e0e0ff"|'''Generalization'''
|-
| <math>
\bar{\Gamma}(\tau) = \forall\ \hat{\alpha}\ .\ \tau \quad\quad \hat{\alpha} = \textrm{free}(\tau) - \textrm{free}(\Gamma)
</math>
|}
 
A contemporary treatment of HM uses a purely [[syntax-directed]] rule system due to
Clement<ref>Clement, (1987). The Natural Dynamic Semantics of Mini-Standard ML. TAPSOFT'87, Vol 2. LNCS, Vol. 250, pp 67–81</ref>
as an intermediate step. In this system, the specialization is located directly after the original <math>[\mathtt{Var}]</math> rule
and merged into it, while the generalization becomes part of the <math>[\mathtt{Let}]</math> rule. There the generalization is
also constrained to always produce the most general type, by introducing the function <math>\bar{\Gamma}(\tau)</math>, which quantifies
all monotype variables not bound in <math>\Gamma</math>.
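
In the running Haskell sketch, the generalization function of the side box reads as follows (illustrative only).

<syntaxhighlight lang="haskell">
-- Quantify every type variable that is free in tau but not free in the context.
generalize :: Context -> Type -> Scheme
generalize gamma t =
  Forall (Set.toList (freeType t `Set.difference` freeCtx gamma)) t
</syntaxhighlight>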
 
Formally, to validate that this new rule system <math>\vdash_S</math> is equivalent to the original <math>\vdash_D</math>, one has
to show that <math>\Gamma \vdash_D\ e:\sigma \Leftrightarrow \Gamma \vdash_S\ e:\sigma</math>, which splits into two sub-proofs:
 
* <math>\Gamma \vdash_D\ e:\sigma \Leftarrow \Gamma \vdash_S\ e:\sigma</math> ([[Consistency]])
* <math>\Gamma \vdash_D\ e:\sigma \Rightarrow \Gamma \vdash_S\ e:\sigma</math> ([[Completeness#Logical_completeness|Completeness]])
 
While consistency can be seen by decomposing the rules <math>[\mathtt{Let}]</math> and <math>[\mathtt{Var}]</math>
of <math>\vdash_S</math> into proofs in <math>\vdash_D</math>, it is easy to see that <math>\vdash_S</math> is incomplete, as
one cannot show <math>\lambda\ x.x:\forall\alpha.\alpha\rightarrow\alpha</math> in <math>\vdash_S</math>, for instance, but only
<math>\lambda\ x.x:\alpha\rightarrow\alpha</math>.  A slightly weaker version of completeness is provable,
<ref name=x>Jeff Vaughan. {{Wayback |url=http://www.cs.ucla.edu/~jeff/docs/hmproof.pdf |title=A proof of correctness for the Hindley–Milner type inference algorithm |date=20120324105848 }}</ref> though, namely
 
* <math>\Gamma \vdash_D\ e:\sigma \Rightarrow \Gamma \vdash_S\ e:\tau \wedge \bar{\Gamma}(\tau)\sqsubseteq\sigma</math>
 
implying that one can derive the principal type for an expression in <math>\vdash_S</math>, allowing the proof to be generalized at the end.
 
Comparing <math>\vdash_D</math> and <math>\vdash_S</math>, note that now only monotypes appear in the judgments of all rules.
 
=== Degrees of freedom instantiating the rules ===
 
Within the rules themselves, assuming a given expression, one is still free to pick
the instances of the (rule) variables not occurring in this expression, i.e. the
instances of the type variables in the rules. Working towards finding the
most general type, this choice can be limited to picking suitable types for
<math>\tau</math> in <math>[\mathtt{Var}]</math> and <math>[\mathtt{Abs}]</math>.
The decision for a suitable choice cannot be made locally; its quality only becomes apparent
in the premises of <math>[\mathtt{App}]</math>, the only rule in which
two different types, namely the function's formal and actual parameter type, have
to come together as one.
 
Therefore, the general strategy for finding a proof would be to make the most
general assumption (<math>\alpha \not\in free(\Gamma)</math>) for <math>\tau</math>
in <math>[\mathtt{Abs}]</math>, and to refine this and the choices made in
<math>[\mathtt{Var}]</math> until all side conditions imposed by the instances of the
<math>[\mathtt{App}]</math> rule are finally met. Fortunately, no trial and
error is needed, since an effective method is known to compute all the choices:
[[John Alan Robinson|Robinson's]] [[Unification (computing)|unification]]
in combination with the so-called [[Disjoint-set data structure|union-find]] algorithm.
 
To briefly summarize the union-find algorithm: given the set of all types in a proof, it allows one
to group them into [[equivalence class]]es by means of a <math>\mathtt{union}</math>
procedure and to pick a representative for each such class using a <math>\mathtt{find}</math>
procedure. Emphasizing the word [[Procedure (computer science)|procedure]] in the sense of [[Side effect (computer science)|side effect]],
we are clearly leaving the realm of logic in order to prepare an effective algorithm.
The representative of a <math>\mathtt{union}(a,b)</math> is determined such that, if both <math>a</math> and <math>b</math> are type variables,
the representative is arbitrarily one of them, while when uniting a variable and a term, the term becomes the representative. Assuming an implementation of union-find at hand, one can formulate the unification of two monotypes as follows:
 
unify(ta,tb):
  ta = find(ta)
  tb = find(tb)
  '''if''' both ta,tb are terms of the form D p1..pn with identical D,n '''then'''
    unify(ta[i],tb[i]) for each corresponding ''i''th parameter
  '''else if''' at least one of ta,tb is a type variable '''then'''
    union(ta,tb)
  '''else'''
    error 'types do not match'
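
For comparison, the following is a purely functional Haskell sketch of the same unification. It returns an explicit substitution instead of updating a union-find structure in place, so it is a variant of, not a transcription of, the procedure above; the helper names and the <code>Subst</code> type are those of the earlier sketches.

<syntaxhighlight lang="haskell">
-- Substitution-returning unification of two monotypes (sketch).
unify :: Type -> Type -> Either String Subst
unify (TVar a) t = bindVar a t
unify t (TVar a) = bindVar a t
unify (TCon d ts) (TCon d' ts')
  | d == d' && length ts == length ts' = unifyMany ts ts'
  | otherwise                          = Left "types do not match"

-- Unify the parameters pairwise, threading the substitution through.
unifyMany :: [Type] -> [Type] -> Either String Subst
unifyMany []     []       = Right Map.empty
unifyMany (t:ts) (t':ts') = do
  s1 <- unify t t'
  s2 <- unifyMany (map (substType s1) ts) (map (substType s1) ts')
  return (compose s2 s1)
unifyMany _      _        = Left "types do not match"

-- Bind a type variable, refusing recursive types via the occurs check.
bindVar :: String -> Type -> Either String Subst
bindVar a t
  | t == TVar a               = Right Map.empty
  | a `Set.member` freeType t = Left "occurs check failed"
  | otherwise                 = Right (Map.singleton a t)

-- compose s2 s1 first applies s1, then s2.
compose :: Subst -> Subst -> Subst
compose s2 s1 = Map.map (substType s2) s1 `Map.union` s2
</syntaxhighlight>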
 
== Algorithm W ==
{| class=infobox
|align=center style="background:#e0e0ff"|'''Algorithm W'''
|-
| <math>
\begin{array}{cl}
\displaystyle\frac{x:\sigma \in \Gamma \quad \tau = \mathit{inst}(\sigma)}{\Gamma \vdash x:\tau}&[\mathtt{Var}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\tau_0 \quad \Gamma \vdash e_1 : \tau_1 \quad \tau'=\mathit{newvar} \quad \mathit{unify}(\tau_0,\ \tau_1 \rightarrow \tau') }{\Gamma \vdash e_0\ e_1 : \tau'}&[\mathtt{App}]\\ \\
\displaystyle\frac{\tau = \mathit{newvar} \quad \Gamma,\;x:\tau\vdash e:\tau'}{\Gamma \vdash \lambda\ x\ .\ e : \tau \rightarrow \tau'}&[\mathtt{Abs}]\\ \\
\displaystyle\frac{\Gamma \vdash e_0:\tau \quad\quad \Gamma,\,x:\bar{\Gamma}(\tau) \vdash e_1:\tau'}{\Gamma \vdash \mathtt{let}\ x = e_0\ \mathtt{in}\ e_1 :  \tau'}&[\mathtt{Let}]
\end{array}
</math>
|}
 
The presentation of Algorithm W in the side box not only deviates significantly from the original<ref name="Damas"/> but is also a gross abuse of the notation of logical rules, since it includes side effects. It is used here because it allows a direct comparison with <math>\vdash_S</math> while expressing an efficient implementation at the same time. The rules now specify a procedure with parameters <math>\Gamma, e</math> yielding <math>\tau</math> in the conclusion, where the execution of the premises proceeds from left to right. Alternatively to a procedure, it can be viewed as an [[Attribute grammar|attribution]] of the expression.
 
The procedure <math>\mathit{inst}(\sigma)</math> specializes the polytype <math>\sigma</math> by copying the term and replacing the bound type variables consistently by new monotype variables. '<math>\mathit{newvar}</math>' produces a new monotype variable. Likewise, <math>\bar{\Gamma}(\tau)</math> has to copy the type, introducing new variables for the quantification to avoid unwanted captures. Overall, the algorithm now proceeds by always making the most general choice, leaving the specialization to the unification, which by itself produces the most general result. As noted [[#Syntax-directed rule system|above]], the final result <math>\tau</math> has to be generalized to <math>\bar{\Gamma}(\tau)</math> in the end, to gain the most general type for a given expression.
 
Because the procedures used in the algorithm have nearly O(1) cost, the overall cost of the algorithm is close to linear in the size of the expression for which a type is to be inferred. This is in strong contrast to many other attempts to derive type inference algorithms, which often turned out to be [[NP-hard]], if not [[Undecidable problem|undecidable]] with respect to termination. Thus HM performs as well as the best fully informed type-checking algorithms can. Type-checking here means that an algorithm does not have to find a proof, but only to validate a given one.
 
Efficiency is slightly reduced because the binding of type variables in the context has to be maintained to allow computation of <math>\bar{\Gamma}(\tau)</math> and enable an [[occurs check]] to prevent the building of recursive types during <math>union(\alpha,\tau)</math>.
An example of such a case is <math>\lambda\ x.(x\ x)</math>, for which no type can be derived using HM.  Practically, types are only small terms and do not build up expanding structures.  Thus, in complexity analysis, one can treat comparing them as a constant, retaining O(1) costs.
 
=== Original presentation of Algorithm W ===
 
In the original paper,<ref name="Damas"/> the algorithm is presented more formally, using a [[Unification_(computer_science)#Substitution|substitution]] style instead of the side effects of the method above. With side effects, an update to a type variable invisibly takes effect at all places where that variable is used. Making the substitutions explicit not only makes the algorithm harder to read{{dubious|Dubious_claim_that_the_algorithm_W_is_.22hard_to_read.22|date=October 2013}}, because the substitution has to be threaded through virtually everywhere, but also gives the false impression that the method might be costly. When implemented by purely functional means, or for the purpose of proving the algorithm basically equivalent to the deduction system, full explicitness is of course needed and the original formulation is the necessary refinement.
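
As an impression of that style, the following Haskell sketch threads substitutions and a counter for fresh variables explicitly. It builds on the pieces sketched earlier in this article (<code>Expr</code>, <code>Type</code>, <code>Scheme</code>, <code>Context</code>, <code>Subst</code>, <code>substType</code>, <code>unify</code>, <code>compose</code>, <code>generalize</code>) and is only an illustration in the spirit of the original formulation, not the original algorithm itself.

<syntaxhighlight lang="haskell">
import Control.Monad.State

-- Inference threads a counter for fresh type variables and may fail.
type Infer a = StateT Int (Either String) a

freshVar :: Infer Type
freshVar = do { n <- get; put (n + 1); return (TVar ("t" ++ show n)) }

-- inst: replace the quantified variables of a polytype by fresh monotype variables.
instantiate :: Scheme -> Infer Type
instantiate (Forall as t) = do
  bs <- mapM (const freshVar) as
  return (substType (Map.fromList (zip as bs)) t)

-- Apply a substitution to schemes and contexts (quantified variables are untouched).
substScheme :: Subst -> Scheme -> Scheme
substScheme s (Forall as t) = Forall as (substType (foldr Map.delete s as) t)

substCtx :: Subst -> Context -> Context
substCtx s = map (fmap (substScheme s))

-- W(Gamma, e) = (S, tau): a substitution together with a monotype.
infer :: Context -> Expr -> Infer (Subst, Type)
infer gamma (EVar x) = case lookup x gamma of
  Nothing    -> lift (Left ("unbound variable: " ++ x))
  Just sigma -> do { t <- instantiate sigma; return (Map.empty, t) }
infer gamma (EAbs x e) = do
  tv       <- freshVar
  (s1, t1) <- infer ((x, Forall [] tv) : gamma) e
  return (s1, fn (substType s1 tv) t1)
infer gamma (EApp e0 e1) = do
  (s0, t0) <- infer gamma e0
  (s1, t1) <- infer (substCtx s0 gamma) e1
  tv       <- freshVar
  s2       <- lift (unify (substType s1 t0) (fn t1 tv))
  return (compose s2 (compose s1 s0), substType s2 tv)
infer gamma (ELet x e0 e1) = do
  (s0, t0) <- infer gamma e0
  let gamma' = substCtx s0 gamma
  (s1, t1) <- infer ((x, generalize gamma' t0) : gamma') e1
  return (compose s1 s0, t1)

-- For example, evalStateT (infer [] (EAbs "x" (EVar "x"))) 0 yields the
-- monotype t0 -> t0, which the final generalization turns into forall t0 . t0 -> t0.
</syntaxhighlight>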
 
== Further topics ==
 
=== Recursive definitions ===
 
A central property of the lambda calculus is that recursive definitions are not elementary, but can instead be expressed with a [[fixed point combinator]].
The original paper<ref name="Damas"/> notes that recursion can be realized by giving this combinator the type
<math>\mathit{fix}:\forall\alpha.(\alpha\rightarrow\alpha)\rightarrow\alpha</math>. A recursive definition could thus be formulated as
<math>\mathtt{rec}\ v = e_1\ \mathtt{in}\ e_2\ ::=\mathtt{let}\ v = \mathit{fix}(\lambda v.e_1)\ \mathtt{in}\ e_2</math>.
 
Alternatively, the expression syntax can be extended and an extra typing rule added:
 
: <math>\displaystyle\frac{
\Gamma, \Gamma' \vdash e_1:\tau_1\quad\dots\quad\Gamma, \Gamma' \vdash e_n:\tau_n\quad\Gamma, \Gamma'' \vdash e:\tau
}{
\Gamma\ \vdash\ \mathtt{rec}\ v_1 = e_1\ \mathtt{and}\ \dots\ \mathtt{and}\ v_n = e_n\ \mathtt{in}\ e:\tau
}\quad[\mathtt{Rec}]</math>
 
where
* <math>\Gamma' = v_1:\tau_1,\ \dots,\ v_n:\tau_n</math>
* <math>\Gamma'' = v_1:\bar\Gamma(\ \tau_1\ ),\ \dots,\ v_n:\bar\Gamma(\ \tau_n\ )</math>
basically merging <math>[\mathtt{Abs}]</math> and <math>[\mathtt{Let}]</math>, while including the recursively defined
variables in monotype positions where they occur to the left of the <math>\mathtt{in}</math>, but as polytypes to the right of it. This
formulation perhaps best summarizes the essence of [[#Let-polymorphism|let-polymorphism]].
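
Haskell's standard library happens to provide such a combinator as <code>fix</code> in <code>Data.Function</code>, with exactly the polytype given above, so the desugaring of <math>\mathtt{rec}</math> can be tried out directly (a small illustration, not part of the original treatment).

<syntaxhighlight lang="haskell">
import Data.Function (fix)   -- fix :: (a -> a) -> a

-- rec fac n = ... in fac 10, desugared as let fac = fix (\fac -> \n -> ...) in fac 10.
factorial :: Integer -> Integer
factorial = fix (\fac n -> if n == 0 then 1 else n * fac (n - 1))

main :: IO ()
main = print (factorial 10)   -- prints 3628800
</syntaxhighlight>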
 
== Notes ==
 
{{reflist|group="note"}}
 
== References ==
{{Reflist}}
{{refbegin}}
* {{cite journal|last=Mairson|first=Harry G. | authorlink = Harry Mairson | title = Deciding ML typability is complete for deterministic exponential time | journal = Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages | series = POPL '90 | year = 1990 | isbn = 0-89791-343-4 | pages = 382&ndash;401 | doi = 10.1145/96709.96748 | publisher = ACM | ref = harv }}
* {{cite journal|first1=A. J.|last1=Kfoury|first2=J.|last2=Tiuryn|first3=P.|last3=Urzyczyn|year=1990|title=ML typability is dexptime-complete|series=CAAP '90|journal=Lecture Notes in Computer Science|volume=431|pages=206–220|ref=harv}}
{{refend}}
 
<!--- Categories --->
 
{{DEFAULTSORT:Type Inference}}
[[Category:Type systems]]
[[Category:Type theory]]
[[Category:Type inference]]
[[Category:Lambda calculus|Hindley–Milner type system]]
[[Category:Theoretical computer science]]
[[Category:Formal methods]]
[[Category:1969 in computer science]]
[[Category:1978 in computer science]]
[[Category:1985 in computer science]]
[[Category:Algorithms]]
