{{Bayesian statistics}}


{{unreferenced|date=March 2012}}
In [[statistics]], and especially [[Bayesian statistics]], the '''posterior predictive distribution''' is the distribution that a new [[independent identically distributed|i.i.d.]] data point <math>\tilde{x}</math> would have, given a set of ''N'' existing i.i.d. observations <math>\mathbf{X} = \{x_1, \dots, x_N\}</math>.  In a [[frequentist statistics|frequentist]] context, this might be derived by computing the [[maximum likelihood]] estimate (or some other point estimate) of the parameter(s) given the observed data, and then plugging that estimate into the distribution function of the new observations.


However, the concept of the posterior predictive distribution is normally used in a [[Bayesian statistics|Bayesian]] context, where it makes use of the entire [[posterior distribution]] of the parameter(s) given the observed data to yield a full probability distribution for the new data point, rather than simply a point estimate. Specifically, it is computed by [[marginal distribution|marginalising]] over the parameters, using the posterior distribution:


:<math>p(\tilde{x}|\mathbf{X},\alpha) = \int_{\theta} p(\tilde{x}|\theta) \, p(\theta|\mathbf{X},\alpha) \operatorname{d}\!\theta</math>


where <math>\theta\,</math> represents the parameter(s) and <math>\alpha\,</math> the [[hyperparameter|hyperparameter(s)]].  Any of <math>\tilde{x}, \theta, \alpha</math> may be vectors (or equivalently, may stand for multiple parameters).
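
As a minimal illustration of this marginalisation, the following Python sketch approximates the integral by Monte Carlo: draw samples of <math>\theta</math> from the posterior and average <math>p(\tilde{x}|\theta)</math> over them. The Beta–Bernoulli model and all numbers here are illustrative assumptions, not part of the general definition.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: Bernoulli likelihood with a Beta(2, 2) prior
# (both the model and the numbers are illustrative assumptions).
alpha_prior, beta_prior = 2.0, 2.0
X = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # N = 8 observed i.i.d. points

# With a conjugate Beta prior the posterior is again a Beta,
# so theta | X, alpha can be sampled directly.
alpha_post = alpha_prior + X.sum()
beta_post = beta_prior + len(X) - X.sum()
theta = rng.beta(alpha_post, beta_post, size=100_000)

# Monte Carlo version of the integral: average p(x_new | theta) over
# posterior draws of theta.  For Bernoulli, p(x_new = 1 | theta) = theta.
print(np.mean(theta))   # ~ alpha_post / (alpha_post + beta_post) = 8/12
</syntaxhighlight>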


Note that this is equivalent to the [[expected value]] of the distribution of the new data point, when the expectation is taken over the posterior distribution, i.e.:


:<math>p(\tilde{x}|\mathbf{X},\alpha) = \mathbb{E}_{\theta|\mathbf{X},\alpha}\Big[p(\tilde{x}|\theta)\Big]</math>


(To get an intuition for this, keep in mind that expected value is a type of average. The predictive probability of seeing a particular value of a new observation will vary depending on the parameters of the distribution of the observation.  In this case, we don't know the exact value of the parameters, but we have a posterior distribution over them, that specifies what we believe the parameters to be, given the data we've already seen. Logically, then, to get "the" predictive probability, we should average all of the various predictive probabilities over the different possible parameter values, weighting them according to how strongly we believe in them.  This is exactly what this expected value does.  Compare this to the approach in [[frequentist statistics]], where a single estimate of the parameters, e.g. a [[maximum likelihood estimate]], would be computed, and this value plugged in.  This is equivalent to averaging over a posterior distribution with no [[variance]], i.e. where we are completely certain of the parameter having a single value.  The result is weighted too strongly towards the mode of the posterior, and takes no account of other possible values, unlike in the Bayesian approach.)
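
The following sketch makes this contrast concrete for a purely illustrative Beta–Bernoulli model with an assumed uniform prior: after three successes in three trials, the plug-in approach predicts a fourth success with certainty, while the posterior average does not.

<syntaxhighlight lang="python">
import numpy as np

X = np.array([1, 1, 1])          # three successes, no failures

# Frequentist plug-in: compute the MLE of theta, then plug it in.
theta_mle = X.mean()             # = 1.0
p_plugin = theta_mle             # p(x_new = 1) = 1.0: total certainty

# Bayesian: average over the posterior, here Beta(1 + 3, 1 + 0)
# under an assumed uniform Beta(1, 1) prior.
alpha_post = 1 + X.sum()
beta_post = 1 + len(X) - X.sum()
p_bayes = alpha_post / (alpha_post + beta_post)   # = 4/5 = 0.8

print(p_plugin, p_bayes)   # 1.0 vs 0.8: the plug-in estimate ignores
                           # all remaining uncertainty about theta
</syntaxhighlight>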


==Prior vs. posterior predictive distribution==
The '''prior predictive distribution''', in a Bayesian context, is the distribution of a data point marginalized over its prior distribution.  That is, if <math>\tilde{x} \sim F(\tilde{x}|\theta)</math> and <math>\theta \sim G(\theta|\alpha)</math>, then the prior predictive distribution is the corresponding distribution <math>H(\tilde{x}|\alpha)</math>, where


:<math>p_H(\tilde{x}|\alpha) = \int_{\theta} p_F(\tilde{x}|\theta) \, p_G(\theta|\alpha) \operatorname{d}\!\theta</math>


Note that this is similar to the posterior predictive distribution except that the marginalization (or equivalently, expectation) is taken with respect to the prior distribution instead of the posterior distribution.
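
The distinction can be seen directly by sampling. In the sketch below (a hypothetical Normal model, chosen only for concreteness), drawing <math>\theta</math> from the prior and then <math>\tilde{x}</math> from <math>F(\tilde{x}|\theta)</math> yields draws from the prior predictive distribution; the identical procedure with <math>\theta</math> drawn from the posterior would instead yield draws from the posterior predictive distribution.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Hypothetical model, chosen for illustration:
#   x | theta ~ Normal(theta, 1),  theta ~ Normal(0, 2^2).
theta = rng.normal(0.0, 2.0, size=n_draws)   # theta ~ G(theta | alpha)
x = rng.normal(theta, 1.0)                   # x | theta ~ F(x | theta)

# x now holds draws from the prior predictive H(x | alpha), which for
# this model is Normal(0, 2^2 + 1^2), so the sample variance is ~ 5.
print(x.var())
</syntaxhighlight>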


 
Furthermore, if the prior distribution <math>G(\theta|\alpha)</math> is a [[conjugate prior]], then the posterior predictive distribution will belong to the same family of distributions as the prior predictive distribution.  This is easy to see.  If the prior distribution <math>G(\theta|\alpha)</math> is conjugate, then
 
:<math>p(\theta|\mathbf{X},\alpha) = p_G(\theta|\alpha'),</math>
 
i.e. the posterior distribution also belongs to <math>G(\theta|\alpha),</math> but simply with a different parameter <math>\alpha'</math> instead of the original parameter <math>\alpha .</math> Then,
 
:<math>
\begin{align}
p(\tilde{x}|\mathbf{X},\alpha) & = \int_{\theta} p_F(\tilde{x}|\theta) \, p(\theta|\mathbf{X},\alpha) \operatorname{d}\!\theta \\
& = \int_{\theta} p_F(\tilde{x}|\theta) \, p_G(\theta|\alpha') \operatorname{d}\!\theta \\
& = p_H(\tilde{x}|\alpha')
\end{align}
</math>
 
 
Hence, the posterior predictive distribution follows the same distribution ''H'' as the prior predictive distribution, but with the posterior values of the hyperparameters substituted for the prior ones.
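
A sketch of this substitution for the Beta–Binomial pair (assuming SciPy's <code>betabinom</code> is available; the hyperparameters and data summary are arbitrary):

<syntaxhighlight lang="python">
from scipy.stats import betabinom

# Hypothetical hyperparameters and data for a Binomial(n, p) model
# with a conjugate Beta(a, b) prior on p.
a, b = 2.0, 3.0
n_new = 10                     # size of the new binomial observation
successes, failures = 7, 5     # sufficient summary of the observed data

# Prior predictive H(x | a, b): Beta-Binomial(n_new, a, b).
prior_pred = betabinom(n_new, a, b)

# Posterior predictive: the same family H, with the posterior
# hyperparameters a' = a + successes, b' = b + failures plugged in.
post_pred = betabinom(n_new, a + successes, b + failures)

print(prior_pred.pmf(4), post_pred.pmf(4))
</syntaxhighlight>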
 
The prior predictive distribution is in the form of a [[compound distribution]], and in fact is often used to ''define'' a [[compound distribution]], because of the lack of any complicating factors such as the dependence on the data <math>\mathbf{X}</math> and the issue of conjugacy.  For example, the [[Student's t-distribution]] can be ''defined'' as the prior predictive distribution of a [[normal distribution]] with known [[mean]] ''&mu;'' but unknown [[variance]] ''&sigma;<sub>x</sub><sup>2</sup>'', with a conjugate prior [[scaled-inverse-chi-squared distribution]] placed on ''&sigma;<sub>x</sub><sup>2</sup>'', with hyperparameters ''&nu;'' and ''&sigma;<sup>2</sup>''.  The resulting compound distribution <math>t(x|\mu,\nu,\sigma^2)</math> is indeed a non-standardized [[Student's t-distribution]], and follows one of the two most common parameterizations of this distribution.  Then, the corresponding posterior predictive distribution would again be Student's t, with the updated hyperparameters <math>\nu', {\sigma^2}'</math> that appear in the posterior distribution also directly appearing in the posterior predictive distribution.
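
This compounding can be checked numerically. The sketch below (illustrative numbers; SciPy assumed) draws the variance from the scaled-inverse-chi-squared prior via its inverse gamma equivalent (see the reparameterization note below), then compares the compound sample against the corresponding Student's t-distribution with a Kolmogorov–Smirnov test.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, nu, sigma2 = 1.5, 5.0, 2.0   # arbitrary illustrative hyperparameters
n = 200_000

# Scale-inv-chi^2(nu, sigma2) equals InvGamma(shape=nu/2, scale=nu*sigma2/2)
# in scipy's parameterization.
var_x = stats.invgamma.rvs(nu / 2, scale=nu * sigma2 / 2,
                           size=n, random_state=rng)

# Compound: draw x | var_x ~ Normal(mu, var_x).
x = rng.normal(mu, np.sqrt(var_x))

# The compound should match a Student's t with nu degrees of freedom,
# location mu and scale sqrt(sigma2).
z = (x - mu) / np.sqrt(sigma2)
print(stats.kstest(z, stats.t(nu).cdf))   # expect a large p-value
</syntaxhighlight>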
 
Note that in some cases the appropriate compound distribution is defined using a different parameterization than the one that would be most natural for the predictive distributions in the current problem at hand.  This often occurs because the prior distribution used to define the compound distribution is different from the one used in the current problem.  For example, as indicated above, the [[Student's t-distribution]] was defined in terms of a [[scaled-inverse-chi-squared distribution]] placed on the variance.  However, it is more common to use an [[inverse gamma distribution]] as the conjugate prior in this situation.  The two are in fact equivalent except for parameterization; hence, the Student's t-distribution can still be used for either predictive distribution, but the hyperparameters must be reparameterized before being plugged in.
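
For reference, the correspondence between the two parameterizations is the standard identity

:<math>\mbox{Scale-inv-}\chi^2(\sigma_x^2|\nu,\sigma^2) = \mbox{Inv-Gamma}\left(\sigma_x^2 \,\Big|\, \alpha=\frac{\nu}{2},\ \beta=\frac{\nu\sigma^2}{2}\right),</math>

so hyperparameters <math>\alpha, \beta</math> of an inverse gamma prior correspond to <math>\nu = 2\alpha</math> and <math>\sigma^2 = \beta/\alpha</math> when plugged into the Student's t predictive distribution above.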
 
==In exponential families==
Most, but not all, common families of distributions belong to the [[exponential family]] of distributions.  Exponential families have a large number of useful properties, one of which is that all members have [[conjugate prior]] distributions, whereas very few other distributions have conjugate priors.
 
===Prior predictive distribution in exponential families===
Another useful property is that the [[probability density function]] of the [[compound distribution]] corresponding to the prior predictive distribution of an [[exponential family]] distribution [[marginal distribution|marginalized]] over its [[conjugate prior]] distribution can be determined analytically.  Assume that <math>F(x|\boldsymbol{\theta})</math> is a member of the exponential family with parameter <math>\boldsymbol{\theta}</math> that is parametrized according to the [[natural parameter]] <math>\boldsymbol{\eta} = \boldsymbol{\eta}(\boldsymbol{\theta})</math>, and is distributed as
 
:<math>p_F(x|\boldsymbol{\eta}) = h(x)g(\boldsymbol{\eta})e^{\boldsymbol{\eta}^{\rm T}\mathbf{T}(x)}</math>
 
while <math>G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu)</math> is the appropriate conjugate prior, distributed as
 
:<math>p_G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu) = f(\boldsymbol{\chi},\nu)g(\boldsymbol{\eta})^\nu e^{\boldsymbol{\eta}^{\rm T}\boldsymbol{\chi}}</math>
 
Then the prior predictive distribution <math>H</math> (the result of compounding <math>F</math> with <math>G</math>) is
 
: <math>
\begin{align}
p_H(x|\boldsymbol{\chi},\nu) &= {\displaystyle \int\limits_\boldsymbol{\eta} p_F(x|\boldsymbol{\eta}) p_G(\boldsymbol{\eta}|\boldsymbol{\chi},\nu) \,\operatorname{d}\boldsymbol{\eta}} \\
                          &= {\displaystyle \int\limits_\boldsymbol{\eta} h(x)g(\boldsymbol{\eta})e^{\boldsymbol{\eta}^{\rm T}\mathbf{T}(x)} f(\boldsymbol{\chi},\nu)g(\boldsymbol{\eta})^\nu e^{\boldsymbol{\eta}^{\rm T}\boldsymbol{\chi}} \,\operatorname{d}\boldsymbol{\eta}} \\
                          &= {\displaystyle h(x) f(\boldsymbol{\chi},\nu) \int\limits_\boldsymbol{\eta} g(\boldsymbol{\eta})^{\nu+1} e^{\boldsymbol{\eta}^{\rm T}(\boldsymbol{\chi} + \mathbf{T}(x))} \,\operatorname{d}\boldsymbol{\eta}} \\
                          &= h(x) \dfrac{f(\boldsymbol{\chi},\nu)}{f(\boldsymbol{\chi} + \mathbf{T}(x), \nu+1)}
\end{align}
</math>
 
The last line follows from the previous one by recognizing that the function inside the integral is the density function of a random variable distributed as <math>G(\boldsymbol{\eta}| \boldsymbol{\chi} + \mathbf{T}(x), \nu+1)</math>, excluding the [[normalizing constant|normalizing]] function <math>f(\dots)\,</math>.  Hence the result of the integration will be the reciprocal of the normalizing function.
 
The above result is independent of choice of parametrization of <math>\boldsymbol{\theta}</math>, as none of <math>\boldsymbol{\theta}</math>, <math>\boldsymbol{\eta}</math> and <math>g(\dots)\,</math> appears. (Note that <math>g(\dots)\,</math> is a function of the parameter and hence will assume different forms depending on choice of parametrization.) For standard choices of <math>F</math> and <math>G</math>, it is often easier to work directly with the usual parameters rather than rewrite in terms of the [[natural parameter]]s.
 
Note also that the reason the integral is tractable is that it involves computing the [[normalization constant]] of a density defined by the product of a [[prior distribution]] and a [[likelihood]].  When the two are [[conjugate prior|conjugate]], the product is a [[posterior distribution]], and by assumption, the normalization constant of this distribution is known. As shown above, the [[density function]] of the compound distribution follows a particular form, consisting of the product of the function <math>h(x)</math> that forms part of the density function for <math>F</math>, with the quotient of two forms of the normalization "constant" for <math>G</math>, one derived from a prior distribution and the other from a posterior distribution.  The [[beta-binomial distribution]] is a good example of how this process works.
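
The following sketch verifies the quotient-of-normalizers form in the simplest case, a Bernoulli likelihood (the <math>n=1</math> case of the beta-binomial). The mapping used, <math>h(x)=1</math>, <math>\mathbf{T}(x)=x</math> and <math>f(\chi,\nu)=1/\mathrm{B}(\chi,\nu-\chi)</math>, follows from the fact that the conjugate prior in <math>(\chi,\nu)</math> form is a Beta(<math>\chi,\nu-\chi</math>) distribution on <math>p</math>; this mapping is a derivation under this article's conventions and should be read as such.

<syntaxhighlight lang="python">
from scipy.special import beta as B    # Euler beta function

def f(chi, nu):
    # Normalizing function of the conjugate prior in (chi, nu) form;
    # for a Bernoulli likelihood this prior is Beta(chi, nu - chi) on p,
    # so f(chi, nu) = 1 / B(chi, nu - chi).
    return 1.0 / B(chi, nu - chi)

def prior_predictive(x, chi, nu):
    # p_H(x) = h(x) * f(chi, nu) / f(chi + T(x), nu + 1),
    # with h(x) = 1 and T(x) = x for Bernoulli.
    return f(chi, nu) / f(chi + x, nu + 1)

chi, nu = 2.0, 5.0                     # arbitrary hyperparameters
print(prior_predictive(1, chi, nu))    # chi / nu = 0.4
print(prior_predictive(0, chi, nu))    # (nu - chi) / nu = 0.6
</syntaxhighlight>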
 
Despite the analytical tractability of such distributions, they are in themselves usually not members of the [[exponential family]].  For example, the three-parameter [[Student's t distribution]], [[beta-binomial distribution]] and [[Dirichlet-multinomial distribution]] are all predictive distributions of exponential-family distributions (the [[normal distribution]], [[binomial distribution]] and [[multinomial distribution]]s, respectively), but none are members of the exponential family.  This can be seen above due to the presence of functional dependence on <math>\boldsymbol{\chi} + \mathbf{T}(x)</math>.  In an exponential-family distribution, it must be possible to separate the entire density function into multiplicative factors of three types: (1) factors containing only variables, (2) factors containing only parameters, and (3) factors whose logarithm factorizes between variables and parameters.  The presence of <math>\boldsymbol{\chi} + \mathbf{T}(x)</math> makes this impossible unless the "normalizing" function <math>f(\dots)\,</math> either ignores the corresponding argument entirely or uses it only in the exponent of an expression.
 
===Posterior predictive distribution in exponential families===
As noted above, when a conjugate prior is being used, the posterior predictive distribution belongs to the same family as the prior predictive distribution, and is determined simply by plugging the updated hyperparameters for the posterior distribution of the parameter(s) into the formula for the prior predictive distribution.  Using the general form of the posterior update equations for exponential-family distributions (see the [[exponential family#Bayesian estimation: conjugate distributions|appropriate section in the exponential family article]]), we can write out an explicit formula for the posterior predictive distribution:
 
: <math>p(\tilde{x}|\mathbf{X},\boldsymbol{\chi},\nu) = p_H\left(\tilde{x}|\boldsymbol{\chi} + \mathbf{T}(\mathbf{X}), \nu+N\right)</math>
 
where
 
:<math>\mathbf{T}(\mathbf{X}) = \sum_{i=1}^N \mathbf{T}(x_i)</math>
 
 
This shows that the posterior predictive distribution of a series of observations, in the case where the observations follow an [[exponential family]] with the appropriate [[conjugate prior]], has the same probability density as the compound distribution, with parameters as specified above. 
 
Note in particular that the observations themselves enter only in the form  <math>\mathbf{T}(\mathbf{X}) = \sum_{i=1}^N \mathbf{T}(x_i) .</math>
This is termed the ''[[sufficient statistic]]'' of the observations, because it tells us everything we need to know about the observations in order to compute a posterior or posterior predictive distribution based on them (or, for that matter, anything else based on the [[likelihood function|likelihood]] of the observations, such as the [[marginal likelihood]]).
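
As a concrete demonstration (Beta–Bernoulli again, with assumed hyperparameters), two different datasets with the same sufficient statistic yield exactly the same posterior predictive probability:

<syntaxhighlight lang="python">
import numpy as np

# Two different Bernoulli datasets with the same sufficient statistic
# T(X) = sum(x_i) = 3 and the same N = 6.
X1 = np.array([1, 1, 1, 0, 0, 0])
X2 = np.array([0, 1, 0, 1, 0, 1])

a, b = 2.0, 2.0                        # hypothetical Beta prior

def predictive_prob(X):
    # Beta-Bernoulli posterior predictive P(x_new = 1 | X):
    # it depends on X only through sum(X) and len(X).
    return (a + X.sum()) / (a + b + len(X))

print(predictive_prob(X1), predictive_prob(X2))   # identical: 0.5 0.5
</syntaxhighlight>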
 
===Joint predictive distribution, marginal likelihood===
It is also possible to consider the result of compounding a joint distribution over a fixed number of [[independent identically distributed]] samples with a prior distribution over a shared parameter.  In a Bayesian setting, this comes up in various contexts: computing the prior or posterior predictive distribution of multiple new observations, and computing the [[marginal likelihood]] of observed data (the denominator in [[Bayes' law]]).  When the distribution of the samples is from the exponential family and the prior distribution is conjugate, the resulting compound distribution will be tractable and follow a similar form to the expression above.  It is easy to show, in fact, that the joint compound distribution of a set <math>\mathbf{X} = \{x_1, \dots, x_N\}</math> for <math>N</math> observations is
 
:<math>p_H(\mathbf{X}|\boldsymbol{\chi},\nu) = \left( \prod_{i=1}^N h(x_i) \right) \dfrac{f(\boldsymbol{\chi},\nu)}{f\left(\boldsymbol{\chi} + \mathbf{T}(\mathbf{X}), \nu+N \right)}</math>
 
 
This result and the above result for a single compound distribution extend trivially to the case of a distribution over a vector-valued observation, such as a [[multivariate Gaussian distribution]].
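
For the Beta–Bernoulli case this joint compound distribution is precisely the marginal likelihood in closed form. The sketch below (illustrative prior and data) checks the closed form against a direct Monte Carlo average of the likelihood over the prior:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import beta as B

rng = np.random.default_rng(3)
a, b = 2.0, 3.0                        # hypothetical Beta prior
X = np.array([1, 0, 1, 1, 0])          # s = 3 successes out of N = 5
s, N = X.sum(), len(X)

# Closed form: every h(x_i) = 1 for Bernoulli, so the joint compound
# is f(chi, nu) / f(chi + T(X), nu + N) = B(a + s, b + N - s) / B(a, b).
ml_closed = B(a + s, b + N - s) / B(a, b)

# Monte Carlo check: average the joint likelihood over the prior.
theta = rng.beta(a, b, size=500_000)
ml_mc = np.mean(theta**s * (1 - theta)**(N - s))

print(ml_closed, ml_mc)                # should agree to a few decimals
</syntaxhighlight>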
 
 
==Relation to Gibbs sampling==
 
Note also that collapsing out a node in a [[collapsed Gibbs sampler]] is equivalent to [[compound distribution|compounding]].  As a result, when a set of [[independent identically distributed]] (i.i.d.) nodes all depend on the same prior node, and that node is collapsed out, the resulting [[conditional probability]] of one node given the others as well as the parents of the collapsed-out node (but not conditioning on any other nodes, e.g. any child nodes) is the same as the posterior predictive distribution of all the remaining i.i.d. nodes (or more correctly, formerly i.i.d. nodes, since collapsing introduces dependencies among the nodes).  That is, it is generally possible to implement collapsing out of a node simply by attaching all parents of the node directly to all children, and replacing the former conditional probability distribution associated with each child with the corresponding posterior predictive distribution for the child conditioned on its parents and the other formerly i.i.d. nodes that were also children of the removed node.  For an example, for more specific discussion and for some cautions about certain tricky issues, see the [[Dirichlet-multinomial distribution]] article.
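
As a small, purely illustrative sketch of this idea: for categorical nodes sharing a collapsed Dirichlet prior, the conditional distribution of one node given the others is the Dirichlet-multinomial posterior predictive, computable from counts alone (all names and numbers below are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

def collapsed_conditional(z, i, K, alpha):
    """P(z_i = k | z_{-i}, alpha) after collapsing out the shared
    Dirichlet(alpha) parameter: the posterior predictive
    (count_{-i,k} + alpha_k) / (N - 1 + sum(alpha))."""
    counts = np.bincount(np.delete(z, i), minlength=K)
    p = counts + alpha
    return p / p.sum()

# Tiny demo: one collapsed-Gibbs update of a toy assignment vector.
K = 3
alpha = np.full(K, 0.5)
z = np.array([0, 0, 1, 2, 2, 2])
p = collapsed_conditional(z, i=0, K=K, alpha=alpha)
z[0] = rng.choice(K, p=p)
print(p)   # the conditional reflects the other nodes' counts plus alpha
</syntaxhighlight>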
 
==See also==
* [[Compound probability distribution]]
* [[Marginal probability]]
 
[[Category:Probability theory]]
[[Category:Bayesian statistics]]
