Paul Milgrom

Paul Robert Milgrom (born April 20, 1948 in Detroit, Michigan) is an American economist. He is the Shirley and Leonard Ely Professor of Humanities and Sciences at Stanford University, a position he has held since 1987. Professor Milgrom is an expert in game theory, specifically auction theory and pricing strategies. He is the co-creator of the no-trade theorem with Nancy Stokey. He is the co-founder of several companies, the most recent of which, Auctionomics,[1] provides software and services that create efficient markets for complex commercial auctions and exchanges.

Biography

Paul Milgrom[2] was born to Abraham Isaac Milgrom (born in Toronto, Canada) and Anne Lillian Finkelstein (born in Detroit). He was the second of four sons. When he was six, his family moved to Oak Park, Michigan, and Milgrom attended the Dewey School and then Oak Park High School (Michigan). In high school, Milgrom learned to play and analyze chess. He soon shifted his interest in strategic games to bridge. Milgrom showed an early interest in mathematics, attended summer programs at Ohio State University, and entered the Michigan Mathematics Prize Competition while in high school.

Milgrom graduated with high honors from the University of Michigan in 1970 with an A.B. in mathematics. He was also actively involved in the Vietnam War protest movement. He worked as an actuary for several years in San Francisco at the Metropolitan Insurance Company and then at the Nelson and Warren consultancy in Columbus, Ohio. Milgrom became a Fellow of the Society of Actuaries in 1974. In 1975, Milgrom enrolled for graduate studies at Stanford University in the MBA program. After his first year, he was invited to the Ph.D. program, earning an M.S. in statistics in 1978 and a Ph.D. in business in 1979. His dissertation on the theory of auctions (Milgrom, 1979a) won the Leonard Savage prize. This also led to the first of his several seminal articles on auction theory (Milgrom, 1979b). His thesis advisor, Robert B. Wilson, would later become his collaborator in designing the spectrum auction used by the Federal Communications Commission.

After earning his Ph.D., Milgrom assumed a teaching position at the Kellogg School of Management at Northwestern University where he served from 1979 to 1983. At Kellogg's Department of Managerial Economics and Decision Sciences (MEDS), Milgrom was part of a group of professors including future Nobel laureate Roger Myerson, Bengt Holmstrom, Nancy Stokey, Robert J. Weber, John Roberts and Mark Satterthwaite that helped to bring game theory and information economics to bear on a wide range of problems in economics such as pricing, auctions, financial markets, and industrial organization.

At MEDS, Milgrom was influential in developing a deeper appreciation of how the mathematics of probability can be applied to economic theory. He emphasized that, for example, the mathematics of conditional expectation was essential to understanding applied informational questions like the winner's curse. His work with Robert Weber on distributional strategies introduced new ways of using topological properties of probability spaces in the analysis of games where players have different information.

Weber recounted his collaboration with Milgrom. During what was supposed to be a brief meeting to ponder a problem faced by Weber, Milgrom had a key insight. Weber wrote, "And there, in a matter of a few minutes, was the heart of our first two joint papers."[3]

From 1982 to 1987, Milgrom was a professor of economics and management at Yale University. In 1987, Milgrom returned as an economics professor to his alma mater, Stanford University, where he is currently the Shirley and Leonard Ely Professor of Humanities and Sciences in the Department of Economics. He was the doctoral thesis advisor for several students, notably John Bates Clark Medal winner Susan Athey.

Milgrom held editorial positions at various prestigious journals including the American Economic Review, Econometrica and the Journal of Economic Theory. He became a Fellow of the Econometric Society in 1984, and the American Academy of Arts and Sciences in 1992. In 1996, he gave the Nobel memorial lecture[4] honoring the laureate William Vickrey, who had died just three days after the Nobel prize announcement. In 2006, Milgrom was elected to the National Academy of Sciences.

Milgrom received the Erwin Plein Nemmers Prize in Economics in 2008 "for contributions dramatically expanding the understanding of the role of information and incentives in a variety of settings, including auctions, the theory of the firm, and oligopolistic markets." He also received the 2012 BBVA Frontiers of Knowledge Award in the area of economics, finance and management "for his seminal contributions to an unusually wide range of fields of economics including auctions, market design, contracts and incentives, industrial economics, economics of organizations, finance, and game theory."


In 2013, Milgrom was elected as Vice President of the American Economic Association.[4] On 19–20 April 2013, a conference was held at Stanford University in honor of Milgrom's 65th birthday.[5]

Personal life

While living in Columbus, Ohio in the early 1970s, Milgrom met and later married Jan Thurston. They had two children, Josh Thurston-Milgrom and Elana Thurston-Milgrom. Milgrom also has a grandson.

He is now married to Eva Meyersson Milgrom, whom he met in Sweden on December 10, 1996, when he was seated next to her at the Nobel Prize dinner. Milgrom has a stepson, Erik Gustaf Meyersson.

Research

Milgrom has made important contributions to several fields of economics, including auction theory, game theory, information economics, industrial organization, and the theory of organizations. He has published nearly 100 papers, and his papers have received more than 55,000 citations on Google Scholar.

Context and Overview

Economic theory underwent a major change in the late 1970s and the early 1980s. While the general equilibrium theory of perfectly competitive markets had been the main focus of theoretical research, a host of young researchers started to tackle new sets of problems using the tools of modern non-cooperative game theory. Those researchers realized that a number of important economic problems lay outside the realm of perfectly competitive markets and that they could be fruitfully analyzed by focusing on incentives and information. Milgrom was one of the leading figures in this new movement in economic theory.

The new movement in economic theory provided a closer look at how the market mechanism works. In particular, while traditional economic theory paid little attention to the detailed procedure of price formation, Auction Theory focuses on how the market price is formed under clearly specified procedures, taking into account the fact that participants in the market have diverse private information. Milgrom made fundamental contributions to Auction Theory. One of his first papers (Milgrom, 1979b) solved a long-standing open problem about how auctions correctly aggregate private information held by the bidders. Milgrom and Weber (1982) provided fundamental results for the case in which bidders' valuations are interdependent. In a paper closely related to auctions (Glosten and Milgrom, 1985), Milgrom made a seminal contribution to the theory of Market Microstructure, which analyzes detailed price-formation mechanisms in financial markets. In the 1990s Milgrom went one step further, applying auction theory to solve important practical problems, most notably the FCC spectrum auction in 1994. This was an important event for economic theory, which had reached the stage where engineering applications to practical problems became feasible. Milgrom is one of the leading figures in this respect, and he is one of the founders of the new research area of Market Design. Milgrom (2004) is a landmark monograph in Market Design.

Milgrom also demonstrated that important stylized facts in Industrial Organization, which had previously been analyzed under ad-hoc assumptions, can be consistently explained by game theoretic analysis under asymmetric information. In a collection of highly influential papers with John Roberts, he showed that predatory practices, limit pricing (charging a low price, perhaps even below marginal cost, to discourage entry), and wasteful spending on apparently uninformative advertising can all be rational strategic behavior under asymmetric information. Milgrom is one of the pioneers who rewrote the theory of Industrial Organization on the basis of the logic of modern game theory.

Another innovation provided by Milgrom was to show that the activities within a firm or an organization, which had been widely regarded as topics for management scientists or organization theorists from other disciplines, could lend themselves to formal mathematical analysis. In particular, Milgrom contributed to the formation of a new research area, Contract Theory, which provides formal analysis of incentives, both within organizations and in markets. Milgrom analyzed the optimal design of incentive schemes and organizations, and he showed that common practices in reality, such as the use of simple piece rates, can be optimal under a realistic set of assumptions.

In terms of pure economic theory, Milgrom provided a fundamental analysis of “complements”, a set of variables that tend to move in the same direction because increasing any one of them increases the payoff to increasing the others, in a very general setting. Milgrom provided formal analyses of strategic complementarities (complementarities among the choices of different players in a game) and supermodularity, and he went on to derive a number of implications in various fields of economics.

Upon receiving the Nemmers Prize in 2008, the official release[5] highlighted the following:

"Milgrom's path-breaking work has developed and popularized new tools for the analysis of asymmetric information and strategic interaction and, most significantly, has shown the usefulness of those tools for the analysis of applied problems," said Charles Manski, professor and chair of economics at Northwestern. Milgrom's work on auctions helped lay the groundwork for one of the most fruitful research areas in microeconomics over the last 30 years. His work on the theory of the firm has been equally influential. Milgrom has also made important contributions to the study of how asymmetric information can affect firm behavior in oligopolistic markets.

The jury citation for the BBVA Award wrote:[6]

His work on auction theory is probably his best-known. He has explored issues of design, bidding and outcomes for auctions with different rules. He designed auctions for multiple complementary items, with an eye towards practical applications such as frequency spectrum auctions. Professor Milgrom’s research in industrial organization includes influential studies on limit pricing, entry deterrence, predation, and advertising. In addition, Milgrom has added important novel insights to finance, particularly in connection to speculative trading and market micro-structure. The common theme of his works on auctions, industrial strategies, and financial markets is that economic actors infer from prices and other observables information about the fundamental market values.

He has also contributed to agency theory by describing conditions under which linear incentives are optimal, and by developing a tractable model of multitask agency relationships. His work on contract and organization theory has been very influential in management science. Finally, Professor Milgrom has contributed to mathematical economics and game theory, with studies on reputation and adaptive learning.

Game Theory

Milgrom made several fundamental contributions to game theory in the 1980s and 1990s on topics including the game-theoretic analysis of reputation formation, repeated games, supermodular games and learning in games.

Reputation Formation

From the game-theoretic perspective, a starting point for the theory of reputation formation is the repeated prisoners' dilemma. In a standard, one-shot prisoners' dilemma, the only equilibrium is (Defect, Defect), which yields a Pareto inefficient outcome. Similarly, when the prisoners' dilemma is repeated a fixed number of times, defection in every period remains the unique equilibrium outcome (that this is the only perfect equilibrium follows by backward induction, but the claim is true of simple Nash equilibrium too). This seems counterintuitive, since the players have a strong incentive to cooperate and have a wide range of strategies at their disposal. For example, if one player could commit to playing a tit-for-tat strategy, then it would be optimal for the other player to cooperate until the last few periods of the game, which would yield a Pareto superior outcome. In an influential 1982 paper with David M. Kreps, John Roberts, and Robert B. Wilson, Milgrom showed that if one or both players have even a very small probability of being committed to playing tit-for-tat, then in equilibrium both players cooperate until the last few periods. This is because even an uncommitted player has an incentive to “build a reputation” for being committed to tit-for-tat, as doing so makes the other player want to cooperate. The Kreps-Milgrom-Roberts-Wilson "Gang of Four" paper launched an entire branch of the game theory literature on such “reputation effects.”
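The incentive to "build a reputation" can be seen in a small numerical sketch. The code below (an illustration only; the payoff values 3, 0, 5, 1 are conventional prisoners'-dilemma numbers, not taken from the 1982 paper) compares two strategies against an opponent committed to tit-for-tat over ten rounds: always defecting, versus cooperating until the final round.

```python
# Stage-game payoffs to "me" given (my move, opponent's move); C = cooperate,
# D = defect. These are standard illustrative values, not from the paper.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def payoff_vs_tit_for_tat(my_moves):
    """Total payoff of a fixed move sequence against tit-for-tat,
    which cooperates in round 1 and then copies our previous move."""
    total, opponent_move = 0, "C"
    for move in my_moves:
        total += PAYOFFS[(move, opponent_move)]
        opponent_move = move  # tit-for-tat mirrors us next round
    return total

N = 10
print(payoff_vs_tit_for_tat(["D"] * N))                # always defect: 14
print(payoff_vs_tit_for_tat(["C"] * (N - 1) + ["D"]))  # cooperate, then defect last: 32
```

Against a committed tit-for-tat type, sustained cooperation more than doubles the payoff of immediate defection, which is why even an uncommitted player mimics the committed type until near the end.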

Distributional Strategies

Milgrom's 1985 paper with Robert J. Weber on distributional strategies showed the general existence of equilibria for a Bayesian game with finitely many players, if the players' sets of types and actions are compact metric spaces, the players' payoffs are continuous functions of the types and actions, and the joint distribution of the players' types is absolutely continuous with respect to the product of their marginal distributions. These basic assumptions are always satisfied if the sets of types and actions are finite.

Repeated Games

Milgrom made a fundamental contribution to the theory of repeated games. When players’ actions are hidden and only noisy signals about their actions are observable (i.e., in the case of imperfect monitoring), there are two general ways to achieve efficiency. One way is to transfer future payoffs from one player to others. This is a way to punish a potential deviator without reducing the total future payoffs. The classical folk theorem result under imperfect monitoring[7] is built on this idea. The second general method is to delay the release of information. Under the second method, the outcomes of the noisy signals are released every T periods, and upon the release of information players “review” the signals from the last T periods and decide to punish or reward each other. This is now widely known as the “review strategy”, and Milgrom’s paper with D. Abreu and D. Pearce (Abreu, Milgrom and Pearce, 1991) was the first to show the efficiency of review-strategy equilibrium in discounted repeated games. The review strategy turns out to be useful when players receive private signals about each other’s actions (the case of private monitoring), and the folk theorem for the private monitoring case[8] is built on the idea of the review strategy.

Supermodular Games

The theory of supermodular games is one of the most impressive and important recent developments in economic theory. Key contributions to this theory include the seminal work of Topkis (Topkis's theorem), Vives (1990),[9] and the important paper by Milgrom and Roberts (1990c).

The impact and importance of the theory of supermodular games come from its breadth of application, including search, technology adoption, bank runs, arms races, pretrial negotiations, two-player Cournot competition, N-player Bertrand competition, oil exploration, and the economics of organizations (Milgrom and Roberts, 1990b).

There are two basic reasons why the theory of supermodular games has had a major and enduring impact on both theoretical and applied economics. First, the theory provides robust predictions under minimal behavioral assumptions. The Milgrom-Roberts (1990c) paper’s first central result (Theorem 5) is that (i) each player has a largest and a smallest rationalizable strategy (not at all obvious when players’ strategies are multidimensional) and (ii) the strategy profile in which every player adopts its largest rationalizable strategy is a pure-strategy Nash equilibrium, as is the strategy profile in which every player adopts its smallest rationalizable strategy. What this means is that, as a solution concept, Nash equilibrium imposes the same “bounds” on behavior as rationalizability. (Moreover, the paper shows that the extremal Nash equilibria also often possess extremal welfare properties.)

Second, the theory generates powerful equilibrium comparative statics results. Suppose that players’ payoffs are influenced by some parameter X and, in particular, that each player’s payoff satisfies increasing differences in its own strategy and X. The paper’s second central result (Theorem 6) is that the largest and smallest equilibria are themselves increasing in X. An intuition is that an increase in the parameter X has both a direct effect (due to increasing differences) and an indirect effect (since the game is supermodular and others are playing higher strategies) that encourage players to play higher strategies.
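Both results can be illustrated numerically in a simple supermodular game. The sketch below (an illustrative symmetric Bertrand duopoly with strategic complements; the demand and cost numbers are hypothetical, not from the paper) iterates best responses upward from the smallest price and downward from the largest, converging to the smallest and largest equilibria, and shows that raising a cost parameter shifts both upward:

```python
import numpy as np

GRID = np.arange(0, 101)  # feasible prices

def best_response(p_other, c, a=60.0, b=0.5):
    """Profit-maximizing price against the rival's price p_other.
    Demand a - p + b*p_other rises with the rival's price, so best
    responses are increasing in p_other: the game is supermodular."""
    profits = (GRID - c) * np.maximum(0.0, a - GRID + b * p_other)
    return int(GRID[np.argmax(profits)])

def extremal_equilibrium(start, c):
    """Iterate the symmetric best-response map to a fixed point."""
    p = start
    for _ in range(1000):
        p_next = best_response(p, c)
        if p_next == p:
            return p
        p = p_next
    raise RuntimeError("did not converge")

low = extremal_equilibrium(0, c=10)     # climb up from the smallest price
high = extremal_equilibrium(100, c=10)  # fall down from the largest price
assert low <= high                      # extremal equilibria bracket the rest
assert extremal_equilibrium(0, c=20) > low  # higher cost, higher equilibrium
```

The monotone iterations mirror Theorem 5 (extremal equilibria exist and bound rationalizable play), and the comparison across cost levels mirrors the equilibrium comparative statics of Theorem 6.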

Learning in Games

Milgrom and Roberts built on their work on supermodular games to understand the processes by which strategic agents reach equilibrium in a normal-form game. In Milgrom and Roberts (1991), they proposed two learning processes, each general enough to cover broad classes of learning dynamics rather than any single model of learning. They considered a sequence of plays over time which, for a player n, is denoted {xn(t)}, where for each time t, xn(t) is a pure strategy. Given this, an observed sequence {xn(t)} is consistent with adaptive learning if player n eventually chooses only strategies that are nearly best-replies to some probability distribution over the joint strategies of the other players (with near-zero probability assigned to strategies that have not been played for a sufficiently long time). By contrast, {xn(t)} is consistent with sophisticated learning if the player eventually chooses only nearly best-replies to their probabilistic forecast of the choices of the other players, where the support of that forecast may include not only past plays but also strategies that the other players might choose if they themselves were adaptive or sophisticated learners. Thus, a sequence consistent with adaptive learning is also consistent with sophisticated learning. Sophisticated learning allows players to make use of payoff information that is used in equilibrium analysis but does not impose the fulfilled-expectations requirement of equilibrium analysis.

With these definitions in place, Milgrom and Roberts showed that if a sequence converges to a Nash equilibrium or correlated equilibrium then it is consistent with adaptive learning. This gave a certain generality to those processes. They then showed how these processes related to the elimination of dominated strategies. This was shown to have implications for convergence in Cournot and Bertrand games.[10]
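As an illustration of a sequence consistent with adaptive learning (a standard fictitious-play example, not Milgrom and Roberts' own construction), consider two players who each best-respond to the empirical frequency of the opponent's past play in a 2x2 coordination game with hypothetical payoffs; the play path converges to a Nash equilibrium:

```python
import numpy as np

# Symmetric coordination game (illustrative payoffs): both players earn 2
# by coordinating on action 0, 1 by coordinating on action 1, 0 otherwise.
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # row player's payoff A[i, j]
B = A.copy()                            # column player's payoff B[i, j]

def fictitious_play(rounds=100):
    counts1 = np.ones(2)  # player 1's counts of player 2's past actions
    counts2 = np.ones(2)  # player 2's counts of player 1's past actions
    history = []
    for _ in range(rounds):
        a1 = int(np.argmax(A @ (counts1 / counts1.sum())))  # best reply to beliefs
        a2 = int(np.argmax((counts2 / counts2.sum()) @ B))  # best reply to beliefs
        history.append((a1, a2))
        counts1[a2] += 1
        counts2[a1] += 1
    return history

play = fictitious_play()
print(play[-1])  # locks on to the Nash equilibrium (0, 0)
```

Because the realized sequence converges to a Nash equilibrium, it is, by the Milgrom-Roberts result, consistent with adaptive learning.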

Comparative Statics

Comparative statics analyses, the study of how individual decisions and equilibria react to changes in the economic environment, are pervasive in economics. They are used to analyze how equilibrium prices and quantities react to demand and supply shocks, to study complementarities between goods, tasks, or workers, to measure equality or similarity within social groups, to help establish the existence of equilibrium in various games, to study the stability of matching procedures, and to predict market reactions to incoming news, to cite only a few applications.

Apart from the explicit computation of equilibrium variables (such as in the standard Cournot game), early graphical analyses and a few cases where direct, "revealed preference" arguments could be used, the most common general comparative statics techniques were based on Hicks’s and Samuelson's approach via the implicit function theorem. This approach typically relies on concavity and differentiability assumptions to track the local evolution of an equilibrium point that varies smoothly with exogenous parameters.

Milgrom's research has often highlighted the restrictiveness (and often superfluity) of these assumptions in economic applications. For example, in the study of modern manufacturing (Milgrom and Roberts, 1990b), one would like to focus on the complementarity or substitutability across production inputs, without making assumptions on scale economies or divisibility (through a concavity condition on the production function).

Monotonic relationships, in which more of one quantity implies more of another, are pervasive in economic analysis. Milgrom pioneered the development of new mathematical methods for understanding monotonic relationships in economics. His work on auctions with Robert Weber introduced the concept of affiliation of random variables to describe systems of unknown quantities in which learning that any one of them is higher than some given level raises beliefs about the others. His work with John Roberts and Chris Shannon advanced the use of supermodularity as a property of individuals' preferences that can yield general monotonicity results in economic analysis.

The work of Milgrom and Shannon (1994) showed that comparative statics results could often be obtained through more relevant and intuitive ordinal conditions. Indeed, they show that their concept of quasi-supermodularity (a generalization of supermodular function) along with the single-crossing property, is necessary and sufficient for comparative statics to obtain on arbitrary choice sets. Their theory extends earlier work in the Operations Research literature (Topkis, 1968;[11] Veinott, 1989[12]) which already uses lattice theory but focuses on cardinal concepts. Milgrom and John Roberts (1994) extended this to comparative statics on equilibria, while Milgrom (1994) demonstrated its wider applicability in comparing optima. Milgrom and Roberts (1996) also generalized Paul Samuelson's application of Le Chatelier's Principle in economics. In related work, Milgrom and Ilya Segal (2002) reconsidered the Envelope Theorem and its applications in light of the developments in monotone comparative statics. Due to the influence of Milgrom and Shannon's paper and related research by Milgrom and others, these techniques, now often referred to as monotone comparative statics, are widely known and used in economic modelling.
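The flavor of these results can be shown with a toy example (an illustrative objective, not from the papers): when a payoff function has increasing differences in the choice variable and a parameter, the maximizer moves up with the parameter, with no concavity or differentiability assumptions needed, here over an arbitrary discrete choice set:

```python
import numpy as np

GRID = np.linspace(0.0, 10.0, 1001)  # discrete choice set; no smoothness assumed

def best_choice(t):
    """Maximizer over the grid of f(x, t) = t*x - x**2/2.
    f has increasing differences in (x, t): for x' > x, the gain
    f(x', t) - f(x, t) rises with t, so by Topkis-style arguments
    the maximizer is nondecreasing in t."""
    return float(GRID[np.argmax(t * GRID - 0.5 * GRID**2)])

assert best_choice(2.0) <= best_choice(5.0) <= best_choice(8.0)
```

The conclusion depends only on the ordinal increasing-differences (single-crossing) structure, which is why the Milgrom-Shannon conditions apply on arbitrary choice sets where implicit-function-theorem methods do not.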

The single-crossing property as reformulated by Milgrom and Shannon was subsequently shown by Joshua Gans and Michael Smart not only to resolve Condorcet's Voting paradox in majority voting and social choice theory but also to give rise to a complete characterization of social preferences.[13] Susan Athey extended these results to consider economic problems with uncertainty.[14]

Milgrom's work on comparative statics illustrates an important element of his philosophy regarding theoretical modelling in economics. Writing in 1994 on the subject, after presenting a theorem showing when a result derived under a specific functional form generalizes, Milgrom wrote:

These conclusions do not mean that functional form assumptions are either useless or inconsequential for economic analysis. Functional form assumptions may be helpful for deriving explicit formulas for empirical estimation or simulations or simply to lend insight into the problem structure, and they certainly can help determine the magnitude of comparative statics effects. But with economic knowledge at its current state, functional form assumptions are never really convincing, and this lends importance to the question I ask and to its answer: One can indeed often draw valid general comparative statics inferences from special cases.

....these results suggest that comparative statics conclusions obtained in models with special simplifying assumptions can often be significantly generalized. The theorems help to distinguish the critical assumptions of an analysis from the other assumptions that simplify calculations but do not alter the qualitative comparative statics conclusions. In that way, the theorems improve our ability to develop useful models of parts of the economy and to interpret those models accurately.

Market Design

In his 2008 Nemmers Prize lecture,[15] Milgrom gave the following definition of Market Design:

Market design is a kind of economic engineering, utilizing laboratory research, game theory, algorithms, simulations, and more. Its challenges inspire us to rethink longstanding fundamentals of economic theory.

He outlined that two broad theoretical and practical efforts defined the field: auction theory and matching theory. Milgrom has contributed to both and also, in many respects, to their synthesis.

Auction Theory

Early research on auctions focused on two special cases: common value auctions in which buyers have private signals of an item's true value, and private value auctions in which values are identically and independently distributed. Milgrom and Weber (1982) present a much more general theory of auctions with positively related values. Each of n buyers receives a private signal ${\displaystyle {{x}_{i}}}$. Buyer i’s value ${\displaystyle \phi ({{x}_{i}},{{x}_{-i}})}$ is strictly increasing in ${\displaystyle {{x}_{i}}}$ and is an increasing symmetric function of ${\displaystyle {{x}_{-i}}}$. If signals are independently and identically distributed, then buyer i’s expected value ${\displaystyle {{v}_{i}}={{E}_{{x}_{-i}}}\{\phi ({{x}_{i}},{{x}_{-i}})\}}$ is independent of the other buyers’ signals. Thus, the buyers’ expected values are independently and identically distributed. This is the standard private value auction. For such auctions the revenue equivalence theorem holds. That is, expected revenue is the same in the sealed first-price and second-price auctions.
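For the independent private values benchmark, revenue equivalence can be checked by simulation. The sketch below (an illustrative Monte Carlo, assuming values uniform on [0, 1], for which the symmetric first-price equilibrium bid is known to be B(v) = (n-1)v/n) compares expected revenue across the two formats:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 200_000
values = rng.uniform(size=(trials, n))  # iid private values on [0, 1]

# Second-price auction: bidding one's value is dominant, so the winner
# pays the second-highest value.
revenue_second = np.sort(values, axis=1)[:, -2].mean()

# First-price auction: in the symmetric equilibrium for uniform values,
# each bidder bids B(v) = (n - 1) / n * v; the winner pays the highest bid.
revenue_first = ((n - 1) / n * values.max(axis=1)).mean()

print(revenue_first, revenue_second)  # both close to (n - 1)/(n + 1) = 0.6
```

The two estimates agree to within sampling error, matching the revenue equivalence theorem; the Milgrom-Weber contribution below is to show how this equivalence breaks when signals are affiliated rather than independent.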

Milgrom and Weber assumed instead that the private signals are “affiliated”. With two buyers, the random variables ${\displaystyle {{v}_{1}}}$ and ${\displaystyle {{v}_{2}}}$ with probability density function ${\displaystyle f({{v}_{1}},{{v}_{2}})}$ are affiliated if

${\displaystyle f({{v}_{1}}^{\prime },{{v}_{2}}^{\prime })f({{v}_{1}},{{v}_{2}})\geq f({{v}_{1}},{{v}_{2}}^{\prime })f({{v}_{1}}^{\prime },{{v}_{2}})}$, for all ${\displaystyle {{v}_{1}}^{\prime }\geq {{v}_{1}}}$ and all ${\displaystyle {{v}_{2}}^{\prime }\geq {{v}_{2}}}$.

Rearranging this inequality and integrating with respect to ${\displaystyle {{v}_{2}}^{\prime }}$ it follows that

${\displaystyle {\frac {F({{v}_{2}}|{{v}_{1}}^{\prime })}{f({{v}_{2}}|{{v}_{1}}^{\prime })}}\geq {\frac {F({{v}_{2}}|{{v}_{1}})}{f({{v}_{2}}|{{v}_{1}})}}}$, for all ${\displaystyle {{v}_{2}}}$ and all ${\displaystyle {{v}_{1}}^{\prime }<{{v}_{1}}}$. (1)

It is this implication of affiliation that is critical in the discussion below.

For more than two symmetrically distributed random variables, let ${\displaystyle V=\{{{v}_{1}},...,{{v}_{n}}\}}$ be a set of random variables that are continuously distributed with joint probability density function f(v). The n random variables are affiliated if

${\displaystyle f(v\vee {v}')f(v\wedge {v}')\geq f(v)f({v}')}$ for all ${\displaystyle v}$ and ${\displaystyle {v}'}$, where ${\displaystyle \vee }$ and ${\displaystyle \wedge }$ denote the componentwise maximum and minimum.

Revenue Ranking Theorem (Milgrom and Weber[16])

Suppose each of n buyers receives a private signal ${\displaystyle {{x}_{i}}}$. Buyer i’s value ${\displaystyle \phi ({{x}_{i}},{{x}_{-i}})}$ is strictly increasing in ${\displaystyle {{x}_{i}}}$ and is an increasing symmetric function of ${\displaystyle {{x}_{-i}}}$. If signals are affiliated, the equilibrium bid ${\displaystyle {{b}_{i}}=B({{x}_{i}})}$ in a sealed first-price auction is smaller than the equilibrium expected payment in the sealed second-price auction.

The intuition for this result is as follows: In the sealed second-price auction the expected payment of a winning bidder with value v is based on their own information. By the revenue equivalence theorem if all buyers had the same beliefs, there would be revenue equivalence. However, if values are affiliated, a buyer with value v knows that buyers with lower values have more pessimistic beliefs about the distribution of values. In the sealed high-bid auction such low value buyers therefore bid lower than they would if they had the same beliefs. Thus the buyer with value v does not have to compete so hard and bids lower as well. Thus the informational effect lowers the equilibrium payment of the winning bidder in the sealed first-price auction.

Equilibrium bidding in the sealed first- and second-price auctions: We consider here the simplest case in which there are two buyers and each buyer’s value ${\displaystyle {{v}_{i}}=\phi ({{x}_{i}})}$ depends only on his own signal. Then the buyers’ values are private and affiliated. In the sealed second-price (or Vickrey) auction, it is a dominant strategy for each buyer to bid his value. If both buyers do so, then a buyer with value v has an expected payment of

${\displaystyle e(v)={\frac {\int \limits _{0}^{v}{}yf(y|v)dy}{F(v|v)}}}$. (2)

In the sealed first-price auction, the increasing bid function B(v) is an equilibrium if bidding strategies are mutual best responses. That is, if buyer 1 has value v, his best response is to bid b = B(v) if he believes that his opponent is using this same bidding function. Suppose buyer 1 deviates and bids b = B(z) rather than B(v). Let U(z) be his resulting payoff. For B(v) to be an equilibrium bid function, U(z) must take on its maximum at z = v. With a bid of b = B(z), buyer 1 wins if

${\displaystyle B({{v}_{2}})<B(z)}$, that is, if ${\displaystyle {{v}_{2}}<z}$.

The win probability is then ${\displaystyle w=F(z|v)}$ so that buyer 1’s expected payoff is

${\displaystyle U(z)=w(v-B(z))=F(z|v)(v-B(z))}$.

Taking logs and differentiating by z,

${\displaystyle {\frac {{{U}^{\prime }}(z)}{U(z)}}={\frac {{w}'(z)}{w(z)}}-{\frac {{B}'(z)}{v-B(z)}}={\frac {f(z|v)}{F(z|v)}}-{\frac {{B}'(z)}{v-B(z)}}}$. (3)

The first term on the right hand side is the proportional increase in the win probability as the buyer raises his bid from ${\displaystyle B(z)}$ to ${\displaystyle B(z+\Delta z)}$. The second term is the proportional drop in the payoff if the buyer wins. We have argued that, for equilibrium, U(z) must take on its maximum at z = v . Substituting for z in (3) and setting the derivative equal to zero yields the following necessary condition.

${\displaystyle {B}'(v)={\frac {f(v|v)}{F(v|v)}}(v-B(v))}$. (4)
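As a consistency check on (4), consider the simplest special case (an illustrative example, not part of the original exposition): two bidders whose values are independent draws from the uniform distribution on [0, 1]. Independence implies ${\displaystyle F(v|v)=F(v)=v}$ and ${\displaystyle f(v|v)=1}$, so (4) becomes

${\displaystyle {B}'(v)={\frac {1}{v}}(v-B(v))}$,

whose solution with ${\displaystyle B(0)=0}$ is ${\displaystyle B(v)=v/2}$: each bidder shades his bid to half his value. This agrees with (2), since ${\displaystyle e(v)={\frac {1}{v}}\int \limits _{0}^{v}{}y\,dy={\frac {v}{2}}}$, exactly as the revenue equivalence theorem requires in the independent case.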

Proof of the revenue ranking theorem

Buyer 1 with value x has conditional p.d.f. ${\displaystyle f({{v}_{2}}|x)}$. Suppose that he naively believes that all other buyers have the same beliefs. In the sealed high-bid auction he computes the equilibrium bid function using these naive beliefs. Arguing as above, condition (3) becomes

${\displaystyle {\frac {{{U}^{\prime }}(z)}{U(z)}}={\frac {f(z|x)}{F(z|x)}}-{\frac {{B}'(z)}{v-B(z)}}}$. (3’)

Since x > v, it follows by affiliation (see condition (1)) that the proportional gain to bidding higher is bigger under the naive beliefs, which place higher mass on higher values. Arguing as before, a necessary condition for equilibrium is that (3’) must be zero at z = v. Therefore the equilibrium bid function ${\displaystyle {{B}_{x}}(v)}$ satisfies the following differential equation.

${\displaystyle {{B}_{x}}^{\prime }(v)={\frac {f(v|x)}{F(v|x)}}(v-{{B}_{x}}(v))}$ . (5)

Appealing to the revenue equivalence theorem, if all buyers have values that are independent draws from the same distribution then the expected payment of the winner is the same in the two auctions. Therefore ${\displaystyle {{B}_{x}}(x)=e(x)}$. Thus, to complete the proof we need to establish that ${\displaystyle B(x)\leq {{B}_{x}}(x)}$. Appealing to (1), it follows from (4) and (5) that, for all v < x,

${\displaystyle {{B}_{x}}^{\prime }(v)\geq \left({\frac {v-{{B}_{x}}(v)}{v-B(v)}}\right){B}'(v)}$.

Therefore, for any v in the interval [0, x],

${\displaystyle B(v)-{{B}_{x}}(v)>0\;\Rightarrow \;{B}'(v)-{{B}_{x}}^{\prime }(v)<0}$.

Suppose that ${\displaystyle B(x)>{{B}_{x}}(x)}$. Since the equilibrium bid of a buyer with value 0 is zero, there must be some y < x such that

${\displaystyle (i)\;B(y)-{{B}_{x}}(y)=0}$ and ${\displaystyle (ii)\;B(v)-{{B}_{x}}(v)>0,\;\forall v\in (y,x]}$.

But this is impossible, since we have just shown that over such an interval ${\displaystyle B(v)-{{B}_{x}}(v)}$ is decreasing. Since ${\displaystyle {{B}_{x}}(x)=e(x)}$, it follows that the winning bidder's expected payment is lower in the sealed high-bid auction.

Ascending auctions with package bidding

Milgrom has also contributed to the understanding of combinatorial auctions. In work with Larry Ausubel (Ausubel and Milgrom, 2002), auctions of multiple items, which may be substitutes or complements, are considered. They define a mechanism, the “ascending proxy auction,” constructed as follows. Each bidder reports his values to a proxy agent for all packages that the bidder is interested in. Budget constraints can also be reported. The proxy agent then bids in an ascending auction with package bidding on behalf of the real bidder, iteratively submitting the allowable bid that, if accepted, would maximize the real bidder’s profit (value minus price), based on the reported values. The auction is conducted with negligibly small bid increments. After each round, provisionally winning bids are determined that maximize the total revenue from feasible combinations of bids. All of a bidder’s bids are kept live throughout the auction and are treated as mutually exclusive. The auction ends after a round occurs with no new bids. The ascending proxy auction may be viewed either as a compact representation of a dynamic combinatorial auction or as a practical direct mechanism, the first example of what Milgrom would later call a “core selecting auction.”

They prove that, with respect to any reported set of values, the ascending proxy auction always generates a core outcome, i.e. an outcome that is feasible and unblocked. Moreover, if bidders’ values satisfy the substitutes condition, then truthful bidding is a Nash equilibrium of the ascending proxy auction and yields the same outcome as the Vickrey-Clarke-Groves (VCG) mechanism. However, the substitutes condition is robustly a necessary as well as a sufficient condition: if just one bidder’s values violate the substitutes condition, then with appropriate choice of three other bidders with additively-separable values, the outcome of the VCG mechanism lies outside the core; and so the ascending proxy auction cannot coincide with the VCG mechanism and truthful bidding cannot be a Nash equilibrium. They also provide a complete characterization of substitutes preferences: Goods are substitutes if and only if the indirect utility function is submodular.

Ausubel and Milgrom (2006a, 2006b) exposit and elaborate on these ideas. The first of these articles, entitled “The Lovely but Lonely Vickrey Auction,” made an important point in market design. The VCG mechanism, while highly attractive in theory, suffers from a number of possible weaknesses when the substitutes condition is violated, making it a poor candidate for empirical applications. In particular, the VCG mechanism may exhibit: low (or zero) seller revenues; non-monotonicity of the seller’s revenues in the set of bidders and the amounts bid; vulnerability to collusion by a coalition of losing bidders; and vulnerability to the use of multiple bidding identities by a single bidder. This may explain why the VCG auction design, while so lovely in theory, is so lonely in practice.
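The zero-revenue failure is easy to exhibit numerically (the bidders and values below are hypothetical, not from the papers): two items, one bidder who views them as pure complements in violation of the substitutes condition, and two single-item bidders. A brute-force VCG computation then produces an efficient sale that raises nothing for the seller:

```python
from itertools import product

# Hypothetical example of VCG's zero-revenue problem: two items A and B.
# Bidder 0 values only the full package (complements, violating substitutes);
# bidders 1 and 2 each value one item.
values = {
    0: {frozenset("AB"): 2.0},
    1: {frozenset("A"): 2.0},
    2: {frozenset("B"): 2.0},
}

def bundle_value(bidder, bundle):
    return max((v for s, v in values[bidder].items() if s <= bundle), default=0.0)

def efficient(bidders):
    """Brute-force the welfare-maximizing allocation among `bidders`."""
    best, best_alloc = 0.0, {b: frozenset() for b in bidders}
    for owners in product(list(bidders), repeat=2):       # owner of A, owner of B
        alloc = {b: frozenset(i for i, o in zip("AB", owners) if o == b)
                 for b in bidders}
        w = sum(bundle_value(b, alloc[b]) for b in bidders)
        if w > best:
            best, best_alloc = w, alloc
    return best, best_alloc

welfare, alloc = efficient({0, 1, 2})
revenue = 0.0
for b in (0, 1, 2):
    others = {0, 1, 2} - {b}
    w_without, _ = efficient(others)                      # welfare if b stays home
    revenue += w_without - sum(bundle_value(o, alloc[o]) for o in others)
print(welfare, revenue)   # efficient welfare is 4.0, yet total VCG revenue is 0.0
```

Each single-item winner's VCG (pivot) payment is zero because the package bidder keeps the losers' welfare at 2 with or without them, so the outcome lies outside the core: the seller and bidder 0 could jointly do better.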

Additional work in this area by Milgrom together with Larry Ausubel and Peter Cramton has been particularly influential in practical market design. Ausubel, Cramton and Milgrom (2006) together proposed a new auction format that is now called the “combinatorial clock auction” (CCA), which consists of a clock auction stage followed by a sealed-bid supplementary round. All of the bids are interpreted as package bids; and the final auction outcome is determined using a core selecting mechanism. The CCA was first used in the United Kingdom’s 10–40 GHz spectrum auction of 2008. Since then, it has become a new standard for spectrum auctions: it has been utilized for major spectrum auctions in Austria, Denmark, Ireland, the Netherlands, Switzerland and the UK; and it is slated to be used in forthcoming auctions in Australia and Canada.

At the 2008 Nemmers Prize conference Vijay Krishna[17] and Larry Ausubel[18] highlighted Milgrom's contributions to auction theory and their subsequent impact on auction design.

Matching Theory

Milgrom has also contributed to the understanding of matching market design. In work with John Hatfield (Hatfield and Milgrom, 2005), he shows how to generalize the stable marriage matching problem to allow for “matching with contracts”, where the terms of the match between agents on either side of the market arise endogenously through the matching process. They show that a suitable generalization of the deferred acceptance algorithm of David Gale and Lloyd Shapley finds a stable matching in their setting; moreover, the set of stable matchings forms a lattice, and similar vacancy chain dynamics are present.
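For reference, the original one-to-one deferred acceptance algorithm that Hatfield and Milgrom generalize can be sketched in a few lines (the preference lists below are hypothetical; the matching-with-contracts setting layers endogenous contract terms on top of this logic):

```python
# A minimal sketch of Gale and Shapley's deferred acceptance algorithm in its
# original one-to-one form.  Preference lists are hypothetical examples.
doctors = {
    "d1": ["h1", "h2", "h3"],
    "d2": ["h1", "h3", "h2"],
    "d3": ["h2", "h1", "h3"],
}
hospitals = {
    "h1": ["d2", "d1", "d3"],
    "h2": ["d1", "d3", "d2"],
    "h3": ["d3", "d2", "d1"],
}

def deferred_acceptance(proposers, receivers):
    """Proposer-optimal stable matching via deferred acceptance."""
    match = {}                        # receiver -> proposer currently held
    nxt = {p: 0 for p in proposers}   # index of next receiver to propose to
    free = list(proposers)
    while free:
        p = free.pop()
        r = proposers[p][nxt[p]]
        nxt[p] += 1
        held = match.get(r)
        rank = receivers[r].index
        if held is None:
            match[r] = p              # tentatively hold the offer
        elif rank(p) < rank(held):    # receiver prefers the new proposer
            match[r] = p
            free.append(held)         # previously held proposer is released
        else:
            free.append(p)            # rejected; will propose further down
    return {p: r for r, p in match.items()}

print(deferred_acceptance(doctors, hospitals))
```

With these lists the algorithm terminates at the doctor-optimal stable matching {d1: h2, d2: h1, d3: h3}; swapping the roles of the two sides yields the hospital-optimal stable matching, the two extreme points of the lattice mentioned above.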

The observation that stable matchings form a lattice was a well-known result that provided the key to their insight into generalizing the matching model. They observed (as did some other contemporary authors) that the lattice of stable matchings was reminiscent of the conclusion of Tarski's fixed point theorem, which states that an increasing function from a complete lattice to itself has a nonempty set of fixed points that themselves form a complete lattice. But it was not apparent what the lattice was, or what the increasing function was. Hatfield and Milgrom observed that the accumulated offers and rejections formed a lattice, and that the bidding process in an auction and the deferred acceptance algorithm were both examples of a cumulative offer process that is an increasing function on this lattice.

Their generalization also shows that certain package auctions (see the Policy section) can be thought of as a special case of matching with contracts, where there is only one agent (the auctioneer) on one side of the market and contracts include both the items to be transferred and the total transfer price as terms. Thus, two of market design's great success stories, the deferred acceptance algorithm as applied to the medical match, and the simultaneous ascending auction as applied to the FCC spectrum auctions, have a deep mathematical connection. In addition, this work (in particular, the "cumulative offer" variation of the deferred acceptance algorithm) has formed the basis of recently proposed redesigns of the mechanisms used to match residents to hospitals in Japan[19] and cadets to branches in the US Army.[20]

Simplifying Participants’ Messages

Milgrom has also contributed to the understanding of the effect of simplifying the message space in practical market design. He observed, and developed as an important design element of many markets, the notion of conflation: the idea of restricting a participant's ability to convey rich preferences by forcing them to enter the same value for different preferences. An example of conflation arises in Gale and Shapley's deferred acceptance algorithm for hospital-doctor matching when hospitals are allowed to submit only responsive preferences (i.e., a ranking of doctors and a capacity) even though they could conceivably be asked to submit general substitutes preferences. In Internet sponsored-search auctions, advertisers are allowed to submit a single per-click bid, regardless of which ad positions they win. A similar, earlier idea of a conflated generic-item auction is an important component of the Combinatorial Clock Auction (Ausubel, Cramton and Milgrom, 2006), widely used in spectrum auctions including the UK's recent 800 MHz / 2.6 GHz auction, and has also been proposed for Incentive Auctions.[21] Bidders are allowed to express only the quantity of frequencies in the allocation stage of the auction, without regard to the specific assignment (which is decided in a later assignment stage). Milgrom (2010) shows that, under a certain "outcome closure property," conflation adds no new unintended equilibrium outcomes, and argues that, by thickening markets, it may intensify price competition and increase revenue.

As a concrete application of the idea of simplifying messages, Milgrom (2009) defines assignment messages of preferences. Assignment messages encode certain nonlinear preferences, involving various substitution possibilities, into linear objectives by allowing an agent to describe multiple "roles" that objects can play in generating utility, with the utility thus generated being added up. The valuation over a set of objects is the maximum value that can be achieved by optimally assigning them to the various roles. Assignment messages can also be applied to resource allocation without money; see, for example, the problem of course allocation in schools, as analyzed by Budish, Che, Kojima, and Milgrom (2013). In doing so, the paper provided a generalization of the Birkhoff-von Neumann Theorem (a mathematical property of doubly stochastic matrices) and applied it to analyze when a given random assignment can be "implemented" as a lottery over feasible deterministic outcomes.
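The Birkhoff-von Neumann Theorem itself states that every doubly stochastic matrix is a convex combination of permutation matrices, which is what lets a random assignment be realized as a lottery over deterministic ones. A small sketch of the classical decomposition (brute force over permutations, so only suitable for tiny matrices; the input matrix is an illustrative example):

```python
from itertools import permutations

def birkhoff_decomposition(M, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition: repeatedly find a permutation
    supported on the positive entries and subtract the largest feasible
    multiple of it.  Brute force over permutations (fine for small n)."""
    n = len(M)
    M = [row[:] for row in M]
    parts = []
    while True:
        perm = next((p for p in permutations(range(n))
                     if all(M[i][p[i]] > tol for i in range(n))), None)
        if perm is None:
            break
        w = min(M[i][perm[i]] for i in range(n))   # largest feasible weight
        parts.append((w, perm))
        for i in range(n):
            M[i][perm[i]] -= w
    return parts

# A hypothetical 3x3 doubly stochastic matrix (rows and columns sum to 1),
# read as a random assignment of 3 agents to 3 objects.
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
for weight, perm in birkhoff_decomposition(M):
    print(weight, perm)   # each perm is a deterministic assignment
```

Here the lottery puts weight 0.5 on each of two deterministic assignments; Budish, Che, Kojima, and Milgrom's generalization characterizes when such a decomposition survives additional feasibility constraints.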

A more general language, named the endowed assignment message, is studied by Hatfield and Milgrom (2005). Milgrom provides an overview of these issues in Milgrom (2011).

Organizational and Information Economics

Agency Theory

Before 1987, the canonical treatment of the principal-agent problem focused on the informativeness of performance measures. It gave rise to an intuitive solution based on a sufficient statistic, but was overly sensitive to likelihoods (as demonstrated by the 'knife-edge' example derived by James Mirrlees). Milgrom, together with Bengt Holmstrom, asked what features of a contracting problem would give rise to a simpler, say linear, incentive scheme (that is, a scheme in which the wage consists of a base amount plus amounts directly proportional to specific performance measures).

Previously, most theoretical papers in agency theory assumed that the main problem was to provide an incentive for an agent to exert more effort on just one activity. But in many situations, agents can actually exert unobservable efforts on several different activities. In such contexts, new kinds of incentive problems can arise, since giving an agent more incentive to exert effort on one dimension could cause the agent to neglect other important dimensions. Holmstrom and Milgrom believed that incorporating this multi-dimensional feature of incentive problems would generate implications for optimal incentive design that were more relevant for real world contracting problems.

In their 1987 paper, Holmstrom and Milgrom introduced new techniques for studying multidimensional agency problems. The key insight in the Holmstrom-Milgrom paper is that simple linear incentive schemes can become optimal when the agent can monitor the evolution over time of the performance measures on which his compensation will be based. In that paper, an agent continuously chooses the drift of an N-dimensional Brownian motion, contingent on observing the whole history of the process. Under some assumptions on the agent's utility function, it is shown that the optimal compensation scheme for the principal specifies a payment to the agent that is a linear function of the time-aggregates of the performance measures. Such a linear compensation scheme imposes a "uniform incentive pressure" on the agent, leading him to choose a constant drift for each dimension of the Brownian process.
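The flavor of the result is often conveyed through a static CARA-normal reduced form rather than the Brownian model itself: output x = e + ε with ε ~ N(0, σ²), effort cost ce²/2, agent risk aversion r, and a linear wage w = α + βx, for which the optimal slope is the textbook formula β* = 1/(1 + rcσ²). A sketch (this reduced form is a standard simplification in the later literature, not the 1987 paper's continuous-time model):

```python
# Static CARA-normal reduced form of the linear-contract result:
# output x = e + eps, eps ~ N(0, sigma2); effort cost c*e^2/2; CARA risk
# aversion r; wage w = alpha + beta*x.  Textbook optimal slope:
#     beta* = 1 / (1 + r*c*sigma2)
# (a standard simplification, not the 1987 continuous-time model itself)

def optimal_slope(r, c, sigma2):
    return 1.0 / (1.0 + r * c * sigma2)

print(optimal_slope(0.0, 1.0, 1.0))  # risk-neutral agent: full incentives
print(optimal_slope(2.0, 1.0, 1.0))  # risk aversion or noise dilutes incentives
```

The comparative statics are the intuitive ones: incentive intensity falls as the agent becomes more risk averse, as output becomes noisier, or as effort becomes more costly at the margin.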

Holmstrom and Milgrom (1991) anticipated an important aspect of the debate in education on the issue of teacher pay and incentives. In considering incentive pay for teachers based on student test scores, they wrote:

Proponents of the system, guided by a conception very like the standard one-dimensional incentive model, argue that these incentives will lead teachers to work harder at teaching and to take greater interest in their students' success. Opponents counter that the principal effect of the proposed reform would be that teachers would sacrifice such activities as promoting curiosity and creative thinking and refining students' oral and written communication skills in order to teach the narrowly defined basic skills that are tested on standardised exams. It would be better, these critics argue, to pay a fixed wage without any incentive scheme than to base teachers' compensation only on the limited dimensions of student achievement that can be effectively measured. (Emphasis in original).

This work was mentioned in the New York Times in 2011:[24]

Too much pressure to improve students’ test scores can reduce attention to other aspects of the curriculum and discourage cultivation of broader problem-solving skills, also known as “teaching to the test.” The economists Bengt Holmstrom and Paul Milgrom describe the general problem of misaligned incentives in more formal terms – workers who are rewarded only for accomplishment of easily measurable tasks reduce the effort devoted to other tasks.

Information Economics

In Milgrom (1981), Milgrom introduced into economics a new notion of "favorableness" for information; namely, that one observation x is more favorable than another observation y, if, for all prior beliefs about the variable of interest, the posterior belief conditional on x first-order stochastically dominates the posterior conditional on y. Milgrom and others have used this notion of favorableness and the associated "monotone likelihood ratio property" of information structures to derive a range of important results in information economics, from properties of the optimal incentive contract in a principal-agent problem, to the notion of the winner's curse in auction theory.
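This ordering is easy to check numerically (the likelihood matrix below is illustrative): under the monotone likelihood ratio property, the posterior after the more favorable signal first-order stochastically dominates the posterior after the less favorable one, whatever the prior:

```python
# Numeric check of the "more favorable" ordering.  The likelihood matrix is
# an illustrative assumption; lik["x"][s] / lik["y"][s] is increasing in the
# state s, so the monotone likelihood ratio property (MLRP) holds.
states = [0, 1, 2]                     # ordered states of the variable of interest
lik = {"x": [0.1, 0.3, 0.6],           # P(signal | state)
       "y": [0.6, 0.3, 0.1]}

def posterior(signal, prior):
    joint = [lik[signal][s] * prior[s] for s in states]
    total = sum(joint)
    return [j / total for j in joint]

def fosd(p, q):
    """p first-order stochastically dominates q: P(state <= k) never larger."""
    cp = cq = 0.0
    for pk, qk in zip(p, q):
        cp, cq = cp + pk, cq + qk
        if cp > cq + 1e-12:
            return False
    return True

for prior in ([1/3, 1/3, 1/3], [0.7, 0.2, 0.1], [0.05, 0.05, 0.9]):
    assert fosd(posterior("x", prior), posterior("y", prior))
print("posterior after x dominates posterior after y for every tested prior")
```

Reversing the likelihood columns breaks the dominance, which is exactly Milgrom's point: favorableness is a property of the information structure, not of any particular prior.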

In the same paper, Milgrom introduced a novel "persuasion game", in which a salesperson has private information about a product, which he can, if he chooses, verifiably report to a potential buyer. (That is, the salesperson can, if he wishes, conceal his information, but he cannot misreport it if he reveals it.) Milgrom demonstrates that, with substantial generality, at every sequential equilibrium of the sales encounter game, the salesperson employs a strategy of full disclosure. This result has come to be known as the "unraveling result," because Milgrom shows that, in any candidate equilibrium in which the buyer expects the salesperson to conceal some observations, the salesperson will have an incentive to reveal the most favorable (to himself) of those observations, so any strategy of concealment will "unravel." In a subsequent paper (1986), Milgrom and John Roberts observed that when there is competition among informed, self-interested agents to persuade an uninformed party, all of the relevant information may be disclosed in equilibrium even if the uninformed party (e.g. the buyer) is not as sophisticated as was assumed in the analysis with a single informed agent (e.g. the salesperson). The unraveling result has implications for a wide variety of situations in which individuals can strategically choose whether to conceal information, but in which lying carries substantial penalties. These situations include courtroom battles, regulation of product testing, and financial disclosure. Milgrom's persuasion game has been hugely influential in the study of financial accounting as a tool for understanding the strategic response of management to changes in disclosure regulation. This work has led to a large literature on strategic communication and information revelation.

Organizational Economics

In the late 1980s, Milgrom began working with John Roberts to apply ideas from game theory and incentive theory to the study of organizations. Early on in this research, they focused on the importance of complementarities in organizational design. Activities in an organization are complementary, or synergistic, when there is a return to coordination. For example, a company that wants to make frequent changes in its production process will benefit from training workers in a flexible manner that allows them to adapt to these changes.

Milgrom and Roberts first came upon the ideas and applicability of complements when studying an enriched version of the classic newsvendor problem of how to organize production, one that allowed both make-to-order production after learning demand and make-to-stock production (Milgrom and Roberts, 1988). The problem they formulated turned out to be a convex maximization problem, so the solutions were end points, not interior optima where first derivatives are zero, and the Hicks-Samuelson methods for comparative statics were not applicable. Yet they obtained rich comparative statics results. This led Milgrom to recall the work of Topkis (1968), particularly Topkis's theorem, which led to their development and application of complementarity ideas in many spheres. The incorporation of these methods into economics, discussed below, has proved very influential.
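The Topkis-style logic can be shown with a toy example (numbers hypothetical): when the objective has increasing differences in the choice variable and a parameter, the maximizer moves monotonically with the parameter, with no first-order conditions or interiority required:

```python
# Toy illustration of Topkis-style monotone comparative statics (hypothetical
# numbers): profit(x, theta) has increasing differences in (x, theta), so the
# maximizer over a discrete choice set is nondecreasing in theta.
X = range(11)                          # discrete choice variable

def profit(x, theta):
    # marginal return to x rises with theta: increasing differences
    return theta * x - 0.5 * x * x

argmax = lambda theta: max(X, key=lambda x: profit(x, theta))
choices = [argmax(t) for t in (0.0, 2.0, 5.0, 9.0)]
print(choices)   # monotone in theta, even on a discrete grid
```

Nothing about smoothness or interior optima is used, which is precisely why the method handled the corner solutions of the newsvendor formulation.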

In perhaps their most famous paper on organizations (Milgrom and Roberts, 1990b), Milgrom and Roberts used comparative statics methods to describe the development of "modern manufacturing," characterized by frequent product redesigns and improvements, higher production quality, speedier communication and order processing, smaller batch sizes, and lower inventories. Subsequently, Milgrom and Bengt Holmstrom (1994) used similar methods to identify complementarities in incentive design. They argued that the use of high-intensity performance incentives would be complementary to placing relatively few restrictions on workers and decentralizing asset ownership.

In an influential paper, Milgrom and Roberts (1994) applied the framework of thinking about change of a system of complements to tackle some key issues in organizational economics. They noted that when organizations adapt by changing one element in a complementary system, it can often be the case that performance will degrade. This will make change a hard sell within organizations. Milgrom and Roberts suggested that this is why businesses had been unable to replicate Lincoln Electric's performance incentive system: the classic piece-rate contract was supported by a string of human resource policies (e.g., subjective bonuses, lifetime employment) as well as production management policies (including organizational slack on delivery) and, perhaps most importantly, deep trust between workers and management. Thus, successful replication would require getting all of these elements in place. Milgrom and Roberts used the same theory to forecast the difficulties Japanese businesses would have in adjusting to change in the decade and a half following the recession that began in the early 1990s, a prediction borne out by subsequent experience.

In a series of papers, Milgrom studied the problem of lobbying and politicking, or "influence activities" that occur in large organizations. These papers considered models in which employees are affected by post-hiring decisions. When managers have discretion over these decisions, employees have incentives to spend time attempting to influence the outcomes. Since this time could instead be spent on productive tasks, influence activities are costly for the firm. Milgrom shows that firms may limit the discretion of managers in order to avoid these costs (Milgrom, 1988). In a paper with John Roberts, Milgrom also studied a model in which employees have information that is valuable to the decision maker. As a result, allowing some degree of influence is beneficial, but excessive influence is costly. Milgrom and Roberts compare various strategies that firms might use to discourage excessive influence activities, and they show that typically, limiting employees' access to decision makers and altering decision-making criteria are preferable to the use of explicit financial incentives (Milgrom and Roberts, 1988). In another paper, with Margaret Meyer and Roberts (1992), Milgrom studied the influence costs that arise in multiunit firms. They demonstrate that managers of underperforming units have incentives to exaggerate the prospects of their unit in order to protect their jobs. If the unit were embedded in a firm whose other units were more closely related, there would be a lower threat of layoffs, because reassignment of workers could occur instead. Similarly, if the unit were independent, there would be many fewer opportunities to misrepresent its prospects. These arguments help explain why divestitures of underperforming units occur so frequently and why, when such units do not become stand-alone firms, they are often purchased by buyers operating in related lines of business.

In 1992, Milgrom and Roberts published their textbook on organizations, Economics, Organization and Management. The book covers a wide range of topics in the theory of organizations using modern economic theory. It is Milgrom's most cited work, a remarkable fact, given that it is a textbook aimed at undergraduates and masters students, while Milgrom has so many highly influential, widely cited research papers. In addition to discussing incentive design and complementarities, the book discusses some of the inefficiencies that can arise in large organizations, including the problem of lobbying or "influence costs." In the 2008 Nemmers Prize conference, Roberts commented[25] that the impact of the work on influence on management scholarship had exceeded its impact on economic scholarship.

Industrial Organization

In a series of three seminal papers, Milgrom, together with John Roberts, developed some of the central ideas regarding asymmetric information in the context of industrial organization. The work of George Akerlof, Joseph Stiglitz, and especially Michael Spence, mostly developed in the 1970s, provides some of the conceptual and methodological background. However, it was primarily in the 1980s and largely due to the Milgrom-Roberts contributions in applying incomplete information game theory to industrial organization problems that these ideas were adopted into the mainstream of the field.

Consider first the case of predatory pricing. For a long time, McGee's (1958)[26] analysis, frequently associated with the Chicago school, provided the only coherent economic perspective regarding the main issues. McGee (1958) argued that the concept of predatory pricing lacks logical consistency. His idea is that, in addition to the prey, the predator too suffers from predatory pricing. If the prey resists predation and remains active, then the predator eventually will give up its efforts. Anticipating this outcome, the prey is indeed better off by resisting predatory efforts. Anticipating this outcome, in turn, the alleged predator is better off by refraining from its predatory strategy. Even if the alleged prey were short of cash, it could always borrow from a bank with the (correct) promise that its losses are only temporary. Further, supposing the predation were successful in inducing exit, if the predator subsequently raised prices to enjoy the fruits of its victory, new entry could be attracted, and the problem starts all over.

Milgrom and Roberts (1982a), as well as Kreps and Wilson (1982),[27] provide a novel perspective on the issue. Methodologically, this perspective is based on the concept of reputation developed by Kreps, Milgrom, Roberts and Wilson (1982), where reputation is understood as the Bayesian posterior that uninformed agents (e.g., an entrant) hold about the type of an informed agent (e.g., an incumbent). Suppose that, with some small probability, an incumbent may be “irrational” to the point of always fighting entry (even if this is not a profit maximizing reaction to entry). In this context, by repeatedly fighting rivals with low prices, a predator increases its reputation for “toughness”; and thus encourages exit and discourages future entry.

If Kreps, Milgrom, Roberts and Wilson (1982) effectively created a novel economic theory of reputation, Milgrom and Roberts (1982a), as well as Kreps and Wilson (1982), provided a first application to an outstanding issue of central importance in industrial organization theory and policy (predatory pricing).

Appendix A in Milgrom and Roberts (1982a) proposes an alternative theory for equilibrium predatory pricing, that is, an alternative response to McGee’s (1958) Chicago school criticism. In this appendix, Milgrom and Roberts examine an infinite horizon version of Selten’s chain-store model (with complete information) and demonstrate the existence of an equilibrium where any attempted entry is met by predation — and thus entry does not take place in equilibrium.

Returning to the issue of information asymmetry between incumbent and entrant, Milgrom and Roberts (1982b) consider the alternative case when the entrant is uncertain about the incumbent’s costs. In this case, they show that the incumbent’s low prices signal that its costs are low too, and so are the target’s long term prospects from entry. Like Milgrom and Roberts (1982a), this paper brought formal understanding to an old idea in industrial organization, this time the concept of limit pricing. In the process of doing so, the paper also uncovered new results of interest. In particular, Milgrom and Roberts (1982b) show that the equilibrium entry rate may actually increase when asymmetric information is introduced.

Finally, Milgrom and Roberts (1986) bring the asymmetric information framework to bear in analyzing the issue of advertising and pricing. Traditionally, economists have thought of advertising as being either informative (as for example classified ads, which describe the characteristics of the product for sale), or persuasive (as for example many television commercials which seem to provide little or no information about a product’s characteristics). Following earlier ideas by Nelson (1970,[28] 1974[29]), Milgrom and Roberts (1986) show that even “uninformative” advertising, that is, advertising expenditures that provide no direct information about a product’s characteristics, may be informative in equilibrium to the extent that they work as a signal of the advertiser’s quality level. Methodologically, Milgrom and Roberts (1986) also make an important contribution: the study of signaling equilibria when the informed party has more than one available signal (price and advertising, in the present case).

Law, Institutions and Economic History

Milgrom made early contributions to the growing literature applying game theoretic models to our understanding of the evolution of the legal institutions of the market economy. Milgrom, Douglass North and Barry Weingast (1990) presents a repeated game model that shows the role for a formal institution that serves as a repository of judgments about contract behavior to coordinate a multilateral reputation mechanism. Milgrom and his co-authors argued that this model sheds light on the development of the Law Merchant, an institution of late medieval trade in Europe, whereby merchants looked to the judgments of the Law Merchant to decide what counted as "cheating." In their model, merchants query the Law Merchant to determine whether a potential trading partner has cheated on prior contracts, triggering the application of punishment by other merchants. The incentive to punish in this model arises from the structure of the repeated game, assumed to be a prisoners' dilemma in which cheating is the dominant strategy: the only incentive not to cheat is that future partners can learn of it, and since cheating a cheater is itself not punished, the equilibrium is subgame perfect. Understanding the merchants' incentives to create an institution to support decentralized contract enforcement like this helps to overcome the tendency in the law and economics and positive political theory literatures to assume that the role of law is exclusively attributable to the capacity to take advantage of centralized enforcement mechanisms such as state courts and police power.

In a further contribution in this area, Milgrom, together with Barry Weingast and Avner Greif, applied a repeated game model to explain the role of merchant guilds in the medieval period (Greif, Milgrom and Weingast, 1994). The paper begins with the observation that long-distance trade in the somewhat chaotic environment of the Middle Ages exposed traveling merchants to the risk of attack, confiscation of goods and unenforced agreements. Merchants thus required the assistance of local rulers for protection of person, property and contract. But what reason did rulers have to provide this assistance? A key insight from the paper is that neither bilateral nor multilateral reputation mechanisms can support the incentives of a ruler to protect foreign merchants as trade reaches an efficient level. The reason is that at the efficient level the marginal value of losing the trade of a single merchant, or even a subset of merchants, in their attempt to punish a defaulting ruler approaches zero. The threat is thus insufficient to deter a ruler from confiscating goods or to encourage the expenditure of resources or political capital to defend foreign merchants against local citizens. Effective punishment that will deter rulers' bad behavior requires more extensive coordination of effectively all the merchants who provide value for the ruler. The question then becomes, what incentives do the merchants have to participate in the collective boycott? Here is the role for the Merchant Guild, an organization that has the power to punish its own members for failure to abide by a boycott announced by the guild.

These insights have been built on to explore more generally the role of legal institutions in coordinating and incentivizing decentralized enforcement mechanisms like the multilateral reputation system.[30][31]

Finance and Macroeconomics

Securities Markets

"Why do traders bother to gather information if they cannot profit from it? How does information come to be reflected in prices if informed traders do not trade or if they ignore their private information in making inferences?" These questions, asked at the end of Milgrom and Stokey (1982), were addressed in Glosten and Milgrom (1985). In this seminal paper, the authors provided a dynamic model of the price formation process in securities markets and an information-based explanation for the spread between the bid and ask prices. Because informed traders have better information than market-makers, market-makers incur a loss when trading with informed traders. Market-makers use the bid-ask spread to recoup this loss from uninformed traders, who have private reasons for trading, for example, because of liquidity needs. This dynamic trading model with asymmetric information has been one of the workhorse models in the literature on market microstructure.
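A one-shot version of this pricing logic can be sketched as follows (parameter values are illustrative): the asset is worth v_h or v_l with equal probability, a fraction mu of traders are informed, and competitive market-makers quote ask = E[V | buy] and bid = E[V | sell]:

```python
# One-shot sketch of the Glosten-Milgrom spread.  The asset is worth v_h or
# v_l with equal probability; a fraction mu of traders are informed (they buy
# iff the value is high); the rest buy or sell with probability 1/2 each.
# Competitive market-makers quote ask = E[V | buy], bid = E[V | sell].
# Parameter values are illustrative, not from the paper.
def quotes(v_l, v_h, mu):
    p = 0.5                                   # prior probability of the high value
    buy_h = mu + (1 - mu) / 2                 # P(buy | high value)
    buy_l = (1 - mu) / 2                      # P(buy | low value)
    p_buy = (p * buy_h) / (p * buy_h + (1 - p) * buy_l)      # posterior after buy
    p_sell = (p * (1 - buy_h)) / (p * (1 - buy_h) + (1 - p) * (1 - buy_l))
    ask = p_buy * v_h + (1 - p_buy) * v_l
    bid = p_sell * v_h + (1 - p_sell) * v_l
    return bid, ask

bid, ask = quotes(v_l=0.0, v_h=1.0, mu=0.3)
print(bid, ask, ask - bid)   # the spread widens with the share of informed traders
```

With no informed traders (mu = 0) the bid and ask coincide at the prior expected value; the spread is entirely an adverse-selection premium, which is the paper's central point.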

Trading on stock exchanges grew at an increasing rate in the 1960s, 70s and 80s, which led Milgrom and coauthors (Bresnahan, Milgrom and Paul 1992) to ask whether the rapid increase in trading volume also brought a comparably rapid increase in the real output of stock exchanges. Traders in this model make profits by gathering information about the value of the firm and trading its stock. However, the information valuable for making real decisions about the firm concerns its value added rather than its total value. Their analysis suggests that the increased trading activity increased the resources devoted to rent-seeking, without improving real investment decisions.

At the 2008 Nemmers Prize conference, Stephen Morris[38] provided an explanation of Milgrom's contributions to the understanding of financial markets as well as of the impact that they have had on financial analysis.

Labor Markets

In 1987, Milgrom and Sharon Oster examined imperfections in labor markets. They evaluated the "Invisibility Hypothesis", which held that disadvantaged workers had difficulty signalling their job skills to potential new employers because their existing employers denied them the promotions that would improve their visibility. Milgrom and Oster found that, in a competitive equilibrium, such invisibility could be profitable for firms. It led to lower pay for disadvantaged workers in lower-level positions, even when they otherwise had the same education and ability as their more advantaged co-workers. Not surprisingly, the returns to investing in education and human capital were reduced for those in disadvantaged groups, reinforcing discriminatory outcomes in labor markets.

Two decades later, Milgrom, in a paper with Bob Hall (Hall and Milgrom, 2008), contributed directly to macroeconomics. Macroeconomic models, including real business cycle models, efficiency wage models and search/matching models, have long had difficulty accounting for the observed volatility in labor market variables. In an influential paper,[39] Shimer explained the problem as it appears in the standard search/matching model, an important macroeconomic model for which the Nobel prize was recently granted to Diamond, Mortensen and Pissarides (DMP). Shimer showed that in the standard DMP model, a shock that raises the value of what firms sell – other things the same – increases their incentive to hire workers by raising profits per worker. The problem, according to Shimer, is that this mechanism sets in motion a negative feedback loop which in the end largely cancels firms’ incentive to expand employment. In particular, as employment expands, labor market conditions in general begin to improve for workers, and this puts them in a stronger position as they negotiate wages with employers. The resulting rise in the wage then cuts into the profits earned by firms and thus limits their incentive to hire workers. The problem has come to be known as the ‘Shimer puzzle’. That puzzle can loosely be paraphrased as follows: “what modification to the DMP framework is needed to put it in line with the empirical evidence that employment rises sharply during a business cycle expansion?” Although enormous efforts have been made, the puzzle largely resisted solution until the Milgrom paper. Milgrom (with Hall) argued that the bargaining framework used in the standard DMP model does not correspond well to the way wages are actually negotiated. They argue that, by the time workers and firms sit down to bargain, they know that there is a substantial amount to be gained if they make a deal. The firm’s human resources department has most likely already checked out the worker to verify that they are suitable, and the worker has most likely done a similar preliminary check to verify that they could make a useful contribution to the firm. A consequence is that if the firm and worker disagree during the negotiations, they are very unlikely to simply part ways; instead, they are more likely to continue negotiating until they do reach agreement. It follows that as they make proposals and counterproposals, bargaining worker/firm pairs are mindful of the various costs associated with delay and the making of counterproposals, and much less concerned about the consequences of a total breakdown in negotiations that would send them back to the general labor market to search for another worker or job. Milgrom stresses that with this shift in perspective on bargaining, the impact of improved general conditions on the wage bargain is weakened, as long as the costs of delay and renegotiation are not very sensitive to broader economic conditions. In particular, the approach provides a potential resolution to the Shimer puzzle, a puzzle that has confounded macroeconomists generally.
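The flavor of the argument can be seen in a stationary alternating-offer sketch. This is a simplified Rubinstein-style setup with hypothetical parameter values, not the Hall-Milgrom calibration: with the match surplus normalized to 1, a discount factor delta, and per-round delay costs c_f and c_w for the firm and worker, the equilibrium wage is pinned down by the delay costs, while outside options (and hence general labor-market conditions) never enter so long as both sides prefer continued bargaining to walking away.

```python
def bargained_wage(delta, c_f, c_w, tol=1e-12):
    """Stationary subgame-perfect offers in an alternating-offer wage game
    where each rejection delays agreement one round, costing the firm c_f
    and the worker c_w. Surplus is normalized to 1.
    w_f: wage the firm offers (makes the worker indifferent to waiting);
    w_w: wage the worker demands (makes the firm indifferent to waiting).
    Iterates the two indifference conditions to their fixed point."""
    w_f, w_w = 0.5, 0.5
    for _ in range(10_000):
        new_w_f = max(0.0, delta * w_w - c_w)             # worker's acceptance threshold
        new_w_w = min(1.0, 1 - delta * (1 - w_f) + c_f)   # firm's acceptance threshold
        if abs(new_w_f - w_f) + abs(new_w_w - w_w) < tol:
            break
        w_f, w_w = new_w_f, new_w_w
    return w_f

# Note that nothing about labor-market tightness appears in the formula:
# the wage moves only with delay costs and discounting.
print(round(bargained_wage(delta=0.95, c_f=0.02, c_w=0.02), 4))  # 0.4769
```

Because aggregate conditions affect the wage only through the (insensitive) delay costs, a productivity shock translates into profits and hiring incentives rather than being absorbed by the wage, which is the mechanism behind the proposed resolution of the Shimer puzzle.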

Milgrom’s paper raises important questions for quantitative macroeconomics as well as for empirical work. Milgrom showed that the idea is quantitatively important in the context of a very simple macroeconomic model. But macroeconomic models that are used to confront data have many moving parts, and it remains to be seen how well the Milgrom idea works in the context of such a model. From an empirical perspective, the paper raises questions about how bargaining is actually done in practice. Does the approach to bargaining (a version of the classic alternating-offer bargaining proposed by Ariel Rubinstein) match the way employers and workers actually interact? Are the costs of delay and renegotiation in fact sufficiently insensitive to aggregate economic conditions for the Milgrom idea to be quantitatively important? There is already evidence that the Milgrom work will spawn a literature investigating these questions. The preliminary evidence provided in the Hall and Milgrom paper gives grounds for optimism that the work will in the end be viewed as a fundamental contribution to macroeconomics.

Policy

FCC Spectrum Auction 1993

The U.S. Federal Communications Commission (FCC) has responsibility for allocating licenses for the use of electromagnetic spectrum to television broadcasters, mobile wireless services providers, satellite service providers, and others. Prior to 1993, the FCC's authorization from the U.S. Congress only allowed it to allocate licenses through an administrative process referred to as "comparative hearings" or by holding a lottery. Comparative hearings were extremely time-consuming and costly, and there were concerns about the ability of such a process to identify the 'best' owners for licenses. Lotteries were fast, but a random allocation of licenses clearly left much to be desired in terms of efficiency. Neither method offered the FCC any ability to capture some of the value of the spectrum licenses for U.S. taxpayers.

Then in 1993, Congress authorized the FCC to hold auctions to allocate spectrum licenses. Auctions offered great potential in terms of obtaining an efficient allocation of licenses and also capturing some of the value of the licenses to be returned to the U.S. taxpayers. However, the FCC was directed to hold the auction within a year, and at that time no suitable auction design existed, either in theory or in practice.

It was Milgrom, together with other economists including Robert Wilson, Preston McAfee, and John McMillan, who played a key role in designing the simultaneous multiple round auction that was adopted and implemented by the FCC. Milgrom's auction theory research provided foundations that guided economists' thinking on auction design and ultimately the FCC's auction design choices.

The FCC needed an auction design suited to the sale of multiple licenses with potentially highly interdependent values. The FCC's goals included economic efficiency and revenue (although the legislation suggests an emphasis on efficiency over revenue) as well as operational simplicity and reasonable speed.

According to FCC economist Evan Kwerel, who was given the task of developing the FCC's auction design, Milgrom's proposals, analysis, and research were hugely influential in the auction design. Milgrom and Wilson proposed a simultaneous ascending bid auction with discrete bidding rounds, which "promised to provide much of the operational simplicity of sealed-bid auctions with the economic efficiency of an ascending auction."[40] Milgrom argued successfully for a simultaneous closing rule, as opposed to a market-by-market closing rule advocated by others because the latter might foreclose efficient backup strategies.[41]

Describing the Milgrom-Wilson auction design, Kwerel states:

It seemed to provide bidders sufficient information and flexibility to pursue backup strategies to promote a reasonably efficient assignment of licenses, without so much complexity that the FCC could not successfully implement it and bidders could not understand it. Just having a good idea, though, is not enough. Good ideas need good advocates if they are to be adopted. No advocate was more persuasive than Paul Milgrom. He was so persuasive because of his vision, clarity and economy of expression, ability to understand and address FCC needs, integrity, and passion for getting things right.[42]

Milgrom’s proposed design was adopted in large part by the Commission. Called the simultaneous multiple round (SMR) auction, this design introduced several new features, most importantly an “activity rule” to ensure active bidding. Milgrom and Weber developed an activity rule to accompany their simultaneous closing rule to ensure that bidders could not hold back while observing the bids of others. The activity rule required that bidders maintain a certain level of activity, either by being the current high bidder or by submitting a new bid, in each round, or else forfeit all or part of their eligibility to submit bids in future rounds. "Milgrom and Weber developed this insight into the activity rule that the FCC has used in all its simultaneous multiple round auctions. The Milgrom-Wilson activity rule was an elegant, novel solution to a difficult practical auction design issue."[43] Activity rules are now a nearly universal feature in dynamic multi-item auctions.
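The mechanics of such a rule can be sketched as follows. This is a simplified illustration with a hypothetical 80% activity requirement; the FCC's actual rules involve multiple stages, bidding units, and waivers:

```python
def update_eligibility(eligibility, activity, required_fraction=0.8):
    """One-round activity-rule update for a single bidder.

    eligibility: units the bidder is allowed to be active on this round
    activity:    units on which the bidder was the standing high bidder
                 or placed a new bid this round

    If activity falls short of required_fraction * eligibility, next
    round's eligibility shrinks proportionally -- so bidders cannot
    simply wait and watch others bid without losing bidding rights.
    """
    if activity >= required_fraction * eligibility:
        return eligibility                         # active enough: eligibility preserved
    return int(round(activity / required_fraction))  # shortfall: eligibility reduced

elig = 100
elig = update_eligibility(elig, activity=90)   # 90 >= 80 required: stays at 100
print(elig)                                    # 100
elig = update_eligibility(elig, activity=40)   # only 40 of the 80 required
print(elig)                                    # 50
```

The reduction is one-way: eligibility can only shrink, which is what forces bidding to start early and keeps price discovery moving in every round.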

Milgrom’s singular role in creating the FCC design is celebrated in an account by the US National Science Foundation (America’s Investment in the Future), which identifies this auction design as one of the main practical contributions of 20th century research in microeconomic theory. The same invention and Milgrom’s role in creating it was celebrated again by the prestigious National Academy of Sciences (Beyond Discovery), which is the main scientific advisor to the US government. The SMR design has been copied and adapted worldwide for auctions of radio spectrum, electricity, natural gas, and other resources, involving hundreds of billions of dollars.

In the words of Evan Kwerel, "In the end, the FCC chose an ascending bid mechanism, largely because we believed that providing bidders with more information would likely increase efficiency and, as shown by Milgrom and Weber (1982), mitigate the winner's curse."[44] The result alluded to by Kwerel is known as the Linkage principle and was developed by Milgrom and Weber (1982). (Milgrom (2004) recasts the linkage principle as the 'publicity effect.') It provided a theoretical foundation for the intuition driving the major design choice by the FCC between an ascending bid and sealed bid auction.

FCC Incentive Auctions

In 2012, the US Congress authorized the FCC to conduct the first spectrum incentive auctions.[45] As envisioned by the FCC, the incentive auctions will enable television broadcast stations to submit bids to relinquish existing spectrum rights. Broadcast stations that opt to stay on-air will be reassigned to channels in a way that frees up a contiguous block of spectrum to be repurposed for wireless broadband, with licenses sold to telecommunications firms. Relative to prior spectrum auctions run in the United States and around the world, the incentive auctions will have the novel feature that they are a double auction: the proceeds from selling wireless broadband licenses will be used to compensate broadcasters who relinquish rights, or who must be re-located to new channels. Any further revenue will go to the Treasury.
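A stylized picture of the double-auction clearing condition is that spectrum is cleared up to the largest quantity at which forward-auction revenue covers the reverse-auction payments. The sketch below uses hypothetical bid and ask values and ignores the repacking constraints and the actual FCC pricing rules, which are far more complex:

```python
def clear_double_auction(broadcaster_asks, telecom_bids):
    """Find the largest quantity k at which the k highest forward bids
    raise at least enough revenue to pay the k lowest broadcaster asks.
    Returns (k, revenue, payout); any excess would go to the Treasury."""
    asks = sorted(broadcaster_asks)             # cheapest stations to clear first
    bids = sorted(telecom_bids, reverse=True)   # most valuable licenses sold first
    best = (0, 0, 0)
    revenue = payout = 0
    for k in range(1, min(len(asks), len(bids)) + 1):
        revenue += bids[k - 1]
        payout += asks[k - 1]
        if revenue >= payout:                   # clearing condition: auction self-funds
            best = (k, revenue, payout)
    return best

k, revenue, payout = clear_double_auction(
    broadcaster_asks=[10, 30, 60, 90],   # hypothetical prices to relinquish a channel
    telecom_bids=[80, 50, 40, 15])       # hypothetical wireless-license bids
print(k, revenue - payout)   # 3 70 -- three units clear; surplus to the Treasury
```

The sketch captures the feature the text highlights: the reverse side's compensation is funded entirely out of the forward side's proceeds, with residual revenue going to the Treasury.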

Subsequent to receiving Congressional authorization, the FCC announced in March 2012 that Milgrom had been retained to lead a team of economists advising the FCC on the design of the incentive auctions.[46] In September 2012, the FCC released Milgrom's preliminary report on the possible auction design.[47]

Teaching

Milgrom has taught a variety of courses in economics. In the 1990s, he developed a popular undergraduate course on The Modern Firm in Theory and Practice, based on his 1992 book with John Roberts. In the early 2000s, together with Alvin E. Roth, Milgrom taught the first graduate course on market design, which brought together topics on auctions, matching, and other related areas. The market design course has served as a basis for many similar graduate courses across the US and around the world, and has helped jump-start the field of market design.

In his teaching, Milgrom was always cognisant of what economic models could and could not do. He stressed the assumptions that made them useful in generating robust empirical predictions, as well as the core assumptions upon which those predictions relied. This philosophy is perhaps best exemplified in this reflection on the assumption of rational choice (with Jonathan Levin).[48]

... it is worth emphasizing that despite the shortcomings of the rational choice model, it remains a remarkably powerful tool for policy analysis. To see why, imagine conducting a welfare analysis of alternative policies. Under the rational choice approach, one would begin by specifying the relevant preferences over economic outcomes (e.g. everyone likes to consume more, some people might not like inequality, and so on), then model the allocation of resources under alternative policies and finally compare policies by looking at preferences over the alternative outcomes.

Many of the “objectionable” simplifying features of the rational choice model combine to make such an analysis feasible. By taking preferences over economic outcomes as the starting point, the approach abstracts from the idea that preferences might be influenced by contextual details, by the policies themselves, or by the political process. Moreover, rational choice approaches to policy evaluation typically assume people will act in a way that maximizes these preferences – this is the justification for leaving choices in the hands of individuals whenever possible. Often, it is precisely these simplifications – that preferences are fundamental, focused on outcomes, and not too easily influenced by one’s environment and that people are generally able to reason through choices and act according to their preferences – that allow economic analysis to yield sharp answers to a broad range of interesting public policy questions.

The behavioral critiques we have just discussed put these features of the rational choice approach to policy evaluation into question. Of course institutions affect preferences and some people are willing to exchange worse economic outcomes for a sense of control. Preferences may even be affected by much smaller contextual details. Moreover, even if people have well-defined preferences, they may not act to maximize them. A crucial question then is whether an alternative model – for example an extension of the rational choice framework that incorporates some of these realistic features – would be a better tool for policy analysis. Developing equally powerful alternatives is an important unresolved question for future generations of economists.

Milgrom has been involved for at least two decades in the design and practice of large-scale auctions. Working with Bob Wilson on behalf of Pacific Bell, he proposed the simultaneous multiple round auction that was adopted by the FCC to run the initial auctions for radio spectrum in the 1990s. He has also advised regulators in the US, UK, Canada, Australia, Germany, Sweden and Mexico on spectrum auctions, Microsoft on search advertising auctions, and Google on the auction used for its IPO.

In 2006, Milgrom, with Jeremy Bulow and Jonathan Levin, advised Comcast in its bidding in FCC Auction 66, including a rarely successfully implemented "jump bid."[49] In the words of the Economist[50]:

In the run-up to an online auction in 2006 of radio-spectrum licences by America’s Federal Communications Commission, Paul Milgrom, a consultant and Stanford University professor, customised his game-theory software to assist a consortium of bidders. The result was a triumph.

When the auction began, Dr Milgrom’s software tracked competitors’ bids to estimate their budgets for the 1,132 licences on offer. Crucially, the software estimated the secret values bidders placed on specific licences and determined that certain big licences were being overvalued. It directed Dr Milgrom’s clients to obtain a patchwork of smaller, less expensive licences instead. Two of his clients, Time Warner and Comcast, paid about a third less than their competitors for equivalent spectrum, saving almost $1.2 billion.

In 2007, Milgrom co-founded Auctionomics,[51] with Silvia Console Battilana,[52] to design auctions and advise bidders in different industries.

In 2009, Milgrom led the development of assignment auctions and exchanges.[53] This mechanism allowed for arbitrage possibilities and retained some of the flexibility of the simultaneous ascending bid auction, but could be run instantaneously. Speed was an important attribute, along with the potential to extend the design to bidding on non-price attributes.

In 2011, the FCC hired Auctionomics to tackle one of the most complex auction problems ever, the incentive auction. FCC Chairman Julius Genachowski said,[54]

I am delighted to have this world-class team of experts advising the Commission on this historic undertaking. Our plan is to ensure that incentive auctions serve as an effective market mechanism to unleash more spectrum for mobile broadband and help address the looming spectrum crunch. Our implementation of this new Congressional mandate will be guided by the economics, and will seek to maximize the opportunity to unleash investment and innovation, benefit consumers, drive economic growth, and enhance our global competitiveness. The knowledge and experience of this team will complement the substantial expertise of agency staff to meet these goals.

In 2012, Auctionomics and Power Auctions were hired to design the FCC's first Incentive Auction, with the goal of creating a market for repurposing television broadcast spectrum to wireless broadband. The design team was led by Milgrom and included Larry Ausubel, Kevin Leyton-Brown, Jon Levin and Ilya Segal.

Over the years, Milgrom has been active as an innovator and has been awarded four patents relating to auction design.[55]

References

1. Auctionomics
2. Curriculum Vitae
3. Template:Cite web
4. Procuring Universal Service: Putting Auction Theory to Work, in Le Prix Nobel: The Nobel Prizes, 1996, Nobel Foundation, 1997, 382-392
5. Nemmers Prize Press Release, 2008
6. BBVA Foundation Frontiers of Knowledge Award Citation, 2012
7. Fudenberg, D. D. Levine, and E. Maskin (1994). “The Folk Theorem with Imperfect Public Information”. Econometrica 62 (5): 997–1039.
8. Sugaya, T. (2013), “The Folk Theorem in Repeated Games with Private Monitoring,” mimeo, Stanford GSB.
9. Vives, Xavier (1990). “Nash Equilibrium with Strategic Complementarities”. Journal of Mathematical Economics 19 (3): 305–321.
10. See also Gans, J.S. “Best Replies and Adaptive Learning,” Mathematical Social Sciences, Vol.30, No.3, 1995, pp.221-234.
11. Topkis, D. (1968). Ordered Optimal Decisions. Ph.D. Dissertation, Stanford University.
12. Veinott, A. F. (1989). Lattice programming. Unpublished lectures.
13. Gans, J.S. and M. Smart (1996), “Majority Voting With Single-Crossing Preferences,” Journal of Public Economics, 58 (1), pp.219-238.
14. Athey, S.C. (2002), "Monotone Comparative Statics under Uncertainty," Quarterly Journal of Economics, 117 (1), pp.187-223.
15. Milgrom Nemmers Prize Presentation Slides, 2008
16. Milgrom, Paul and Robert Weber (1982). "A Theory of Auctions and Competitive Bidding". Econometrica 50 (5): 1089–1122.
17. Krishna's Nemmers Presentation, 2008
18. Ausubel's Nemmers Presentation, 2008
19. Yuichiro Kamada and Fuhito Kojima (2010). "Improving Efficiency in Matching Markets with Regional Caps: The Case of the Japan Residency Matching Program". Stanford Institute for Economic Policy Discussion Paper and Kamada, Y., & Kojima, F. (2012). "Stability and Strategy-Proofness for Matching with Constraints: A Problem in the Japanese Medical Match and Its Solution". American Economic Review 102(3): 366–370. doi:10.1257/aer.102.3.366.
20. Sönmez, Tayfun (2013). "Bidding for Army Career Specialties: Improving the ROTC Branching Mechanism". Journal of Political Economy 121 (1): 186–219.
21. FCC, Notice of Proposed Rulemaking 12-118, September 28, 2012.
22. Francis Woolley also relates how the notation in that paper represented best practice in economic theory. Notation: A Beginner's Guide, Worthwhile Canadian Initiative, 17 April 2013.
23. Holmstrom Nemmers Presentation, 2008
24. Folbre, Nancy "What Makes Teachers Productive?" New York Times, 19 Sept 2011 [1]
25. Roberts' Nemmers Presentation, 2008
26. McGee, John S. (1958). “Predatory Price Cutting: The Standard Oil (N. J.) Case”. Journal of Law and Economics 1: 137–169.
27. Kreps, David M. and Robert Wilson (1982). “Reputation and Imperfect Information”. Journal of Economic Theory 27 (2): 253–279.
28. Nelson, Phillip (1970). “Information and Consumer Behavior”. Journal of Political Economy 78 (2): 311–329.
29. Nelson, Phillip (1974). “Advertising as Information”. Journal of Political Economy 81 (4): 729–754.
30. Gillian K. Hadfield and Barry R. Weingast "What is Law? A Coordination Model of the Characteristics of Legal Order" Journal of Legal Analysis 4 (Winter 2012) 471-514; Gillian K. Hadfield and Barry R. Weingast "Law without the State: Legal Attributes and the Coordination of Decentralized Collective Punishment" Journal of Law and Courts 1 (Winter 2013) 1-23.
31. Gillian K. Hadfield and Barry R. Weingast "Law without the State: Legal Attributes and the Coordination of Decentralized Collective Punishment" Journal of Law and Courts 1 (Winter 2013) 1-23.
32. Sanford Grossman "The Informational Role of Warranties and Private Disclosure about Product Quality" Journal of Law and Economics 24: 461-483.
33. Shin, Hyun Song (1998) “Adversarial and Inquisitorial Procedures in Arbitration” RAND Journal of Economics 29: 378-405.
34. Daughety, Andrew F. and Jennifer Reinganum F. (2000) “On the Economics of Trials: Adversarial Process, Evidence and Equilibrium Bias” Journal of Law, Economics and Organization 16: 365-394.
35. Froeb, Luke M. and Bruce H. Kobayashi (1996) “Naïve, Biased, yet Bayesian: Can Juries Interpret Selectively Produced Evidence?” Journal of Law, Economics and Organization 12: 257-276.
36. Amy Farmer and Paul Pecorino "Does jury bias matter?" International Review of Law and Economics 20: 315-328.
38. Morris Nemmers Presentation, 2008
39. Shimer, Robert (2005). "The Cyclical Behavior of Equilibrium Unemployment and Vacancies". The American Economic Review 95 (1): 25–49.
40. Kwerel, Evan (2004), 'Foreword' in Paul Milgrom's Putting Auction Theory to Work, New York: Cambridge University Press, p. xviii.
41. Kwerel, 2004, op. cit., p. xix.
42. Kwerel, 2004, op. cit., p. xxi.
43. Kwerel, 2004, op. cit., p. xx.
44. Kwerel, 2004, op. cit., p. xvii.
45. http://www.fcc.gov/incentiveauctions