'''Matrix chain multiplication''' is an optimization problem that can be solved using [[dynamic programming]]. Given a sequence of matrices, we want to find the most efficient way to [[matrix multiplication|multiply these matrices]] together. The problem is not actually to ''perform'' the multiplications, but merely to decide in which order to perform the multiplications.
 
We have many options because matrix multiplication is [[associativity|associative]]. In other words, no matter how we parenthesize the product, the result will be the same. For example, if we had four matrices ''A'', ''B'', ''C'', and ''D'', we would have:
 
:(''ABC'')''D'' = (''AB'')(''CD'') = ''A''(''BCD'') = ''A''(''BC'')''D'' = ....
 
However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, or the ''efficiency''.
For example, suppose A is a 10 &times; 30 matrix, B is a 30 &times; 5 matrix, and C is a 5 &times; 60 matrix. Then,
 
:(''AB'')''C'' = (10&times;30&times;5) + (10&times;5&times;60)  = 1500 + 3000 = 4500 operations
:''A''(''BC'') = (30&times;5&times;60) + (10&times;30&times;60) = 9000 + 18000 = 27000 operations.
 
Clearly the first method is more efficient. Now that we have identified the problem, how do we determine the optimal parenthesization of a product of ''n'' matrices? We could check each possible parenthesization by brute force, but this would require time exponential in the number of matrices, which is impractical for large ''n''. The solution, as we will see, is to break the problem up into a set of related subproblems. By solving each subproblem only once and reusing the solution many times, we can drastically reduce the time required. This is known as [[dynamic programming]].
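The arithmetic above can be checked with a minimal sketch (the class and method names here are purely illustrative):
<source lang="java">
// Minimal sketch: multiplying a (p x q) matrix by a (q x r) matrix
// costs p*q*r scalar multiplications.
public class ChainCostExample {
    static int cost(int p, int q, int r) {
        return p * q * r;
    }
    public static void main(String[] args) {
        // A is 10 x 30, B is 30 x 5, C is 5 x 60
        int costABfirst = cost(10, 30, 5) + cost(10, 5, 60);   // (AB)C = 4500
        int costBCfirst = cost(30, 5, 60) + cost(10, 30, 60);  // A(BC) = 27000
        System.out.println(costABfirst + " vs " + costBCfirst);
    }
}
</source>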
 
== A Dynamic Programming Algorithm ==
 
To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations, needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following [[recursion|recursive algorithm]]:
 
* Take the sequence of matrices and separate it into two subsequences.
* Find the minimum cost of multiplying out each subsequence.
* Add these costs together, and add in the cost of multiplying the two result matrices.
* Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.
 
For example, if we have four matrices ''ABCD'', we compute the cost required to find each of (''A'')(''BCD''), (''AB'')(''CD''), and (''ABC'')(''D''), making recursive calls to find the minimum cost to compute ''ABC'', ''AB'', ''CD'', and ''BCD''. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
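A minimal sketch of this recursion (a method fragment with illustrative names, not taken from a reference):
<source lang="java">
// Naive recursive sketch: matrix A_i has dimensions p[i-1] x p[i], for i = 1..n.
// Returns the minimum number of scalar multiplications needed to compute A_i...A_j.
static int minCost(int[] p, int i, int j) {
    if (i == j) return 0;                      // a single matrix needs no multiplication
    int best = Integer.MAX_VALUE;
    for (int k = i; k < j; k++) {              // split into A_i..A_k and A_(k+1)..A_j
        int q = minCost(p, i, k) + minCost(p, k + 1, j) + p[i - 1] * p[k] * p[j];
        best = Math.min(best, q);
    }
    return best;                               // full chain: minCost(p, 1, p.length - 1)
}
</source>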
 
Unfortunately, if we implement this algorithm we discover that it is just as slow as the naive approach of trying every parenthesization! What went wrong? The answer is that we are doing a lot of redundant work. For example, above we made recursive calls to find the best cost for computing both ''ABC'' and ''AB''. But finding the best cost for computing ''ABC'' also requires finding the best cost for ''AB''. As the recursion grows deeper, more and more of this unnecessary repetition occurs.
 
One simple solution is called [[memoization]]: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply return the saved answer rather than recomputing it. Since there are about ''n''<sup>2</sup>/2 different subsequences, where ''n'' is the number of matrices, the space required to do this is reasonable. This simple trick brings the runtime down from exponential to O(''n''<sup>3</sup>), which is more than efficient enough for real applications. This is [[Top-down and bottom-up design|top-down]] dynamic programming.
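A minimal top-down sketch of this memoization (illustrative names; the memo table is assumed to be pre-filled with -1 by the caller):
<source lang="java">
// Memoized (top-down) sketch: memo[i][j] caches the minimum cost for A_i..A_j,
// with -1 marking an entry that has not been computed yet.
static int minCostMemo(int[] p, int i, int j, int[][] memo) {
    if (i == j) return 0;
    if (memo[i][j] != -1) return memo[i][j];   // reuse a previously computed answer
    int best = Integer.MAX_VALUE;
    for (int k = i; k < j; k++) {
        int q = minCostMemo(p, i, k, memo) + minCostMemo(p, k + 1, j, memo)
                + p[i - 1] * p[k] * p[j];
        best = Math.min(best, q);
    }
    memo[i][j] = best;
    return best;
}
</source>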
 
The following pseudocode is adapted from Cormen ''et al.''<ref name="Cormen">
{{cite book
  | last = Cormen
  | first = Thomas H.
  | coauthors = Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
  | title = Introduction to Algorithms
  | edition = Second
  | chapter = 15.2: Matrix-chain multiplication
  | pages = 331–338
  | publisher = MIT Press and McGraw-Hill
  | year = 2001
  | isbn = 0-262-03293-7
}}
</ref>
<source lang="java">
// Matrix Ai has dimension p[i-1] x p[i] for i = 1..n
MatrixChainOrder(int p[])
{
    // length[p] = n + 1
    n = p.length - 1;
    // m[i,j] = Minimum number of scalar multiplications (i.e., cost)
    // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
    // cost is zero when multiplying one matrix
    for (i = 1; i <= n; i++)
      m[i,i] = 0;
 
    for (L=2; L<=n; L++) { // L is chain length
        for (i=1; i<=n-L+1; i++) {
            j = i+L-1;
            m[i,j] = MAXINT;
            for (k=i; k<=j-1; k++) {
                // q = cost/scalar multiplications
                q = m[i,k] + m[k+1,j] + p[i-1]*p[k]*p[j];
                if (q < m[i,j]) {
                    m[i,j] = q;
                    s[i,j] = k; // s[i,j] = index k that achieved the minimal cost
                }
            }
        }
    }
}
</source>
* Note: The first index for ''p'' is 0, while the first index for ''m'' and ''s'' is 1.
Another solution is to anticipate which costs we will need and precompute them. It works like this:
* For each ''k'' from 2 to ''n'', the number of matrices:
** Compute the minimum costs of each subsequence of length ''k'', using the costs already computed.
 
The following Java code uses zero-based array indexes and includes a convenience method for printing the solved order of operations:
<source lang="java">
public class MatrixOrderOptimization {
    protected int[][]m;
    protected int[][]s;
    public void matrixChainOrder(int[] p) {
        int n = p.length - 1;
        // m[i][j] = minimum cost of computing A_i..A_j; s[i][j] = split index achieving it
        m = new int[n][n];
        s = new int[n][n];
        // Java initializes int arrays to zero, so m[i][i] = 0 (a single matrix costs nothing)
        for (int ii = 1; ii < n; ii++) {           // ii = chain length minus one
            for (int i = 0; i < n - ii; i++) {
                int j = i + ii;
                m[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++) {      // try every split point k
                    // cost of the two subchains plus the cost of multiplying their results
                    int q = m[i][k] + m[k+1][j] + p[i]*p[k+1]*p[j+1];
                    if (q < m[i][j]) {
                        m[i][j] = q;
                        s[i][j] = k;
                    }
                }
            }
        }
    }
    public void printOptimalParenthesizations() {
        boolean[] inAResult = new boolean[s.length];
        printOptimalParenthesizations(s, 0, s.length - 1, inAResult);
    }
    void printOptimalParenthesizations(int[][]s, int i, int j,  /* for pretty printing: */ boolean[] inAResult) {
        if (i != j) {
            printOptimalParenthesizations(s, i, s[i][j], inAResult);
            printOptimalParenthesizations(s, s[i][j] + 1, j, inAResult);
            String istr = inAResult[i] ? "_result " : " ";
            String jstr = inAResult[j] ? "_result " : " ";
            System.out.println(" A_" + i + istr + "* A_" + j + jstr);
            inAResult[i] = true;
            inAResult[j] = true;
        }
    }
}
</source>
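A short usage sketch for the class above (placed in the same class or a small driver class; the dimension vector is the 10&times;30, 30&times;5, 5&times;60 example from the introduction):
<source lang="java">
// Usage sketch: A_0 is 10 x 30, A_1 is 30 x 5, A_2 is 5 x 60.
public static void main(String[] args) {
    MatrixOrderOptimization opt = new MatrixOrderOptimization();
    opt.matrixChainOrder(new int[]{10, 30, 5, 60});
    opt.printOptimalParenthesizations(); // prints the pairwise products in an optimal order
}
</source>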
 
When we are done, we have the minimum cost for the full sequence. Although this approach also requires O(''n''<sup>3</sup>) time, it has practical advantages: it requires no recursion and no testing whether a value has already been computed, and we can save space by discarding subresults that are no longer needed. This is bottom-up dynamic programming, a second way to solve the problem.
 
== An Even More Efficient Algorithm ==
 
An algorithm published in 1984 by Hu and Shing achieves <math>O(n \log n)</math> complexity. They showed how the matrix chain multiplication problem can be transformed (or [[Reduction (complexity)|reduced]]) into the problem of [[Noncrossing partition|partitioning]] a [[convex polygon]] into non-intersecting [[Polygon triangulation|triangles]].
<ref>
{{cite journal
  | last = Hu
  | first = T. C.
  | coauthors = M. T. Shing
  | title = Computation of matrix chain products. Part II
  | journal = SIAM Journal on Computing
  | volume = 13
  | issue = 2
  | pages = 228–251
  | year = 1984
  | url = ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/81/875/CS-TR-81-875.pdf
  | format = [[Portable Document Format|PDF]]
  | issn = 0097-5397
  | doi = 10.1137/0213017
}}
</ref>
This image illustrates possible triangulations of a hexagon:
[[Image:Catalan-Hexagons-example.svg|400px|center]]
 
They then developed an algorithm that finds an optimal solution for this partition problem in <math>O(n \log n)</math> time.
 
{{Expand section|date=April 2009}}
With ''n''+1 matrices in the multiplication chain there are ''n'' [[binary operation]]s and ''C''<sub>''n''</sub> ways of placing parentheses, where ''C''<sub>''n''</sub> is the ''n''th [[Catalan number]].
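For example, a chain of four matrices (''n''&nbsp;=&nbsp;3) can be parenthesized in ''C''<sub>3</sub>&nbsp;=&nbsp;5 ways. A minimal sketch that counts them via the standard Catalan recurrence (illustrative code, not from the cited sources):
<source lang="java">
// Sketch: number of ways to fully parenthesize a chain with n binary operations,
// using the recurrence C_0 = 1, C_m = sum over i of C_i * C_(m-1-i).
static long catalan(int n) {
    long[] c = new long[n + 1];
    c[0] = 1;
    for (int m = 1; m <= n; m++)
        for (int i = 0; i < m; i++)
            c[m] += c[i] * c[m - 1 - i];
    return c[n]; // catalan(3) == 5
}
</source>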
 
== Generalizations ==
 
Although this algorithm was presented for the problem of matrix chain multiplication, it generalizes well to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as on all partial results), compute the minimum-cost way to group the objects to apply the operation over the sequence.<ref>G. Baumgartner, D. Bernholdt, D. Cociorva, R. Harrison, M. Nooijen, J. Ramanujam and P. Sadayappan. A Performance Optimization Framework for Compilation of Tensor Contraction Expressions into Parallel Programs. 7th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS '02). Fort Lauderdale, Florida. 2002 available at http://citeseer.ist.psu.edu/610463.html</ref>
 
One common special case of this is [[string concatenation]]. Say we have a list of strings. In [[C (programming language)|C]], for example, the cost of concatenating two strings of length ''m'' and ''n'' using ''strcat'' is O(''m''&nbsp;+&nbsp;''n''), since we need O(''m'') time to find the end of the first string and O(''n'') time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings (although this is rather useless, since we can concatenate them all in time proportional to the sum of their lengths). A similar problem exists for singly [[linked lists]].
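A minimal sketch of this for strings (function and variable names are illustrative): the structure is identical to the matrix version, and only the cost of combining two partial results changes, here assumed to be the sum of their lengths as for ''strcat'':
<source lang="java">
// Sketch: minimum total strcat-style cost to concatenate strings with the given lengths,
// where joining partial results of lengths a and b is assumed to cost a + b.
static int minConcatCost(int[] len) {
    int n = len.length;
    int[] prefix = new int[n + 1];              // prefix[i] = total length of s_0..s_(i-1)
    for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + len[i];
    int[][] m = new int[n][n];                  // m[i][j] = minimum cost for s_i..s_j
    for (int L = 2; L <= n; L++) {              // L = chain length
        for (int i = 0; i + L - 1 < n; i++) {
            int j = i + L - 1;
            m[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) {       // split into s_i..s_k and s_(k+1)..s_j
                int left = prefix[k + 1] - prefix[i];
                int right = prefix[j + 1] - prefix[k + 1];
                m[i][j] = Math.min(m[i][j], m[i][k] + m[k + 1][j] + left + right);
            }
        }
    }
    return m[0][n - 1];
}
</source>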
 
Another generalization is to solve the problem when two parallel processors are available. In this case, instead of adding the costs of computing each subsequence, we just take the ''maximum'', because we can do them both simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. Heejo Lee et al. describe even more sophisticated approaches.
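As a sketch of the simple max-based recurrence just described (using the zero-based indexing of the Java code above, and not the more sophisticated approaches of Heejo Lee et al.), only the combination step changes:
<source lang="java">
// Two-processor sketch: the two subchains can be computed simultaneously,
// so their costs are combined with max instead of being added.
int q = Math.max(m[i][k], m[k+1][j]) + p[i]*p[k+1]*p[j+1];
if (q < m[i][j]) {
    m[i][j] = q;
    s[i][j] = k;
}
</source>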
 
==Implementations==
* A [[JavaScript]] implementation is available at [http://alexle.net/stuff/dynamic-programming-matrix-multiplication/ Alex Le's Blog]
* A [[JavaScript]] implementation is available at [http://www.ateji.com/px/whitepapers.html Ateji PX]
* A [[JavaScript]] implementation showing the final ''m'' and ''s'' tables, and intermediate calculations, by [http://modoogle.com/matrixchainorder/ Mikhail Simin]
* A [[Java (programming language)|Java]] implementation is available at [http://www.brian-borowski.com/Matrix/ Brian's Project Gallery]
* A [[MATLAB]] implementation is available from [http://www.mathworks.com/matlabcentral/fileexchange/27950 MATLAB Central]
 
== References ==
 
<references/>
 
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 15.2: Matrix-chain multiplication, pp.&nbsp;331&ndash;339.
* Viv. [http://citeseer.ist.psu.edu/268391.html Dynamic Programming]. A 1995 introductory article on dynamic programming.
* Heejo Lee, Jong Kim, Sungje Hong, and Sunggu Lee. [http://ccs.korea.ac.kr/pds/tpds03.pdf Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems]. ''IEEE Trans. on Parallel and Distributed Systems,'' Vol. 14, No. 4, pp.&nbsp;394–407, Apr. 2003
 
{{DEFAULTSORT:Matrix Chain Multiplication}}
[[Category:Optimization algorithms and methods]]
[[Category:Matrices]]
[[Category:Dynamic programming]]
