Tridiagonal matrix algorithm


In numerical linear algebra, the tridiagonal matrix algorithm, also known as the Thomas algorithm (named after Llewellyn Thomas), is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system for n unknowns may be written as

    a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i,

where a_1 = 0 and c_n = 0, or, in matrix form,

    \begin{bmatrix}
      b_1 & c_1 &        &        & 0       \\
      a_2 & b_2 & c_2    &        &         \\
          & a_3 & b_3    & \ddots &         \\
          &     & \ddots & \ddots & c_{n-1} \\
      0   &     &        & a_n    & b_n
    \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}
    =
    \begin{bmatrix} d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_n \end{bmatrix}.

For such systems, the solution can be obtained in O(n) operations instead of the O(n^3) required by Gaussian elimination. A first sweep eliminates the a_i's, and then an (abbreviated) backward substitution produces the solution. Examples of such matrices commonly arise from the discretization of the 1D Poisson equation (e.g., the 1D diffusion problem) and from natural cubic spline interpolation; similar systems of matrices arise in tight binding physics or nearest-neighbour effects models.
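
As a concrete illustration (a standard example, not part of the original text), discretizing the 1D Poisson equation -u''(x) = f(x) on a uniform grid with spacing h by second-order central differences gives one equation per interior grid point,

    -u_{i-1} + 2 u_i - u_{i+1} = h^2 f_i,

which is exactly a tridiagonal system with a_i = c_i = -1 and b_i = 2.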

Method

The forward sweep consists of modifying the coefficients as follows, denoting the new modified coefficients with primes:

    c'_i = \begin{cases}
      \dfrac{c_i}{b_i},                & i = 1 \\
      \dfrac{c_i}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n - 1
    \end{cases}

and

    d'_i = \begin{cases}
      \dfrac{d_i}{b_i},                               & i = 1 \\
      \dfrac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}, & i = 2, 3, \dots, n.
    \end{cases}

The solution is then obtained by back substitution:

    x_n = d'_n,
    x_i = d'_i - c'_i x_{i+1}, \qquad i = n - 1, n - 2, \dots, 1.

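To make the sweep concrete, here is a small worked example (constructed for illustration; it is not from the original article). Consider the system

    \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
    =
    \begin{bmatrix} 3 \\ 4 \\ 3 \end{bmatrix}.

The forward sweep gives c'_1 = 1/2 and d'_1 = 3/2, then c'_2 = 1/(2 - 1/2) = 2/3 and d'_2 = (4 - 3/2)/(3/2) = 5/3, and finally d'_3 = (3 - 5/3)/(2 - 2/3) = 1. Back substitution yields x_3 = 1, x_2 = 5/3 - (2/3)(1) = 1, and x_1 = 3/2 - (1/2)(1) = 1.
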
Implementations

All the provided implementations assume that the three diagonals, a (below), b (main), and c (above), are passed as arguments.

C

The following C function will solve a general tridiagonal system (though it will destroy the input vector c in the process). Note that the indexing here is zero-based, in other words i = 0, 1, ..., N - 1, where N is the number of unknowns.

#include <stddef.h> /* size_t */

void solve_tridiagonal_in_place_destructive(double x[], const size_t N, const double a[], const double b[], double c[]) {
    /* unsigned integer of same size as pointer */
    size_t in;
    
    /*
     solves Ax = v where A is a tridiagonal matrix consisting of vectors a, b, c
     note that contents of input vector c will be modified, making this a one-time-use function
     x[] - initially contains the input vector v, and returns the solution x. indexed from [0, ..., N - 1]
     N — number of equations
     a[] - subdiagonal (means it is the diagonal below the main diagonal) -- indexed from [1, ..., N - 1]
     b[] - the main diagonal, indexed from [0, ..., N - 1]
     c[] - superdiagonal (means it is the diagonal above the main diagonal) -- indexed from [0, ..., N - 2]
     */
    
    c[0] = c[0] / b[0];
    x[0] = x[0] / b[0];
    
    /* loop from 1 to N - 1 inclusive */
    for (in = 1; in < N; in++) {
        double m = 1.0 / (b[in] - a[in] * c[in - 1]);
        c[in] = c[in] * m;
        x[in] = (x[in] - a[in] * x[in - 1]) * m;
    }
    
    /* loop from N - 2 to 0 inclusive, safely testing loop end condition */
    for (in = N - 1; in-- > 0; )
        x[in] = x[in] - c[in] * x[in + 1];
}

The following variant preserves the system of equations for reuse on other inputs. Note the necessity of library calls to allocate and free scratch space; a more efficient implementation for solving the same tridiagonal system on many inputs would rely on the calling function to provide a pointer to the scratch space.

#include <stddef.h> /* size_t */
#include <stdlib.h> /* malloc, free */

void solve_tridiagonal_in_place_reusable(double x[], const size_t N, const double a[], const double b[], const double c[]) {
    size_t in;
    
    /* Allocate scratch space. */
    double * cprime = malloc(sizeof(double) * N);
    
    if (!cprime) {
        /* do something to handle error */
    }
    
    cprime[0] = c[0] / b[0];
    x[0] = x[0] / b[0];
    
    /* loop from 1 to N - 1 inclusive */
    for (in = 1; in < N; in++) {
        double m = 1.0 / (b[in] - a[in] * cprime[in - 1]);
        cprime[in] = c[in] * m;
        x[in] = (x[in] - a[in] * x[in - 1]) * m;
    }
    
    /* loop from N - 2 to 0 inclusive, safely testing loop end condition */
    for (in = N - 1; in-- > 0; )
        x[in] = x[in] - cprime[in] * x[in + 1];
    
    /* free scratch space */
    free(cprime);
}

Python

Note that the indexing here is zero-based, in other words i = 0, 1, ..., n - 1, where n is the number of unknowns.

# note: the function also modifies the b[] and d[] parameters while solving
def TDMASolve(a, b, c, d):
    n = len(d)  # n is the number of rows; a and c have length n - 1
    # forward elimination: remove the sub-diagonal entries
    for i in range(n - 1):
        d[i + 1] -= d[i] * a[i] / b[i]
        b[i + 1] -= c[i] * a[i] / b[i]
    # backward elimination: remove the super-diagonal entries
    for i in reversed(range(n - 1)):
        d[i] -= d[i + 1] * c[i] / b[i + 1]
    # divide by the (modified) diagonal to obtain the solution
    return [d[i] / b[i] for i in range(n)]
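
As a quick sanity check (an illustrative snippet, not part of the original article), the function can be called on a small system with a known solution; note that the b and d lists are modified in place:

# hypothetical usage example: a 4x4 system whose exact solution is all ones
a = [1.0, 1.0, 1.0]           # sub-diagonal, length n - 1
b = [4.0, 4.0, 4.0, 4.0]      # main diagonal, length n
c = [1.0, 1.0, 1.0]           # super-diagonal, length n - 1
d = [5.0, 6.0, 6.0, 5.0]      # right-hand side, length n

x = TDMASolve(a, b, c, d)
print(x)                      # expected: approximately [1.0, 1.0, 1.0, 1.0]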

MATLAB

function x = TDMAsolver(a,b,c,d)
% a, b, c are the column vectors of the compressed tridiagonal matrix, d is the right-hand side vector
n = length(d); % n is the number of rows
 
% Modify the first-row coefficients
c(1) = c(1) / b(1);    % Division by zero risk.
d(1) = d(1) / b(1);   
 
for i = 2:n-1
    temp = b(i) - a(i) * c(i-1);
    c(i) = c(i) / temp;
    d(i) = (d(i) - a(i) * d(i-1))/temp;
end
 
d(n) = (d(n) - a(n) * d(n-1))/( b(n) - a(n) * c(n-1));
 
% Now back substitute.
x(n) = d(n);
for i = n-1:-1:1
    x(i) = d(i) - c(i) * x(i + 1);
end

Fortran 90

Note that the indexing here is one-based, in other words i = 1, 2, ..., n, where n is the number of unknowns.

Sometimes it is undesirable to have the solver routine overwrite the tridiagonal coefficients (e.g. for solving multiple systems of equations where only the right side of the system changes), so this implementation gives an example of a relatively inexpensive method of preserving the coefficients.

      subroutine solve_tridiag(a,b,c,d,x,n)
      implicit none
!	 a - sub-diagonal (means it is the diagonal below the main diagonal)
!	 b - the main diagonal
!	 c - sup-diagonal (means it is the diagonal above the main diagonal)
!	 d - right part
!	 x - the answer
!	 n - number of equations

        integer,intent(in) :: n
        real(8),dimension(n),intent(in) :: a,b,c,d
        real(8),dimension(n),intent(out) :: x
        real(8),dimension(n) :: cp,dp
        real(8) :: m
        integer i

! initialize c-prime and d-prime
        cp(1) = c(1)/b(1)
        dp(1) = d(1)/b(1)
! solve for vectors c-prime and d-prime
         do i = 2,n
           m = b(i)-cp(i-1)*a(i)
           cp(i) = c(i)/m
           dp(i) = (d(i)-dp(i-1)*a(i))/m
         enddo
! initialize x
         x(n) = dp(n)
! solve for x from the vectors c-prime and d-prime
        do i = n-1, 1, -1
          x(i) = dp(i)-cp(i)*x(i+1)
        end do

      end subroutine solve_tridiag

The following variant avoids the scratch arrays by overwriting the input arrays b and d in place.[1]

      subroutine tdma(n,a,b,c,d,x)
      implicit none
      integer, intent(in) :: n
      real, intent(in) :: a(n), c(n)
      real, intent(inout), dimension(n) :: b, d
      real, intent(out) :: x(n)
      !  --- Local variables ---
      integer :: i
      real :: q
      !  --- Elimination ---
      do i = 2,n
         q = a(i)/b(i - 1)
         b(i) = b(i) - c(i - 1)*q
         d(i) = d(i) - d(i - 1)*q
      end do
      ! --- Backsubstitution ---
      q = d(n)/b(n)
      x(n) = q
      do i = n - 1,1,-1
         q = (d(i) - c(i)*q)/b(i)
         x(i) = q
      end do
      return
      end

Derivation

The derivation of the tridiagonal matrix algorithm involves manually performing some specialized Gaussian elimination in a generic manner.

Suppose that the unknowns are x_1, ..., x_n, and that the equations to be solved are:

    b_1 x_1 + c_1 x_2 = d_1,
    a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i, \qquad i = 2, \dots, n - 1,
    a_n x_{n-1} + b_n x_n = d_n.

Consider modifying the second (i = 2) equation with the first equation as follows:

    (\text{equation 2}) \cdot b_1 - (\text{equation 1}) \cdot a_2,

which would give:

    (b_2 b_1 - c_1 a_2) x_2 + c_2 b_1 x_3 = d_2 b_1 - d_1 a_2,

and the effect is that x_1 has been eliminated from the second equation. Using a similar tactic with the modified second equation on the third equation yields:

    (b_3 (b_2 b_1 - c_1 a_2) - c_2 b_1 a_3) x_3 + c_3 (b_2 b_1 - c_1 a_2) x_4 = d_3 (b_2 b_1 - c_1 a_2) - (d_2 b_1 - d_1 a_2) a_3.

This time x_2 was eliminated. If this procedure is repeated until the n-th row, the (modified) n-th equation will involve only one unknown, x_n. This may be solved for and then used to solve the (n-1)-th equation, and so on until all of the unknowns are solved for.

Clearly, the coefficients on the modified equations get more and more complicated if stated explicitly. By examining the procedure, the modified coefficients (notated with tildes) may instead be defined recursively:

    \tilde a_i = 0,
    \tilde b_1 = b_1, \qquad \tilde b_i = b_i \tilde b_{i-1} - \tilde c_{i-1} a_i,
    \tilde c_1 = c_1, \qquad \tilde c_i = c_i \tilde b_{i-1},
    \tilde d_1 = d_1, \qquad \tilde d_i = d_i \tilde b_{i-1} - \tilde d_{i-1} a_i.

To further hasten the solution process, \tilde b_i may be divided out (if there is no division-by-zero risk); the newer modified coefficients, each notated with a prime, will be:

    a'_i = 0,
    b'_i = 1,
    c'_1 = \frac{c_1}{b_1}, \qquad c'_i = \frac{c_i}{b_i - a_i c'_{i-1}},
    d'_1 = \frac{d_1}{b_1}, \qquad d'_i = \frac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}.

This gives the following system with the same unknowns and coefficients defined in terms of the original ones above:

    x_i + c'_i x_{i+1} = d'_i, \qquad i = 1, \dots, n - 1,
    x_n = d'_n.

The last equation involves only one unknown. Solving it in turn reduces the next-to-last equation to one unknown, so that this backward substitution can be used to find all of the unknowns:

    x_n = d'_n,
    x_i = d'_i - c'_i x_{i+1}, \qquad i = n - 1, n - 2, \dots, 1.

Variants

In some situations, particularly those involving periodic boundary conditions, a slightly perturbed form of the tridiagonal system may need to be solved:

    a_1 x_n + b_1 x_1 + c_1 x_2 = d_1,
    a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i, \qquad i = 2, \dots, n - 1,
    a_n x_{n-1} + b_n x_n + c_n x_1 = d_n.

The extra corner coefficients a_1 and c_n couple the first and last unknowns, so the matrix is no longer strictly tridiagonal.

In this case, we can make use of the Sherman-Morrison formula to avoid the additional operations of Gaussian elimination and still use the Thomas algorithm. The method requires solving a modified non-cyclic version of the system for both the input and a sparse corrective vector, and then combining the solutions. This can be done efficiently if both solutions are computed at once, as the forward portion of the pure tridiagonal matrix algorithm can be shared.
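
A minimal sketch of this approach in Python follows (the names and storage conventions are illustrative, not from the original article; it assumes NumPy, stores the corner coefficients in a[0] and c[-1], and solves the two corrected systems with separate sweeps rather than sharing the forward pass):

import numpy as np

def thomas(a, b, c, d):
    # Plain Thomas algorithm: a is the sub-diagonal (a[0] unused), b the main
    # diagonal, c the super-diagonal (c[-1] unused), d the right-hand side.
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cyclic_thomas(a, b, c, d):
    # Periodic (cyclic) tridiagonal solve via the Sherman-Morrison formula.
    # a[0] is the coefficient of x[n-1] in the first equation, and c[-1] is the
    # coefficient of x[0] in the last one (the "corner" entries).
    n = len(d)
    alpha, beta = a[0], c[-1]
    gamma = -b[0]                      # any convenient nonzero value
    # Write the cyclic matrix as A + u v^T, where A is strictly tridiagonal.
    bb = np.array(b, dtype=float)
    bb[0] -= gamma
    bb[-1] -= alpha * beta / gamma
    u = np.zeros(n)
    u[0] = gamma
    u[-1] = beta
    v = np.zeros(n)
    v[0] = 1.0
    v[-1] = alpha / gamma
    y = thomas(a, bb, c, d)            # solve A y = d
    q = thomas(a, bb, c, u)            # solve A q = u (the corrective vector)
    return y - q * (v @ y) / (1.0 + v @ q)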

In other situations, the system of equations may be block tridiagonal (see block matrix), with smaller submatrices arranged as the individual elements in the above matrix system (e.g., the 2D Poisson problem). Simplified forms of Gaussian elimination have been developed for these situations.
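
As an illustration of how the same recursion carries over, here is a rough block-wise sketch in Python (illustrative code under my own naming, not from the original article; it assumes NumPy and replaces the scalar divisions with linear solves against the diagonal blocks):

import numpy as np

def block_thomas(A, B, C, D):
    # Block tridiagonal solve: A holds the sub-diagonal blocks (A[0] unused),
    # B the diagonal blocks, C the super-diagonal blocks (C[-1] unused),
    # and D the right-hand side blocks; each block is an m x m array.
    n = len(B)
    Cp = [None] * n
    Dp = [None] * n
    Cp[0] = np.linalg.solve(B[0], C[0])
    Dp[0] = np.linalg.solve(B[0], D[0])
    for i in range(1, n):
        M = B[i] - A[i] @ Cp[i - 1]
        if i < n - 1:
            Cp[i] = np.linalg.solve(M, C[i])
        Dp[i] = np.linalg.solve(M, D[i] - A[i] @ Dp[i - 1])
    X = [None] * n
    X[-1] = Dp[-1]
    for i in range(n - 2, -1, -1):
        X[i] = Dp[i] - Cp[i] @ X[i + 1]
    return X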

The textbook Numerical Mathematics by Quarteroni, Sacco and Saleri lists a modified version of the algorithm which avoids some of the divisions (using multiplications instead), which is beneficial on some computer architectures.

References
