A '''discrete cosine transform''' ('''DCT''') expresses a finite sequence of [[data points]] in terms of a sum of [[cosine]] functions oscillating at different [[frequency|frequencies]].  DCTs are important to numerous applications in science and engineering, from [[lossy compression]] of [[audio compression (data)|audio]] (e.g. [[MP3]]) and [[image compression|images]] (e.g. [[JPEG]]) (where small high-frequency components can be discarded), to [[spectral method]]s for the numerical solution of [[partial differential equations]].  The use of [[cosine]] rather than [[sine]] functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient (as described below, fewer functions are needed to approximate a typical [[signal (electrical engineering)|signal]]), whereas for differential equations the cosines express a particular choice of [[boundary condition]]s.
 
In particular, a DCT is a [[List of Fourier-related transforms|Fourier-related transform]] similar to the [[discrete Fourier transform]] (DFT), but using only [[real number]]s. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with [[even and odd functions|even]] symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
 
The most common variant of discrete cosine transform is the type-II DCT, which is often called simply "the DCT",<ref name="pubDCT">{{Citation |first=N. |last=Ahmed |author1-link=N. Ahmed |first2=T. |last2=Natarajan |first3=K. R. |last3=Rao |title=Discrete Cosine Transform |journal=IEEE Transactions on Computers |date=January 1974 |volume=C-23 |issue=1 |pages=90–93 |doi=10.1109/T-C.1974.223784}}</ref><ref name="pubRaoYip">{{Citation |last1=Rao |first1=K |authorlink1=K. R. Rao |last2=Yip |first2=P |title=Discrete Cosine Transform: Algorithms, Advantages, Applications |publisher=Academic Press |location=Boston |year=1990 |isbn=0-12-580203-X}}</ref> its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT".  Two related transforms are the [[discrete sine transform]] (DST), which is equivalent to a DFT of real and ''odd'' functions, and the [[modified discrete cosine transform]] (MDCT), which is based on a DCT of ''overlapping'' data.
 
== Applications ==
The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy data compression, because it has a strong "energy compaction" property:<ref name="pubDCT"/><ref name="pubRaoYip"/> most of the signal information tends to be concentrated in a few low-frequency components of the DCT, approaching the [[Karhunen-Loève transform]] (which is optimal in the decorrelation sense) for signals based on certain limits of [[Markov process]]es.  As explained below, this stems from the boundary conditions implicit in the cosine functions.
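The following short numerical sketch (an illustration only, not drawn from the references above; it assumes [[SciPy]]'s <code>scipy.fft</code> module and uses an arbitrary linear ramp as the test signal) compares how quickly the DCT-II and the DFT concentrate the energy of a smooth signal into a few coefficients:

<syntaxhighlight lang="python">
# Illustration of "energy compaction": for a smooth signal whose endpoints
# differ, the DCT-II packs the energy into far fewer coefficients than the
# DFT, whose implicit periodic extension has a jump at the boundary.
import numpy as np
from scipy.fft import dct, fft

x = np.linspace(0.0, 1.0, 32)            # a smooth ramp (arbitrary example)
X_dct = dct(x, type=2, norm='ortho')     # orthonormal DCT-II
X_dft = fft(x) / np.sqrt(len(x))         # DFT with a comparable scaling

def energy_fraction(coeffs, k):
    """Fraction of total energy in the k largest-magnitude coefficients."""
    mags = np.sort(np.abs(coeffs))[::-1]
    return np.sum(mags[:k] ** 2) / np.sum(mags ** 2)

for k in (2, 4, 8):
    print(k, energy_fraction(X_dct, k), energy_fraction(X_dft, k))
# For this signal, the DCT reaches any given energy fraction with noticeably
# fewer coefficients than the DFT.
</syntaxhighlight>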
 
[[Image:Example dft dct.svg|thumb|360px|right|DCT-II (bottom) compared to the [[discrete fourier transform|DFT]] (middle) of an input signal (top).]]
 
A related transform, the [[Modified discrete cosine transform|''modified'' discrete cosine transform]], or MDCT (based on the DCT-IV), is used in [[Advanced audio coding|AAC]], [[Vorbis]], [[Windows Media Audio|WMA]], and [[MP3]] audio compression.
 
DCTs are also widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even/odd boundary conditions at the two ends of the array.
 
DCTs are also closely related to [[Chebyshev polynomials]], and fast DCT algorithms (below) are used in [[Chebyshev approximation]] of arbitrary functions by series of Chebyshev polynomials, for example in [[Clenshaw–Curtis quadrature]].
 
===JPEG===
{{main|JPEG#Discrete cosine transform|l1=JPEG: Discrete cosine transform}}
 
The DCT is used in [[JPEG]] image compression, [[MJPEG]], [[MPEG]], [[DV]], [[Daala]], and [[Theora]] [[video compression]]. There, the two-dimensional DCT-II of each <math>N \times N</math> block is computed and the results are [[Quantization (signal processing)|quantized]] and [[Entropy encoding|entropy coded]]. In this case, <math>N</math> is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the <math>(0,0)</math> element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
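The sketch below is a simplified illustration of the transform-and-quantize step only; the random block and the uniform quantization step are placeholders, not the standardized JPEG quantization tables or entropy coder. It assumes SciPy's <code>dctn</code>/<code>idctn</code>:

<syntaxhighlight lang="python">
# Simplified JPEG-style transform coding of one 8x8 block (illustration only).
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)

coeffs = dctn(block - 128.0, type=2, norm='ortho')  # level shift, then 2-D DCT-II
q = 16.0                                            # placeholder uniform quantization step
quantized = np.round(coeffs / q)                    # lossy step: small (mostly high-frequency)
                                                    # coefficients are rounded to zero
restored = idctn(quantized * q, type=2, norm='ortho') + 128.0
print(np.max(np.abs(block - restored)))             # error introduced by quantization
</syntaxhighlight>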
 
==Informal overview==
 
Like any Fourier-related transform, discrete cosine transforms (DCTs) express a function or a signal in terms of a sum of [[trigonometric function|sinusoids]] with different [[frequencies]] and [[amplitude]]s.  Like the [[discrete Fourier transform]] (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of [[complex exponential]]s).  However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different [[boundary condition]]s than the DFT or other related transforms.
 
The Fourier-related transforms that operate on a function over a finite [[domain of a function|domain]], such as the DFT or DCT or a [[Fourier series]], can be thought of as implicitly defining an ''extension'' of that function outside the domain. That is, once you write a function <math>f(x)</math> as a sum of sinusoids, you can evaluate that sum at any <math>x</math>, even for <math>x</math> where the original <math>f(x)</math> was not specified. The DFT, like the Fourier series, implies a [[periodic function|periodic]] extension of the original function.  A DCT, like a [[Sine and cosine transforms|cosine transform]], implies an [[even and odd functions|even]] extension of the original function.
 
[[Image:DCT-symmetries.svg|thumb|right|350px|Illustration of the implicit even/odd extensions of DCT input data, for ''N''=11 data points (red dots), for the four most common types of DCT (types I-IV).]]
However, because DCTs operate on ''finite'', ''discrete'' sequences, two issues arise that do not apply for the continuous cosine transform.  First, one has to specify whether the function is even or odd at ''both'' the left and right boundaries of the domain (i.e. the min-''n'' and max-''n'' boundaries in the definitions below, respectively).  Second, one has to specify around ''what point'' the function is even or odd.  In particular, consider a sequence ''abcd'' of four equally spaced data points, and say that we specify an even ''left'' boundary. There are two sensible possibilities: either the data are even about the sample ''a'', in which case the even extension is ''dcbabcd'', or the data are even about the point ''halfway'' between ''a'' and the previous point, in which case the even extension is ''dcbaabcd'' (''a'' is repeated).
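A small sketch (illustration only, using arbitrary numbers for ''a'', ''b'', ''c'', ''d'') of these two even left-boundary extensions:

<syntaxhighlight lang="python">
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
data = np.array([a, b, c, d])

# Even about the sample a itself (the DCT-I-style boundary, even around n = 0):
even_about_sample = np.concatenate([data[:0:-1], data])    # d c b a b c d

# Even about the point halfway before a (the DCT-II-style boundary, even around n = -1/2):
even_about_halfway = np.concatenate([data[::-1], data])    # d c b a a b c d

print(even_about_sample)    # [4. 3. 2. 1. 2. 3. 4.]
print(even_about_halfway)   # [4. 3. 2. 1. 1. 2. 3. 4.]
</syntaxhighlight>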
 
These choices lead to all the standard variations of DCTs and also [[discrete sine transform]]s (DSTs). 
Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities.  Half of these possibilities, those where the ''left'' boundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST.
 
These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types.  Most directly, when using Fourier-related transforms to solve [[partial differential equation]]s by [[spectral method]]s, the boundary conditions are directly specified as a part of the problem being solved.  Or, for the [[Modified discrete cosine transform|MDCT]] (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation.  In a more subtle fashion, the boundary conditions are responsible for the "energy compaction" properties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series.
 
In particular, it is well known that any [[Classification of discontinuities|discontinuities]] in a function reduce the [[rate of convergence]] of the Fourier series, so that more sinusoids are needed to represent the function with a given accuracy. The same principle governs the usefulness of the DFT and other transforms for signal compression: the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed.  (Here, we think of the DFT or DCT as approximations for the [[Fourier series]] or [[cosine series]] of a function, respectively, in order to talk about its "smoothness".)  However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries.  (A similar problem arises for the DST, in which the odd left boundary condition implies a discontinuity for any function that does not happen to be zero at that boundary.)  In contrast, a DCT where ''both'' boundaries are even ''always'' yields a continuous extension at the boundaries (although the [[slope]] is generally discontinuous).  This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries) generally perform better for signal compression than DFTs and DSTs.  In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.
 
== Formal definition ==
Formally, the discrete cosine transform is a [[linear]], invertible [[function (mathematics)|function]] <math>f:\mathbb{R}^{N}\to\mathbb{R}^{N}</math> (where <math>\mathbb{R}</math> denotes the set of [[real number]]s), or equivalently an invertible ''N'' × ''N'' [[square matrix]]. There are several variants of the DCT with slightly modified definitions. The ''N'' real numbers ''x''<sub>0</sub>, ..., ''x''<sub>''N''-1</sub> are transformed into the ''N'' real numbers ''X''<sub>0</sub>, ..., ''X''<sub>''N''-1</sub> according to one of the formulas:
 
=== DCT-I ===
:<math>X_k = \frac{1}{2} (x_0 + (-1)^k x_{N-1})
+ \sum_{n=1}^{N-2} x_n \cos \left[\frac{\pi}{N-1} n k \right] \quad \quad k = 0, \dots, N-1.</math>
 
Some authors further multiply the ''x''<sub>0</sub> and ''x''<sub>''N''-1</sub> terms by √2, and correspondingly multiply the ''X''<sub>0</sub> and ''X''<sub>''N''-1</sub> terms by 1/√2. This makes the DCT-I matrix [[orthogonal matrix|orthogonal]], if one further multiplies by an overall scale factor of <math>\sqrt{2/(N-1)}</math>, but breaks the direct correspondence with a real-even DFT.
 
The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of <math>2N-2</math> real numbers with even symmetry. For example, a DCT-I of ''N''=5 real numbers ''abcde'' is exactly equivalent to a DFT of eight real numbers ''abcdedcb'' (even symmetry), divided by two. (In contrast, DCT types II-IV involve a half-sample shift in the equivalent DFT.)
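This equivalence can be checked numerically; the sketch below (an illustration, with arbitrary values standing in for ''abcde'') implements the DCT-I formula above directly and compares it with the 8-point DFT of the mirrored sequence:

<syntaxhighlight lang="python">
import numpy as np

def dct1(x):
    """DCT-I exactly as defined above (no normalization factors)."""
    N = len(x)
    return np.array([
        0.5 * (x[0] + (-1) ** k * x[-1])
        + sum(x[n] * np.cos(np.pi * n * k / (N - 1)) for n in range(1, N - 1))
        for k in range(N)
    ])

x = np.array([1.0, 2.0, 3.0, 5.0, 8.0])        # a b c d e (arbitrary values)
mirrored = np.concatenate([x, x[-2:0:-1]])     # a b c d e d c b
print(np.allclose(dct1(x), np.fft.fft(mirrored)[:5].real / 2))   # True
</syntaxhighlight>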
 
Note, however, that the DCT-I is not defined for ''N'' less than 2. (All other DCT types are defined for any positive ''N''.)
 
Thus, the DCT-I corresponds to the boundary conditions: ''x''<sub>''n''</sub> is even around ''n''=0 and even around ''n''=''N''-1; similarly for ''X''<sub>''k''</sub>.
 
=== DCT-II ===
:<math>X_k =
\sum_{n=0}^{N-1} x_n \cos \left[\frac{\pi}{N} \left(n+\frac{1}{2}\right) k \right] \quad \quad k = 0, \dots, N-1.</math>
 
The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT".<ref name="pubDCT"/><ref name="pubRaoYip"/>
 
This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of <math>4N</math> real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the <math>4N</math> inputs <math>y_n</math>, where <math>y_{2n}=0</math>, <math>y_{2n+1} = x_n</math> for <math>0 \leq n < N</math>, <math>y_{2N}=0</math>, and <math>y_{4N-n}=y_n</math> for <math>0 < n < 2N</math>.
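The following sketch (an illustration with arbitrary input; the helper function is ours, not a standard library routine) checks this equivalence by building the length-<math>4N</math> sequence <math>y_n</math> described above and comparing half of its DFT with the DCT-II formula:

<syntaxhighlight lang="python">
import numpy as np

def dct2_naive(x):
    """DCT-II exactly as defined above (no normalization factors)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N)) for k in range(N)])

N = 6
x = np.random.default_rng(1).standard_normal(N)

y = np.zeros(4 * N)
y[1:2 * N:2] = x                   # y_{2n+1} = x_n; even-indexed entries stay zero
y[2 * N + 1:] = y[1:2 * N][::-1]   # y_{4N-n} = y_n for 0 < n < 2N

Y = np.fft.fft(y)
print(np.allclose(dct2_naive(x), Y[:N].real / 2))   # True: the DCT-II is half this DFT
</syntaxhighlight>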
 
Some authors further multiply the ''X''<sub>0</sub> term by 1/√2 and multiply the resulting matrix by an overall scale factor of <math>\sqrt{2/N}</math> (see below for the corresponding change in DCT-III). This makes the DCT-II matrix [[orthogonal matrix|orthogonal]], but breaks the direct correspondence with a real-even DFT of half-shifted input.
 
The DCT-II implies the boundary conditions: ''x''<sub>''n''</sub> is even around ''n''=-1/2 and even around ''n''=''N''-1/2; ''X''<sub>''k''</sub> is even around ''k''=0 and odd around ''k''=''N''.
 
=== DCT-III ===
:<math>X_k = \frac{1}{2} x_0 +
\sum_{n=1}^{N-1} x_n \cos \left[\frac{\pi}{N} n \left(k+\frac{1}{2}\right) \right] \quad \quad k = 0, \dots, N-1.</math>
 
Because it is the inverse of DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT").<ref name="pubRaoYip"/>
 
Some authors further multiply the ''x''<sub>0</sub> term by √2 and multiply the resulting matrix by an overall scale factor of <math>\sqrt{2/N}</math> (see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix [[orthogonal matrix|orthogonal]], but breaks the direct correspondence with a real-even DFT of half-shifted output.
 
The DCT-III implies the boundary conditions: ''x''<sub>''n''</sub> is even around ''n''=0 and odd around ''n''=''N''; ''X''<sub>''k''</sub> is even around ''k''=-1/2 and even around ''k''=''N''-1/2.
 
=== DCT-IV ===
:<math>X_k =
\sum_{n=0}^{N-1} x_n \cos \left[\frac{\pi}{N} \left(n+\frac{1}{2}\right) \left(k+\frac{1}{2}\right) \right] \quad \quad k = 0, \dots, N-1.</math>
 
The DCT-IV matrix becomes [[orthogonal matrix|orthogonal]] (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of <math>\sqrt{2/N}</math>.
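A short numerical check (illustration only) of this statement, building the scaled DCT-IV matrix directly from the formula above:

<syntaxhighlight lang="python">
import numpy as np

N = 8
n = np.arange(N)
# DCT-IV matrix from the formula above, scaled by sqrt(2/N):
M = np.sqrt(2.0 / N) * np.cos(np.pi / N * np.outer(n + 0.5, n + 0.5))

print(np.allclose(M @ M.T, np.eye(N)))   # True: orthogonal
print(np.allclose(M @ M, np.eye(N)))     # True: symmetric, hence its own inverse
</syntaxhighlight>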
 
A variant of the DCT-IV, where data from different transforms are ''overlapped'', is called the [[modified discrete cosine transform]] (MDCT) (Malvar, 1992).
 
The DCT-IV implies the boundary conditions: ''x''<sub>''n''</sub> is even around ''n''=-1/2 and odd around ''n''=''N''-1/2; similarly for ''X''<sub>''k''</sub>.
 
=== DCT V-VIII ===
DCTs of types I-IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary.
 
In other words,
DCT types I-IV are equivalent to real-even DFTs of even order (regardless of whether ''N'' is even or odd), since the corresponding DFT is of length 2(''N''&minus;1) (for DCT-I) or 4''N'' (for DCT-II/III) or 8''N'' (for DCT-IV). The four additional types of discrete cosine transform (Martucci, 1994) correspond essentially to real-even DFTs of logically odd order, which have factors of ''N''±½ in the denominators of the cosine arguments.
 
However, these variants seem to be rarely used in practice.  One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
 
(The trivial real-even array, a length-one DFT (odd length) of a single number ''a'', corresponds to a DCT-V of length ''N''=1.)
 
== Inverse transforms ==
Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(''N''-1). The inverse of DCT-IV is DCT-IV multiplied by 2/''N''. The inverse of DCT-II is DCT-III multiplied by 2/''N'' and vice versa.<ref name="pubRaoYip"/>
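A quick numerical check (illustration only, using the unnormalized definitions given in this article) of the DCT-II/DCT-III relationship:

<syntaxhighlight lang="python">
import numpy as np

def dct2_naive(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N)) for k in range(N)])

def dct3_naive(x):
    N = len(x)
    n = np.arange(1, N)
    return np.array([0.5 * x[0] + np.sum(x[1:] * np.cos(np.pi * n * (k + 0.5) / N))
                     for k in range(N)])

x = np.random.default_rng(2).standard_normal(7)
print(np.allclose(dct3_naive(dct2_naive(x)) * 2 / len(x), x))   # True: DCT-III times 2/N inverts DCT-II
</syntaxhighlight>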
 
As with the [[discrete Fourier transform|DFT]], the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by <math>\sqrt{2/N}</math> so that the inverse does not require any additional multiplicative factor.  Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix [[orthogonal matrix|orthogonal]].
 
== Multidimensional DCTs ==
 
Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension.
 
For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the two-dimensional DCT-II is given by the formula (omitting normalization and other scale factors, as above):
 
:<math>
\begin{align}
X_{k_1,k_2} &=
\sum_{n_1=0}^{N_1-1}
\left( \sum_{n_2=0}^{N_2-1}
x_{n_1,n_2}
\cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right]\right)
\cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right]\\ &=
\sum_{n_1=0}^{N_1-1}
\sum_{n_2=0}^{N_2-1}
x_{n_1,n_2}
\cos \left[\frac{\pi}{N_1} \left(n_1+\frac{1}{2}\right) k_1 \right]
\cos \left[\frac{\pi}{N_2} \left(n_2+\frac{1}{2}\right) k_2 \right] .
\end{align}
</math>
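A sketch of the row-column computation (illustration only; it assumes SciPy's <code>scipy.fft.dct</code> and <code>dctn</code>, whose unnormalized convention carries an extra factor of 2 per transform relative to the formulas here, a factor that cancels out of this consistency check):

<syntaxhighlight lang="python">
import numpy as np
from scipy.fft import dct, dctn

x = np.random.default_rng(3).standard_normal((4, 6))

rows_then_cols = dct(dct(x, type=2, axis=1), type=2, axis=0)
cols_then_rows = dct(dct(x, type=2, axis=0), type=2, axis=1)

print(np.allclose(rows_then_cols, cols_then_rows))   # True: the order of dimensions does not matter
print(np.allclose(rows_then_cols, dctn(x, type=2)))  # True: same as the multidimensional routine
</syntaxhighlight>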
 
[[Image:Dctjpeg.png|thumb|250px|Two-dimensional DCT frequencies from the [[JPEG#Discrete cosine transform|JPEG DCT]]]]
 
Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a ''row-column'' algorithm (after the two-dimensional case).  As with [[fast Fourier transform#Multidimensional FFTs|multidimensional FFT algorithms]], however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions).
 
The inverse of a multi-dimensional DCT is just a separable product of the inverse(s) of the corresponding one-dimensional DCT(s) (see above), e.g. the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.
 
The image to the right shows the combination of horizontal and vertical frequencies for an 8 × 8 (<math>N_1 = N_2 = 8</math>) two-dimensional DCT.
Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle.
For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 × 8) is transformed to a [[linear combination]] of these 64 frequency squares.
 
== Computation ==
Although the direct application of these formulas would require O(''N''<sup>2</sup>) operations, it is possible to compute the same thing with only O(''N'' log ''N'') complexity by factorizing the computation similarly to the [[fast Fourier transform]] (FFT). One can also compute DCTs via FFTs combined with O(''N'') pre- and post-processing steps.  In general, O(''N'' log ''N'') methods to compute DCTs are known as '''fast cosine transform''' (FCT) algorithms.
 
The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus O(''N'') extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for [[power of two|power-of-two]] sizes) are typically closely related to FFT algorithms—since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson, 2005). Algorithms based on the [[Cooley–Tukey FFT algorithm]] are most common, but any other FFT algorithm is also applicable. For example, the [[Winograd FFT algorithm]] leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by Feig & Winograd (1992) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli, 1990).
 
While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths ''N'' with FFT-based algorithms. (Performance on modern hardware is typically not dominated simply by arithmetic counts, and optimization requires substantial engineering effort.) Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the <math>8 \times 8</math> DCT-II used in [[JPEG]] compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.)
 
In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size <math>4N</math> with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in [[FFTPACK]] and [[FFTW]]) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT II. (The radix-4 step reduces the size <math>4N</math> DFT to four size-<math>N</math> DFTs of real data, two of which are zero and two of which are equal to one another by the even symmetry, hence giving a single size-<math>N</math> FFT of real data plus <math>O(N)</math> [[butterfly (FFT algorithm)|butterflies]].) Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step; if the subsequent size-<math>N</math> real-data FFT is also performed by a real-data [[split-radix FFT algorithm|split-radix algorithm]] (as in Sorensen et al., 1987), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II (<math>2 N \log_2 N - N + 2</math> real-arithmetic operations<ref group=lower-alpha>The precise count of real arithmetic operations, and in particular the count of real multiplications, depends somewhat on the scaling of the transform definition. The <math>2 N \log_2 N - N + 2</math> count is for the DCT-II definition shown here; two multiplications can be saved if the transform is scaled by an overall <math>\sqrt2</math> factor. Additional multiplications can be saved if one permits the outputs of the transform to be rescaled individually, as was shown by Arai et al. (1988) for the size-8 case used in JPEG.</ref>). So, there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective&mdash;it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small <math>N</math>, but this is an implementation rather than an algorithmic question since it can be solved by unrolling/inlining.)
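The sketch below is an illustration in the spirit of the single-FFT approach referenced above, scaled to match this article's unnormalized DCT-II definition (the function names are ours): permute the input, take one length-<math>N</math> complex FFT, and recover the DCT-II with a twiddle factor.

<syntaxhighlight lang="python">
import numpy as np

def dct2_via_fft(x):
    """DCT-II (article scaling) from a single length-N FFT of a permuted input."""
    N = len(x)
    v = np.concatenate([x[::2], x[1::2][::-1]])   # evens in order, then odds reversed
    V = np.fft.fft(v)
    k = np.arange(N)
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

def dct2_naive(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N)) for k in range(N)])

x = np.random.default_rng(4).standard_normal(16)
print(np.allclose(dct2_via_fft(x), dct2_naive(x)))   # True
</syntaxhighlight>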
 
==Example of IDCT==
Consider this 8 × 8 grayscale image of the capital letter A.
 
[[File:letter-a-8x8.png|frame|center|Original size, scaled 10x (nearest neighbor), scaled 10x (bilinear).]]
 
The DCT of the image is:
::<math>
\begin{bmatrix}
6.1917 & -0.3411 & 1.2418  &  0.1492  &  0.1583  &  0.2742 &  -0.0724  &  0.0561 \\
0.2205 & 0.0214 & 0.4503  &  0.3947  & -0.7846 &  -0.4391  &  0.1001  & -0.2554 \\
1.0423 & 0.2214 & -1.0017 &  -0.2720  &  0.0789 &  -0.1952  &  0.2801  &  0.4713 \\
-0.2340 & -0.0392 & -0.2617 &  -0.2866 &  0.6351 &  0.3501 &  -0.1433  &  0.3550 \\
0.2750 & 0.0226 & 0.1229  &  0.2183  & -0.2583  & -0.0742  & -0.2042  & -0.5906 \\
0.0653 & 0.0428 & -0.4721 &  -0.2905  &  0.4745  &  0.2875  & -0.0284  & -0.1311 \\
0.3169 & 0.0541 & -0.1033 &  -0.0225  & -0.0056  &  0.1017  & -0.1650 &  -0.1500 \\
-0.2970 & -0.0627 & 0.1960 &  0.0644  & -0.1136 &  -0.1031 &  0.1887  &  0.1444 \\
\end{bmatrix}
</math>
 
[[File:dct-table.png|thumb|center|Basis functions of the discrete cosine transform with corresponding coefficients (specific to this image).]]
 
Each basis function is multiplied by its coefficient and then this product is added to the final image.
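The reconstruction can be mimicked numerically. The following is a sketch only: it assumes an orthonormal 2-D DCT-II (SciPy's <code>norm='ortho'</code>), which may differ from the scaling used to produce the table above, and <code>coeffs</code> is a placeholder to be filled with the coefficient matrix shown earlier.

<syntaxhighlight lang="python">
import numpy as np
from scipy.fft import idctn

coeffs = np.zeros((8, 8))        # placeholder: fill in the 8x8 coefficient matrix above
partial = np.zeros((8, 8))
reconstruction = np.zeros((8, 8))

for u in range(8):
    for v in range(8):
        partial[:] = 0.0
        partial[u, v] = coeffs[u, v]                       # one weighted basis function
        reconstruction += idctn(partial, type=2, norm='ortho')

# By linearity, summing all 64 weighted basis functions gives the full inverse DCT.
print(np.allclose(reconstruction, idctn(coeffs, type=2, norm='ortho')))   # True
</syntaxhighlight>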
 
[[File:idct-animation.gif|frame|center|On the left is the final image. In the middle is the weighted basis function (multiplied by its coefficient), which is added to the final image. On the right is the current basis function and its corresponding coefficient. Images are scaled (using bilinear interpolation) by a factor of 10.]]
 
==Notes==
{{Reflist|group=lower-alpha}}
 
==Citations==
{{Reflist}}
 
==See also==
* [[Jpeg#Discrete cosine transform|JPEG]] &ndash; contains an easier-to-understand example of the DCT
* [[Modified discrete cosine transform]]
* [[Discrete sine transform]]
* [[Discrete Fourier transform]]
* [[List of Fourier-related transforms]]
 
==References==
{{More footnotes|date=February 2008}}
* {{cite doi|10.1109/T-C.1974.223784}}
* {{cite doi|10.1109/TCOM.1977.1093941}}
* {{cite doi|10.1109/TCOM.1978.1094144}}
* {{cite doi|10.1109/TASSP.1980.1163351}}
* {{cite doi|10.1109/TASSP.1987.1165220}}
* {{Cite journal |last1=Arai |first1=Y.|last2=Agui |first2=T.|last3=Nakajima |first3=M. |title=A fast DCT-SQ scheme for images |journal=IEICE Transactions |volume=71 |issue=11 |pages=1095–1097 |date=November 1988 |url=http://search.ieice.org/bin/summary.php?id=e71-e_11_1095 |doi=}}
* {{cite doi|10.1016/0165-1684(90)90158-U}}
* {{cite doi|10.1016/1051-2004(91)90086-Z}}
* {{cite doi|10.1109/78.157218}}
* {{Citation |last1=Malvar |first1=Henrique |title=Signal Processing with Lapped Transforms |publisher=Artech House |location=Boston |year=1992 |isbn=0-89006-467-9}}
* {{cite doi|10.1109/78.295213}}
* {{Citation |last1=Oppenheim |first1=Alan |last2=Schafer |first2=Ronald |last3=Buck |first3=John |title=Discrete-Time Signal Processing |edition=2nd |publisher=Prentice Hall |location=Upper Saddle River, N.J |year=1999 |isbn=0-13-754920-2}}
* {{cite doi|10.1109/JPROC.2004.840301}}
* {{Citation |last1=Press |first1=WH |last2=Teukolsky |first2=SA |last3=Vetterling |first3=WT |last4=Flannery |first4=BP |year=2007 |title=Numerical Recipes: The Art of Scientific Computing |edition=3rd |publisher=Cambridge University Press |location=New York |chapter=Section 12.4.2. Cosine Transform |chapter-url=http://apps.nrbook.com/empanel/index.html#pg=624 |isbn=978-0-521-88068-8}}
 
==External links==
*{{planetmath reference|id=1469|title=discrete cosine transform}}
*Syed Ali Khayam: [http://wisnet.seecs.nust.edu.pk/publications/tech_reports/DCT_TR802.pdf The Discrete Cosine Transform (DCT): Theory and  Application]
*[http://www.reznik.org/software.html#IDCT Implementation of MPEG integer approximation of 8x8 IDCT (ISO/IEC 23002-2)]
*Matteo Frigo and Steven G. Johnson: ''FFTW'', http://www.fftw.org/. A free ([[GNU General Public License|GPL]]) C library that can compute fast DCTs (types I-IV) in one or more dimensions, of arbitrary size.
*Tim Kientzle: Fast algorithms for computing the 8-point DCT and IDCT, http://drdobbs.com/parallel/184410889.
 
{{Compression Methods}}
 
{{DEFAULTSORT:Discrete Cosine Transform}}
[[Category:Digital signal processing]]
[[Category:Fourier analysis]]
[[Category:Discrete transforms]]
