The [[scale space representation]] of a signal obtained by [[Gaussian]] smoothing satisfies a number of special properties, [[scale-space axioms]], which make it into a special form of multi-scale representation. There are, however, also other types of '''multi-scale approaches''' in the areas of [[computer vision]], [[image processing]] and [[signal processing]], in particular the notion of [[wavelets]]. The purpose of this article is to describe a few of these approaches.
 
==Scale-space theory for one-dimensional signals==
 
For ''one-dimensional signals'', there is a well-developed theory of continuous and discrete kernels that guarantee that no new local extrema or zero-crossings can be created by a [[convolution]] operation.<ref>[http://www.nada.kth.se/~tony/abstracts/Lin90-PAMI.html Lindeberg, T., "Scale-space for discrete signals," PAMI(12), No. 3, March 1990, pp. 234-254.]</ref> For ''continuous signals'', all scale-space kernels can be decomposed into the following sets of primitive smoothing kernels (see the sketch after the list):
<ul>
<li>
  the ''Gaussian kernel''
  :<math>g(x, t) = \frac{1}{\sqrt{2 \pi t}} \exp({-x^2/2 t})</math> where <math>t > 0</math>,
<li>
  ''truncated exponential'' kernels (filters with one real pole in the ''s''-plane):
:<math>h(x)= \exp({-a x})</math> if <math>x \geq 0</math> and 0 otherwise where <math>a > 0</math>
:<math>h(x)= \exp({b x})</math> if <math>x \leq 0</math> and 0 otherwise where <math>b > 0</math>,
<li>
  translations,
<li>
  rescalings.
</ul>
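
The following is a minimal numerical sketch of these continuous primitives: the kernels are sampled on a grid and applied by discrete convolution. NumPy is assumed, and the grid spacing, scale values and test signal are illustrative choices rather than part of the theory.

<syntaxhighlight lang="python">
# Minimal numerical sketch: sample the primitive continuous kernels and apply
# them by discrete convolution.  The grid, scales and test signal are
# illustrative assumptions only.
import numpy as np

def gaussian_kernel(x, t):
    """Gaussian kernel g(x, t) = exp(-x^2 / (2 t)) / sqrt(2 pi t), t > 0."""
    return np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def truncated_exponential(x, a):
    """Truncated exponential kernel h(x) = exp(-a x) for x >= 0, else 0, with a > 0."""
    return np.where(x >= 0.0, np.exp(-a * x), 0.0)

x = np.linspace(-10.0, 10.0, 201)        # sampling grid
dx = x[1] - x[0]
signal = np.sign(np.sin(0.5 * x))        # toy signal with discontinuities

# Smooth the signal with each primitive kernel (normalized to unit mass).
for kernel in (gaussian_kernel(x, t=2.0), truncated_exponential(x, a=1.0)):
    kernel = kernel / kernel.sum()
    smoothed = np.convolve(signal, kernel, mode='same')

# The Gaussian kernel additionally obeys the semi-group property
# g(., t1) * g(., t2) = g(., t1 + t2), verified here up to sampling error.
g12 = np.convolve(gaussian_kernel(x, 1.0), gaussian_kernel(x, 2.0), mode='same') * dx
assert np.allclose(g12, gaussian_kernel(x, 3.0), atol=1e-4)
</syntaxhighlight>
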
For ''discrete signals'', we can, up to trivial translations and rescalings, decompose any discrete scale-space kernel into the following primitive operations (see the sketch after the list):
<ul>
<li>
  the ''discrete Gaussian kernel''
:<math>T(n, t) = e^{-\alpha t} I_n(\alpha t) </math> where <math>\alpha, t > 0</math> and <math>I_n</math> are the modified Bessel functions of integer order,
<li>
''generalized binomial kernels'' corresponding to linear smoothing of the form
:<math>f_{out}(x) = p f_{in}(x) + q f_{in}(x-1)</math> where <math>p, q > 0</math>
:<math>f_{out}(x) = p f_{in}(x) + q f_{in}(x+1)</math> where <math>p, q > 0</math>,
<li>
''first-order recursive filters'' corresponding to linear smoothing of the form
:<math>f_{out}(x) = f_{in}(x) + \alpha f_{out}(x-1)</math> where <math>\alpha > 0</math>
:<math>f_{out}(x) = f_{in}(x) + \beta f_{out}(x+1)</math> where <math>\beta > 0</math>,
<li>the one-sided ''Poisson kernel''
:<math>p(n, t) = e^{-t} \frac{t^n}{n!}</math> for <math>n \geq 0</math> where <math>t\geq0</math>
:<math>p(n, t) = e^{-t} \frac{t^{-n}}{(-n)!}</math> for <math>n \leq 0</math> where <math>t\geq0</math>.
</ul>
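
For concreteness, the following is a minimal sketch of these discrete primitives. It assumes NumPy and SciPy are available, uses <math>\alpha = 1</math> for the discrete Gaussian kernel, and the kernel radius, the coefficients ''p'', ''q'', <math>\alpha</math> and the test signal are illustrative assumptions only.

<syntaxhighlight lang="python">
# Minimal sketch of the discrete primitives listed above.
import numpy as np
from scipy.special import ive   # ive(n, t) = exp(-t) I_n(t), scaled modified Bessel function

def discrete_gaussian_kernel(t, radius):
    """T(n, t) = exp(-t) I_n(t) for n = -radius, ..., radius (alpha = 1 here)."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)

def binomial_step(f, p, q):
    """One generalized binomial step: f_out(x) = p f_in(x) + q f_in(x - 1)."""
    f = np.asarray(f, dtype=float)
    shifted = np.concatenate(([f[0]], f[:-1]))   # replicate the left boundary sample
    return p * f + q * shifted

def recursive_step(f, alpha):
    """Causal first-order recursive filter: f_out(x) = f_in(x) + alpha f_out(x - 1)."""
    out = np.empty(len(f))
    prev = 0.0
    for i, value in enumerate(f):
        prev = value + alpha * prev
        out[i] = prev
    return out

impulse = np.zeros(16)
impulse[0] = 1.0
print(discrete_gaussian_kernel(t=2.0, radius=5))   # symmetric kernel, sums to ~1
print(binomial_step(impulse, p=0.5, q=0.5))        # two-tap binomial impulse response
print(recursive_step(impulse, alpha=0.5))          # geometrically decaying impulse response
</syntaxhighlight>
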
From this classification, it is apparent that if we require a continuous semi-group structure, there are only three classes of scale-space kernels with a continuous scale parameter: the Gaussian kernel, which forms the scale-space of continuous signals; the discrete Gaussian kernel, which forms the scale-space of discrete signals; and the time-causal Poisson kernel, which forms a temporal scale-space over discrete time. If, on the other hand, we sacrifice the continuous semi-group structure, there are more options:
 
For discrete signals, the use of generalized binomial kernels provides a formal basis for defining the smoothing operation in a pyramid. For temporal data, the one-sided truncated exponential kernels and the first-order recursive filters provide a way to define ''time-causal scale-spaces''<ref>[http://www.dicklyon.com/tech/Scans/ICASSP87_ScaleSpace-Lyon.pdf Richard F. Lyon. "Speech recognition in scale space," Proc. of 1987 ICASSP. San Diego, March, pp. 29.3.14, 1987.]</ref><ref>[http://www.nada.kth.se/cvap/abstracts/cvap189.html Lindeberg, T. and Fagerstrom, F.: Scale-space with causal time direction, Proc. 4th European Conference on Computer Vision, Cambridge, England, April 1996. Springer-Verlag LNCS Vol 1064, pages 229--240.]</ref> that allow for efficient numerical implementation and respect causality over time without access to the future. The first-order recursive filters also provide a framework for defining recursive approximations to the Gaussian kernel that, in a weaker sense, preserve some of the scale-space properties.<ref>[http://citeseer.ist.psu.edu/young95recursive.html Young, I.I., van Vliet, L.J.: Recursive implementation of the Gaussian filter, Signal Processing, vol. 44, no. 2, 1995, 139-151.]</ref><ref>[http://citeseer.ist.psu.edu/deriche93recursively.html Deriche, R: Recursively implementing the Gaussian and its derivatives, INRIA Research Report 1893, 1993.]</ref>
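
As an illustration of the idea (a minimal sketch, not the specific constructions of the cited papers), a time-causal multi-scale representation can be built by cascading first-order recursive filters. NumPy is assumed; the unit-gain normalization factor <math>(1 - \alpha)</math> and the chosen filter coefficients are assumptions made for this example only.

<syntaxhighlight lang="python">
# Hedged sketch: a time-causal multi-scale representation from a cascade of
# unit-gain first-order recursive filters.  Each output sample depends only on
# past and present inputs, so causality over time is respected.
import numpy as np

def causal_smooth(f, alpha):
    """Unit-gain causal recursive smoothing: out[x] = (1 - alpha) f[x] + alpha out[x - 1]."""
    out = np.empty(len(f))
    prev = 0.0
    for i, value in enumerate(f):
        prev = (1.0 - alpha) * value + alpha * prev
        out[i] = prev
    return out

def time_causal_scale_space(f, alphas):
    """Cascade the recursive filters; each level is a coarser, still causal, smoothing."""
    levels = []
    current = np.asarray(f, dtype=float)
    for alpha in alphas:
        current = causal_smooth(current, alpha)
        levels.append(current)
    return levels

signal = np.random.randn(200)                                   # toy temporal signal
levels = time_causal_scale_space(signal, alphas=[0.5, 0.6, 0.7, 0.8])
</syntaxhighlight>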
 
==See also==
 
* [[Scale space]]
* [[Scale space implementation]]
* [[Scale-space segmentation]]
 
==References==
 
<references/>
 
[[Category:Image processing]]
[[Category:Computer vision]]
