# Hausdorff measure

In mathematics, a Hausdorff measure is a type of outer measure, named for Felix Hausdorff, that assigns a number in [0,∞] to each set in ${\displaystyle \mathbb {R} ^{n}}$ or, more generally, in any metric space. The zero-dimensional Hausdorff measure of a set is its number of points (if the set is finite) or ∞ (if it is infinite). The one-dimensional Hausdorff measure of a simple curve in ${\displaystyle \mathbb {R} ^{n}}$ equals the length of the curve, and the two-dimensional Hausdorff measure of a measurable subset of ${\displaystyle \mathbb {R} ^{2}}$ is proportional to its area. Thus the concept of Hausdorff measure generalizes counting, length, area, and volume. In fact, there are d-dimensional Hausdorff measures for every d ≥ 0, where d need not be an integer. These measures are fundamental in geometric measure theory and appear naturally in harmonic analysis and potential theory.

## Definition

Let ${\displaystyle (X,\rho )}$ be a metric space. For any subset ${\displaystyle \scriptstyle U\subset X}$, let ${\displaystyle \mathrm {diam} \;U}$ denote its diameter, that is

${\displaystyle \mathrm {diam} \;U:=\sup\{\rho (x,y)|x,y\in U\},\quad \mathrm {diam} \;\emptyset :=0}$

Let ${\displaystyle S}$ be any subset of ${\displaystyle X}$, and ${\displaystyle \delta >0}$ a real number. Define

${\displaystyle H_{\delta }^{d}(S)=\inf {\Bigl \{}\sum _{i=1}^{\infty }(\operatorname {diam} \;U_{i})^{d}:\bigcup _{i=1}^{\infty }U_{i}\supseteq S,\,\operatorname {diam} \;U_{i}<\delta {\Bigr \}}.}$

(The infimum is over all countable covers of ${\displaystyle S}$ by sets ${\displaystyle \scriptstyle U_{i}\subset X}$ satisfying ${\displaystyle \scriptstyle \operatorname {diam} \;U_{i}<\delta }$.)

Note that ${\displaystyle \scriptstyle H_{\delta }^{d}(S)}$ is monotone decreasing in ${\displaystyle \delta }$ since the larger ${\displaystyle \delta }$ is, the more collections of sets are permitted, making the infimum smaller. Thus, the limit ${\displaystyle \scriptstyle \lim _{\delta \to 0}H_{\delta }^{d}(S)}$ exists but may be infinite. Let

${\displaystyle H^{d}(S):=\sup _{\delta >0}H_{\delta }^{d}(S)=\lim _{\delta \to 0}H_{\delta }^{d}(S).}$

It can be seen that ${\displaystyle H^{d}(S)}$ is an outer measure (more precisely, it is a metric outer measure). By general theory, its restriction to the σ-field of Carathéodory-measurable sets is a measure. It is called the ${\displaystyle d}$-dimensional Hausdorff measure of ${\displaystyle S}$. Due to the metric outer measure property, all Borel subsets of ${\displaystyle X}$ are ${\displaystyle H^{d}}$-measurable.

In the above definition the sets in the covering are arbitrary. However, they may all be taken to be open, or all closed, and this yields the same measure, although the approximations ${\displaystyle \scriptstyle H_{\delta }^{d}(S)}$ may differ. If ${\displaystyle X}$ is a normed space, the sets may be taken to be convex. However, restricting the covering families to balls gives a different measure.[1]
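As a numerical sketch (not part of the formal development), the scaling in the definition can be seen for the unit interval: covering [0,1] by n subintervals of length 1/n gives Σ(diam U_i)^d = n·(1/n)^d = n^{1−d}. This particular cover only bounds the infimum from above, but it already shows why ${\displaystyle H^{d}([0,1])}$ is ∞ for d < 1, equals 1 for d = 1, and is 0 for d > 1. The function name below is invented for illustration:

```python
# Illustrative sketch: approximate H_delta^d([0,1]) using the natural cover
# by n equal subintervals of length 1/n (so delta = 1/n).
# The sum over this cover is n * (1/n)**d = n**(1 - d).

def cover_sum(d: float, n: int) -> float:
    """Sum of (diam U_i)**d for the cover of [0,1] by n intervals of length 1/n."""
    return n * (1.0 / n) ** d

for d in (0.5, 1.0, 2.0):
    print(d, [cover_sum(d, n) for n in (10, 100, 1000)])
# As n grows (delta -> 0): for d < 1 the sums blow up, for d = 1 they stay
# equal to 1, and for d > 1 they tend to 0.
```

For this set the equal-interval cover happens to capture the correct scaling; in general the infimum over all covers must be taken, and lower bounds require a separate argument.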

## Properties of Hausdorff measures

Note that if d is a positive integer, the d-dimensional Hausdorff measure on ${\displaystyle \mathbb {R} ^{d}}$ is a rescaling of the usual d-dimensional Lebesgue measure ${\displaystyle \lambda _{d}}$, which is normalized so that the Lebesgue measure of the unit cube [0,1]^d is 1. In fact, for any Borel set E,

${\displaystyle \lambda _{d}(E)=2^{-d}\alpha _{d}H^{d}(E)\,}$

where ${\displaystyle \alpha _{d}}$ is the volume of the unit d-ball, which can be expressed via Euler's gamma function:

${\displaystyle \alpha _{d}={\frac {\Gamma ({\frac {1}{2}})^{d}}{\Gamma ({\frac {d}{2}}+1)}}={\frac {\pi ^{d/2}}{\Gamma ({\frac {d}{2}}+1)}}.}$
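The formula for ${\displaystyle \alpha _{d}}$ can be checked numerically with the standard-library gamma function; the helper name below is ours, not standard notation:

```python
import math

def alpha(d: float) -> float:
    """Volume of the unit d-ball: pi**(d/2) / Gamma(d/2 + 1)."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

# Known values: alpha(1) = 2 (the interval [-1, 1]),
# alpha(2) = pi (unit disk), alpha(3) = 4*pi/3 (unit ball).
for d in (1, 2, 3):
    print(d, alpha(d))

# The rescaling factor relating Lebesgue and Hausdorff measure is 2**(-d) * alpha(d);
# for d = 1 it equals 1, consistent with H^1 being ordinary length on the line.
for d in (1, 2, 3):
    print(d, 2 ** (-d) * alpha(d))
```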

Remark. Some authors adopt a definition of Hausdorff measure slightly different from the one chosen here, the difference being that it is normalized in such a way that Hausdorff d-dimensional measure in the case of Euclidean space coincides exactly with Lebesgue measure.

## Relation with Hausdorff dimension

One of several possible equivalent definitions of the Hausdorff dimension is

${\displaystyle \operatorname {dim} _{\mathrm {Haus} }(S)=\inf\{d\geq 0:H^{d}(S)=0\}=\sup {\bigl (}\{d\geq 0:H^{d}(S)=\infty \}\cup \{0\}{\bigr )},}$

where we take

${\displaystyle \inf \emptyset =\infty .\,}$
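As a hedged sketch of how this definition is used, consider the middle-thirds Cantor set: its stage-n construction gives a natural cover by 2^n intervals of diameter 3^{−n}, so Σ(diam U_i)^d = (2·3^{−d})^n. These covers only bound ${\displaystyle H_{\delta }^{d}}$ from above (the matching lower bound needs a separate argument, e.g. the mass distribution principle), but they locate the critical exponent d = log 2 / log 3, which is the Hausdorff dimension of the set. The function name below is invented for illustration:

```python
import math

# Stage-n cover of the middle-thirds Cantor set: 2**n intervals,
# each of diameter 3**(-n), so the cover sum is (2 * 3**(-d)) ** n.
def cantor_cover_sum(d: float, n: int) -> float:
    return (2 * 3.0 ** (-d)) ** n

d_crit = math.log(2) / math.log(3)   # ~0.6309..., the Hausdorff dimension

for d in (0.5, d_crit, 0.8):
    print(d, [cantor_cover_sum(d, n) for n in (5, 20, 50)])
# Below d_crit the sums blow up as n grows, above d_crit they vanish,
# and exactly at d_crit they equal 1 for every n.
```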

## Generalizations

In geometric measure theory and related fields, the Minkowski content is often used to measure the size of a subset of a metric measure space. For suitable domains in Euclidean space, the two notions of size coincide, up to overall normalizations depending on conventions. More precisely, a subset of ${\displaystyle \scriptstyle \mathbb {R} ^{n}}$ is said to be ${\displaystyle m}$-rectifiable if it is the image of a bounded set in ${\displaystyle \scriptstyle \mathbb {R} ^{m}}$ under a Lipschitz function. If ${\displaystyle m<n}$, then the ${\displaystyle m}$-dimensional Minkowski content of a closed ${\displaystyle m}$-rectifiable subset of ${\displaystyle \scriptstyle \mathbb {R} ^{n}}$ equals ${\displaystyle 2^{-m}\alpha _{m}}$ times the ${\displaystyle m}$-dimensional Hausdorff measure.

In fractal geometry, some fractals with Hausdorff dimension ${\displaystyle d}$ have zero or infinite ${\displaystyle d}$-dimensional Hausdorff measure. For example, almost surely the image of planar Brownian motion has Hausdorff dimension 2, yet its two-dimensional Hausdorff measure is zero. In order to "measure" the "size" of such sets, mathematicians have considered the following variation on the notion of Hausdorff measure:

In the definition of the measure, ${\displaystyle (\operatorname {diam} \;U_{i})^{d}}$ is replaced with ${\displaystyle \phi (\operatorname {diam} \;U_{i})}$, where ${\displaystyle \phi }$ is any monotone increasing function satisfying ${\displaystyle \phi (0)=0}$.

This is the Hausdorff measure of ${\displaystyle S}$ with gauge function ${\displaystyle \phi }$, or ${\displaystyle \phi }$-Hausdorff measure. A ${\displaystyle d}$-dimensional set ${\displaystyle S}$ may satisfy ${\displaystyle H^{d}(S)=0}$, but ${\displaystyle \scriptstyle H^{\phi }(S)\in (0,\infty )}$ with an appropriate ${\displaystyle \phi .}$ Examples of gauge functions include ${\displaystyle \scriptstyle \phi (t)=t^{2}\,\log \log {\frac {1}{t}}}$ or ${\displaystyle \scriptstyle \phi (t)=t^{2}\log {\frac {1}{t}}\log \log \log {\frac {1}{t}}}$. The former gives almost surely positive and ${\displaystyle \sigma }$-finite measure to the Brownian path in ${\displaystyle \scriptstyle \mathbb {R} ^{n}}$ when ${\displaystyle n>2}$, and the latter when ${\displaystyle n=2}$.