# Bra-ket notation


In quantum mechanics, bra-ket notation is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals in mathematics. It is so called because the inner product (or dot product) of two states is denoted by a ⟨bra|c|ket⟩:

${\displaystyle \langle \phi |\psi \rangle }$,

consisting of a left part, ${\displaystyle \langle \phi |}$, called the bra (/brɑː/), and a right part, ${\displaystyle |\psi \rangle }$, called the ket (/kɛt/). The notation was introduced in 1939 by Paul Dirac[1] and is also known as Dirac notation, though it has a precursor in Grassmann's use of the notation ${\displaystyle [\phi |\psi ]}$ for his inner products nearly 100 years earlier.[2]

Bra-ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics, including a large portion of modern physics, is usually described with its help. The expression ${\displaystyle \langle \phi |\psi \rangle }$ is typically interpreted as the probability amplitude for the state ψ to collapse into the state ϕ.

## Vector spaces

### Background: Vector spaces

In physics, basis vectors allow any vector to be represented geometrically using angles and lengths, i.e. in terms of spatial orientations. It is simplest to see the notational equivalences between ordinary notation and bra-ket notation in a familiar setting, so for now consider a vector A as an element of 3-d Euclidean space over the field of real numbers, symbolically stated as ${\displaystyle {\mathbf {A}}\in \mathbb {R} ^{3}\,\!}$.

The vector A can be written using any set of basis vectors and the corresponding coordinate system. Informally, basis vectors are like "building blocks of a vector": they are added together to make a vector, and the coordinates are the number of basis vectors in each direction. Two useful representations of a vector are a linear combination of basis vectors and a column matrix. Using the familiar Cartesian basis, a vector A is written

Illustration of Cartesian vectors, bases, coordinates and components. The coordinates of the vector are equal to the projections of the vector (yellow) onto the basis vectors, shown here for the x-component (green), using the dot product (a special case of an inner product, see below).
${\displaystyle {\begin{aligned}{\mathbf {A}}&=A_{x}{\mathbf {e}}_{x}+A_{y}{\mathbf {e}}_{y}+A_{z}{\mathbf {e}}_{z}\\&=A_{x}{\begin{pmatrix}1\\0\\0\end{pmatrix}}+A_{y}{\begin{pmatrix}0\\1\\0\end{pmatrix}}+A_{z}{\begin{pmatrix}0\\0\\1\end{pmatrix}}\\&={\begin{pmatrix}A_{x}\\0\\0\end{pmatrix}}+{\begin{pmatrix}0\\A_{y}\\0\end{pmatrix}}+{\begin{pmatrix}0\\0\\A_{z}\end{pmatrix}}\\&={\begin{pmatrix}A_{x}\\A_{y}\\A_{z}\\\end{pmatrix}}\end{aligned}}}$

where ex, ey, ez denote the Cartesian basis vectors (all orthogonal unit vectors) and Ax, Ay, Az are the corresponding coordinates in the x, y, z directions. In a more general notation, for any basis in 3-d space we write

${\displaystyle {\mathbf {A}}=A_{1}{\mathbf {e}}_{1}+A_{2}{\mathbf {e}}_{2}+A_{3}{\mathbf {e}}_{3}={\begin{pmatrix}A_{1}\\A_{2}\\A_{3}\\\end{pmatrix}}}$

Generalizing further, consider a vector A in an N dimensional vector space over the field of complex numbers ${\displaystyle {\mathbb {C} }}$, symbolically stated as ${\displaystyle {\mathbf {A}}\in \mathbb {C} ^{N}}$. The vector A is still conventionally represented by a linear combination of basis vectors or a column matrix:

${\displaystyle {\mathbf {A}}=\sum _{n=1}^{N}A_{n}{\mathbf {e}}_{n}={\begin{pmatrix}A_{1}\\A_{2}\\\vdots \\A_{N}\\\end{pmatrix}}}$

though the coordinates and vectors are now all complex-valued.

Even more generally, A can be a vector in a complex Hilbert space. Some Hilbert spaces, like ${\displaystyle {\mathbb {C} }^{N}}$, have finite dimension, while others have infinite dimension. In an infinite-dimensional space, the column-vector representation of A would be a list of infinitely many complex numbers.
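In the finite-dimensional case these representations are easy to check numerically. The following NumPy sketch (NumPy is an illustrative choice, and the coordinate values are arbitrary, not taken from the text) builds a vector in C³ both as a linear combination of basis kets and as a column matrix:

```python
import numpy as np

# Standard basis vectors e_1, e_2, e_3 of C^3, each a 3x1 column matrix
basis = [np.eye(3, dtype=complex)[:, [n]] for n in range(3)]

# Complex coordinates A_1, A_2, A_3 (arbitrary illustrative values)
coords = [1 + 2j, -1j, 0.5]

# A = sum_n A_n e_n : the linear combination of basis vectors ...
A = sum(c * e for c, e in zip(coords, basis))

# ... equals the column matrix of the coordinates
assert np.allclose(A, np.array(coords).reshape(3, 1))
```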

### Ket notation for vectors

Rather than the boldface, over/under-arrows, or underscores conventionally used elsewhere (${\displaystyle {\mathbf {A}},\,{\underline {A}},\,{\vec {A}}}$), Dirac's notation for a vector uses vertical bars and angle brackets: ${\displaystyle |A\rangle }$. A vector written in this notation is called a ket, and ${\displaystyle |A\rangle }$ is read as "ket-A".[3] This applies to all vectors, both the resultant vector and the basis vectors. The previous vectors are now written

${\displaystyle |A\rangle =A_{x}|e_{x}\rangle +A_{y}|e_{y}\rangle +A_{z}|e_{z}\rangle ={\begin{pmatrix}A_{x}\\A_{y}\\A_{z}\end{pmatrix}},}$

or in a more easily generalized notation,

${\displaystyle |A\rangle =A_{1}|e_{1}\rangle +A_{2}|e_{2}\rangle +A_{3}|e_{3}\rangle ={\begin{pmatrix}A_{1}\\A_{2}\\A_{3}\end{pmatrix}},}$

The last one can be written compactly as

${\displaystyle |A\rangle =A_{1}|1\rangle +A_{2}|2\rangle +A_{3}|3\rangle }$

Notice how any symbols, letters, numbers, or even words — whatever serves as a convenient label — can be used as the label inside a ket. In other words, the symbol " ${\displaystyle |A\rangle }$ " has a specific and universal mathematical meaning, but just the "A" by itself does not. Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics with a list of their quantum numbers.

### Inner products and bras

An inner product is a generalization of the dot product. The inner product of two vectors is a complex number. Bra-ket notation uses a specific notation for inner products:

${\displaystyle \langle A|B\rangle ={\text{the inner product of ket }}|A\rangle {\text{ with ket }}|B\rangle }$

For example, in three-dimensional complex Euclidean space,

${\displaystyle \langle A|B\rangle =A_{x}^{*}B_{x}+A_{y}^{*}B_{y}+A_{z}^{*}B_{z}}$

where ${\displaystyle A_{i}^{*}}$ denotes the complex conjugate of ${\displaystyle A_{i}}$. A special case is the inner product of a vector with itself, which is the square of its norm (magnitude):

${\displaystyle \langle A|A\rangle =|A_{x}|^{2}+|A_{y}|^{2}+|A_{z}|^{2}}$

Bra-ket notation splits this inner product (also called a "bracket") into two pieces, the "bra" and the "ket":

${\displaystyle \langle A|B\rangle =\left(\,\langle A|\,\right)\,\,\left(\,|B\rangle \,\right)}$

where ${\displaystyle \langle A|}$ is called a bra, read as "bra-A", and ${\displaystyle |B\rangle }$ is a ket as above.

The purpose of "splitting" the inner product into a bra and a ket is that both the bra ${\displaystyle \langle A|}$ and the ket ${\displaystyle |B\rangle }$ are meaningful on their own, and can be used in other contexts besides within an inner product. There are two main ways to think about the meanings of separate bras and kets:

#### Bras and kets as row and column vectors

For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector:

${\displaystyle \langle A|B\rangle =A_{1}^{*}B_{1}+A_{2}^{*}B_{2}+\cdots +A_{N}^{*}B_{N}={\begin{pmatrix}A_{1}^{*}&A_{2}^{*}&\cdots &A_{N}^{*}\end{pmatrix}}{\begin{pmatrix}B_{1}\\B_{2}\\\vdots \\B_{N}\end{pmatrix}}}$

Based on this, the bras and kets can be defined as:

${\displaystyle \langle A|={\begin{pmatrix}A_{1}^{*}&A_{2}^{*}&\cdots &A_{N}^{*}\end{pmatrix}}}$
${\displaystyle |B\rangle ={\begin{pmatrix}B_{1}\\B_{2}\\\vdots \\B_{N}\end{pmatrix}}}$

and then it is understood that a bra next to a ket implies matrix multiplication.
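The row-times-column picture can be checked concretely with a short NumPy sketch (the vectors here are arbitrary illustrative values, not from the text):

```python
import numpy as np

A = np.array([1 + 1j, 2j, 3.0])       # coordinates of the ket |A>
B = np.array([0.5, 1 - 1j, 2.0])      # coordinates of the ket |B>

bra_A = A.conj()                       # <A| : the complex-conjugated row vector
inner = bra_A @ B                      # <A|B> via matrix multiplication

# np.vdot conjugates its first argument, so it computes the same bracket
assert np.isclose(inner, np.vdot(A, B))

# Conjugating the bra's entries recovers the ket's coordinates
assert np.allclose(bra_A.conj(), A)
```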

The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice-versa:

${\displaystyle \langle A|^{\dagger }=|A\rangle ,\quad |A\rangle ^{\dagger }=\langle A|}$

because if one starts with the bra

${\displaystyle {\begin{pmatrix}A_{1}^{*}&A_{2}^{*}&\cdots &A_{N}^{*}\end{pmatrix}},}$

then performs a complex conjugation, and then a matrix transpose, one ends up with the ket

${\displaystyle {\begin{pmatrix}A_{1}\\A_{2}\\\vdots \\A_{N}\end{pmatrix}}}$

#### Bras as linear operators on kets

A more abstract definition, which is equivalent but more easily generalized to infinite-dimensional spaces, is to say that bras are linear functionals on kets, i.e. maps that input a ket and output a complex number. The bra functionals are defined to be consistent with the inner product.

In mathematics terminology, the vector space of bras is the dual space to the vector space of kets, and corresponding bras and kets are related by the Riesz representation theorem.

### Non-normalizable states and non-Hilbert spaces

Bra-ket notation can be used even if the vector space is not a Hilbert space.

In quantum mechanics, it is common practice to write down kets which have infinite norm, i.e. non-normalizable wavefunctions. Examples include states whose wavefunctions are Dirac delta functions or infinite plane waves. These do not, technically, belong to the Hilbert space itself. However, the definition of "Hilbert space" can be broadened to accommodate these states (see the Gelfand–Naimark–Segal construction or rigged Hilbert spaces). Bra-ket notation continues to work in an analogous way in this broader context.

For a rigorous treatment of the Dirac inner product of non-normalizable states, see the definitions given by D. Carfì in [4] and [5]. For a rigorous definition of a basis with a continuous set of indices, and consequently of the position and momentum bases, see [6]. For a rigorous statement of the expansion of an S-diagonalizable operator (observable) in its eigenbasis or in another basis, see [7].

Banach spaces are a different generalization of Hilbert spaces. In a Banach space B, the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply.

## Usage in quantum mechanics

The mathematical structure of quantum mechanics is based in large part on linear algebra. Since virtually every calculation in quantum mechanics involves vectors and linear operators, it can, and often does, involve bra-ket notation. A few examples follow:

### Position-space wave function

The Hilbert space of a spin-0 point particle is spanned by a "position basis" ${\displaystyle \{\,|{\mathbf {r} }\rangle \,\}}$, where the label r extends over the set of all points in space. Since there are infinitely many vectors in the basis, this is an infinite-dimensional Hilbert space.

Starting from any ket ${\displaystyle |\Psi \rangle }$ in this Hilbert space, we can define a complex scalar function of r, known as a wavefunction:

${\displaystyle \Psi (\mathbf {r} )\ {\stackrel {\text{def}}{=}}\ \langle \mathbf {r} |\Psi \rangle .}$

On the left side, ${\displaystyle \Psi }$ is a function mapping any point in space to a complex number; on the right side, ${\displaystyle |\Psi \rangle }$ is a ket.

It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by

${\displaystyle A\Psi (\mathbf {r} )\ {\stackrel {\text{def}}{=}}\ \langle \mathbf {r} |A|\Psi \rangle .}$

For instance, the momentum operator p has the following form:

${\displaystyle \mathbf {p} \Psi (\mathbf {r} )\ {\stackrel {\text{def}}{=}}\ \langle \mathbf {r} |\mathbf {p} |\Psi \rangle =-i\hbar \nabla \Psi (\mathbf {r} ).}$
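The action of the momentum operator can be checked numerically on a plane wave, for which p ψ = ħk ψ. The sketch below uses natural units (ħ = 1) and a finite-difference derivative in one dimension; both are illustrative assumptions, not part of the original text:

```python
import numpy as np

hbar = 1.0                    # natural units (illustrative assumption)
k = 2.0
x = np.linspace(0, 2 * np.pi, 2001)
psi = np.exp(1j * k * x)      # plane wave <x|psi> = e^{ikx}

# p psi = -i hbar d/dx psi, approximated with a finite difference
p_psi = -1j * hbar * np.gradient(psi, x)

# For a plane wave, p psi = hbar k psi (checked away from the endpoints,
# where the one-sided difference is less accurate)
assert np.allclose(p_psi[1:-1], hbar * k * psi[1:-1], rtol=1e-4)
```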

One occasionally encounters an expression like

${\displaystyle \nabla |\Psi \rangle .}$

though this is something of a (fairly common) abuse of notation. The differential operator must be understood as an abstract operator acting on kets, which has the effect of differentiating wavefunctions once the expression is projected onto the position basis:

${\displaystyle \nabla \langle \mathbf {r} |\Psi \rangle .}$

### Overlap of states

In quantum mechanics the expression ${\displaystyle \langle \phi |\psi \rangle }$ is typically interpreted as the probability amplitude for the state ${\displaystyle \psi \!}$ to collapse into the state ${\displaystyle \phi \!}$. Mathematically, this means the coefficient for the projection of ${\displaystyle \psi \!}$ onto ${\displaystyle \phi \!}$.

### Changing basis for a spin-1/2 particle

A stationary spin-½ particle has a two-dimensional Hilbert space. One orthonormal basis is:

${\displaystyle |\uparrow _{z}\rangle ,\;|\downarrow _{z}\rangle }$

where ${\displaystyle |\uparrow _{z}\rangle }$ is the state with a definite value of the spin operator Sz equal to +1/2 and ${\displaystyle |\downarrow _{z}\rangle }$ is the state with a definite value of the spin operator Sz equal to −1/2 (in units of ħ).

Since these are a basis, any quantum state of the particle can be expressed as a linear combination (i.e., quantum superposition) of these two states:

${\displaystyle |\psi \rangle =a_{\psi }|\uparrow _{z}\rangle +b_{\psi }|\downarrow _{z}\rangle }$

where ${\displaystyle a_{\psi },b_{\psi }}$ are complex numbers.

A different basis for the same Hilbert space is:

${\displaystyle |\uparrow _{x}\rangle ,\;|\downarrow _{x}\rangle }$

defined in terms of Sx rather than Sz.

Again, any state of the particle can be expressed as a linear combination of these two:

${\displaystyle |\psi \rangle =c_{\psi }|\uparrow _{x}\rangle +d_{\psi }|\downarrow _{x}\rangle }$

In vector form, you might write

${\displaystyle |\psi \rangle ={\begin{pmatrix}a_{\psi }\\b_{\psi }\end{pmatrix}},\;\;{\text{OR}}\;\;|\psi \rangle ={\begin{pmatrix}c_{\psi }\\d_{\psi }\end{pmatrix}}}$

depending on which basis you are using. In other words, the "coordinates" of a vector depend on the basis used.

There is a mathematical relationship between ${\displaystyle a_{\psi },b_{\psi },c_{\psi },d_{\psi }}$; see change of basis.
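The relationship between the two sets of coordinates can be made explicit using the standard result that the Sx eigenkets are (|↑z⟩ ± |↓z⟩)/√2, a known fact supplied here for illustration; the state coordinates below are arbitrary:

```python
import numpy as np

# z-basis kets as column vectors
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)

# Eigenkets of S_x expressed in the z-basis (standard result)
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# A state with z-basis coordinates (a, b) ...
a, b = 0.6, 0.8j
psi = a * up_z + b * down_z

# ... has x-basis coordinates c = <up_x|psi>, d = <down_x|psi>
c, d = np.vdot(up_x, psi), np.vdot(down_x, psi)

# Same ket, different coordinates: reconstruct psi from the x-basis
assert np.allclose(c * up_x + d * down_x, psi)
```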

## Linear operators

### Linear operators acting on kets

A linear operator is a map that inputs a ket and outputs a ket. (In order to be called "linear", it is required to have certain properties.) In other words, if A is a linear operator and ${\displaystyle |\psi \rangle }$ is a ket, then ${\displaystyle A|\psi \rangle }$ is another ket.

In an N-dimensional Hilbert space, ${\displaystyle |\psi \rangle }$ can be written as an N×1 column vector, and then A is an N×N matrix with complex entries. The ket ${\displaystyle A|\psi \rangle }$ can be computed by normal matrix multiplication.

Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time.

### Linear operators acting on bras

Operators can also be viewed as acting on bras from the right hand side. Specifically, if A is a linear operator and ${\displaystyle \langle \phi |}$ is a bra, then ${\displaystyle \langle \phi |A}$ is another bra defined by the rule

${\displaystyle {\bigg (}\langle \phi |A{\bigg )}\;|\psi \rangle =\langle \phi |\;{\bigg (}A|\psi \rangle {\bigg )}}$.

(in other words, a function composition). This expression is commonly written as (cf. energy inner product)

${\displaystyle \langle \phi |A|\psi \rangle .}$

In an N-dimensional Hilbert space, ${\displaystyle \langle \phi |}$ can be written as a 1×N row vector, and A (as in the previous section) is an N×N matrix. Then the bra ${\displaystyle \langle \phi |A}$ can be computed by normal matrix multiplication.

If the same state vector appears on both bra and ket side,

${\displaystyle \langle \psi |A|\psi \rangle ,}$

then this expression gives the expectation value, or mean or average value, of the observable represented by operator A for the physical system in the state ${\displaystyle |\psi \rangle }$.
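A small numerical sketch of an expectation value follows; the Pauli-x matrix is an illustrative choice of self-adjoint operator, not taken from the text:

```python
import numpy as np

# A self-adjoint operator (here the Pauli-x matrix, as an illustration)
A = np.array([[0, 1], [1, 0]], dtype=complex)

psi = np.array([1, 1j]) / np.sqrt(2)      # a normalized state

expectation = np.vdot(psi, A @ psi)        # <psi|A|psi>

# Expectation values of self-adjoint operators are real
assert np.isclose(expectation.imag, 0.0)
```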

### Outer products

A convenient way to define linear operators on H is given by the outer product: if ${\displaystyle \langle \phi |}$ is a bra and ${\displaystyle |\psi \rangle }$ is a ket, the outer product

${\displaystyle |\phi \rangle \langle \psi |}$

denotes the rank-one operator that maps any ket ${\displaystyle |\rho \rangle }$ to the ket ${\displaystyle |\phi \rangle \langle \psi |\rho \rangle }$, i.e. to ${\displaystyle |\phi \rangle }$ multiplied by the complex number ${\displaystyle \langle \psi |\rho \rangle }$.

For a finite-dimensional vector space, the outer product can be understood as simple matrix multiplication:

${\displaystyle |\phi \rangle \,\langle \psi |={\begin{pmatrix}\phi _{1}\\\phi _{2}\\\vdots \\\phi _{N}\end{pmatrix}}{\begin{pmatrix}\psi _{1}^{*}&\psi _{2}^{*}&\cdots &\psi _{N}^{*}\end{pmatrix}}={\begin{pmatrix}\phi _{1}\psi _{1}^{*}&\phi _{1}\psi _{2}^{*}&\cdots &\phi _{1}\psi _{N}^{*}\\\phi _{2}\psi _{1}^{*}&\phi _{2}\psi _{2}^{*}&\cdots &\phi _{2}\psi _{N}^{*}\\\vdots &\vdots &\ddots &\vdots \\\phi _{N}\psi _{1}^{*}&\phi _{N}\psi _{2}^{*}&\cdots &\phi _{N}\psi _{N}^{*}\end{pmatrix}}}$

The outer product is an N×N matrix, as expected for a linear operator.

One of the uses of the outer product is to construct projection operators. Given a ket ${\displaystyle |\psi \rangle }$ of norm 1, the orthogonal projection onto the subspace spanned by ${\displaystyle |\psi \rangle }$ is

${\displaystyle |\psi \rangle \langle \psi |.}$
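In matrix form this projector is easy to verify; the following sketch (with arbitrary illustrative values) checks the defining properties P² = P and P† = P:

```python
import numpy as np

psi = np.array([[1], [1j], [0]]) / np.sqrt(2)   # normalized ket, ||psi|| = 1

P = psi @ psi.conj().T        # |psi><psi| : outer product, a 3x3 matrix

# P is the orthogonal projection onto span{|psi>}: idempotent and self-adjoint
assert np.allclose(P @ P, P)
assert np.allclose(P, P.conj().T)
```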

### Hermitian conjugate operator

Just as kets and bras can be transformed into each other (making ${\displaystyle |\psi \rangle }$ into ${\displaystyle \langle \psi |}$), the element of the dual space corresponding to ${\displaystyle A|\psi \rangle }$ is ${\displaystyle \langle \psi |A^{\dagger }}$, where ${\displaystyle A^{\dagger }}$ denotes the Hermitian conjugate (or adjoint) of the operator A. In other words,

${\displaystyle |\phi \rangle =A|\psi \rangle }$ if and only if ${\displaystyle \langle \phi |=\langle \psi |A^{\dagger }}$.

If A is expressed as an N×N matrix, then ${\displaystyle A^{\dagger }}$ is its conjugate transpose.

Self-adjoint operators, where ${\displaystyle A=A^{\dagger }}$, play an important role in quantum mechanics; for example, an observable is always described by a self-adjoint operator. If A is a self-adjoint operator, then ${\displaystyle \langle \psi |A|\psi \rangle }$ is always a real number (not complex). This implies that expectation values of observables are real.

## Properties

Bra-ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. In what follows, c1 and c2 denote arbitrary complex numbers, c* denotes the complex conjugate of c, A and B denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets.

### Linearity

• Since bras are linear functionals,
${\displaystyle \langle \phi |\;{\bigg (}c_{1}|\psi _{1}\rangle +c_{2}|\psi _{2}\rangle {\bigg )}=c_{1}\langle \phi |\psi _{1}\rangle +c_{2}\langle \phi |\psi _{2}\rangle .}$
• By the definition of addition and scalar multiplication of linear functionals in the dual space,[8]
${\displaystyle {\bigg (}c_{1}\langle \phi _{1}|+c_{2}\langle \phi _{2}|{\bigg )}\;|\psi \rangle =c_{1}\langle \phi _{1}|\psi \rangle +c_{2}\langle \phi _{2}|\psi \rangle .}$

### Associativity

Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra-ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example:

${\displaystyle \langle \psi |(A|\phi \rangle )=(\langle \psi |A)|\phi \rangle \,{\stackrel {\text{def}}{=}}\,\langle \psi |A|\phi \rangle }$
${\displaystyle (A|\psi \rangle )\langle \phi |=A(|\psi \rangle \langle \phi |)\,{\stackrel {\text{def}}{=}}\,A|\psi \rangle \langle \phi |}$

and so forth. The expressions on the right (with no parentheses whatsoever) are allowed to be written unambiguously because of the equalities on the left. Note that the associative property does not hold for expressions that include non-linear operators, such as the antilinear time reversal operator in physics.

### Hermitian conjugation

Bra-ket notation makes it particularly easy to compute the Hermitian conjugate (also called dagger, and denoted †) of expressions. The formal rules are:

• The Hermitian conjugate of a bra is the corresponding ket, and vice-versa.
• The Hermitian conjugate of a complex number is its complex conjugate.
• The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself—i.e.,
${\displaystyle (x^{\dagger })^{\dagger }=x}$.
• Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra-ket notation, its Hermitian conjugate can be computed by reversing the order of the components, and taking the Hermitian conjugate of each.

These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows:

• Kets:
${\displaystyle \left(c_{1}|\psi _{1}\rangle +c_{2}|\psi _{2}\rangle \right)^{\dagger }=c_{1}^{*}\langle \psi _{1}|+c_{2}^{*}\langle \psi _{2}|.}$
• Inner products:
${\displaystyle \langle \phi |\psi \rangle ^{*}=\langle \psi |\phi \rangle }$
• Matrix elements:
${\displaystyle \langle \phi |A|\psi \rangle ^{*}=\langle \psi |A^{\dagger }|\phi \rangle }$
${\displaystyle \langle \phi |A^{\dagger }B^{\dagger }|\psi \rangle ^{*}=\langle \psi |BA|\phi \rangle }$
• Outer products:
${\displaystyle \left((c_{1}|\phi _{1}\rangle \langle \psi _{1}|)+(c_{2}|\phi _{2}\rangle \langle \psi _{2}|)\right)^{\dagger }=(c_{1}^{*}|\psi _{1}\rangle \langle \phi _{1}|)+(c_{2}^{*}|\psi _{2}\rangle \langle \phi _{2}|)}$
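The matrix-element rule ⟨φ|A|ψ⟩* = ⟨ψ|A†|φ⟩ can be verified numerically on random data; this is an illustrative check, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.vdot(phi, A @ psi).conjugate()    # <phi|A|psi>^*
rhs = np.vdot(psi, A.conj().T @ phi)       # <psi|A^dagger|phi>

assert np.isclose(lhs, rhs)
```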

## Composite bras and kets

Two Hilbert spaces V and W may form a third space ${\displaystyle V\otimes W}$ by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in V and W respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.)

If ${\displaystyle |\psi \rangle }$ is a ket in V and ${\displaystyle |\phi \rangle }$ is a ket in W, the direct product of the two kets is a ket in ${\displaystyle V\otimes W}$. This is written variously as

${\displaystyle |\psi \rangle |\phi \rangle }$ or ${\displaystyle |\psi \rangle \otimes |\phi \rangle }$ or ${\displaystyle |\psi \phi \rangle }$ or ${\displaystyle |\psi ,\phi \rangle .}$
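In coordinates, the tensor product of two kets is the Kronecker product of their column vectors, which NumPy exposes as np.kron (an illustrative sketch with arbitrary states):

```python
import numpy as np

psi = np.array([1, 0], dtype=complex)                 # a ket in V
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)    # a ket in W

# |psi> tensor |phi> : a ket in the 4-dimensional space V (x) W
product = np.kron(psi, phi)

assert product.shape == (4,)
assert np.allclose(product, [1 / np.sqrt(2), 1 / np.sqrt(2), 0, 0])
```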

## The unit operator

Consider a complete orthonormal system (basis), ${\displaystyle \{e_{i}\ |\ i\in \mathbb {N} \}}$, for a Hilbert space H, with respect to the norm from an inner product ${\displaystyle \langle \cdot ,\cdot \rangle }$. From basic functional analysis we know that any ket ${\displaystyle |\psi \rangle }$ can be written as

${\displaystyle |\psi \rangle =\sum _{i\in {\mathbb {N} }}\langle e_{i}|\psi \rangle |e_{i}\rangle ,}$

with ${\displaystyle \langle \cdot |\cdot \rangle }$ the inner product on the Hilbert space. From the commutativity of kets with (complex) scalars, it now follows that

${\displaystyle \sum _{i\in {\mathbb {N} }}|e_{i}\rangle \langle e_{i}|={\hat {1}}}$

must be the unit operator, which sends each vector to itself. This can be inserted in any expression without affecting its value, for example

${\displaystyle \langle v|w\rangle =\langle v|\sum _{i\in {\mathbb {N} }}|e_{i}\rangle \langle e_{i}|w\rangle =\langle v|\sum _{i\in {\mathbb {N} }}|e_{i}\rangle \langle e_{i}|\sum _{j\in {\mathbb {N} }}|e_{j}\rangle \langle e_{j}|w\rangle =\langle v|e_{i}\rangle \langle e_{i}|e_{j}\rangle \langle e_{j}|w\rangle }$

where in the last identity Einstein summation convention has been used.

In quantum mechanics it often occurs that little or no information about the inner product ${\displaystyle \langle \psi |\phi \rangle }$ of two arbitrary (state) kets is present, while it is possible to say something about the expansion coefficients ${\displaystyle \langle \psi |e_{i}\rangle =\langle e_{i}|\psi \rangle ^{*}}$ and ${\displaystyle \langle e_{i}|\phi \rangle }$ of those vectors with respect to a chosen (orthonormalized) basis. In this case it is particularly useful to insert the unit operator into the bracket one time or more (for more information see Resolution of the identity).
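The completeness relation is easy to verify in a finite-dimensional sketch (dimension and vectors below are arbitrary illustrative choices):

```python
import numpy as np

N = 4
# An orthonormal basis of kets, each an Nx1 column matrix
basis = [np.eye(N, dtype=complex)[:, [i]] for i in range(N)]

# sum_i |e_i><e_i| is the unit operator
unit = sum(e @ e.conj().T for e in basis)
assert np.allclose(unit, np.eye(N))

# Inserting it leaves any bracket unchanged: <v|w> = sum_i <v|e_i><e_i|w>
rng = np.random.default_rng(1)
v = rng.normal(size=N) + 1j * rng.normal(size=N)
w = rng.normal(size=N) + 1j * rng.normal(size=N)
resolved = sum(np.vdot(v, e.ravel()) * np.vdot(e.ravel(), w) for e in basis)
assert np.isclose(np.vdot(v, w), resolved)
```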

## Notation used by mathematicians

The object physicists are considering when using the "bra-ket" notation is a Hilbert space (a complete inner product space).

Let ${\displaystyle {\mathcal {H}}}$ be a Hilbert space and let ${\displaystyle h\in {\mathcal {H}}}$ be a vector in ${\displaystyle {\mathcal {H}}}$. What physicists would denote by ${\displaystyle |h\rangle }$ is the vector itself. That is,

${\displaystyle (|h\rangle )\in {\mathcal {H}}}$.

Let ${\displaystyle {\mathcal {H}}^{*}}$ be the dual space of ${\displaystyle {\mathcal {H}}}$. This is the space of linear functionals on ${\displaystyle {\mathcal {H}}}$. The isomorphism ${\displaystyle \Phi :{\mathcal {H}}\to {\mathcal {H}}^{*}}$ is defined by ${\displaystyle \Phi (h)=\phi _{h}}$ where for all ${\displaystyle g\in {\mathcal {H}}}$ we have

${\displaystyle \phi _{h}(g)={\mbox{IP}}(h,g)=(h,g)=\langle h,g\rangle =\langle h|g\rangle }$,

where

${\displaystyle {\mbox{IP}}(\,,\,),(\,,\,),\langle \,,\,\rangle ,\langle \,|\,\rangle }$

are just different notations for expressing an inner product between two elements in a Hilbert space (or for the first three, in any inner product space). Notational confusion arises when identifying ${\displaystyle \phi _{h}}$ and ${\displaystyle g}$ with ${\displaystyle \langle h|}$ and ${\displaystyle |g\rangle }$ respectively. This is because of literal symbolic substitutions. Let ${\displaystyle \phi _{h}=H=\langle h|}$ and let ${\displaystyle g=G=|g\rangle }$. This gives

${\displaystyle \phi _{h}(g)=H(g)=H(G)=\langle h|(G)=\langle h|(|g\rangle ).}$

One ignores the parentheses and removes the double bars. Some properties of this notation are convenient since we are dealing with linear operators and composition acts like a ring multiplication.

Moreover, mathematicians usually write the dual entity not in the first slot, as physicists do, but in the second one, and they tend to use an overline (which physicists reserve for averages) rather than the *-symbol to denote complex conjugation; i.e. for scalar products mathematicians usually write

${\displaystyle (\phi ,\psi )=\int \phi (x)\cdot {\overline {\psi (x)}}\,{\rm {d}}x\,,}$

whereas physicists would write for the same quantity

${\displaystyle \langle \psi |\phi \rangle =\int {\rm {d}}x\,\psi ^{*}(x)\cdot \phi (x)\,.}$
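Both expressions denote the same number, only the slot carrying the conjugation differs. A quick NumPy check of the discrete analogue (note that np.vdot conjugates its first argument, i.e. it follows the physicists' convention):

```python
import numpy as np

phi = np.array([1 + 1j, 2j])
psi = np.array([3.0, 1 - 1j])

# Physicists' <psi|phi>: conjugate-linear in the FIRST slot (np.vdot's rule)
physicist = np.vdot(psi, phi)

# Mathematicians' (phi, psi): conjugate-linear in the SECOND slot
mathematician = (phi * psi.conj()).sum()

# The two conventions give the same quantity
assert np.isclose(physicist, mathematician)
```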

## References and notes

1. P. A. M. Dirac, "A new notation for quantum mechanics", Mathematical Proceedings of the Cambridge Philosophical Society, 35 (1939).
2. {{#invoke:citation/CS1|citation |CitationClass=book }}
3. D. McMahon, Quantum Mechanics Demystified, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9.
4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
5. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
6. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
7. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
8. Lecture notes by Robert Littlejohn, eqns 12 and 13