
## Course Information

This is the third course. The focus here will be computation. We will learn how to work with quantum gates and qubits in a more rigorous way than we have done thus far (throwing kets around).

This course lays the mathematical foundation for course D, in which we will finish Nielsen's video lectures and "wrap things up". Hopefully, at that point you will know enough quantum computing basics and math to be able to read and understand a real book on quantum computing (Nielsen's book, for example), should you want to take things further.


## Lecture 1 - The tensor product

General info

When we write things like |0>|1>, or |01> (which is the same thing), we are combining two kets. What does this really mean? It means we take a certain type of product of the two, called the tensor product. We will now learn how to do tensor multiplication formally, and look at some of its properties.

The tensor product can be applied to matrices and vectors alike. It can even be applied to scalars. The tensor product of two matrices A and B is written:

$A \otimes B$

The ket notation |01> is just a simple way of expressing the tensor product of |0> and |1>.

$|01\rangle = |0\rangle|1\rangle = |0\rangle \otimes |1\rangle$

Tensor product rules

The tensor product is associative, meaning we can do this:

$A \otimes (B \otimes C) = (A \otimes B) \otimes C = A \otimes B \otimes C$

The tensor product distributes over addition. So we can do this, for example:

$(|0\rangle + |1\rangle)(|0\rangle + |1\rangle) = |0\rangle|0\rangle + |0\rangle|1\rangle + |1\rangle|0\rangle + |1\rangle|1\rangle = |00\rangle + |01\rangle + |10\rangle + |11\rangle$

Like the regular matrix product, however, it is not commutative, so we generally have:

$|a\rangle|b\rangle \neq |b\rangle|a\rangle$

This is the reason why |01> and |10> are different vectors.

Tensor product computation

We will now learn how to compute the tensor product of two matrices. The same rules apply to vectors, and even to scalars.

Take two matrices A and B.

1) The height of the tensor product will be the height of A multiplied with the height of B, and the width of the product will be the width of A multiplied with the width of B.

Let's say A is a 2x3 matrix, and B is a 5x1 matrix. The product would be a (2*5)x(3*1) = 10x3 matrix.

Let's say we have |0> and |1>. These are both 2-dimensional (column) vectors, so 2x1 matrices. The tensor product would be a (2*2)x(1*1) = 4x1 matrix, or in other words a 4-element vector.

2) The resulting matrix will be a series of copies of the right matrix (B), multiplied by the elements of A, in the following way - let:

$A = \left[\begin{matrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{matrix}\right], B = \left[\begin{matrix} b_{00} & b_{01} \\ b_{10} & b_{11} \end{matrix}\right]$

$A \otimes B = \left[\begin{matrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{matrix}\right] \otimes \left[\begin{matrix} b_{00} & b_{01} \\ b_{10} & b_{11} \end{matrix}\right] = \left[\begin{matrix} a_{00}B & a_{01}B \\ a_{10}B & a_{11}B \end{matrix}\right]$

When expanding B we get a 4x4 matrix:

$A \otimes B = \left[\begin{matrix} a_{00}b_{00} & a_{00}b_{01} & a_{01}b_{00} & a_{01}b_{01} \\ a_{00}b_{10} & a_{00}b_{11} & a_{01}b_{10} & a_{01}b_{11} \\ a_{10}b_{00} & a_{10}b_{01} & a_{11}b_{00} & a_{11}b_{01} \\ a_{10}b_{10} & a_{10}b_{11} & a_{11}b_{10} & a_{11}b_{11} \end{matrix}\right]$

This video shows examples of how to calculate tensor products.
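The computation rules above are easy to check numerically: NumPy's `np.kron` implements exactly this tensor (Kronecker) product. A small sketch, not part of the course material (the variable names are ours):

```python
import numpy as np

# Basis kets as column vectors (2x1 matrices)
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])

# |01> = |0> tensor |1>: a (2*2)x(1*1) = 4x1 column vector
ket01 = np.kron(ket0, ket1)
print(ket01.flatten())  # [0 1 0 0]

# Rule 1: the dimensions multiply. A 2x3 tensor a 5x1 gives a 10x3 matrix.
A = np.ones((2, 3))
B = np.ones((5, 1))
print(np.kron(A, B).shape)  # (10, 3)
```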

Ket and gate combinations

We will now evaluate a simple quantum circuit in two different ways - first by running the qubits separately, then by combining their states and the gates. This illustrates how great braket notation really is.

The circuit we want to evaluate is this:

|1>---Z---> (qubit 2)

|0>---X---> (qubit 1)

We have qubit 1 in the |0> state, and qubit 2 in the |1> state, and we run them through an X and a Z gate respectively. What would the final states be?

$|\psi_1\rangle = X|0\rangle = |1\rangle$

$|\psi_2\rangle = Z|1\rangle = -|1\rangle$

The column vector forms would be:

$|\psi_1\rangle = \left[\begin{matrix} 0 \\ 1 \end{matrix}\right], |\psi_2\rangle = - \left[\begin{matrix} 0 \\ 1 \end{matrix}\right]$

We can also do this by combining the states.

$|\psi_{start}\rangle = |0\rangle|1\rangle = |01\rangle = \left[\begin{matrix} 0 \\ 1 \\ 0 \\ 0 \end{matrix}\right]$

We then form a two-qubit multigate by tensoring X and Z:

$G = X \otimes Z = \left[\begin{matrix} 0 & 1\\ 1 & 0 \end{matrix}\right] \otimes \left[\begin{matrix} 1 & 0\\ 0 & -1 \end{matrix}\right] = \left[\begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{matrix}\right]$

We then multiply the matrix with the state to get the final state:

$|\psi_{end}\rangle = G|\psi_{start}\rangle = \left[\begin{matrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{matrix}\right] \left[\begin{matrix} 0 \\ 1 \\ 0 \\ 0 \end{matrix}\right] = \left[\begin{matrix} 0 \\ 0 \\ 0 \\ -1 \end{matrix}\right] = - \left[\begin{matrix} 0 \\ 0 \\ 0 \\ 1 \end{matrix}\right]$

Note that the final state can be written as the tensor product of the end results above:

$|\psi_{end}\rangle = |\psi_1\rangle \otimes |\psi_2\rangle = \left[\begin{matrix} 0 \\ 1 \end{matrix}\right] \otimes \left(-\left[\begin{matrix} 0 \\ 1 \end{matrix}\right]\right) = -\left[\begin{matrix} 0 \\ 0 \\ 0 \\ 1 \end{matrix}\right]$

This means we got the same result, but as a tensor product. You often end up with a composite state like this, which means you have to break out individual qubits before you can read their values. It is sometimes not possible to separate two qubit states, for example when the states are entangled, but you can always break a qubit state out if you measure it first.
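Both evaluations of the circuit can be reproduced in a few lines. A Python sketch (gate and state names are our choice):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Method 1: run the qubits separately
psi1 = X @ ket0  # |1>
psi2 = Z @ ket1  # -|1>

# Method 2: combine states and gates, then multiply once
psi_start = np.kron(ket0, ket1)  # |01>
G = np.kron(X, Z)                # the two-qubit multigate
psi_end = G @ psi_start          # -|11>

# The composite result equals the tensor product of the separate results
print(np.array_equal(psi_end, np.kron(psi1, psi2)))  # True
```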

Simplification using kets and product rules

We could have evaluated the circuit much more easily by using kets. It requires one more product rule, though. When combining matrix multiplication and tensor multiplication, the following rule holds:

$(A \otimes B) \cdot (C \otimes D) = (A \cdot C) \otimes (B \cdot D)$

Using this rule, we could have done this instead:

$|\psi_{start}\rangle = |0\rangle \otimes |1\rangle$

$|\psi_{end}\rangle = (X \otimes Z)|\psi_{start}\rangle = (X \otimes Z)(|0\rangle \otimes |1\rangle) = (X|0\rangle) \otimes (Z|1\rangle) = |1\rangle \otimes -|1\rangle$

The last expression can be rewritten:

$|1\rangle \otimes -|1\rangle = -|11\rangle$
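The mixed product rule above can be spot-checked numerically for arbitrary matrices. A sketch (random matrices, sizes chosen by us):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.random((2, 2)), rng.random((2, 2))
B, D = rng.random((3, 3)), rng.random((3, 3))

# (A tensor B)(C tensor D) should equal (AC) tensor (BD)
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
ok = np.allclose(lhs, rhs)
print(ok)  # True
```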

Scalars

A scalar can be seen as a 1x1 matrix, so we can use tensor products on those as well. The number 4 would be a 1x1 matrix with the only element being 4.

Taking the tensor product of constant multiples of matrices or vectors follows the same rules:

$(aM) \otimes (bN) = (a \otimes b)(M \otimes N)$

The tensor product of a scalar and another scalar would be a 1x1 matrix with the only element being the product of the two scalars, so it turns out to be regular scalar multiplication.

$(aM) \otimes (bN) = (a \otimes b)(M \otimes N) = ab(M \otimes N)$

Remember the probability rule: if we have a 50% chance of qubit A being |0> and a 50% chance of qubit B being |0>, the chance of both being |0> is 25%. We can derive that result using the tensor product rules.

$(\alpha|0\rangle + \beta|1\rangle) \otimes (\gamma|0\rangle + \delta|1\rangle) =$

Apply the distributive rule for addition:

$(\alpha|0\rangle) \otimes (\gamma|0\rangle) + (\alpha|0\rangle)\otimes(\delta|1\rangle) + (\beta|1\rangle)\otimes(\gamma|0\rangle) + (\beta|1\rangle)\otimes(\delta|1\rangle) =$

Apply the tensor-matrix multiplication rule:

$(\alpha \otimes \gamma)(|0\rangle \otimes |0\rangle) + (\alpha \otimes \delta)(|0\rangle \otimes |1\rangle) + (\beta \otimes \gamma)(|1\rangle \otimes |0\rangle) + (\beta \otimes \delta)(|1\rangle \otimes |1\rangle) =$

Apply the scalar multiplication rule:

$\alpha\gamma|00\rangle + \alpha\delta|01\rangle + \beta\gamma|10\rangle + \beta\delta|11\rangle$

The amplitudes multiply, and since a probability is the squared modulus of an amplitude, the probabilities multiply as well: $|\alpha\gamma|^2 = |\alpha|^2|\gamma|^2$.
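For two balanced qubits (all amplitudes equal to 1/√2), this derivation predicts probability 1/4 for every two-qubit outcome. A quick numerical check (variable names are ours):

```python
import numpy as np

s = 1 / np.sqrt(2)
qubit_a = np.array([s, s])  # 50% |0>, 50% |1>
qubit_b = np.array([s, s])

# Combine the states and square the amplitudes to get probabilities
combined = np.kron(qubit_a, qubit_b)
probs = np.abs(combined) ** 2
print(probs)  # [0.25 0.25 0.25 0.25]
```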

## Lecture 2 - Inner and outer products

The inner product and norms

In real linear algebra there is something called the "dot product". If we have two 2D vectors:

$\vec{v} = (a,b), \vec{u} = (c,d)$

We calculate the dot product as such:

$\vec{v} \cdot \vec{u} = ac + bd$

In the general case, if we have two N-dimensional vectors:

$\vec{v} = (a_0, a_1, \ldots , a_N), \vec{u} = (b_0, b_1, \ldots , b_N)$

The dot product is:

$\vec{v} \cdot \vec{u} = a_0b_0 + a_1b_1 + \ldots + a_Nb_N$

We can use this product to define a norm:

$\|\vec{v}\| = \sqrt{\vec{v} \cdot \vec{v}} = \sqrt{a_0^2 + a_1^2 + \ldots + a_N^2}$

That is the regular norm in Euclidean N-dimensional space. If we take the vector (1,1), the length of that vector is $\sqrt 2$. If we apply the definition we get:

$\|(1,1)\| = \sqrt{1*1 + 1*1} = \sqrt 2$

The absolute value of a real number can be written:

$|a| = \sqrt{a^2}$

This means we can re-write the norm as such:

$\|\vec{v}\| = \sqrt{a_0^2 + a_1^2 + \ldots + a_N^2} = \sqrt{|a_0|^2 + |a_1|^2 + \ldots + |a_N|^2}$

This emphasizes the fact that we're dealing with real distances. The norm is really just a generalization of the Pythagorean theorem to N-dimensional space.

What about complex space? Let's say we have the same situation but the coefficients are complex. Can we make a norm and a dot product as well? The answer is yes.

The dot product, or inner product, of two vectors with complex entries is defined as such:

$\langle\vec{v},\vec{u}\rangle = \vec{v} \cdot \vec{u}^* = a_0b_0^* + a_1b_1^* + \ldots + a_Nb_N^*$

In this equation, * means the complex conjugate:

$(a + bi)^* = a - bi$

The norm is defined in the same way as in real space.

$\|\vec{v}\| = \sqrt{\langle\vec{v} , \vec{v}\rangle} = \sqrt{\vec{v} \cdot \vec{v}^*}$

If we have a complex number, the absolute value (or modulus) of that number can be expressed:

$|z| = \sqrt{zz^*}$

For 1-dimensional vectors, the norm would be the absolute value of the coefficient. When there are more dimensions, we get the general expression:

$\|\vec{v}\| = \sqrt{\langle\vec{v} , \vec{v}\rangle} = \sqrt{a_0a_0^* + a_1a_1^* + \ldots + a_Na_N^*} = \sqrt{|a_0|^2 + |a_1|^2 + \ldots + |a_N|^2}$

It's a natural generalization of the norm of real vectors. Qubits (and their tensor products) live in N-dimensional complex space. Sometimes these (and similar) spaces are referred to as Hilbert spaces. What Hilbert space means in this context is basically this:

Hilbert space is a vector space of any dimension (including infinite) over the field of complex numbers, coupled with an inner product.

Finally, we can re-define the qubit normalization constraint using the norm:

$\| |\psi\rangle \| = 1$
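The complex inner product and norm are easy to compute numerically. A sketch using `np.vdot`, which conjugates its first argument (the example vector is our choice):

```python
import numpy as np

v = np.array([1 + 1j, 2 - 1j])

# <v|v> = |1+1j|^2 + |2-1j|^2 = 2 + 5 = 7
inner = np.vdot(v, v)
norm = np.sqrt(inner.real)

print(np.isclose(norm, np.sqrt(7)))         # True
print(np.isclose(norm, np.linalg.norm(v)))  # True: same as the built-in norm
```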

The outer product

In the lecture 1 video on tensor products, we calculated the tensor product of the row and column vector representation of the same ket. For |0> it was done in the following manner:

$\left[\begin{matrix} 1 \\ 0 \end{matrix}\right] \otimes \left[\begin{matrix} 1 & 0 \end{matrix}\right] = \left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right]$

This is a useful matrix. It happens to define a projection onto the basis state |0>. To give an example of this, consider an arbitrary qubit state:

$|\psi \rangle = \alpha|0\rangle + \beta|1\rangle = \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right]$

Now multiply the ket-0 matrix with the vector.

$\left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right] = \left[\begin{matrix} \alpha \\ 0 \end{matrix}\right] = \alpha \left[\begin{matrix} 1 \\ 0 \end{matrix}\right] = \alpha |0\rangle$

What we got from multiplying the state with the |0> matrix was to give us only the |0> part of the vector.

For |1>, we have the following corresponding matrix:

$\left[\begin{matrix} 0 \\ 1 \end{matrix}\right] \otimes \left[\begin{matrix} 0 & 1 \end{matrix}\right] = \left[\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right]$

If we do the same thing, we get:

$\left[\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right] \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right] = \left[\begin{matrix} 0 \\ \beta \end{matrix}\right] = \beta \left[\begin{matrix} 0 \\ 1 \end{matrix}\right] = \beta |1\rangle$

Same thing there. We get the coefficient corresponding to |1> when using the |1> projection matrix.

This can be used for many things. For example, it's part of the mathematical model of measurement. Remember that alpha and beta are both probability amplitudes, so if we can separate them from a general state we can find out the probability of measuring a qubit in a certain state. More on this in the segment about measuring.
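The projection behavior above can be demonstrated with `np.outer`. A sketch with made-up example amplitudes:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

P0 = np.outer(ket0, ket0)  # |0><0| = [[1,0],[0,0]]
P1 = np.outer(ket1, ket1)  # |1><1| = [[0,0],[0,1]]

alpha, beta = 0.6, 0.8     # example amplitudes, |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1

# Each projector picks out one component of the state
print(P0 @ psi)  # alpha |0>
print(P1 @ psi)  # beta  |1>
```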

Bra and Ket

With each 'ket' comes something called a 'bra'. Take |0> as an example. The corresponding 'bra' would be written <0|. In order to explain what the bra is, we first need to solidify what a ket is:

A ket is always a column vector.

The notation |0> always refers to the column vector form; it is never in row vector form. Sometimes it's written (1,0) in running text, but it still refers to the column vector. Consider the expression X|0>, the X matrix multiplied with |0>. That would not be legal unless |0> were a column vector.

The bra is the row matrix form of the ket, but with each element being the complex conjugate of the corresponding element of the ket.

More formally, if we define a ket as such:

$|\psi\rangle = \left[\begin{matrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{matrix}\right]$

The corresponding bra would be:

$\langle\psi| = \left[\begin{matrix} a_0^* & a_1^* & \ldots & a_N^* \end{matrix}\right]$

Another way of defining it is as the conjugate transpose of the ket, which we will also write with a star:

$\langle \psi | = {|\psi\rangle}^*$

Because taking the conjugate transpose twice is the same as doing nothing, we also get:

${\langle \psi |}^* = ({|\psi\rangle}^*)^* = |\psi\rangle$

This turns out to be an extremely useful vector. Take the normal matrix product of the bra and ket, for example. It turns out to be the inner product:

$\langle\psi| |\psi\rangle = \left[\begin{matrix} a_0^* & a_1^* & \ldots & a_N^* \end{matrix}\right]\left[\begin{matrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{matrix}\right] = a_0^*a_0 + a_1^*a_1 + \ldots + a_N^*a_N$

We normally remove one of the bars as part of the notation:

$\langle\psi| |\psi\rangle = \langle\psi | \psi\rangle$

This would also be equal to the inner product, so we use the same notation for the inner product:

$\langle |\psi\rangle, |\psi \rangle \rangle = \langle\psi | \psi\rangle$

The norm of a qubit state could now be expressed in this better looking form:

$\| |\psi \rangle \| = \sqrt{\langle\psi | \psi\rangle}$

Since the square of 1 is 1, we can now update the normalization constraint to this:

$\langle\psi | \psi\rangle = 1$

It is also legal to take the inner product of two different vectors, provided they have the same dimension. The result can depend on which vector is in the ket form and which is in the bra form (the two orders are complex conjugates of each other). In other words:

$\langle\psi | \varphi\rangle \neq \langle\varphi | \psi\rangle$

Notice that since the 'bra' is a row vector, this would be a legal expression:

$\langle\varphi| = \langle0|X$

It would correspond to this:

$\langle\varphi| = \left[\begin{matrix} 1 & 0 \end{matrix}\right] \left[\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right] = \left[\begin{matrix} 0 & 1 \end{matrix}\right]$
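With column vectors, a bra is just the conjugate transpose of its ket, so the ⟨0|X computation above takes one line each. A sketch (names are ours):

```python
import numpy as np

ket0 = np.array([[1], [0]])  # a ket is always a column vector
bra0 = ket0.conj().T         # the bra is its conjugate transpose, a row vector
X = np.array([[0, 1], [1, 0]])

phi_bra = bra0 @ X
print(phi_bra)  # [[0 1]]

# <0|0>: bra times ket is the (1x1) inner product
print((bra0 @ ket0).item())  # 1
```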

Bra and Ket in outer product notation

One thing to keep in mind now is that despite adding the bra, we still consider the expression |0>|0> to be the tensor product of |0> with itself. Matrix multiplication is only meant when a bra is followed by a ket: <0|0> means matrix multiplication, |0>|0> means tensor product.

Now, if we use the fact that the complex conjugate of a real number is the number itself:

$a^* = a$

we see that the tensor products of |0> and |1> in the outer product section could have been written like this instead:

$\left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] = \left[\begin{matrix} 1 \\ 0 \end{matrix}\right] \otimes \left[\begin{matrix} 1 & 0 \end{matrix}\right] = |0\rangle \otimes \langle0|$

The tensor product sign is often omitted in braket notation:

$|\psi\rangle \otimes \langle \varphi| = |\psi\rangle \langle \varphi|$

## Unitary matrices and brakets

In course B we talked loosely about unitary matrices. We're now going to formalize that a bit more.

Definition: A square matrix is called unitary if the following relation holds: $UU^* = U^*U= I$, where * means the conjugate transpose.

We will first look at one of the most important properties of unitary matrices.

If a vector is acted upon by a linear operator O(|v>) = U|v>, where U is a unitary matrix, then we have:

$\|O(|v\rangle)\| = \|U|v\rangle\| = \||v\rangle\|$

We will prove this using kets. If we look at the definition of the conjugate transpose, we may notice that one way of defining a 'bra' is as the conjugate transpose of the corresponding 'ket', and vice versa. For a single qubit, the 'bra' will be a 1x2 matrix whose elements are the complex conjugates of the elements in the 2x1 matrix (the 'ket').

For the conjugate transpose, we have:

$(AB)^* = B^*A^*$

It works exactly like the transpose, in other words.

To get the squared norm of a ket, we have seen that we can take the matrix product of its 'bra' and the ket in the following way:

$\||\psi\rangle\|^2 = \langle\psi|\psi\rangle$

This can be used for matrix multiples of kets as well, using the standard matrix rules. If we have the vector $U|\psi\rangle$, we can get its squared norm by matrix multiplying with its 'bra'. The 'bra' would be:

$(U|\psi\rangle)^* = (|\psi\rangle)^*U^* = \langle\psi|U^*$

The squared norm becomes:

$\|U|\psi\rangle\|^2 = \langle\psi|U^*U|\psi\rangle = \langle\psi|I|\psi\rangle = \langle\psi|\psi\rangle = \||\psi\rangle\|^2$

Since norms are non-negative, equal squared norms mean equal norms.

The thing to take away here is that by applying a quantum gate to a valid qubit state, we get a new valid qubit state, because all quantum gates are unitary matrices.
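Norm preservation can be verified for a concrete gate. A sketch using the Hadamard gate H (our choice of example; it appears again later in the measuring section):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# U* U = I: H is unitary (conjugate transpose via .conj().T)
unitary = np.allclose(H.conj().T @ H, np.eye(2))
print(unitary)  # True

# Applying H to a normalized state leaves the norm unchanged
psi = np.array([0.6, 0.8j])
same_norm = np.isclose(np.linalg.norm(H @ psi), np.linalg.norm(psi))
print(same_norm)  # True
```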

Unitary matrices and inverses

Unitary matrices have an inverse by definition. An inverse has the following properties: $MM^{-1} = M^{-1}M = I$, and for a unitary matrix the conjugate transpose has exactly these properties. We could put this in other terms:

If U is a unitary matrix, then we have:

$U^* = U^{-1}$

This means that all unitary matrices represent reversible operations. All quantum gates can be reversed. To reverse a gate, just find the gate that corresponds to the conjugate transpose of the gate matrix and apply it. This can of course be tricky in practice, but in theory it is always possible.
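Reversibility is easy to check numerically: applying a gate and then its conjugate transpose returns the original state. A sketch with the phase gate S (our choice of a complex-valued unitary example):

```python
import numpy as np

S = np.array([[1, 0], [0, 1j]])  # phase gate, a unitary matrix
S_inv = S.conj().T               # its conjugate transpose, the reverse gate

psi = np.array([0.6, 0.8])
roundtrip = S_inv @ (S @ psi)    # apply the gate, then undo it
restored = np.allclose(roundtrip, psi)
print(restored)  # True
```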

Unitary matrices and orthogonal bases

If we look at the relation $UU^* = I$, we can find another important property of unitary matrices. Remember that * means conjugate transpose. If we only concern ourselves with N-dimensional Euclidean (real) space, with its standard norm and dot product, * would mean transpose, so the equation becomes: $UU^T = I$. Now let us look at matrix multiplication. To get an element $c_{i,j}$ in the product C = AB, we calculate the dot product of row i of A and column j of B: $c_{i,j} = r_A(i) \cdot c_B(j)$. If we have $A=U, B=U^T$, the columns of B are the rows of A, so we get: $c_{i,j} = r_U(i) \cdot r_U(j)$.

Now, since the product is $UU^T = I$, we see that when $i = j, c_{i,j} = c_{i,i} = c_{j,j} = 1$. These are the entries on the main diagonal of C. In all other cases, $c_{i,j} = 0$. We can express this in one single equation using the Kronecker delta: $c_{i,j} = \delta_{ij}$.

What does this tell us? It tells us that the dot product of a row vector with itself is 1, and hence its norm is 1 as well: $r_U(i) \cdot r_U(i) = \|r_U(i)\|^2 = 1$. This means all row vectors are normalized. It also means that the dot product of any row vector with a different row vector is 0, which means they are orthogonal. Hence, the following statement holds:

The row vectors of a unitary matrix form an orthonormal basis of N-dimensional Euclidean space, where dim(U) = NxN.

Due to the nature of the transpose, it is easy to see that this holds for the column vectors as well: the set of column vectors of a unitary matrix also forms an orthonormal basis of N-dimensional real space.

It is easy to check that this holds for complex vector spaces as well, using the regular inner product.
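The Kronecker-delta relation between rows can be checked directly. A sketch using the Hadamard matrix as a concrete real unitary (our choice):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a real unitary matrix

# Row i dotted with row j should give the Kronecker delta
for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(np.dot(H[i], H[j]), expected)

# Equivalently, U U^T = I for a real unitary
print(np.allclose(H @ H.T, np.eye(2)))  # True
```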

Unitary matrices and multi-gates

When we build circuits, we often have to combine gates by using matrix and tensor products. What are the rules concerning matrix- and tensor-products of unitary matrices?

Theorem: If U and V are two unitary matrices, then UV is unitary.

We prove this by showing that $(UV)(UV)^* = I$.

$(UV)(UV)^* = UVV^*U^* = UIU^* = UU^* = I$

Theorem: The tensor product of an $N \times N$ unitary matrix and an $M \times M$ unitary matrix is an $NM \times NM$ unitary matrix.

To prove this, we recall that unitary matrices are square matrices. Let U have dimension NxN and V dimension MxM. We need to show that $(U \otimes V)(U \otimes V)^* = I_{NM\times NM}$, meaning their product is the identity matrix of the proper dimension. First we need a small support statement, or "lemma":

Lemma: For any two matrices A and B, we have: $(A \otimes B)^* = A^* \otimes B^*$

The lemma is trivial and does not need to be proved here. We will simply combine it with the tensor/matrix product rule to prove the theorem:

$(U \otimes V)(U \otimes V)^* = (U \otimes V)(U^* \otimes V^*) = UU^* \otimes VV^* = I_{N\times N} \otimes I_{M\times M} = I_{NM\times NM}$

These results show that unitary matrices can be matrix multiplied and tensor multiplied without losing the property of being unitary. This is not true for addition, though. We can check that the sum of two unitaries, U + U = 2U, is not unitary, because it scales the squared norm of any vector by a factor of 4 (and the norm by a factor of 2):

$\langle\psi|2U^*2U|\psi\rangle = 4\langle\psi|U^*U|\psi\rangle = 4\langle\psi|\psi\rangle$
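These closure properties, and the failure of addition, can all be confirmed numerically. A sketch (the helper function is ours):

```python
import numpy as np

def is_unitary(M):
    """Check U* U = I using the conjugate transpose."""
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

print(is_unitary(X @ Z))          # True: the matrix product stays unitary
print(is_unitary(np.kron(X, Z)))  # True: the tensor product stays unitary
print(is_unitary(X + X))          # False: the sum 2X is not unitary
```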

## Measuring

In this section we will learn a simple procedure for measuring qubits using matrices and tensor products. This is how it's normally done, and how it's done in the mod code.

You may want to review lecture 5 of course B before reading this, or at least the part concerning partial measurements. That section contains a procedure for measuring qubits based on ket algebra. We will create a similar step-by-step procedure here.

Take a general single-qubit state:

$|\psi\rangle = \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right]$

We define:

$M_0 = |0\rangle \langle 0|$

$M_1 = |1\rangle \langle 1|$

Now we evaluate this expression for j = 0 and 1:

$\langle \psi | M_j^*M_j |\psi \rangle$

Note that we have:

$M_0 M_1 = M_1 M_0 = 0$

$M_j^n = M_j$

$M_j M_j^* = M_j$

For 0, we get:

$\langle \psi | M_0^*M_0 |\psi \rangle = \langle \psi | M_0 | \psi \rangle = \left[\begin{matrix} \alpha^* & \beta^* \end{matrix}\right] \left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right] = \left[\begin{matrix} \alpha^* & \beta^* \end{matrix}\right] \left[\begin{matrix} \alpha \\ 0 \end{matrix}\right] = \alpha^* \alpha = |\alpha|^2$

What we are left with is the probability of measuring |0>. Now we do the same thing for |1>:

$\langle \psi | M_1^*M_1 |\psi \rangle = \langle \psi | M_1 | \psi \rangle = \left[\begin{matrix} \alpha^* & \beta^* \end{matrix}\right] \left[\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right] \left[\begin{matrix} \alpha \\ \beta \end{matrix}\right] = \left[\begin{matrix} \alpha^* & \beta^* \end{matrix}\right] \left[\begin{matrix} 0 \\ \beta \end{matrix}\right] = \beta^* \beta = |\beta|^2$

This gives the probability of measuring |1>. Generally we have this:

Let $|\beta_0\rangle, |\beta_1\rangle$ be an orthonormal basis of 2-dimensional Hilbert space. The probability of getting $|\beta_j\rangle$ when measuring a state $|\psi\rangle$ is equal to:

$p(j) = \langle \psi | M_j^*M_j |\psi \rangle$

Here, $M_j = |\beta_j\rangle \langle \beta_j |$, j = 0 or 1.

The state after measuring is:

$|\psi_{post} \rangle = \frac{M_j |\psi \rangle}{\sqrt{p(j)}}$

Example:

We have the state $|\psi\rangle = \frac{(1,2)}{\sqrt{5}}$

We measure in the computational basis. What is the probability of measuring $|0\rangle$?

We apply the formula:

$p(0) = \langle \psi | M_0^*M_0 |\psi \rangle = \langle \psi | M_0 |\psi \rangle = \frac{1}{5} \left[\begin{matrix} 1 & 2 \end{matrix}\right] \left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] \left[\begin{matrix} 1 \\ 2 \end{matrix}\right] = \frac{1}{5} \left[\begin{matrix} 1 & 2 \end{matrix}\right] \left[\begin{matrix} 1 \\ 0 \end{matrix}\right] = \frac{1}{5}$
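The worked example above can be reproduced numerically. A small Python sketch (variable names are ours):

```python
import numpy as np

psi = np.array([1, 2]) / np.sqrt(5)

ket0 = np.array([1, 0])
M0 = np.outer(ket0, ket0)  # M_0 = |0><0|

# p(0) = <psi| M_0* M_0 |psi> = <psi| M_0 |psi>
p0 = psi.conj() @ M0.conj().T @ M0 @ psi
print(np.isclose(p0, 1 / 5))  # True

# Post-measurement state: M_0 |psi> / sqrt(p(0))
post = M0 @ psi / np.sqrt(p0)
print(post)  # [1. 0.]  i.e. |0>
```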

Partial measurements

If we have a multi-qubit state we can still apply the formula, but we have to "pad" the measurement operators with identity matrices.

Let's say we have an N-qubit state $|\psi\rangle$. It will be made up of qubits 0, 1, 2, ..., N - 1. If we want to measure the probability of qubit k (0 <= k < N) having state $|\beta_j\rangle$, we will do this:

1) Create the measurement operator: $N_{(j, k)} = I \otimes I \otimes \ldots \otimes M_j \otimes \ldots \otimes I$

The M-operator should be in spot k, and the total number of factors should be N.

2) Apply the formula:

$p(j, k) = \langle \psi | N_{(j, k)}^*N_{(j, k)} |\psi \rangle$

3) The post-measurement state will be:

$|\psi_{post} \rangle = \frac{N_{(j, k)} |\psi \rangle}{\sqrt{p(j,k)}}$

Example

We have the two-qubit state:

$|\psi\rangle = \frac{1}{\sqrt 2} \left[\begin{matrix} 1 \\ 0 \\ 1 \\ 0 \end{matrix}\right]$

We want to calculate the probability that qubit 0 is measured in the |0> state.

We start by creating the measurement operator:

$N_{(0,0)} = M_0 \otimes I = \left[\begin{matrix} 1 & 0\\ 0 & 0 \end{matrix}\right] \otimes \left[\begin{matrix} 1 & 0\\ 0 & 1 \end{matrix}\right] = \left[\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right]$

We do the p(j,k) calculation:

$p(0, 0) = \frac{1}{\sqrt 2} \left[\begin{matrix} 1 & 0 & 1 & 0 \end{matrix}\right] \left[\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right] \frac{1}{\sqrt 2} \left[\begin{matrix} 1 \\ 0 \\ 1 \\ 0 \end{matrix}\right] = \frac{1}{2} \left[\begin{matrix} 1 & 0 & 1 & 0 \end{matrix}\right] \left[\begin{matrix} 1 \\ 0 \\ 0 \\ 0 \end{matrix}\right] = \frac{1}{2}$

There is a 50% chance the first qubit is in the |0> state. If we measure it and get this outcome, the post-measurement state will be:

$|\psi_{post} \rangle = \frac{N_{(0,0)} |\psi \rangle}{\sqrt{p(0,0)}} = \frac{1}{\sqrt{\frac{1}{2}}} \left[\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right] \frac{1}{\sqrt 2} \left[\begin{matrix} 1 \\ 0 \\ 1 \\ 0 \end{matrix}\right] = \left[\begin{matrix} 1 \\ 0 \\ 0 \\ 0 \end{matrix}\right]$

The example state is actually the matrix version of (H|0>)|0>, which explains why the first qubit is |0> with a 50% chance. If we measure the first qubit and get |0>, the post-measurement state will be |0>|0>.
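The whole partial-measurement procedure can be written out in a few lines. A Python sketch of the three steps (names are ours):

```python
import numpy as np

I2 = np.eye(2)
M0 = np.array([[1, 0], [0, 0]])            # |0><0|

psi = np.array([1, 0, 1, 0]) / np.sqrt(2)  # (H|0>) tensor |0>

# 1) Pad the operator: measure qubit 0, leave qubit 1 untouched
N00 = np.kron(M0, I2)

# 2) Probability of qubit 0 being |0>
p = psi.conj() @ N00.conj().T @ N00 @ psi
print(np.isclose(p, 0.5))  # True

# 3) Post-measurement state
post = N00 @ psi / np.sqrt(p)
print(post)  # [1. 0. 0. 0.]  i.e. |00>
```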

The measurement operators have a few important properties. We've seen a few simple algebraic properties of |0><0| and |1><1| in the previous sections, but there are others. For example, they obey the following completeness relation:

$\sum_j |\beta_j\rangle \langle \beta_j| = I$

We test this for the computational basis:

$\sum_j|j\rangle \langle j| = |0\rangle\langle 0| + |1\rangle\langle 1| = \left[\begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix}\right] + \left[\begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix}\right] = \left[\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right]$

Let us also test this for the +/- basis:

$|+\rangle \langle + | = \frac{1}{\sqrt 2} \left[\begin{matrix} 1 \\ 1 \end{matrix}\right] \frac{1}{\sqrt 2} \left[\begin{matrix} 1 & 1 \end{matrix}\right] = \frac{1}{2} \left[\begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix}\right]$

$|-\rangle \langle -| = \frac{1}{\sqrt 2} \left[\begin{matrix} 1 \\ -1 \end{matrix}\right] \frac{1}{\sqrt 2} \left[\begin{matrix} 1 & -1 \end{matrix}\right] = \frac{1}{2} \left[\begin{matrix} 1 & -1 \\ -1 & 1 \end{matrix}\right]$

$\sum_j|\beta_j\rangle\langle \beta_j| = |+\rangle \langle + | + |-\rangle \langle -| = \frac{1}{2} \left[\begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix}\right] + \frac{1}{2} \left[\begin{matrix} 1 & -1 \\ -1 & 1 \end{matrix}\right] = \frac{1}{2} \left[\begin{matrix} 2 & 0 \\ 0 & 2 \end{matrix}\right] = I$
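Both completeness checks above take only a few lines numerically. A sketch (the helper function is ours):

```python
import numpy as np

def projector(v):
    """|v><v| for a vector of (possibly complex) amplitudes."""
    return np.outer(v, v.conj())

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# The projectors of each orthonormal basis sum to the identity
comp_ok = np.allclose(projector(ket0) + projector(ket1), np.eye(2))
pm_ok = np.allclose(projector(plus) + projector(minus), np.eye(2))
print(comp_ok, pm_ok)  # True True
```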

At this point we can combine gates and qubits any way we like, and also measure them. This is essentially all we need to create a basic quantum computer simulator.

Course D deals with a few remaining subjects, such as the role of eigenvalues and eigenvectors, and a few important general laws of quantum computing theory and quantum mechanics that we have not yet touched upon.