The Anti-Infinity Parade

Mar 17, 2021

The XOR Fourier and quantum algorithms

I am teaching a course in quantum computing this semester, and the hardest part is building intuition for why quantum algorithms work. Right now, we are discussing the most basic of quantum algorithms, the Deutsch-Jozsa and Bernstein-Vazirani algorithms. The popular textbooks on quantum computing don't do justice to why these algorithms work, but Ryan O'Donnell's quantum computing course does. Both video lectures and notes are available. Here, I want to synthesize Ryan's explanation at a high level.

We are familiar with the standard Fourier series. Given any periodic function over the reals, we can decompose it into a sum of sinusoids with different frequencies. We can think of each sinusoid as a pattern, and any function as a sum of these patterns. The patterns are orthogonal and complete, and the Fourier transform is the process of going from the "data basis" to the "pattern basis". This gives rise to the idea of decomposing functions defined over other domains using appropriate patterns.

In the quantum algorithms mentioned above, the classical functions take binary strings to a single binary bit. When implemented as a quantum gate, we get the transformation $|x\rangle_n |y\rangle_1 \to |x\rangle_n |y+f(x)\rangle_1$, where the plus is the bitwise xor operation. It is quite easy to show that if the last qubit is in the $|-\rangle$ state, then it remains unchanged by the function, and instead the state picks up an overall phase of $(-1)^{f(x)}$. This phase is either $+1$ or $-1$ depending on the value of $f(x)$. Hence, we can ignore the last qubit and concentrate on the input qubits alone.
If we have some superposition in the state of the input qubits, then $\sum_x |x\rangle \to \sum_x (-1)^{f(x)} |x\rangle$. So, the action of a function implemented quantumly is to change the amplitudes of the various states in the superposition.
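To make this concrete, here is a small numpy sketch of a phase query; the choice of $f$ below is an arbitrary toy function, purely for illustration:

```python
import numpy as np

# A toy 2-bit function f: {0,1}^2 -> {0,1}; the choice is arbitrary.
f = lambda x: x & 1  # f(x) = last bit of x

n = 2
# Unnormalized uniform superposition over all 2^n basis states.
state = np.ones(2**n, dtype=int)

# Querying f with the last qubit in |-> multiplies the amplitude
# of each |x> by (-1)^f(x).
state = np.array([(-1) ** f(x) for x in range(2**n)]) * state

# The signs of the amplitudes now record the values of f.
```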

What will the patterns look like in this case? They will be special superpositions that are orthogonal and complete. We want to stick to superpositions where each entry is a $+1$ or a $-1$. We know that for $n$-bit functions we should have $2^n$ patterns, because the space of superpositions over $n$-bit strings has dimension $2^n$. It is a combinatorial exercise to construct a list of such patterns that are orthogonal. As it turns out, for one-bit to one-bit functions these are just the columns of the Hadamard gate, and for $n$-bit to $1$-bit functions these are the columns of the tensor product of $n$ Hadamard gates (all up to a pesky normalization constant). The most important of these patterns is the one with all positive signs, $\sum_x |x\rangle$. We can also visualize this as a vector of all ones.
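A quick numpy check of these claims (working unnormalized, so every entry is exactly $\pm 1$):

```python
import numpy as np
from functools import reduce

# Unnormalized Hadamard: its two columns are the one-bit patterns.
H = np.array([[1, 1], [1, -1]])

n = 3
# Tensor product of n Hadamards: its 2^n columns are the n-bit patterns.
Hn = reduce(np.kron, [H] * n)

# Every entry is +1 or -1 ...
assert np.all(np.abs(Hn) == 1)
# ... the columns are mutually orthogonal ...
assert np.array_equal(Hn.T @ Hn, 2**n * np.eye(2**n, dtype=int))
# ... and the first column is the all-positive pattern.
assert np.all(Hn[:, 0] == 1)
```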

Now, we turn to quantum algorithms. The structure of these basic algorithms is the same. Suppose we have some unknown function (from a set of known functions) that we want to identify. Classically, we would have to query the function many times to determine what values it takes. Quantumly, we can query it with a superposition of basis states. Let's query it with the all-positive pattern, because it treats all basis states identically and equally. We can create the all-positive pattern by applying Hadamard gates to the all-zeros basis state. Another way of saying this is that the all-zeros basis state is the all-positive pattern expressed in the pattern basis; we use the Hadamards to take it to the data basis, creating the state $[1,\dots,1]$.
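In numpy terms (a sketch; the normalization constant is kept here so the state stays a unit vector):

```python
import numpy as np
from functools import reduce

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)

# |0...0> is the all-positive pattern written in the pattern basis.
zero = np.zeros(2**n)
zero[0] = 1.0

# Hadamards rotate it to the data basis: the uniform superposition.
uniform = Hn @ zero
assert np.allclose(uniform, np.ones(2**n) / np.sqrt(2**n))
```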

Next, we query the function with this pattern to get a state that encodes all the values the function takes as phases. This can be visualized as a vector with a phase in each entry, $[(-1)^{f(0\dots 0)},\dots,(-1)^{f(1\dots 1)}]$. In order to extract information about the function, we rotate back to the pattern basis. This would require applying the inverse of the Hadamard gates, but the Hadamard is fortunately self-inverse, so we apply the tensor product of Hadamards again. This is equivalent to projecting onto the pattern basis. Now we have a vector $[p(0\dots 0),\dots,p(1\dots 1)]$, where each $p$ is the strength of the corresponding pattern.
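Putting the whole query together (the parity function below is my own toy choice, just to have something concrete to run):

```python
import numpy as np
from functools import reduce

n = 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)

# A toy function for illustration: parity of the input bits.
f = lambda x: bin(x).count("1") % 2

# Start in |0...0>, rotate to the data basis, query, rotate back.
state = np.zeros(2**n)
state[0] = 1.0
state = Hn @ state                                             # all-positive pattern
state = np.array([(-1) ** f(x) for x in range(2**n)]) * state  # phase query
pattern_strengths = Hn @ state                                 # Hadamard is self-inverse

# Parity is s.x with s = 11, so all the weight lands on that one pattern.
assert np.allclose(pattern_strengths, [0, 0, 0, 1])
```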

Now, the core idea is that if our set of known functions can be divided into groups, where each group maps onto a different subspace of the pattern basis, then the above procedure can tell the groups apart. For instance, the Bernstein-Vazirani problem is simply to determine the unknown $s$, where $f(x) = s \cdot x$. There are $2^n$ such functions (because $s$ is $n$ bits), and each maps onto a unique pattern. Hence, the above procedure will yield exactly one pattern at the output; on measurement we will get exactly that pattern, and hence can identify $s$.
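A sketch of Bernstein-Vazirani along these lines (the hidden string below is an arbitrary example):

```python
import numpy as np
from functools import reduce

n = 3
s = 0b101  # hidden string, chosen arbitrarily for this example

# f(x) = s . x mod 2
f = lambda x: bin(s & x).count("1") % 2

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)

state = np.zeros(2**n)
state[0] = 1.0
state = Hn @ state                                             # all-positive pattern
state = np.array([(-1) ** f(x) for x in range(2**n)]) * state  # phase query
state = Hn @ state                                             # back to pattern basis

# All the amplitude lands on the basis state indexed by s,
# so a single query and measurement reveal s.
assert np.isclose(abs(state[s]), 1.0)
```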

The Deutsch-Jozsa algorithm is a bit more complicated. Here, we have two subsets of functions, balanced and constant. As it turns out, constant functions map only onto the all-positive pattern, while balanced functions never map onto it. Hence, the presence or absence of the all-positive pattern on measurement indicates a constant or a balanced function, respectively.
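And the same machinery for Deutsch-Jozsa (the two example functions are arbitrary representatives of their classes):

```python
import numpy as np
from functools import reduce

n = 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)

def pattern_strengths(f):
    """Run the Hadamard-query-Hadamard circuit and return the pattern strengths."""
    state = np.zeros(2**n)
    state[0] = 1.0
    state = Hn @ state
    state = np.array([(-1) ** f(x) for x in range(2**n)]) * state
    return Hn @ state

constant = lambda x: 1      # an example constant function
balanced = lambda x: x & 1  # an example balanced function

# Constant f puts all its weight on the all-positive pattern (index 0);
# balanced f puts none there.
assert np.isclose(abs(pattern_strengths(constant)[0]), 1.0)
assert np.isclose(pattern_strengths(balanced)[0], 0.0)
```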