# Introduction to classical error correction

The field of error correction is mainly concerned with information processing using unreliable physical apparatus. In such computing or communication devices, the information is subject to errors, which must be dealt with if the results of the information processing are to be trusted.

While error correction for classical computation and communication has been well developed for many decades, it is an active area of research in the field of quantum computation and communication. This development is necessary: without quantum error correction, we are unlikely to construct useful quantum computers with current and foreseeable levels of engineering technology.

The theory of quantum error correction (QEC) should be learned in tandem with classical error correction (CEC), as some of the most important quantum error correction codes (QECC), namely the Calderbank-Shor-Steane (CSS) codes, are built from classical error correction codes (CECC). Therefore, I review essential CEC theory before proceeding with QEC theory.

## A model for error correction problems

Consider the scenario where Alice wants to send information to Bob via a transmission line/channel. This channel is noisy, so any information (strings of bits, for example) is subject to errors: bits may be flipped or lost.

Therefore, in order to reliably transmit information, Alice and Bob must agree on some method by which errors are detected and possibly corrected. The problem of finding such methods is one instance from the set of error correction problems.

It may seem that the above scenario is specialized to a communication problem, but we can also imagine Alice and Bob as two points in time in a computational circuit, where the different bits in the circuit experience errors between those points.

Finally, note that I have not made any mention of classical or quantum information in the above model, and this model is sufficient for the types of CECCs and QECCs I will discuss below.

## Classical Error Correction

I begin by making a more concrete model of the Alice-Bob scenario above. Suppose that Alice wants to send Bob some information, which is in the form of a long stream of bits, called the data stream. The information has to be sent through a noisy transmission channel, as stated above. A simple such channel is one in which every bit sent via the channel is flipped with probability $p$. Otherwise, with probability $1-p$, the bit is transmitted with no error.

Suppose that Alice wants to send the data stream $101$ to Bob. If she uses the naive strategy and sends the stream as is, the channel could corrupt the bits and, for instance, transform it into $100$. Bob would have no way of knowing whether an error occurred during transmission. This method only succeeds if there is no error at all, which has probability $(1-p)^3$.

### Repetition code

Instead, Alice and Bob can agree on some process that increases the probability of correct transmission. One such method is called the 'repetition code'. In this method, instead of sending $0$, Alice sends the block of bits $000$, and instead of sending $1$, Alice sends the block $111$. This process is called encoding, and formally, we use a bar over the message to indicate its encoded form. Hence, $$\bar{0} = 000, \quad \bar{1} = 111.$$

This means that if the data stream is $101$, then Alice sends the code stream, consisting of three code blocks, $$\overline{101} = 111000111.$$

Suppose that Bob receives the corrupted code stream $101100111$, in which the second and fourth bits have been flipped by the noisy channel. To correct any errors, he applies a majority-voting algorithm to each block of three bits in the received code stream. The first block is $101$, and since there are two 1s in it and one 0, he infers that the first sent bit is likely to be 1. Similarly, he infers from the second and third blocks that the second and third bits are likely to be 0 and 1. Hence, he correctly decodes the message as $101$.

Notice that this method only works if there is at most one bit flip in each block of three. If there are two or more errors in a block, then Bob's inferred bit string will differ from the one Alice intended to send. For example, if the corrupted code stream was $101110111$, then the second block is $110$, which majority-votes to 1, so Bob infers that the message was $111$.

Complete the function repetition_encode that, when passed a bit string, encodes it according to the repetition code.

• Parameters:
message, a str, guaranteed to only contain 0 or 1.
• Returns:
A str that is the encoded version of the message.

def repetition_encode(message):
    pass

See our solution
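Once you have attempted the exercise, you can compare against the following minimal sketch (one possible implementation, not the only reasonable one):

```python
def repetition_encode(message):
    # replace each bit with three copies of itself,
    # so '0' becomes '000' and '1' becomes '111'
    return ''.join(3 * bit for bit in message)

print(repetition_encode('101'))  # prints 111000111
```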

Complete the function repetition_decode that, when passed a bit string, decodes it according to the repetition code. This includes identifying any errors and correcting them.

• Parameters:
received_message, a str, guaranteed to only contain 0 or 1 and to have length divisible by 3.
• Returns:
A str that is the decoded version of the message.

def repetition_decode(received_message):
    pass

See our solution
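Again, here is a minimal sketch for comparison, implementing the majority-vote decoding described above:

```python
def repetition_decode(received_message):
    decoded = ''
    # process the received message in blocks of three bits
    for i in range(0, len(received_message), 3):
        block = received_message[i:i + 3]
        # majority vote: two or more 1s means the sent bit was likely 1
        decoded += '1' if block.count('1') >= 2 else '0'
    return decoded

print(repetition_decode('101100111'))  # prints 101
```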

Here is a script that generates a random message, runs it through the encoder, corrupts the result, and finally decodes it. Use it to check your answers.

from random import randint, random

# these are the two parameters we need to set
message_length = 5
p = 0.1

# create a random message
message = ''.join([str(randint(0, 1)) for i in range(message_length)])
print('message =', message)

# encode the message
encoded_message = repetition_encode(message)
print('encoded_message =', encoded_message)

# send it through the noisy channel: each bit flips with probability p
corrupted_message = ''
for c in encoded_message:
    if random() < p:
        if c == '0':
            corrupted_message += '1'
        else:
            corrupted_message += '0'
    else:
        corrupted_message += c
print('corrupted_message =', corrupted_message)

# decode it
decoded_message = repetition_decode(corrupted_message)
print('decoded_message =', decoded_message)
print('Is message = decoded_message?', message == decoded_message)

### Probability analysis

Using the repetition code, what is the probability that Bob infers the correct bit string? Each block of three is treated separately, so we only need to compute the probability of correct inference for a single block. Let $p$ be the probability of error on a single bit, and assume that errors on different bits occur independently.

What is the probability of

• zero errors,
• one error,
• two errors,
• three errors?

See our solution
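As a numerical check on your by-hand answers: since the three bits flip independently, the number of errors in a block follows a binomial distribution, and each case can be computed with math.comb. (The helper name pr_k_errors below is just for illustration.)

```python
from math import comb

def pr_k_errors(k, p, n=3):
    # binomial probability of exactly k flips among n independent bits,
    # each flipping with probability p
    return comb(n, k) * p**k * (1 - p)**(n - k)

p = 0.1
for k in range(4):
    print(f'{k} errors: {pr_k_errors(k, p):.4f}')
```

For $p = 0.1$ this gives 0.7290, 0.2430, 0.0270, and 0.0010 for zero through three errors; as a sanity check, the four cases sum to 1.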

Consider the case when Alice and Bob do not use any error correction. If Alice sends a single bit, the probability that Bob infers the wrong bit, plotted against the physical probability of error $p$, is as follows. Unsurprisingly, it is a straight line.

import matplotlib.pyplot as plt
import numpy as np

# probability of error on transmitted bit
p = np.linspace(0, 1, 100)
# probability of incorrect inference by Bob
pr_of_incorrect_inference = p

_, ax = plt.subplots(figsize=(5, 5), layout='constrained');
ax.plot(p, pr_of_incorrect_inference);
ax.set_xlabel('probability of error on transmitted bit');
ax.set_ylabel('probability of incorrect inference by Bob');

When Alice and Bob do use error correction, plot the probability of incorrect inference by Bob vs the probability of error on a single bit $p$. Keep the line drawn above, so you can compare the two cases. For what values of $p$ is it beneficial to use error correction?



_, ax = plt.subplots(figsize=(5, 5), layout='constrained');
ax.plot(p, pr_of_incorrect_inference, label='no error correction');
# you can uncomment the line below to add another plot to the same figure

# ax.plot(p, pr_of_incorrect_inference, label='with error correction');

ax.set_xlabel('probability of error on transmitted bit');
ax.set_ylabel('probability of incorrect inference by Bob');
plt.legend();

See our solution

Hopefully, you were able to show that if the probability of error is sufficiently low, then the error correction procedure helps in successfully transmitting the message.
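To make the comparison concrete: majority voting fails on a block exactly when two or three of its bits flip, so the failure probability is $3p^2(1-p) + p^3 = 3p^2 - 2p^3$, which lies below $p$ precisely when $p < 1/2$. A quick numerical check of this claim:

```python
import numpy as np

p = np.linspace(0, 1, 101)
# block decoding fails if two or three of the three bits flip
pr_fail = 3 * p**2 * (1 - p) + p**3

# values of p at which error correction strictly helps
helps = p[pr_fail < p]
print(helps.min(), helps.max())  # every sampled p from 0.01 up to 0.49
```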

• A NOT operation, that takes $\bar{0}$ to $\bar{1}$, and $\bar{1}$ to $\bar{0}$.
• An OR operation, that acts on two encoded bits as follows:

| Input | Output |
| --- | --- |
| $\overline{00}$ | $\overline{0}$ |
| $\overline{01}$ | $\overline{1}$ |
| $\overline{10}$ | $\overline{1}$ |
| $\overline{11}$ | $\overline{1}$ |