Locally testable code
A locally testable code is a type of error-correcting code for which membership of a string in the code can be tested by looking at only a small (often constant) number of bits of the string. In some situations it is useful to know whether data is corrupted without decoding all of it, so that appropriate action can be taken in response. For example, in communication, if the receiver detects a corrupted codeword, it can request that the data be re-sent, improving the reliability of the transmission. Similarly, in data storage, these codes allow damaged data to be detected so it can be recovered and rewritten properly.
In contrast, locally decodable codes use a small number of bits of the codeword to probabilistically recover the original information. The fraction of errors determines how likely it is that the decoder correctly recovers the original bit; however, not all locally decodable codes are locally testable.[1]
Clearly, any valid codeword should be accepted as a codeword, but a string that is not a codeword could differ from a codeword in only a single bit, and detecting this would require many (certainly more than a constant number of) probes. To account for this, a test is only required to fail when the string is off by at least a set fraction of its bits. This implies that codewords must be longer than the input strings, i.e., some redundancy is added.
Definition
To measure the distance between two strings $x, y \in \{0,1\}^n$, the Hamming distance is used: $\Delta(x, y) = |\{\, i : x_i \neq y_i \,\}|$.
The distance of a string $x$ from a code $C$ is the distance to the nearest codeword: $\Delta(x, C) = \min_{y \in C} \Delta(x, y)$.
Relative distances are computed as a fraction of the number of bits: $\delta(x, y) = \Delta(x, y)/n$ and $\delta(x, C) = \Delta(x, C)/n$.
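As a concrete illustration of these definitions, here is a minimal Python sketch (the function names are illustrative, not from any standard library):

```python
# Minimal sketch of the distance definitions above (function names are illustrative).

def hamming_distance(x, y):
    """Delta(x, y): the number of positions in which x and y differ."""
    assert len(x) == len(y)
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def distance_to_code(x, code):
    """Delta(x, C): the distance from x to the nearest codeword in C."""
    return min(hamming_distance(x, y) for y in code)

def relative_distance(x, y):
    """delta(x, y) = Delta(x, y) / n, as a fraction of the n bits."""
    return hamming_distance(x, y) / len(x)

def relative_distance_to_code(x, code):
    """delta(x, C) = Delta(x, C) / n."""
    return distance_to_code(x, code) / len(x)

# Example with the 3-bit repetition code {000, 111}:
code = {"000", "111"}
print(hamming_distance("001", "000"))          # 1
print(relative_distance_to_code("011", code))  # 0.333... (nearest codeword is 111)
```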
A code $C \subseteq \{0,1\}^n$ is called $q$-local $\delta$-testable if there exists a Turing machine $M$ that, given random access to an input $w$, makes at most $q$ non-adaptive queries to $w$ and satisfies the following:[2]
- For every $w \in C$, $M$ accepts with probability 1.
- For every $w$ with $\delta(w, C) > \delta$ (that is, $w$ is $\delta$-far from the code), $M$ rejects with probability at least 1/2.
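The following Python sketch shows the shape of such a tester (the helpers `choose_positions` and `decide` are hypothetical placeholders for code-specific logic): the at most $q$ query positions are fixed before any bit of $w$ is read, which is what "non-adaptive" means, and the two conditions above constrain how often the final decision may err.

```python
# Skeleton of a q-local tester (illustrative, not a standard API): the query
# positions are chosen up front, independent of the bits that will be read.

def run_local_test(w, q, choose_positions, decide):
    n = len(w)
    positions = choose_positions(n, q)   # at most q indices, chosen non-adaptively
    assert len(positions) <= q
    values = [w[i] for i in positions]   # random access: only these bits are read
    return decide(positions, values)     # True = accept, False = reject
```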
Limits
It remains an open question whether there are any locally testable codes of linear size, but there are several constructions that are considered "nearly linear":[3]
- Polynomial arbitrarily close to linear: for any $\varepsilon > 0$, $n = k^{1+\varepsilon}$.
- Functions of the form $n = k^{1+\varepsilon(k)}$, where $\varepsilon(k)$ is a function tending toward 0. This makes $n$ closer to linear as $k$ increases. For example, $\varepsilon(k) = 1/\log\log k$ yields $n = k^{1 + 1/\log\log k}$, which is $k^{1+o(1)}$.
These have both been achieved, even with constant query complexity and a binary alphabet, such as with $n = k \cdot 2^{(\log k)^{c}}$ for any $c \in (0,1)$. The next nearly linear goal is linear up to a polylogarithmic factor: $n = k \cdot \operatorname{poly}(\log k)$. Nobody has yet come up with a locally testable code that satisfies this constraint.[3]
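As a quick check of why a bound of this shape counts as nearly linear (my own arithmetic, assuming the form $n = k \cdot 2^{(\log k)^{c}}$ quoted above and taking logarithms base 2):
$$k \cdot 2^{(\log k)^{c}} = k \cdot k^{(\log k)^{c-1}} = k^{1 + (\log k)^{c-1}}, \qquad (\log k)^{c-1} \to 0 \text{ as } k \to \infty \text{ when } c < 1,$$
so it is of the form $k^{1+\varepsilon(k)}$ with $\varepsilon(k)$ tending toward 0.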
Parallels
Locally testable codes have a lot in common with probabilistically checkable proofs (PCPs), as should be apparent from the similarity of their definitions. In both, we are given $q$ random non-adaptive queries into a large string; if we are to accept, we must accept with probability 1, and if not, we may accept no more than half the time. The major difference is that a PCP verifier accepts an input $x$ if there exists some proof $\pi$ such that $M^{\pi}(x)$ accepts, whereas a tester for a locally testable code accepts a string $w$ only if it is part of the code. Many things can go wrong in assuming a PCP proof encodes a locally testable code; for example, the PCP definition says nothing about invalid proofs, only invalid inputs.
Despite this difference, locally testable codes and PCPs are similar enough that, frequently, to construct one, a prover will construct the other along the way.[4]
Examples
Hadamard code
One of the most famous error-correcting codes, the Hadamard code, is a locally testable code. A message $x \in \{0,1\}^k$ is encoded in the Hadamard code as the linear function $f_x(y) = \langle x, y \rangle = \sum_i x_i y_i \pmod 2$. This requires listing out the result of this function for every possible $y \in \{0,1\}^k$, which takes exponentially more bits than the input. To test whether a string $w$ is a codeword of the Hadamard code, all we have to do is test whether the function it encodes is linear. This means simply checking whether $w(y) + w(z) \equiv w(y \oplus z) \pmod 2$ for uniformly random vectors $y$ and $z$ (where $\oplus$ denotes bitwise XOR).
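A minimal Python sketch of this encoding and of one round of the test (the names `hadamard_encode` and `blr_linearity_test` are my own; indices are interpreted as bit vectors, so XOR of indices is bitwise XOR of the vectors):

```python
import random

def hadamard_encode(x):
    """Hadamard encoding of a k-bit message x: the value <x, y> mod 2,
    listed for every y in {0,1}^k, so the codeword has length 2^k."""
    k = len(x)
    codeword = []
    for y_index in range(2 ** k):
        y = [(y_index >> i) & 1 for i in range(k)]
        codeword.append(sum(xi * yi for xi, yi in zip(x, y)) % 2)
    return codeword

def blr_linearity_test(w, k):
    """One round of the linearity test: pick random y and z and check
    w(y) + w(z) = w(y XOR z) (mod 2), reading only three bits of w."""
    y = random.randrange(2 ** k)
    z = random.randrange(2 ** k)
    return (w[y] + w[z]) % 2 == w[y ^ z]

# A valid codeword always passes the test:
w = hadamard_encode([1, 0, 1])
assert all(blr_linearity_test(w, 3) for _ in range(1000))
```

A string that is far from every codeword, by contrast, fails a single round with the probability quantified next.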
It is easy to see that any valid encoding $f_x$ satisfies the linearity equation above, as that is exactly the definition of a linear function. Somewhat harder, however, is showing that a string $w$ at relative distance $\delta$ from $C$ fails the test with probability bounded below in terms of $\delta$. One bound is found by the direct approach of estimating the chance that exactly one of the three probes yields an incorrect result. Let $E_1$, $E_2$, and $E_3$ be the events of $w(y)$, $w(z)$, and $w(y \oplus z)$ disagreeing with the nearest codeword, and let $E$ be the event that exactly one of them occurs; whenever $E$ occurs, the test rejects. Since the three probe points are uniform and pairwise independent, this comes out to $\Pr[E] \geq 3\delta - 6\delta^2$.
This works for small $\delta$, but once $\delta$ exceeds $1/3$ the bound $3\delta - 6\delta^2$ drops below $\delta$, and it becomes vacuous as $\delta$ approaches $1/2$. With additional work, it can be shown that a $\delta$-far string is rejected with probability at least
$$\begin{cases} 3\delta - 6\delta^{2} & 0 \le \delta \le 5/16, \\ 45/128 & 5/16 \le \delta \le 45/128, \\ \delta & 45/128 \le \delta \le 1/2. \end{cases}$$
For any given $\delta$, a single test therefore has only a constant chance of a false positive (bounded away from 1), so we can simply run it a constant number of times to drive the probability of accepting a $\delta$-far string below 1/2.[5]
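A small sketch of that repetition step (reusing the hypothetical `blr_linearity_test` from the Hadamard sketch above; the round count is illustrative):

```python
def repeat_test(single_round_test, w, rounds):
    """Accept only if every round accepts. If one round rejects a delta-far
    string with probability at least p, then t rounds accept it with
    probability at most (1 - p)**t, which is below 1/2 once t >= 1/p
    (since (1 - p)**(1/p) <= 1/e < 1/2)."""
    return all(single_round_test(w) for _ in range(rounds))

# Example with the Hadamard sketch above:
#   w = hadamard_encode([1, 0, 1])
#   repeat_test(lambda s: blr_linearity_test(s, 3), w, rounds=10)
```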
Long code
The Long code is another code with a very large blowup which is close to locally testable. Given an input $i \le k$ (note, this takes only $\log k$ bits to represent), the function that returns the $i$th bit of the input, $f_i(x) = x_i$, is evaluated on all possible $k$-bit inputs $x \in \{0,1\}^k$, and the codeword is the concatenation of these values (of length $2^k$). The way to locally test this with some errors is to pick a uniformly random input $x$ and set $y = x$, but with a small chance $\epsilon$ of flipping each bit independently. Accept a function $f$ as a codeword if $f(x) = f(y)$. If $f$ is a codeword, this will accept as long as $x_i$ was unchanged, which happens with probability $1 - \epsilon$. This violates the requirement that codewords are always accepted, but may be good enough for some needs.[6]
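A minimal Python sketch of the Long code and this noisy test (function names and the parameter choices are my own illustration):

```python
import random

def long_code_encode(i, k):
    """Long code of the index i (0 <= i < k): the truth table of the
    dictator function f_i(x) = x_i over all 2^k inputs x in {0,1}^k."""
    return [(x_index >> i) & 1 for x_index in range(2 ** k)]

def noisy_dictator_test(w, k, eps=0.1):
    """One round of the test: pick a random x, flip each bit of x
    independently with probability eps to get y, and accept iff w(x) == w(y).
    A true codeword f_i passes whenever bit i was not flipped (prob. 1 - eps)."""
    x = [random.randrange(2) for _ in range(k)]
    y = [bit ^ (1 if random.random() < eps else 0) for bit in x]
    x_index = sum(bit << b for b, bit in enumerate(x))
    y_index = sum(bit << b for b, bit in enumerate(y))
    return w[x_index] == w[y_index]

# The Long code of i = 2 with k = 4 has length 2^4 = 16; the test accepts it
# in roughly a 1 - eps = 0.9 fraction of rounds.
w = long_code_encode(2, 4)
print(sum(noisy_dictator_test(w, 4) for _ in range(10_000)) / 10_000)
```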
Other locally testable codes include Reed-Muller codes (see locally decodable codes for a decoding algorithm), Reed-Solomon codes, and the short code.
References