Binary erasure channel
{{redirect|Erasure channel|another method|Packet erasure channel}}
[[File:Binaryerasurechannel.png|thumb|The binary erasure channel]]
In coding theory and information theory, a '''binary erasure channel''' ('''BEC''') is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly, or with some probability <math>P_e</math> receives a message that the bit was not received ("erased").
==Definition==
A binary erasure channel with erasure probability <math>P_e</math> is a channel with binary input, ternary output, and probability of erasure <math>P_e</math>. That is, let <math>X</math> be the transmitted random variable with alphabet <math>\{0, 1\}</math>. Let <math>Y</math> be the received variable with alphabet <math>\{0, 1, \text{e}\}</math>, where <math>\text{e}</math> is the erasure symbol. Then, the channel is characterized by the conditional probabilities:{{sfnp|MacKay|2003|p=148}}
:<math>\begin{align}
\operatorname{Pr}[Y = 0 | X = 0] &= 1 - P_e \\
\operatorname{Pr}[Y = 0 | X = 1] &= 0 \\
\operatorname{Pr}[Y = 1 | X = 0] &= 0 \\
\operatorname{Pr}[Y = 1 | X = 1] &= 1 - P_e \\
\operatorname{Pr}[Y = \text{e} | X = 0] &= P_e \\
\operatorname{Pr}[Y = \text{e} | X = 1] &= P_e
\end{align}</math>
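The channel acts on each transmitted bit independently: with probability <math>P_e</math> the output is the erasure symbol, and otherwise the bit passes through unchanged (it is never flipped). The following Python sketch illustrates this behaviour; the function name <code>bec</code> and the use of the string <code>'e'</code> as the erasure symbol are illustrative choices, not part of any standard library.

<syntaxhighlight lang="python">
import random

def bec(bit, erasure_prob, rng=random):
    """Pass one bit through a binary erasure channel.

    With probability erasure_prob the output is the erasure symbol 'e';
    otherwise the bit is delivered unchanged (it is never flipped).
    """
    if rng.random() < erasure_prob:
        return 'e'
    return bit

# Example: send 10 random bits through a BEC with P_e = 0.3.
random.seed(0)
sent = [random.randint(0, 1) for _ in range(10)]
received = [bec(b, 0.3) for b in sent]
print(sent)
print(received)
</syntaxhighlight>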
==Capacity==
The channel capacity of a BEC is <math>1 - P_e</math>, attained with a uniform distribution for the input <math>X</math> (i.e. half of the inputs should be 0 and half should be 1).{{sfnp|MacKay|2003|p=158}}
:{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof{{sfnp|MacKay|2003|p=158}}
|-
|By symmetry of the input values, the optimal input distribution is <math>X \sim \text{Bernoulli}\left(\tfrac{1}{2}\right)</math>. The channel capacity is:
:<math>\operatorname{I}(X;Y) = \operatorname{H}(X) - \operatorname{H}(X|Y)</math>
Observe that, for the binary entropy function <math>\operatorname{H}_\text{b}</math> (which has value 1 for input <math>\tfrac{1}{2}</math>),
:<math>\operatorname{H}(X|Y) = P_e \operatorname{H}_\text{b}\!\left(\tfrac{1}{2}\right) = P_e,</math>
as <math>X</math> is known from (and equal to) <math>Y</math> unless <math>Y = \text{e}</math>, which has probability <math>P_e</math>. By definition <math>\operatorname{H}(X) = \operatorname{H}_\text{b}\!\left(\tfrac{1}{2}\right) = 1</math>, so
:<math>\operatorname{I}(X;Y) = 1 - P_e.</math>
|}
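The capacity formula can also be checked numerically by computing the mutual information <math>\operatorname{I}(X;Y)</math> directly from the joint distribution for a uniform input. The sketch below is illustrative only; <code>bec_mutual_information</code> is a hypothetical helper written for this example.

<syntaxhighlight lang="python">
from math import log2

def bec_mutual_information(p_e):
    """Mutual information I(X;Y) of a BEC with uniform input P(X=0)=P(X=1)=1/2.

    Computed from the joint distribution P(x, y) = P(x) * P(y | x) as the sum of
    P(x, y) * log2( P(x, y) / (P(x) * P(y)) ) over all pairs with P(x, y) > 0.
    """
    px = {0: 0.5, 1: 0.5}
    # Channel transition probabilities P(y | x), keyed as (y, x).
    pyx = {
        (0, 0): 1 - p_e, (1, 0): 0.0, ('e', 0): p_e,
        (0, 1): 0.0, (1, 1): 1 - p_e, ('e', 1): p_e,
    }
    py = {y: sum(px[x] * pyx[(y, x)] for x in px) for y in (0, 1, 'e')}
    total = 0.0
    for (y, x), p_cond in pyx.items():
        pxy = px[x] * p_cond
        if pxy > 0:
            total += pxy * log2(pxy / (px[x] * py[y]))
    return total

for p_e in (0.0, 0.25, 0.5, 0.9):
    # The computed mutual information agrees with the capacity 1 - P_e.
    print(p_e, bec_mutual_information(p_e), 1 - p_e)
</syntaxhighlight>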
If the sender is notified when a bit is erased, they can repeatedly transmit each bit until it is correctly received, attaining the capacity <math>1 - P_e</math>. However, by the noisy-channel coding theorem, the capacity of <math>1 - P_e</math> can be obtained even without such feedback.{{sfnp|Cover|Thomas|1991|p=189}}
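With such feedback, the number of channel uses needed to deliver one bit is geometrically distributed with mean <math>1/(1 - P_e)</math>, which corresponds to a rate of <math>1 - P_e</math> bits per channel use. A brief simulation (an illustrative sketch, using the hypothetical helper <code>transmissions_needed</code>) makes this concrete.

<syntaxhighlight lang="python">
import random

def transmissions_needed(p_e, rng):
    """Number of channel uses until one bit gets through a BEC with feedback."""
    count = 1
    while rng.random() < p_e:  # bit erased, so retransmit
        count += 1
    return count

rng = random.Random(1)
p_e = 0.3
trials = 100_000
avg = sum(transmissions_needed(p_e, rng) for _ in range(trials)) / trials
# On average about 1/(1 - P_e) uses per bit, i.e. a rate of about 1 - P_e bits per use.
print(avg, 1 / (1 - p_e))
</syntaxhighlight>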
==Related channels==
If bits are flipped rather than erased, the channel is a binary symmetric channel (BSC), which has capacity <math>1 - \operatorname{H}_\text{b}(P_e)</math> (for the binary entropy function <math>\operatorname{H}_\text{b}</math>), which is less than the capacity of the BEC for <math>0 < P_e < 1/2</math>.
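The comparison can be verified numerically; in the sketch below, <code>binary_entropy</code> is a helper defined for this example rather than a library function.

<syntaxhighlight lang="python">
from math import log2

def binary_entropy(p):
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.25, 0.4):
    bec_capacity = 1 - p                   # erasures
    bsc_capacity = 1 - binary_entropy(p)   # bit flips
    print(p, bec_capacity, bsc_capacity)   # BSC capacity is smaller for 0 < p < 1/2
</syntaxhighlight>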
==History==
The BEC was introduced by Peter Elias of MIT in 1955 as a toy example.{{cn|date=July 2020}}
==See also==
* [[Packet erasure channel]]
* [[Binary symmetric channel]]
==Notes==
{{reflist}}
==References==
* {{cite book |first1=Thomas M. |last1=Cover |first2=Joy A. |last2=Thomas |title=Elements of Information Theory |publisher=Wiley |location=Hoboken, New Jersey |isbn=978-0-471-24195-9 |year=1991}}
* {{cite book |last=MacKay|first=David J.C. |author-link=David J. C. MacKay|url=http://www.inference.phy.cam.ac.uk/mackay/itila/book.html|title=Information Theory, Inference, and Learning Algorithms|publisher=Cambridge University Press|year=2003|isbn=0-521-64298-1}}
* {{citation
| last = Mitzenmacher | first = Michael | authorlink = Michael Mitzenmacher
| doi = 10.1214/08-PS141
| journal = Probability Surveys
| mr = 2525669
| pages = 1–33
| title = A survey of results for deletion channels and related synchronization channels
| volume = 6
| year = 2009| doi-access = free
}}