Tagged: information theory

  • CG 11:03 pm on May 24, 2014 Permalink | Reply
    Tags: convolutional code, information theory, trellis diagram, viterbi

    Convolutional Code 

    Unlike block codes, a convolutional code does not send the message followed by (or interspersed with) the parity bits. In a convolutional code, the sender sends only the parity bits.

    The encoder uses a sliding window to calculate r > 1 parity bits by combining various subsets of bits in the window. The size of the window, in bits, is called the code’s constraint length. The longer the constraint length, the larger the number of parity bits that are influenced by any given message bit. Because the parity bits are the only bits sent over the channel, a larger constraint length generally implies greater resilience to bit errors. The trade-off is that decoding takes more time as the constraint length grows.

    If a convolutional code produces r parity bits per window and slides the window forward by one bit at a time, its rate is 1/r. The greater the value of r, the higher the resilience to bit errors, but the trade-off is that a proportionally larger amount of communication bandwidth is devoted to coding overhead. In practice, it is common to pick r and the constraint length to be as small as possible while still giving a low enough probability of bit error.

    This is an example of a convolutional code with rate 1/2:
    (figure: rate-1/2 convolutional encoder)

    This is a systematic rate-1/3 encoder:
    (figure: systematic rate-1/3 encoder)

    And this is a rate-2/3 encoder:
    (figure: rate-2/3 encoder)

    To make a convolutional code able to cope with more errors, we do not change k or n (the code rate k/n), but instead add more memory inside the encoder (the squares labelled D). Note that the circles with a plus symbol (the XOR operations) are the generator functions that determine each output; they work just like the generators of other error-correction codes.
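
    As an illustration (not the exact encoders in the figures above), here is a minimal sketch in Python of a rate-1/2 convolutional encoder with constraint length 3. The generator taps 0b111 and 0b101 are a common textbook choice and an assumption of mine, not taken from the diagrams:

        # Minimal sketch: rate-1/2 convolutional encoder, constraint length 3.
        # The generator taps (0b111, 0b101) are an assumed textbook example.
        def conv_encode(bits, generators=(0b111, 0b101), constraint=3):
            """Slide a window over the message; emit r parity bits per input bit."""
            window = 0
            out = []
            for b in bits:
                # shift the new bit into the window, keep only `constraint` bits
                window = ((window << 1) | b) & ((1 << constraint) - 1)
                for g in generators:
                    # each parity bit is the XOR of the window bits selected by one generator
                    out.append(bin(window & g).count("1") % 2)
            return out

        message = [1, 0, 1, 1, 0]
        print(conv_encode(message))   # 2 parity bits per message bit -> rate 1/2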

    This is the Trellis Diagram of the first convolutional code:
    (figure: trellis diagram)

    A trellis diagram is a graph whose nodes are ordered into vertical slices (time steps), with each node connected to at least one node at an earlier time and at least one node at a later time. The earliest and latest times in the trellis have only one node.

    This diagram gives both “spatial” and “temporal” information about the code. “Temporally” it shows the encoder state at each time t, while “spatially” it shows every possible route allowed by the encoder design. Decoding is then done by comparing the received “route” against the most likely valid route, to track down errors and correct them.

    Viterbi is a decoding scheme based on maximum-likelihood decoding: it compares the received “route” against the valid routes through the trellis and keeps the one with the smallest deviation.
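
    To make the idea concrete, here is a minimal sketch of hard-decision Viterbi decoding for the toy rate-1/2, constraint-length-3 encoder sketched earlier (the taps 0b111 and 0b101 are still my assumption). Each trellis column keeps, for every encoder state, the survivor path with the smallest accumulated deviation (Hamming distance) from the received parity bits:

        GENS, K = (0b111, 0b101), 3
        N_STATES = 1 << (K - 1)              # encoder memory = K - 1 bits

        def parity(x):
            return bin(x).count("1") % 2

        def branch(state, bit):
            """Parity bits sent, and next state, when `bit` enters memory state `state`."""
            window = (state << 1) | bit
            return [parity(window & g) for g in GENS], window & (N_STATES - 1)

        def viterbi_decode(received):
            metrics, paths = {0: 0}, {0: []}          # start in the all-zero state
            for t in range(0, len(received), len(GENS)):
                chunk = received[t:t + len(GENS)]
                new_metrics, new_paths = {}, {}
                for state, m in metrics.items():
                    for bit in (0, 1):
                        out, nxt = branch(state, bit)
                        cost = m + sum(o != r for o, r in zip(out, chunk))
                        if nxt not in new_metrics or cost < new_metrics[nxt]:
                            new_metrics[nxt], new_paths[nxt] = cost, paths[state] + [bit]
                metrics, paths = new_metrics, new_paths
            best = min(metrics, key=metrics.get)      # route with the smallest deviation
            return paths[best]

        received = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]     # encoder output for [1, 0, 1, 1, 0]
        print(viterbi_decode(received))               # -> [1, 0, 1, 1, 0]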

    References:

    1. MIT 6.02 DRAFT Lecture Notes, Fall 2010 (Last update: October 4, 2010)
    2. http://en.wikipedia.org/wiki/Trellis_(graph)
    3. “Error Control Coding”, Shu Lin and Daniel J. Costello
     
  • CG 11:44 pm on January 10, 2014 Permalink | Reply
    Tags: channel coding, error correction, information theory, reed solomon   

    Reed Solomon – A Brief Introduction 

    Reed-Solomon is a brilliant error-correction method: a non-binary cyclic code. The codeword looks like the picture below. Its symbols are m bits each; the bit string is treated as groups of bits, and each group is treated as a single non-binary symbol. We will see how this makes the method powerful.

    Reed Solomon

    Let’s take RS(15,11) as an example. This means the codeword is 15 symbols long and consists of 11 original message symbols and 4 parity symbols (with m = 4, each symbol is 4 bits). t is the number of errors, counted in symbols (groups of bits), that can be corrected; here t = (15 − 11)/2 = 2.

    The irreducible polynomial p(x) is used to generate the finite field, as shown in the table below.

    RS(15,11)
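
    As a sketch of how such a field table can be generated (assuming the common primitive polynomial p(x) = x^4 + x + 1, which may not be the exact one used in the table above):

        PRIM = 0b10011                  # p(x) = x^4 + x + 1
        exp_table, log_table = [0] * 15, [0] * 16

        x = 1                           # alpha^0
        for i in range(15):             # successive powers of the primitive element alpha
            exp_table[i] = x
            log_table[x] = i
            x <<= 1                     # multiply by alpha (i.e. by x)
            if x & 0b10000:             # degree reached 4: reduce modulo p(x)
                x ^= PRIM

        for i, v in enumerate(exp_table):
            print(f"alpha^{i:2d} = {v:04b}")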

    Now we need a generator polynomial to start the encoding process.

    Generator

    RS Encoding Process 1

    RS Encoding Process 2

    RS Encoding Process 3
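
    The encoding steps above amount to polynomial division: the four parity symbols are the remainder of m(x)·x^4 divided by the generator polynomial g(x). Below is a minimal sketch of systematic RS(15,11) encoding; it assumes g(x) has the roots α^1..α^4 and the primitive polynomial x^4 + x + 1, which may differ from the exact convention in the figures:

        PRIM = 0b10011
        EXP, LOG = [0] * 30, [0] * 16
        x = 1
        for i in range(15):                     # GF(2^4) antilog/log tables, as in the field sketch
            EXP[i] = EXP[i + 15] = x
            LOG[x] = i
            x <<= 1
            if x & 0b10000:
                x ^= PRIM

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def poly_mul(p, q):                     # polynomial product over GF(16)
            out = [0] * (len(p) + len(q) - 1)
            for i, a in enumerate(p):
                for j, b in enumerate(q):
                    out[i + j] ^= gf_mul(a, b)
            return out

        def poly_eval(p, x_val):                # Horner evaluation, highest-degree term first
            acc = 0
            for c in p:
                acc = gf_mul(acc, x_val) ^ c
            return acc

        g = [1]                                 # g(x) = (x - a)(x - a^2)(x - a^3)(x - a^4)
        for i in range(1, 5):
            g = poly_mul(g, [1, EXP[i]])        # minus equals plus in GF(2^m)

        def rs_encode(msg):                     # msg: 11 symbols, each 0..15
            rem = list(msg) + [0] * 4           # m(x) * x^4
            for i in range(11):                 # long division by the monic g(x)
                coef = rem[i]
                if coef:
                    for j in range(1, 5):
                        rem[i + j] ^= gf_mul(coef, g[j])
            return list(msg) + rem[11:]         # message followed by 4 parity symbols

        codeword = rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
        print(codeword)
        assert all(poly_eval(codeword, EXP[i]) == 0 for i in range(1, 5))   # alpha^1..alpha^4 are roots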

    The decoding process includes several steps. Let’s take the example of a double-symbol error.

    RS Decoding

    The first step is computing the syndromes to detect errors. In this case we have 4 syndromes: S1, S2, S3 and S4. If any of them is non-zero, the received word contains an error.

    RS Syndrome Computation

    The second step is locating the errors in e(x). This can be done by solving a matrix equation.

    RS Error Locating

    Once the locations have been determined, it is time to calculate the error values so the errors can be corrected.

    RS Error Values
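
    Here is a minimal sketch of these three decoding steps for a double-symbol error (syndromes, the 2x2 locator matrix, and the error values), under the same assumed field and root conventions as the encoding sketch. To keep it short, the received word is the all-zero codeword plus two injected errors, so r(x) = e(x):

        PRIM = 0b10011
        EXP, LOG = [0] * 30, [0] * 16
        x = 1
        for i in range(15):                     # GF(2^4) tables, as before
            EXP[i] = EXP[i + 15] = x
            LOG[x] = i
            x <<= 1
            if x & 0b10000:
                x ^= PRIM

        def mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def inv(a):
            return EXP[(15 - LOG[a]) % 15]

        def poly_eval(p, x_val):                # highest-degree coefficient first
            acc = 0
            for c in p:
                acc = mul(acc, x_val) ^ c
            return acc

        # received word = all-zero codeword + two symbol errors (values 7 and 12
        # at the terms x^11 and x^5); any valid codeword would give the same syndromes
        err = {11: 7, 5: 12}
        received = [0] * 15                     # coefficients of x^14 ... x^0
        for deg, val in err.items():
            received[14 - deg] = val

        # Step 1: syndromes S1..S4 = r(alpha^i); all zero would mean no error detected
        S = [poly_eval(received, EXP[i]) for i in range(1, 5)]

        # Step 2: solve [S1 S2; S2 S3][L2 L1]^T = [S3 S4]^T (Cramer's rule) for the
        # error-locator polynomial L(x) = 1 + L1*x + L2*x^2
        det = mul(S[0], S[2]) ^ mul(S[1], S[1])
        L1 = mul(mul(S[0], S[3]) ^ mul(S[1], S[2]), inv(det))
        L2 = mul(mul(S[2], S[2]) ^ mul(S[1], S[3]), inv(det))

        # a term x^j is in error when L(alpha^-j) = 0 (Chien-style search)
        locs = [(j, EXP[j]) for j in range(15)
                if poly_eval([L2, L1, 1], EXP[(15 - j) % 15]) == 0]

        # Step 3: error values from e1*X1 + e2*X2 = S1 and e1*X1^2 + e2*X2^2 = S2
        (j1, X1), (j2, X2) = locs
        d = mul(mul(X1, X2), X1 ^ X2)
        e1 = mul(mul(S[0], mul(X2, X2)) ^ mul(S[1], X2), inv(d))
        e2 = mul(mul(S[0], mul(X1, X1)) ^ mul(S[1], X1), inv(d))

        print({j1: e1, j2: e2})                 # -> {5: 12, 11: 7}, the injected errors
        assert {j1: e1, j2: e2} == err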

    Done; the message is corrected. Do you notice how working on symbols (groups of bits) rather than individual bits makes this method run faster?

     
    • ravi 1:41 pm on March 20, 2016 Permalink | Reply

      sir plz help me out wit the code it really cost me much……..

  • CG 10:24 pm on May 24, 2013 Permalink | Reply
    Tags: BCH code, cyclic code, digital system, hamming code, information theory, linear block code, quad tree, reed muller

    Principles and Analogies of Error Correction Codes 

    I use a very abstract, visual way to explain different error-correction algorithms. Here I will over-simplify every algorithm and use visualizations instead, just to convey the principle of how they work, and why.

    Linear Block Code

    In a linear block code, every codeword has n − k redundant check bits and k message bits.

    Linear Block Code

    The generator matrix looks like the one below; it contains an identity sub-matrix, and the remaining (check) part is later used to generate the syndrome that identifies errors.

    Linear Block Code Table
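
    A minimal sketch of this idea with a toy (6,3) code of my own (not the code from the table): in systematic form the generator is G = [I | P] and the parity-check matrix is H = [P^T | I], so the syndrome r·H^T is zero exactly when r is a codeword:

        P = [[1, 1, 0],                        # the non-identity ("check") part of G
             [0, 1, 1],
             [1, 0, 1]]
        k, m = 3, 3                            # k message bits, m = n - k check bits

        def encode(msg):                       # codeword = [message | message * P] (mod 2)
            checks = [sum(msg[i] * P[i][j] for i in range(k)) % 2 for j in range(m)]
            return msg + checks

        def syndrome(word):                    # s = r * H^T with H = [P^T | I] (mod 2)
            msg, checks = word[:k], word[k:]
            return [(sum(msg[i] * P[i][j] for i in range(k)) + checks[j]) % 2
                    for j in range(m)]

        c = encode([1, 0, 1])
        print(c, syndrome(c))                  # syndrome [0, 0, 0]: a valid codeword
        c[1] ^= 1                              # flip one bit
        print(c, syndrome(c))                  # non-zero syndrome reveals the error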

    Hamming Code
    The picture below shows how to identify an error with even parity bits. The bits in black are the message, the bits in green are the parity bits that make the number of 1s in each circle even, and the bit in red is the error, which can later be identified because the overlapping circle regions show where the error occurred.

    Hamming Code Diagram

    The Hamming code works as shown in the table below: the parity bits (unlike in the previous method) are inserted at positions 2^0, 2^1, 2^2 and 2^3, so that every message bit is covered by a distinct combination of parity checks and the position of an error can later be tracked.

    Hamming Code Table
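
    A minimal sketch of this position-based scheme for Hamming(15,11) (the message below is just my own example): each parity bit at position 1, 2, 4 or 8 covers every position whose binary index contains that power of two, so the syndrome literally spells out the position of a single-bit error:

        def hamming15_encode(msg11):
            code = [0] * 16                            # positions 1..15; index 0 unused
            data_pos = [i for i in range(1, 16) if i not in (1, 2, 4, 8)]
            for pos, bit in zip(data_pos, msg11):
                code[pos] = bit
            for p in (1, 2, 4, 8):                     # even parity over positions containing p
                code[p] = sum(code[i] for i in range(1, 16) if i & p and i != p) % 2
            return code[1:]

        def error_position(word15):
            code = [0] + word15
            return sum(p for p in (1, 2, 4, 8)
                       if sum(code[i] for i in range(1, 16) if i & p) % 2)

        sent = hamming15_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
        received = sent[:]
        received[6] ^= 1                               # corrupt the bit at position 7
        print(error_position(received))                # -> 7, exactly the corrupted position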

    Cyclic Code

    This is my favorite. This method uses the properties of polynomials over GF(2^n), so identifying errors can be done by dividing the codeword by the generator g(x): if the remainder is not 0, there is an error. Another thing I like about this method is that every cyclic shift of a codeword is another codeword (that’s why the method is called “cyclic”), so the implementation is very easy, just shifting here and there (using an LFSR), which in hardware is practically “costless”.

    Cyclic Code Table
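
    A minimal sketch of the divide-and-check idea, using g(x) = x^3 + x + 1 as my example generator (not necessarily the one in the table) and plain mod-2 long division:

        def mod2_remainder(dividend, generator):
            """Remainder of polynomial long division over GF(2), MSB-first bit lists."""
            rem = list(dividend)
            glen = len(generator)
            for i in range(len(rem) - glen + 1):
                if rem[i]:                             # leading term present: subtract (XOR) g(x)
                    for j in range(glen):
                        rem[i + j] ^= generator[j]
            return rem[-(glen - 1):]

        g = [1, 0, 1, 1]                               # g(x) = x^3 + x + 1
        shifted = [1, 1, 0, 1, 0, 0, 0]                # message 1101 shifted left by deg g(x)
        codeword = shifted[:4] + mod2_remainder(shifted, g)   # systematic encoding

        print(mod2_remainder(codeword, g))             # [0, 0, 0]: divisible by g(x), no error
        codeword[2] ^= 1                               # corrupt one bit
        print(mod2_remainder(codeword, g))             # non-zero remainder: error detected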

    BCH Code

    This method is a little more complicated, but the point is to construct a matrix H that can screen every bit (that’s why two of its rows are linearly redundant).

    BCH Code
    Reed Muller

    This one is interesting. This method has orders that identify different numbers of errors. Look at the matrix below. R(1,3) means the order is 1 and m = 3 (three variables x1, x2, x3). In Reed-Muller the pattern is obvious: each vector in the matrix (x1, x2, x3) is used to search for the location of an error by convergence, and each row directs the algorithm to a smaller space of possible error locations.
    Reed Muller 1
    For R(2,3) we still have m = 3, but this time the order is 2. This means the matrix is expanded and thus more errors can be detected. The yellow part of the matrix holds the “basic” vectors, while the green part shows the “additional” vectors that help locate more errors.

    Now look at R(3,3): more rows in the matrix, and more errors can be detected.


    Reed Muller 3
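
    A minimal sketch of how those generator matrices grow with the order: the rows are the evaluations, over all 8 points of (x1, x2, x3), of every monomial of degree up to the order (the colour grouping from the figures is not reproduced here):

        from itertools import combinations, product

        def reed_muller_rows(order, m=3):
            points = list(product([0, 1], repeat=m))          # all 2^m evaluation points
            rows = {}
            for degree in range(order + 1):
                for mono in combinations(range(m), degree):   # () -> 1, (0, 1) -> x1*x2, ...
                    name = "1" if not mono else "".join(f"x{i + 1}" for i in mono)
                    rows[name] = [int(all(p[i] for i in mono)) for p in points]
            return rows

        for r in (1, 2, 3):
            rows = reed_muller_rows(r)
            print(f"R({r},3): {len(rows)} rows")              # 4, then 7, then 8 rows
            for name, row in rows.items():
                print(f"  {name:>6}  {row}")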

    So, oversimplifying, I can say that an error-correction code is, in principle, a way to encode a message (into a codeword) so that it can get through a channel reliably; error correction belongs to channel coding, while data compression belongs to source coding, where the target is efficiency rather than reliability. The point is to make sure we can locate errors and fix them. Decoding works like the quadtree illustration below: each method generates matrices, equations, or mathematical structures (polynomial groups, etc.) that help narrow down the search space.

    Quadtree

    Hopefully this article is useful. I will teach Reed-Solomon codes next week; let’s see whether I can add more things here or publish a new blog post.

     
  • CG 12:12 pm on August 19, 2011 Permalink | Reply
    Tags: information theory, video tutorial

    Learning Information Theory 

    I took a course on Information Theory years ago during my master’s degree and had completely forgotten everything by the time I needed to relearn it quickly, to use the theory to support my arguments on a cryptographic issue.

    I read some books that only caused more desperation, so I decided to google some introductions to information theory and found these brilliant video tutorials:

    1. http://www.youtube.com/watch?NR=1&v=hzdvX1ong18
    2. http://www.youtube.com/watch?v=XqwUL9QRr-Y&feature=relmfu
    3. http://www.youtube.com/watch?v=NuAwCbW5Lug&feature=relmfu
    4. http://www.youtube.com/watch?v=HwIAYM6hS2U&feature=relmfu
    5. http://www.youtube.com/watch?v=Jjzc4-a6Iwg&feature=relmfu
    6. http://www.youtube.com/watch?v=v1wyG64Xd8E&feature=relmfu

    Sometimes video tutorials offer more advantages to me than reading textbooks: they are easier to understand and do not cause dizziness or sleepiness, ha ha. OK, now I have enough understanding to continue writing the next publication, using information theory as one of the arguments. Yay 🙂
     
    • Budi Rahardjo 1:27 pm on August 19, 2011 Permalink | Reply

      YouTube rocks!

      • CG 2:20 pm on August 19, 2011 Permalink | Reply

        youtube is a great source of useful information! high-speed bandwidth is mandatory! 🙂

    • jarwadi 1:48 pm on August 19, 2011 Permalink | Reply

      cg’s hand writing …

      • CG 2:22 pm on August 19, 2011 Permalink | Reply

        is it decipherable? 🙂

        • andika 8:12 pm on August 19, 2011 Permalink

          how come it looks like BR’s handwriting?

        • CG 4:42 pm on August 20, 2011 Permalink

          mas andika: mine is nicer, hihihihh, but hopefully it means that someday I can be as great as BR and finish my doctorate *oops

    • amirul 1:52 pm on August 19, 2011 Permalink | Reply

      I’m confused, boss..
