Updates from May, 2014

  • CG 11:03 pm on May 24, 2014 Permalink | Reply
    Tags: convolutional code, trellis diagram, viterbi

    Convolutional Code 

    Unlike block codes, convolutional codes do not send the message followed by (or interspersed with) the parity bits. In a convolutional code, the sender sends only the parity bits.

    The encoder uses a sliding window to calculate r > 1 parity bits by combining various subsets of bits in the window. The size of the window, in bits, is called the code’s constraint length. The longer the constraint length, the larger the number of parity bits that are influenced by any given message bit. Because the parity bits are the only bits sent over the channel, a larger constraint length generally implies greater resilience to bit errors. The trade-off is that decoding takes more time for codes with longer constraint lengths.

    If a convolutional code produces r parity bits per window and slides the window forward by one bit at a time, its rate is 1/r. The greater the value of r, the higher the resilience to bit errors, but the trade-off is that a proportionally greater amount of communication bandwidth is devoted to coding overhead. In practice, it is more common to pick r and the constraint length to be as small as possible while still providing a low enough resulting probability of a bit error.

    This is an example of a convolutional code with rate 1/2:

    This is a systematic rate-1/3 encoder:

    And this is a rate-2/3 encoder:

    Making the convolutional code able to cope with more errors is done not by changing k or n (r = k/n), but by adding more memory inside the encoder (the squares labeled D). Note that the circles with plus symbols (the XOR operations) are the generator functions that determine each output; they work just like the generators of other error correction codes.
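    The sliding-window encoder described above can be sketched in a few lines. This is a minimal illustration, not the exact encoder in the figures (which are not shown here): it assumes a rate-1/2 code with constraint length K = 3 and the common textbook generator taps 7 and 5 (octal).

    ```python
    # Sketch of a rate-1/2 convolutional encoder, constraint length k = 3.
    # The generators (0b111, 0b101) are an assumed textbook choice, not
    # necessarily the taps used in the post's figures.
    def conv_encode(bits, g=(0b111, 0b101), k=3):
        """Slide each message bit into a k-bit window and emit one
        parity bit per generator (XOR of the tapped window bits)."""
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << k) - 1)  # slide window by one bit
            for gen in g:
                out.append(bin(state & gen).count("1") % 2)  # XOR of tapped bits
        return out

    print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
    ```

    Note that only parity bits appear in the output: four message bits become eight transmitted bits, giving the rate 1/2 discussed above.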

    This is the Trellis Diagram of the first convolutional code:

    A trellis diagram is a graph whose nodes are ordered into vertical slices (time), with each node at each time connected to at least one node at an earlier time and at least one node at a later time. The earliest and latest times in the trellis have only one node.

    This diagram gives both “spatial” and “temporal” information about the code. “Temporally” it shows the state at each time t, while “spatially” it shows every possible route defined by the encoder design. Decoding is later done by comparing the received “route” with the most likely route, tracking down the errors and then correcting them.

    Viterbi is a decoding scheme based on maximum-likelihood decoding: it checks the received “route” against the possible routes through the trellis and selects the one with the smallest deviation.
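    A hard-decision Viterbi decoder can be sketched as a walk through the trellis that keeps, for every encoder state, only the cheapest path so far. This sketch assumes the same hypothetical rate-1/2, constraint-length-3 code with generators 7 and 5 (octal); the post's actual code may differ.

    ```python
    # Hard-decision Viterbi sketch for an assumed rate-1/2, k = 3 code
    # with generators (0b111, 0b101). For each trellis slice, keep only
    # the path into each state with the fewest bit disagreements.
    def viterbi_decode(parity, g=(0b111, 0b101), k=3):
        n_states = 1 << (k - 1)
        r = len(g)
        INF = float("inf")
        cost = [0] + [INF] * (n_states - 1)      # start in the all-zero state
        paths = [[]] + [None] * (n_states - 1)
        for t in range(0, len(parity), r):
            received = parity[t:t + r]
            new_cost = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if cost[s] == INF:
                    continue
                for b in (0, 1):
                    window = (s << 1) | b        # k-bit encoder window
                    expected = [bin(window & gen).count("1") % 2 for gen in g]
                    ns = window & (n_states - 1)  # next state = newest k-1 bits
                    # path metric: Hamming distance to the received parity
                    c = cost[s] + sum(e != x for e, x in zip(expected, received))
                    if c < new_cost[ns]:
                        new_cost[ns] = c
                        new_paths[ns] = paths[s] + [b]
            cost, paths = new_cost, new_paths
        best = min(range(n_states), key=lambda s: cost[s])
        return paths[best]

    # One flipped parity bit (the first) is still decoded correctly:
    print(viterbi_decode([0, 1, 1, 0, 0, 0, 0, 1]))  # [1, 0, 1, 1]
    ```

    The minimum-distance survivor path is exactly the “most likely route” mentioned above.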


    1. MIT 6.02 DRAFT Lecture Notes, Fall 2010 (Last update: October 4, 2010)
    2. http://en.wikipedia.org/wiki/Trellis_(graph)
    3. “Error Control Coding”, Shu Lin and Daniel J. Costello
  • CG 4:06 pm on January 26, 2014 Permalink | Reply
    Tags: business, economy, price elasticity

    Amdahl’s Law and Price Elasticity 

    I have just noticed that Amdahl’s Law, used in measuring processor performance, is similar to the Price Elasticity Law in economics (I read it in the book “Starbucks (Corporations that Changed the World)” by Marie Bussing-Burks).

    The shared principle is that there is a limit to increasing processing performance to get more throughput, or, rephrased in business language: there is a limit to reducing the price of an item to get more revenue.

    Amdahl’s Law says that speedup measures how a machine performs after an enhancement E: Speedup(E) = Performance with E / Performance without E = Execution time without E / Execution time with E, where Execution time with E = Unaffected execution time + (Affected execution time / Amount of improvement).

    (Notes: Examples are taken from the EL 2244 course being taught at ITB this semester. The reference book is John L. Hennessy and David A. Patterson, Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann Publishers, 4th Edition, 2009.)

    Ex. 1:

    A program runs on a machine in 10 s, and 50% of the time is spent doing multiplications. If we improve the multiplication unit so it runs twice as fast, how big is the speedup?


    Exec_time(E) = (Affected_exec_time/improvement) + unaffected_exec_time

    = (5s/2) + 5s = 7.5 s

    Speed_up(E) = 10s/7.5s = 1.333, which is not 2 times faster

    Ex. 2:

    A program runs for 10 s, and 70% of the time is spent doing additions. How much improvement to the additions is needed to reduce the running time to 3 s?


    Exec_time(E) = (Affected_exec_time/improvement) + unaffected_exec_time

    3s = (7s/n) + (10-7)s

    3s = (7s/n) + 3s

    0 = 7s/n

    No amount of improvement can reduce the running time to 3s.
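    Both examples follow directly from the execution-time formula above, so they can be checked with a few lines of code (the function name is just for illustration):

    ```python
    # Amdahl's Law: new execution time = unaffected part + affected part / improvement.
    def exec_time_with_e(total, affected_fraction, improvement):
        affected = total * affected_fraction
        return (total - affected) + affected / improvement

    # Ex. 1: 10 s program, 50% multiplications, multiplier made 2x faster.
    t = exec_time_with_e(10, 0.5, 2)
    print(t)        # 7.5
    print(10 / t)   # speedup = 1.333..., not 2

    # Ex. 2: even a huge improvement of the 70% part cannot reach 3 s,
    # because the untouched 30% alone already takes 3 s.
    print(exec_time_with_e(10, 0.7, 1e9))  # approaches 3 s, never below
    ```

    The floor set by the unaffected fraction is exactly the “limit to improvement” this post is about.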

    Now let’s see the Price Elasticity Law. Price Elasticity (E) = % change in quantity demand / % change in price.

    Ex 1:

    If we reduce the price of a 36-inch TV from $450 to $400, the average price is $425. The absolute value of the percentage change = $50/$425 = 0.118. The number of units sold increases from 200 to 300, so the average number of units sold = 250.

    So the percentage change in quantity demanded is 100/250 x 100% = 40%.

    The price elasticity = 0.4/0.118 = 3.39. Since this is greater than 1, demand is elastic.

    If the absolute value of price elasticity is between 0 and 0.99, demand is inelastic. Necessity items like coffee, milk, gasoline, and prescription drugs tend to be relatively insensitive to price changes.

    Ex 2:

    A store manager drops the price of a gallon of milk from $4 to $3. The average price is $3.50. The absolute value of the percentage change = $1/$3.50 = 0.29.

    Milk sold goes from 10 to 11 gallons. The average number of gallons sold = 10.5. The percentage change in quantity demanded = 1/10.5 ≈ 0.1.

    Price elasticity = 0.1/0.29 = 0.34. The demand is inelastic.

    So if demand is elastic, a price cut will increase total revenue (and a price increase will mean lower total revenue). Taking Ex 1:

    price x quantity = total revenue

    $450 x 200 = $90,000

    $400 x 300 = $120,000

    When demand is inelastic, by contrast, a price cut will decrease total revenue, as in Ex 2:

    $4 x 10 = $40

    $3 x 11 = $33.
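    The two elasticity examples use the midpoint (arc) method, which can be written as a small function. Note that exact arithmetic gives 3.4 for Ex 1 rather than the 3.39 above, because the post rounds the intermediate 0.118; the function name is illustrative.

    ```python
    # Midpoint (arc) price elasticity: % change in quantity demanded
    # divided by % change in price, both relative to the averages.
    def price_elasticity(p1, p2, q1, q2):
        pct_q = abs(q2 - q1) / ((q1 + q2) / 2)
        pct_p = abs(p2 - p1) / ((p1 + p2) / 2)
        return pct_q / pct_p

    # Ex 1: TV, $450 -> $400, 200 -> 300 units sold: elastic (> 1)
    print(round(price_elasticity(450, 400, 200, 300), 2))  # 3.4
    # Ex 2: milk, $4 -> $3, 10 -> 11 gallons: inelastic (< 1)
    print(round(price_elasticity(4, 3, 10, 11), 2))        # 0.33

    # Revenue check: the cut helps only in the elastic case.
    print(450 * 200, 400 * 300)  # 90000 120000
    print(4 * 10, 3 * 11)        # 40 33
    ```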

    The conclusion is that both machine performance and total revenue have a limit on “improvement”: beyond a certain point we cannot further speed up a machine, just as beyond a certain point we cannot change a price to gain more total revenue.

  • CG 11:44 pm on January 10, 2014 Permalink | Reply
    Tags: channel coding, error correction, reed solomon

    Reed Solomon – A Brief Introduction 

    Reed Solomon is a brilliant error correction method based on non-binary cyclic codes. The codeword looks like the picture below. It consists of m-bit symbols: the bit string is treated as groups of bits, and each group is treated as a single non-binary symbol. We will see how this makes the method powerful.

    Reed Solomon

    Let’s take the example of RS(15,11). This means a codeword of length 15 symbols, consisting of an 11-symbol original message and 4 parity symbols. t is the number of errors (in symbols, i.e. groups of bits) that can be corrected; here t = (15 - 11)/2 = 2.

    p(x), an irreducible polynomial, is used to generate the finite field, as shown in the table below.
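    A field table like the one referenced can be generated by repeated multiplication by the primitive element alpha. This sketch assumes p(x) = x^4 + x + 1, a standard irreducible polynomial for GF(2^4); the post's table may use a different p(x), which would give a different ordering.

    ```python
    # Generate the exponent table of GF(2^4), assuming p(x) = x^4 + x + 1
    # (binary 10011). Each step multiplies by alpha (a left shift) and
    # reduces modulo p(x) whenever a degree-4 term appears.
    def gf16_exp_table(prim=0b10011):
        table, a = [], 1
        for _ in range(15):
            table.append(a)
            a <<= 1              # multiply by alpha
            if a & 0b10000:      # degree-4 overflow: subtract (XOR) p(x)
                a ^= prim
        return table

    print(gf16_exp_table())
    # [1, 2, 4, 8, 3, 6, 12, 11, 5, 10, 7, 14, 15, 13, 9]
    # alpha^0 .. alpha^14: all 15 nonzero elements appear exactly once.
    ```

    The fact that powers of alpha cycle through every nonzero element is what the irreducibility of p(x) buys, and it is the basis of the RS encoding and syndrome arithmetic below.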


    Now we need a generator to start the encoding process.


    RS Encoding Process 1

    RS Encoding Process 2

    RS Encoding Process 3

    The decoding process includes several steps. Let’s take the example of a double-symbol error.

    RS Decoding

    The first step is computing the syndromes to detect the error. For this case we have 4 syndromes: S1, S2, S3 and S4. Any syndrome that is not equal to 0 indicates an error.

    RS Syndrome Computation

    The second step is locating the error in e(x). This can be done using a matrix.

    RS Error Locating

    After the locations have been determined, it is time to calculate the error values so we can correct the errors.

    RS Error Values

    Done. We get the corrected message. Do you notice that handling errors in symbols (groups of bits) is what makes this method run faster?

    • ravi 1:41 pm on March 20, 2016 Permalink | Reply

      sir plz help me out wit the code it really cost me much……..

  • CG 10:24 pm on May 24, 2013 Permalink | Reply
    Tags: BCH code, cyclic code, digital system, hamming code, linear block code, quad tree, reed muller

    Principles and Analogies of Error Correction Codes 

    I use a very abstract, visual way to explain different error correction algorithms. Here I will over-simplify every algorithm and use visualizations instead, just to convey the principle of how they work, and why.

    Linear Block Code

    In a linear block code, every codeword has n-k bits of redundant checking part and k bits of message.

    Linear Block Code

    The generator matrix looks like this; it contains an identity matrix, and the rest is later used to generate the syndrome that identifies errors.

    Linear Block Code Table

    Hamming Code
    The picture below shows how to identify an error with even parity bits. The bits in black are the message; the bits in green are the parity bits that make the number of 1s in each circle even; the bit in red is the error, which can later be identified because the overlap of the affected circles shows where the error occurred.

    Hamming Code Diagram

    The Hamming code works as shown in the table below: the parity bits (unlike in the previous method) are inserted at certain positions (2^0, 2^1, 2^2 and 2^3) so that every message bit is screened by some subset of parity checks, and the error can later be tracked down.

    Hamming Code Table
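    The position trick can be made concrete with the small (7,4) case, where parity bits sit at positions 1, 2 and 4 (the table above also uses position 8 for longer codes). This is a minimal sketch with made-up function names; the key property is that the syndrome, read as a binary number, points directly at the errored position.

    ```python
    # Minimal (7,4) Hamming sketch. Positions are 1-based; parity bits
    # live at positions 1, 2 and 4 (the powers of two).
    def hamming74_encode(d):
        """d = 4 data bits; returns the 7-bit codeword (index 0 = position 1)."""
        c = [0, 0, d[0], 0, d[1], d[2], d[3]]   # data at positions 3, 5, 6, 7
        c[0] = c[2] ^ c[4] ^ c[6]  # parity over positions 1, 3, 5, 7
        c[1] = c[2] ^ c[5] ^ c[6]  # parity over positions 2, 3, 6, 7
        c[3] = c[4] ^ c[5] ^ c[6]  # parity over positions 4, 5, 6, 7
        return c

    def hamming74_syndrome(c):
        """0 if clean; otherwise the 1-based position of a single-bit error."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        return s4 * 4 + s2 * 2 + s1

    code = hamming74_encode([1, 0, 1, 1])
    code[5] ^= 1                      # flip the bit at position 6
    print(hamming74_syndrome(code))   # 6: the syndrome locates the error
    ```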

    Cyclic Code

    This is my favorite. This method uses the properties of polynomials over GF(2^n), so identifying errors can be done by dividing the codeword by the generator g(x): if the remainder is not 0, there is an error. Another thing I like about this method is that every codeword is a cyclic shift of another codeword (that is why the method is called “cyclic”). The implementation is therefore very easy, just shifting here and there (using an LFSR), which in a hardware implementation is practically costless.

    Cyclic Code Table
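    The divide-and-check step is just polynomial division over GF(2), i.e. shifts and XORs, which is exactly what an LFSR does in hardware. A small sketch, assuming g(x) = x^3 + x + 1 (the generator of the (7,4) cyclic Hamming code); the table above may use a different g(x).

    ```python
    # Polynomial remainder over GF(2): bits of the integers are the
    # coefficients, and subtraction is XOR.
    def gf2_mod(dividend, divisor):
        dlen = divisor.bit_length()
        while dividend.bit_length() >= dlen:
            shift = dividend.bit_length() - dlen
            dividend ^= divisor << shift   # subtract a shifted copy of g(x)
        return dividend

    g = 0b1011                 # assumed g(x) = x^3 + x + 1
    msg = 0b1101               # message polynomial m(x)
    # Systematic encoding: append the remainder of x^3 * m(x) mod g(x).
    code = (msg << 3) ^ gf2_mod(msg << 3, g)
    print(gf2_mod(code, g))             # 0: a valid codeword divides evenly
    print(gf2_mod(code ^ 0b0100000, g)) # nonzero remainder: error detected
    ```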

    BCH Code

    This method is a little bit complicated, but the point is to construct a matrix H that can screen every bit (that is why there are 2 rows that are linearly redundant).

    BCH Code
    Reed Muller

    This one is interesting. This method has orders for identifying different numbers of errors. Look at the matrix below. R(1,3) means that the order is 1 and there are 3 variables. In Reed-Muller the pattern is obvious: the vectors in the matrix (x1, x2, x3) are used to converge on the location of an error, with each row directing the algorithm to a smaller space of possible error locations.
    Reed Muller 1
    For R(2,3) we have the same three variables, but this time order 2. This means the matrix is expanded and thus more errors can be detected. The part of the matrix in yellow contains the “basic” vectors, while the green part shows the “additional” vectors that help locate more errors.

    Now look at R(3,3): more rows in the matrix, and more errors can be detected.

    Reed Muller 3
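    The growth pattern of these matrices can be sketched directly: R(r, m) has one all-ones row, the m coordinate vectors, and then the componentwise products of every subset of up to r of them. This assumes the usual convention where x1 corresponds to the most significant bit of the column index; the figures above may order rows differently.

    ```python
    from itertools import combinations

    # Build the R(r, m) generator matrix row by row: the order-0 all-ones
    # vector, the coordinate vectors x1..xm, then products of up to r of them.
    def reed_muller_generator(r, m):
        n = 1 << m
        x = [[(i >> (m - 1 - j)) & 1 for i in range(n)] for j in range(m)]
        rows = [[1] * n]                    # order 0
        for order in range(1, r + 1):
            for combo in combinations(range(m), order):
                row = [1] * n
                for j in combo:
                    row = [a & b for a, b in zip(row, x[j])]
                rows.append(row)
        return rows

    for r in (1, 2, 3):
        print(f"R({r},3): {len(reed_muller_generator(r, 3))} rows")
    # R(1,3): 4 rows   R(2,3): 7 rows   R(3,3): 8 rows
    ```

    The row counts 4, 7, 8 match the expanding matrices in the three figures: each extra order adds the product vectors that help pin down more errors.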

    To oversimplify: error correction coding is, in principle, about how to encode a message (into a codeword) so that it gets through a channel reliably. Error correction belongs to channel coding, while data compression belongs to source coding; in source coding the target is efficiency rather than reliability. Reliability comes from being able to locate an error and fix it. Decoding works like the quadtree illustration below: each method generates matrices, equations, or mathematical structure (polynomial groups etc.) that help narrow down the search space.


    Hopefully this article is useful. I will teach the Reed-Solomon code next week; let’s see whether I add more things here or publish a new blogpost.

  • CG 6:50 pm on October 13, 2010 Permalink | Reply
    Tags: linux

    Compiling assembly on Linux (Ubuntu on Virtual Box) 

    gcc -S logical.c

    gcc -O1 -S logical.c

    gcc -O2 -S logical.c

    objdump -d logical.o

  • CG 6:53 pm on October 12, 2010 Permalink | Reply
    Tags: computer architectures

    Registers in Snow Leopard 64-bit 

    Snow Leopard has different architecture and different register names.

    [image taken from http://www.sealiesoftware.com/blog/archive/2008/09/22/objc_explain_So_you_crashed_in_objc_msgSend.html]

  • CG 3:39 pm on October 12, 2010 Permalink | Reply
    Tags: mac os x

    Compiling assembly on Snow Leopard 

    Comparing the compilation results with those from compiling assembly on Leopard (Mac OS X 10.5).

    Code in C:

    int logical(int x, int y){
       int t1 = x^y;
       int t2 = t1 >> 17;
       int mask = (1<<13)-7;
       int rval = t2 & mask;
       return rval;
    }

    gcc -S logical.c

    gcc -O1 -S logical.c

    gcc -O2 -S logical.c

    dumping object file
    gcc -c logical.c
    otool -tv logical.o

  • CG 2:10 pm on March 8, 2010 Permalink | Reply
    Tags: c language, quine, self generating code   

    This is INTERESTING. http://rsatrioadi.w… 

    This is INTERESTING.


    More about it here.

    • Satrio Adi Rukmono 2:19 pm on March 8, 2010 Permalink | Reply

      Wow, I got featured.. thank you, mas..

      • Satrio Adi Rukmono 2:22 pm on March 8, 2010 Permalink | Reply

        Oh, I was just guessing; I only just read the About page. Sorry, it should be mbak, not mas 🙂

        • CG 2:24 pm on March 8, 2010 Permalink

          You’re welcome, mas Satrio 🙂
          No problem, it’s actually even better if it’s unclear whether I’m a mas or a mbak 😀

  • CG 1:57 pm on February 7, 2010 Permalink | Reply  

    Courses for this semester 

    1. II5164
    2. EL2010