## Math and cryptography: uneasy

Neal Koblitz wrote that the relationship is uneasy because of:

1. Bandwagon effect
2. Clash of research cultures between math and cryptography
3. Different assumptions and understanding

VERY interesting.

## Complexity Theory

How strong, how complex, how efficient, how effective is an algorithm?

• #### Soni 9:10 pm on January 13, 2010

Just leaving some scribbles here so they don't get lost 🙂

Eightfold way to the Theory of Complexity:
1. The “mathematical” -> Turing -> von Neumann -> modern complexity theory
2. Information theory
3. Ergodic theory -> dynamic maps -> chaos, attractors
4. Cellular automata
5. Random manifolds and broken ergodicity, percolation, localization, spin glass, neural nets
6. self-organized criticality (SOC)
7. AI
8. Wetware: the attempt to understand how the brain actually does work.

Philip W. Anderson – Princeton University

• #### CG 11:28 pm on January 13, 2010

What is this? Now I understand even less :))

• #### zakimath 6:57 pm on January 24, 2010

Oh, so you're studying complexity theory too?
Seems hard; its difficulty level probably belongs in the NP-Hard class, heh 😀

## Group-based cryptography

How difficult is it to construct a group-based cryptosystem?

• #### zakimath 8:48 pm on January 21, 2010

Ah, this reminds me of the Braid Group… 🙂

• #### CG 4:40 am on January 22, 2010

Now I want to read about the braid group. I'll go look for references first. Thank you 🙂

• #### zakimath 6:52 pm on January 24, 2010

Last year there was a workshop on the Braid Group at the Mathematics departments of ITB and UGM, ma'am… There might be a book on it in the ITB Mathematics library…

## Digital Circuit Decomposition

Gathering ideas about decomposition. The philosophy, the theory, the math, the examples.

1. Lech Jozwiak, "General Decomposition and Its Use in Digital Circuit Synthesis", VLSI Design, Vol. 3, Nos. 3–4, pp. 225–248, 1995. (C) 1995 OPA (Overseas Publishers Association)
2. Mariusz Rawski, Tadeusz Łuba, Zbigniew Jachna, Rafał Rzechowski, "Functional Decomposition – the Value and Implication for Modern Digital Designing", The International Workshop on Discrete-Event System Design (DESDes'01), June 27–29, 2001, Przytok near Zielona Góra, Poland.
3. And a good and simple example can be found here.

• #### CG 9:31 pm on January 9, 2010 Tags: NAF

NAF (non-adjacent form) is a signed-digit representation, and every integer has a unique NAF. In a NAF representation, two non-zero digits cannot be adjacent. In cryptography, using the NAF representation reduces the number of multiplications needed to perform an exponentiation.

Example:
7 has several signed-binary representations:

(0 1 1 1) = 4 + 2 + 1 = 7

(1 0 −1 1) = 8 − 2 + 1 = 7

(1 −1 1 1) = 8 − 4 + 2 + 1 = 7

(1 0 0 −1) = 8 − 1 = 7

But only the last representation is NAF.

The algorithm is:
```
Input:  E = (e_{m−1} e_{m−2} ··· e_1 e_0)_10
Output: E = (z_m z_{m−1} ··· z_1 z_0)_NAF

i ← 0
while E > 0 do
    if E is odd then
        z_i ← 2 − (E mod 4)
    else
        z_i ← 0
    E ← (E − z_i)/2
    i ← i + 1
return z
```
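The algorithm above translates almost line for line into Python; here is a minimal sketch (the function name `naf` is mine, not from the original):

```python
def naf(e):
    """Compute the non-adjacent form of a positive integer.

    Returns the signed digits z_0, z_1, ..., least significant first.
    """
    digits = []
    while e > 0:
        if e % 2 == 1:
            z = 2 - (e % 4)  # z is +1 or -1, chosen so (e - z) is divisible by 4
        else:
            z = 0
        digits.append(z)
        e = (e - z) // 2
    return digits

# 7 = (1 0 0 -1) in NAF; written least significant digit first:
print(naf(7))  # [-1, 0, 0, 1]
```

Reading the result from right to left reproduces the (1 0 0 −1) representation of 7 from the example above.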

The coding implementation will be done as soon as the PhD student finishes writing the progress report and presentation slides, and finishes simulating small curves for the progress seminar.

## Big numbers in crypto

One big problem I always have is that I have no sense of how big the numbers used in cryptography are. Yes, I know they can reach $2^{571}$. But HOW BIG IS THAT??? I also get lost when comparing a power of 2 to the corresponding power of 10.

Last night I had a very unexpected, intellectually challenging conversation with a professor of chemistry. Surprisingly, he is very good at general cryptography and math. And probably everything 😀 He also plays chess 🙂

As Rosing wrote in his book, elliptic curve cryptosystems operate in the astronomical range: the numbers used in crypto are comparable to the largest numbers describing the universe.

Here is the example given by the prof. He took a piece of paper, tore it into two pieces, stacked them, tore the stack into two pieces again, and repeated this about 4 times. Then he asked: how thick do you think the pile would be if I tore the paper like this 50 times, assuming the paper is 1 mm thick?

At first I thought it might just be as thick as The Codebreakers, but then I realized it would be $2^{50}$ mm. But how big is that? Let's grab a calculator.

$2^{50}$ = 1125899906842624, so the pile of paper is about 1.1 × $10^{15}$ mm thick.

Still have no idea how big it is.

Now let's compare it to some astronomical numbers. From here I found that the distance from the Earth to the Sun is 92,935,700 miles, or 149565139.438 km = 149565139438000 mm ≈ 1.5 × $10^{14}$ mm.

So the pile of paper wouldn't fit in the space between the Earth and the Sun! WOW!!!

Now let's pick another example. For a 512-bit key, $2^{512}$ = 13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084096, which is about 1.3 × $10^{154}$. That is MUCH bigger than the number of atoms in the entire Earth, which is about 8.87 × $10^{49}$.
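These back-of-the-envelope comparisons are easy to check with Python's arbitrary-precision integers; the astronomical figures below are simply the rough values quoted above:

```python
# Thickness of the pile after tearing and stacking the paper 50 times,
# assuming each sheet is 1 mm thick (as in the example above).
pile_mm = 2 ** 50
print(pile_mm)                 # 1125899906842624, about 1.1e15 mm

# Rough Earth-Sun distance quoted above, in mm (~1.5e14 mm).
earth_sun_mm = 149_565_139_438_000
print(pile_mm > earth_sun_mm)  # True: the pile would overshoot the Sun

# A 512-bit key space versus the rough count of atoms in the Earth.
key_space = 2 ** 512
atoms_in_earth = 887 * 10 ** 47  # ~8.87e49
print(key_space > atoms_in_earth)  # True, by over 100 orders of magnitude
```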

Well, now I have a better sense of how big those numbers are. Hoping you will too 🙂

• #### Budi Rahardjo 4:45 pm on January 1, 2010

I usually use factorization to get a feel for how big a number is. For a "large" number, it would take zillions of years to factor with current computing equipment (resources).

I think Rosing and Schneier also have tables that show big numbers (and their physical associations).

Good stuff.

• #### CG 4:51 pm on January 1, 2010

@BR: but I still don't get a feel from zillions of years (time). I think distances work better for me 🙂 or counts of things 😀
