How to decrypt the Keccak algorithm [closed] - cryptography

I am preparing a presentation about Keccak (http://keccak.noekeon.org/).
In that presentation, I would like to encrypt a plain text, which brings up the following questions:
What is the exact role of the padding function (how do we obtain the 1600-bit state "cube" from a 64-bit input)?
After encrypting the text, how can we decrypt it again?

You cannot "decrypt" the output of Keccak, because it is not an encryption algorithm but a one-way hash function. Instead, you use it to verify that a hash value is indeed the hash of a particular text (which you must already know), simply by hashing the text yourself and comparing the result to the given hash value.
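As a quick sketch of that verify-by-recomputing flow, here is Python using the standard library's hashlib (its sha3_256 is the standardized SHA-3 variant of Keccak; the original Keccak submission differs only in a padding constant, so the principle is identical):

```python
import hashlib

# SHA3-256 (the standardized Keccak variant) from the standard library.
digest = hashlib.sha3_256(b"attack at dawn").hexdigest()

def verify(text: bytes, known_digest: str) -> bool:
    """You cannot invert the hash; you can only recompute and compare."""
    return hashlib.sha3_256(text).hexdigest() == known_digest

print(verify(b"attack at dawn", digest))   # True
print(verify(b"attack at dusk", digest))   # False
```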

Padding is needed because Keccak uses the sponge construction, which absorbs the input in blocks of r bits, where r is the rate. The padding rule, called pad10*1, appends a 1 bit, then as many 0 bits as needed, then a final 1 bit, so that the padded message length is a multiple of r. The 1600 bits you mention are the width b of the Keccak-f permutation state (a 5×5 array of 64-bit lanes), not the padded message length: a 64-bit input is padded up to one full r-bit block, which is then absorbed into that 1600-bit state.
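Here is a bit-level sketch of the pad10*1 rule (the bit-list representation is purely illustrative; r = 1088 is the rate used when b = 1600 and the capacity c = b - r = 512, as in SHA3-256):

```python
def pad10star1(message_bits, rate):
    """Keccak's pad10*1: append a 1 bit, then 0 bits, then a final 1 bit,
    so the total length becomes a multiple of the rate r."""
    padded = list(message_bits) + [1]
    while (len(padded) + 1) % rate != 0:
        padded.append(0)
    padded.append(1)
    return padded

# A 64-bit message padded up to one full r-bit block (r = 1088 here);
# each r-bit block is then absorbed into the 1600-bit permutation state.
block = pad10star1([1, 0] * 32, 1088)
print(len(block))  # 1088
```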
When you apply the Keccak algorithm to a text message, you get a "message digest".
Keccak is the winner of the SHA-3 competition, where SHA stands for Secure Hash Algorithm. You can tell from its name that Keccak is a cryptographic hash function, which has three properties:
Pre-image resistance
Second pre-image resistance
Collision resistance
These basically mean that Keccak is a one-way function and that it is extremely hard to find two messages with the same message digest, or a second message matching a given digest. And the first property simply tells you that you cannot recover the message from the message digest.

Related

Is a very long alphabetic password harder to crack than a shorter password with special characters? [closed]

Is it true that a password consisting only of alphabetic characters, even of commonly known names, is much harder for a computer program to find than a short password, even though the short one uses numbers and other characters?
Is Tr0ub4dor&3 harder to find than correct horse battery staple?
I would be grateful for a detailed answer, one a computational thinker would understand. Below is a visualisation of what I mean. If this is true, I feel I am not the only one who will have to rethink his password strategy.
By assigning 11 bits of entropy to each word, the pass phrase implies that each was chosen randomly from a list of 2048 words. This is a relatively short list; you could think of it as the 2000 most common nouns. If four such words are chosen at random, any one of the 2^44 possible phrases is equally likely.
The base word "troubador" is allowed more entropy (16 bits) because presumably it was chosen at random from a larger dictionary of "uncommon words". If you had a dictionary of about 65,000 such words, and chose one at random, this would be a fair guess. The rest of the entropy is based on reasonable estimates: Is a common transform applied to a character, or not? One bit apiece; a randomly picked digit: 3 bits; etc.
However, the important thing to understand is that the length of the word "troubador" doesn't really matter. Because it's a dictionary word, what matters is how many words are in the dictionary. If you give me the letters "tr b d r", I can easily guess the rest. Individual letters are only unpredictable if they are chosen randomly. If you use words, then you have to consider entire words to be the letters of your alphabet.
But even a huge dictionary of real words is only going to be on the order of a few hundred thousand entries, about 18–19 bits of entropy per word. That's why you need to pick multiple words for a passphrase, or give up on words and pick letters, numbers, and symbols randomly.
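To make those estimates concrete, here is the back-of-the-envelope arithmetic in Python (the 28-bit figure for "Tr0ub4dor&3" is the comic's own total, and 1,000 guesses per second is the comic's assumed attack rate):

```python
import math

# Passphrase: 4 words drawn at random from a 2048-word list.
passphrase_bits = 4 * math.log2(2048)     # 4 * 11 = 44 bits
troubador_bits = 28                       # the comic's total for "Tr0ub4dor&3"

guesses_per_second = 1000                 # the comic's assumed attack rate
for bits in (troubador_bits, passphrase_bits):
    days = 2 ** bits / guesses_per_second / 86400
    print(f"{bits:.0f} bits -> about {days:,.0f} days to exhaust")
# 28 bits -> about 3 days; 44 bits -> about 203,613 days (~550 years)
```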
Here's a visual comparison of the two approaches (the original answer's charts are not reproduced here): one chart shows the number of possible passwords as you increase the alphabet size at a fixed password length, and the other as you increase the password length at a fixed alphabet size.
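Since the charts are missing, a few lines of Python show the same comparison numerically; the keyspace is alphabet_size ** length, so the length sits in the exponent and wins:

```python
import math

# Entropy bits = length * log2(alphabet_size): growing the length (the
# exponent) adds bits linearly per character; growing the alphabet (the
# base) only adds bits logarithmically.
for alphabet in (26, 62, 94):        # lowercase / alphanumeric / printable ASCII
    for length in (8, 12, 16):
        bits = length * math.log2(alphabet)
        print(f"alphabet {alphabet:2d}, length {length:2d}: {bits:5.1f} bits")
```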
What you are describing is correct for a brute-force attack on a password. A brute-force attack repeatedly guesses the password, working through all available combinations.
However, in reality, attacks are more often dictionary attacks. This means the software tries to determine the password by testing hundreds or sometimes millions of likely possibilities, such as words in a dictionary.
So a long alphabetic password can be hard to crack with brute force but still easy to crack with a dictionary attack.
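A toy sketch of the difference (the word list and the hashed password here are made up for illustration; real attacks use lists of millions of leaked passwords):

```python
import hashlib

# The attacker only has the hash of the victim's password.
target = hashlib.sha256(b"correcthorsebatterystaple").hexdigest()

# A dictionary attack tries likely candidates instead of the full keyspace.
wordlist = ["password", "123456", "letmein", "correcthorsebatterystaple"]
for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("cracked:", candidate)
        break
```

Four guesses instead of the 2^44 a brute-force search would need, which is why the entropy estimate only holds when the words really are chosen at random.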
The maths in the comic are correct, but the important point is that good passwords must be both hard to guess and easy to remember. The main message of the comic is to show that common "password generation rules" fail at both points: they make hard-to-remember passwords, which are nonetheless not that hard to guess.
It also illustrates the failure of human minds at evaluating security. "Tr0ub4dor&3" looks more randomish than "correcthorsebatterystaple"; and the same minds will give good points to the latter only because of the wrong reason, i.e. the widespread (but misguided) belief that password length makes strength. It does not. A password is not strong because it is long; it is strong because it includes a lot of randomness (all the entropy bits we have been discussing all along).
Extra length just allows for more strength, by giving more room for randomness; in particular, by allowing "gentle" randomness that is easy to remember, like the electric horse thing. On the other hand, a very short password is necessarily weak, because there is only so much entropy you can fit in 5 characters.
https://security.stackexchange.com/questions/6095/xkcd-936-short-complex-password-or-long-dictionary-passphrase

Can creators of RSA read all encoded messages? [closed]

According to this page http://en.wikipedia.org/wiki/RSA_numbers each RSA version uses one single constant long number which is hard to factor.
Is this right?
For example, RSA-100 uses number
1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139
which was factored in 1991.
Meanwhile RSA-210 uses number
245246644900278211976517663573088018467026787678332759743414451715061600830038587216952208399332071549103626827191679864079776723243005600592035631246561218465817904100131859299619933817012149335034875870551067
which has not been factored yet.
My question is: doesn't this mean that the CREATORS of any specific RSA version KNOW the factors and can consequently READ all encoded messages? If they don't know the factorization, then how could they generate such a number?
Those numbers are just sample numbers, published by RSA Laboratories to judge the practical difficulty of factoring; they are not built into the algorithm. The RSA asymmetric-key algorithm relies for its security on the difficulty of factoring large numbers, and each user generates their own modulus from their own secret primes.
The approximate time or difficulty of factoring these challenge numbers is an indicator of how other numbers of the same size, as used in real keys, will fare against the computational power we have.
These numbers, which were challenges, are described as follows.
(Quoting from Reference)
The RSA challenge numbers were generated using a secure process that guarantees that the factors of each number cannot be obtained by any method other than factoring the published value. No one, not even RSA Laboratories, knows the factors of any of the challenge numbers. The generation took place on a Compaq laptop PC with no network connection of any kind. The process proceeded as follows:
First, 30,000 random bytes were generated using a ComScire QNG hardware random number generator, attached to the laptop's parallel port.
The random bytes were used as the seed values for the B_GenerateKeyPair function, in version 4.0 of the RSA BSAFE library.
The private portion of the generated keypair was discarded. The public portion was exported, in DER format, to a disk file.
The moduli were extracted from the DER files and converted to decimal for posting on the Web page.
The laptop's hard drive was destroyed.
When it becomes fairly trivial and quick to reliably factor numbers of a particular size, it usually implies it is time to move to a longer number.
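For illustration, here is how a challenge-style modulus can be generated and its factors immediately forgotten, mirroring the quoted process (this sketch assumes the third-party sympy package for prime generation; it is not the BSAFE code RSA Laboratories actually used):

```python
from sympy import randprime  # assumption: sympy is installed

def make_challenge_modulus(bits=1024):
    """Generate n = p * q, then deliberately discard the factors,
    as in the challenge-generation process quoted above."""
    half = bits // 2
    p = randprime(2 ** (half - 1), 2 ** half)
    q = randprime(2 ** (half - 1), 2 ** half)
    n = p * q
    del p, q                 # the "private portion" is discarded
    return n

print(make_challenge_modulus(512))
```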
Look at Ron was wrong, Whit is right. It is a detailed analysis of duplicate RSA key use and of RSA keys sharing common factors (a problem related to the one you describe). There is a lot in the article but, to quote from its conclusion:
We checked the computational properties of millions of public keys that we collected on the web. The majority does not seem to suffer from obvious weaknesses and can be expected to provide the expected level of security. We found that on the order of 0.003% of public keys is incorrect, which does not seem to be unacceptable.
Yes, it is a problem, and the problem will continue to grow, but the sheer number of possible keys means the problem is not too serious, at least not yet. Note that the article does not cover the increasing ease of brute-forcing shorter RSA keys, either.
Note that this is not an issue with the RSA algorithm or the random number generators used to generate keys (although the paper does mention seeding may still be an issue). It is the difficulty of checking a newly generated key against an ever expanding list of existing keys from an arbitrary, sometimes disconnected device. This differs from the known weak keys for DES, for example, where the weak keys are known upfront.
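The shared-factor weakness studied in the paper is easy to demonstrate on toy numbers: if two moduli happen to share a prime, a single gcd breaks both (the primes below are tiny and purely illustrative):

```python
import math

p, q1, q2 = 101, 103, 107        # toy primes, far too small for real RSA
n1, n2 = p * q1, p * q2          # two moduli accidentally sharing p

# One gcd computation recovers the shared factor and hence both keys.
shared = math.gcd(n1, n2)
print(shared, n1 // shared, n2 // shared)   # 101 103 107
```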

What are pull-up and pull-down resistors in a microcontroller [closed]

Why is a pull-up or pull-down resistor connected to a pin?
How do I configure a pin as pull-up or pull-down, or as an interrupt source?
For an output, it gives the pin a defined logic state while the GPIO is in its reset state, which is normally a high-impedance input and therefore not driving the pin to a valid logic level.
For an input, the need for it is determined by the attached device, which may also be high-impedance or "floating" on start-up, in which case the pull-up/down will ensure a valid level.
Devices with open-drain/open-collector outputs will need a pull-up/down.
You will need at least a basic understanding of electronics to be successful in embedded systems development (unless all you need happens to be on one off-the-shelf board without modifications or additions). Get yourself a copy of Horowitz & Hill's The Art of Electronics or similar.
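For the "how to configure" half of the question, here is what enabling an internal pull-up and an interrupt looks like in MicroPython (a sketch only: the pin number is a placeholder, and the available pulls and triggers depend on your specific board and port):

```python
from machine import Pin   # MicroPython on a supported microcontroller board

# Internal pull-up: the line idles high; a button wired to GND pulls it low.
button = Pin(0, Pin.IN, Pin.PULL_UP)

def on_press(pin):
    # Interrupt handler: runs on the high-to-low edge caused by the
    # external device overriding the weak pull-up.
    print("falling edge on", pin)

button.irq(trigger=Pin.IRQ_FALLING, handler=on_press)
```

On bare-metal parts the same configuration is done by setting the pull-enable and interrupt-edge bits in the GPIO registers described in the device's reference manual.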
Some devices by design can only drive a 1, some can only drive a 0, and some can drive both. You would use such an output on a shared line, like SPI for example: if you are the device being addressed, you pull the line to zero, and only one device pulls it to zero at a time. Since the line is otherwise either floating or zero, you need a pull-up resistor to make it a one the rest of the time. Think of it as a ball on a spring: the fairly weak spring keeps the ball up near the ceiling; when you need the ball on the ground, it is pulled down against the spring, and when the ball is released the spring pulls it back up to the ceiling. A similar thing happens on these kinds of buses.

Evolution Strategies [closed]

What is the basic idea behind self adaptive evolution strategies? What are the strategy parameters and how are they manipulated during the run of the algorithm?
There's an excellent article on Scholarpedia about Evolution Strategies. I can also recommend this journal article: Beyer, H.-G. & Schwefel, H.-P., "Evolution Strategies - A Comprehensive Introduction", Natural Computing, 2002, 1, 3-52.
In the history of ES there have been several ways of adapting strategy parameters. The target of the adaptation is generally the shape and size of the sampling region around the current solution. The first was the 1/5th success rule, then came sigma self-adaptation, and finally covariance matrix adaptation (CMA-ES). Why is this important? To put it simply: adaptation of the mutation strength is necessary to maintain evolution progress in all stages of the search. The closer you come to the optimum, the less you want to mutate your vector.
The advantage of CMA-ES over sigma self-adaptation is that it also adapts the shape of the sampling region; sigma self-adaptation is restricted to axis-parallel adaptations only.
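As a concrete sketch of the earliest of those schemes, here is a minimal (1+1)-ES with the 1/5th success rule on the sphere function (the adaptation constants are conventional textbook choices, not taken from a specific source):

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, c=0.85):
    """Minimal (1+1)-ES, minimizing f with the 1/5th success rule."""
    x = np.asarray(x0, dtype=float)
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        y = x + sigma * np.random.randn(len(x))   # Gaussian mutation
        fy = f(y)
        if fy < fx:                                # offspring replaces parent
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                            # adapt sigma periodically:
            rate, successes = successes / 20, 0    # grow it if >1/5 successes,
            sigma *= (1 / c) if rate > 0.2 else c  # shrink it otherwise
    return x, fx

sphere = lambda v: float(np.dot(v, v))
best, value = one_plus_one_es(sphere, np.full(10, 5.0))
print(value)   # near 0: sigma has shrunk as the optimum was approached
```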
To get a larger picture, the book Introduction to Evolutionary Computing has a great chapter (#8) on parameter control, which self adaptation is part of.
Here is a quote taken from the introductory section:
Globally, we distinguish two major forms of setting parameter values: parameter tuning and parameter control. By parameter tuning we mean the commonly practised approach that amounts to finding good values for the parameters before the run of the algorithm and then running the algorithm using these values, which remain fixed during the run. Later on in this section we give arguments that any static set of parameters having the values fixed during an EA run seems to be inappropriate. Parameter control forms an alternative, as it amounts to starting a run with initial parameter values that are changed during the run.
Parameter tuning is a typical approach to algorithm design. Such tuning is done by experimenting with different values and selecting the ones that give the best results on the test problems at hand. However, the number of possible parameters and their different values means that this is a very time-consuming activity.
[Parameter control] is based on the observation that finding good parameter values for an evolutionary algorithm is a poorly structured, ill-defined, complex problem. This is exactly the kind of problem on which EAs are often considered to perform better than other methods. It is thus a natural idea to use an EA for tuning an EA to a particular problem. This could be done using two EAs: one for problem solving and another one - the so-called meta-EA - to tune the first one. It could also be done by using only one EA that tunes itself to a given problem, while solving that problem. Self-adaptation, as introduced in evolution strategies for varying the mutation parameters, falls within this category.
It is then followed by concrete examples and further details.
Well, the goal behind self-adaptation in evolutionary computation in general is that algorithms should be general and require as little problem knowledge, in the form of input parameters you have to specify, as possible.
Self-adaptation makes an algorithm more general, without needing problem knowledge to choose the correct parametrisation.

Can I know the algorithm type?

I have some text (original), and I have the encrypted version of this text.
Can I detect the type of the algorithm that has been used to encrypt that text?
From a similar recent question (and its answers) on the Cryptography Stack Exchange site:
If the algorithm is any good, no, apart from some basic properties.
Your output looks like a hexadecimal encoding of the actual output - and the 48 hexadecimal characters correspond to 192 bits. Thus, it looks like your algorithm has a block size of 192 bits.
We can't derive much more information here.
Depending on the block cipher mode of operation and on the block length, you can make a guess. But often it is the entropy of the ciphertext that gives you the best approximation.
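A quick way to apply that entropy heuristic in practice (the sample hex string below is arbitrary, purely for illustration; a meaningful estimate needs a reasonably long sample):

```python
import math
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, max 8.0 for uniform data."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Output of a good cipher should sit near 8 bits/byte once decoded from
# any textual wrapping such as hex or base64; plain text sits much lower.
sample = bytes.fromhex("3ad77bb40d7a3660a89ecaf32466ef97"
                       "f5d3d58503b9699de785895a96fdbaaf")
print(bits_per_byte(sample))
print(bits_per_byte(b"attack at dawn attack at dawn!!!"))
```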
Sometimes, the algorithm used is even attached as metadata. The vast majority of those algorithms are open, so the important piece is the key(s).
This discipline is called cryptanalysis.