How is hex converted to binary before SHA-256 hashing? - cryptography

I know that a string is converted to binary using each character's corresponding number in the ASCII table, but what about hexadecimal? Does it use the binary representation of each hex digit?
0x0 is 0000
0x9 is 1001
0xF is 1111 and so on?
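Essentially yes: each hex digit stands for 4 bits, and two hex digits form one byte. SHA-256 itself operates on raw bytes, so a hex string is normally decoded to bytes before hashing rather than hashed as ASCII text. A minimal Python sketch (the value "deadbeef" is just an example):

    import hashlib

    hex_string = "deadbeef"

    # Decode two hex characters into one byte each, then hash the raw bytes
    raw = bytes.fromhex(hex_string)                            # b'\xde\xad\xbe\xef'
    print(hashlib.sha256(raw).hexdigest())

    # Hashing the hex text as ASCII is a different operation with a different digest
    print(hashlib.sha256(hex_string.encode("ascii")).hexdigest())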

Related

Is 0000 a valid EBCDIC signed value?

We have an ASCII file with numbers formatted as EBCDIC signed fields.
Sometimes the value is 0000 while I would expect 000{ or 000}.
Is 0000 a valid EBCDIC signed value within an ASCII file?
Short Answer
Yes, both '0000' and '000{' denote a positive zero. '000}' denotes a negative zero.
Detailed Answer
Packed decimal numbers are often used on IBM mainframe systems, since the processor has a set of decimal instructions. Those instructions assume that their operands follow the rules for packed decimal numbers in storage. See IBM z/Architecture Principles of Operation, Chapter 8 "Decimal Instructions".
In summary, a packed decimal number has digits, i.e. 0x0 - 0x9, in every nibble of every byte, except for the right nibble of the rightmost byte (the rightmost nibble). The rightmost nibble holds the sign, which has preferred values 0xC for positive and 0xD for negative values. The system also accepts 0xA, 0xE, and 0xF as positive signs, and 0xB as a negative sign.
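As an illustration of that layout, here is a small Python sketch that packs an integer the same way (the helper name pack_decimal is made up for illustration, and only the preferred sign codes 0xC/0xD are produced):

    def pack_decimal(value: int) -> bytes:
        """Pack an integer into packed decimal with the sign in the rightmost nibble."""
        sign = 0xC if value >= 0 else 0xD                   # preferred sign codes
        nibbles = [int(d) for d in str(abs(value))] + [sign]
        if len(nibbles) % 2:                                # pad on the left to whole bytes
            nibbles.insert(0, 0)
        return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

    print(pack_decimal(123).hex().upper())    # 123C
    print(pack_decimal(-123).hex().upper())   # 123D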
Making Packed Decimal Human Readable
If you need to make a packed decimal number human readable, you can use the UNPK (unpack) processor instruction. This instruction transforms each byte, except for the rightmost byte, nibble by nibble, into the corresponding EBCDIC character digit, i.e.
0x0 --> '0' (= 0xF0)
0x1 --> '1' (= 0xF1)
...
0x9 --> '9' (= 0xF9)
The rightmost byte is handled differently, since it contains a digit in the left nibble and the sign in the right nibble. This byte is transformed by simply exchanging the nibbles. For decimal numbers with preferred sign values, this is:
positive values: 0xdC --> 0xCd
negative values: 0xdD --> 0xDd
where the lowercase d denotes the digit nibble value, i.e. 0x0, 0x1, ..., 0x9.
So, positive values lead to:
0xC0, 0xC1, ..., 0xC9
and negative values lead to
0xD0, 0xD1, ..., 0xD9.
The corresponding resulting EBCDIC characters are:
'{', 'A', 'B', ..., 'I' (positive values)
'}', 'J', 'K', ..., 'R' (negative values)
To make the numbers really human readable, programs then usually overlay the left nibble of this last character with 0xF to make it a real EBCDIC character digit. This is called zoned decimal format.
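A small Python sketch of the same nibble shuffling (this is not the real UNPK instruction, just an illustration assuming preferred sign codes):

    def unpack_packed_decimal(packed: bytes) -> bytes:
        """Unpack packed decimal into EBCDIC zoned-decimal bytes (sign left in the last zone)."""
        nibbles = []
        for b in packed:
            nibbles.append(b >> 4)
            nibbles.append(b & 0x0F)
        *digits, last_digit, sign = nibbles                 # rightmost nibble is the sign
        zoned = bytes(0xF0 | d for d in digits)             # zone 0xF + digit
        zoned += bytes([(sign << 4) | last_digit])          # nibbles of the last byte exchanged
        return zoned

    # 0x12 0x3C is +123; the result F1 F2 C3 is EBCDIC '12C'
    print(unpack_packed_decimal(bytes([0x12, 0x3C])).hex().upper())   # F1F2C3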
So far, only the preferred sign codes were used. If alternate sign codes (as noted above) were used, all sorts of additional characters might appear. For example, variations of the number zero with alternate sign codes would show (in EBCDIC):
positive zero: 0x0A --> 0xA0, which is 'µ'
positive zero: 0x0E --> 0xE0, which is '\'
positive zero: 0x0F --> 0xF0, which is '0'
negative zero: 0x0B --> 0xB0, which is '^'
Handling Improperly Unpacked Numbers
If the program doing the unpacking of packed decimal numbers does not handle the sign nibble correctly for human readability, you can (as sketched below):
In EBCDIC, overlay the left nibble of the rightmost character byte with 0xF to make sure it is a real EBCDIC character digit.
In ASCII, overlay the left nibble of the rightmost character byte with 0x3 to make sure it is a real ASCII character digit.
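A minimal Python sketch of that repair step (the helper name force_digit_zone is made up for illustration; it assumes the sign nibble is still sitting in the zone of the rightmost byte):

    def force_digit_zone(zoned: bytes, ebcdic: bool = True) -> bytes:
        """Overwrite the zone of the rightmost byte so it becomes a plain character digit."""
        zone = 0xF0 if ebcdic else 0x30                 # 0xF for EBCDIC, 0x3 for ASCII
        fixed_last = zone | (zoned[-1] & 0x0F)          # keep the digit, replace the zone
        return zoned[:-1] + bytes([fixed_last])

    print(force_digit_zone(bytes([0xF1, 0xF2, 0xC3])).hex().upper())   # F1F2F3, EBCDIC '123'
    print(force_digit_zone(b"12C", ebcdic=False).decode("ascii"))      # '123' in ASCII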

How can a 32-byte address represent more than 32 characters?

I have just started studying solidity and coding in general, and I tend to see things like this:
[image of a 32-byte hash value omitted]
I am confused as to how a "32 bytes hash" can include more than 32 characters (even after the "0x000"). I was under the impression that each byte can represent a character. I also often see references saying things like "32 bytes address (64 bytes hex address)". But how can a 64 byte hex address be represented if it is a 32 byte address - would you still need a byte per character? I know this is probably a stupid/noob question, and I'm probably missing something obvious, but I can't quite figure it out.
One byte covers the range 00000000 - 11111111 in binary, or 0x00 - 0xFF in hex. As you can see, one byte is represented in hex as a 2-character string. Therefore, a 32-byte value is written as a 64-character hex string.
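A quick Python check of that arithmetic (using a SHA-256 digest only because it happens to be 32 bytes long):

    import hashlib

    digest = hashlib.sha256(b"example").digest()   # 32 raw bytes
    print(len(digest))           # 32
    print("0x" + digest.hex())   # 0x followed by 64 hex characters
    print(len(digest.hex()))     # 64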
The 32-bit address points to the first byte of 32, 64, 1000 or 100 million sequential bytes. All the others follow and are stored at address + 1, + 2, + 3, and so on.

Quoted-Printable Encoding - Counting Bits

Let's say I want to encode a word in quoted-printable (with charset ISO 8859-1) and count bits afterwards. How do you count the encoded quoted-printable tag ("=" plus the hex digits) in bits?
Original: hätte -> 7+8+7+7+7 = 36 bits
Encoded: h=E4tte -> does "=E4" count as 3*7 bits or 1*7 bits?
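For reference, the encoded form can be reproduced with Python's quopri module; whatever counting convention you settle on, the "=E4" escape is transmitted as three literal ASCII characters:

    import quopri

    word = "hätte"
    encoded = quopri.encodestring(word.encode("latin-1"))   # ISO 8859-1 bytes: b'h\xe4tte'
    print(encoded)        # b'h=E4tte'
    print(len(encoded))   # 7 ASCII characters on the wire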

Convert "emailAdress=<email-address>" found in Subject field of x.509 SSL certificate to hexadecimal

I have the 'Subject' of an SSL X.509 certificate, given as
Subject: C=XX, ST=XX, L=XX, O=XX, OU=XX, emailAddress=admin#adobe.pw, CN=trustasia.asia
and I want to convert this to the binary stream found in the SSL certificate when it is sent on the wire. I know the definition of the Subject field is given in RFC 5280 in ASN.1 notation, and the DER encoding rules given in X.690 are used to convert this field to its binary representation. With these two documents, and with a little help from code (which gave hexadecimal representations of OIDs such as id-at-countryName:2.5.4.6:{0x55, 0x04, 0x06}), I was able to convert all the RDNs (RelativeDistinguishedNames) to their binary representation, but I am stuck with the emailAddress field.
I found its OID: 1.2.840.113549.1.9.1, but I don't know what its hexadecimal representation is.
Can you please guide me on how I can convert this to its binary representation?
I suspect that you are talking about OID encoding using ASN.1 Distinguished Encoding Rules (DER). I would suggest checking this article to get detailed information about OBJECT_IDENTIFIER encoding rules: OBJECT IDENTIFIER
Converting the OID string value to ASN.1 DER results in:
06 09 2A 86 48 86 F7 0D 01 09 01
where 0x06 is the OBJECT_IDENTIFIER tag identifier, 0x09 is the encoded OID value length in bytes, and the remaining bytes (2A 86 48 86 F7 0D 01 09 01) represent the OID value in binary form.
emailAddress is of type IA5String, so it would appear in the certificate in the same form as shown in the subject line: 'admin#adobe.pw'.
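A minimal Python sketch of those OBJECT_IDENTIFIER encoding rules (the helper name encode_oid is made up for illustration; for real certificate work an ASN.1 library such as pyasn1 is the safer choice):

    def encode_oid(oid: str) -> bytes:
        """DER-encode an OBJECT IDENTIFIER string such as '1.2.840.113549.1.9.1'."""
        arcs = [int(a) for a in oid.split(".")]
        body = bytearray([40 * arcs[0] + arcs[1]])        # first two arcs share one byte
        for arc in arcs[2:]:
            chunk = [arc & 0x7F]
            arc >>= 7
            while arc:
                chunk.append(0x80 | (arc & 0x7F))         # continuation bit on all but the last byte
                arc >>= 7
            body.extend(reversed(chunk))
        return bytes([0x06, len(body)]) + bytes(body)     # tag 0x06, length, then the value

    print(encode_oid("1.2.840.113549.1.9.1").hex(" ").upper())
    # 06 09 2A 86 48 86 F7 0D 01 09 01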

Hexadecimal numbers vs. hexadecimal encoding (with base64 as well)

Encoding with hexadecimal numbers seems to be different from using hexadecimals to represent numbers. For example, the hex number 0x40 to me should be equal to 64, or BA_{64}, but when I put it through this hex to base64 converter, I get the output QA==, which to me is equal to some number times 64. Why is this?
Also when I check the integer value of the hex string deadbeef I get 3735928559, but when I check it other places I get: 222 173 190 239. Why is this?
Addendum: So I guess it is because it is easier to break the number into bit chunks than treat it as a whole number when encoding? That is pretty confusing to me but I guess I get it.
You may wish to read this:
http://en.wikipedia.org/wiki/Base64
In summary, base64 specifies a particular encoding in which each character represents a 6-bit value, so the values assigned to letters are different from their ASCII codes.
For the second part, one source is treating the entire string as a 32-bit integer, and the other is dividing it into bytes and giving the value of each byte.
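Both points can be checked in a few lines of Python:

    import base64

    # 0x40 is the single byte with value 64; base64 encodes that byte, not the number 64
    print(base64.b64encode(bytes.fromhex("40")))      # b'QA=='

    raw = bytes.fromhex("deadbeef")
    print(int.from_bytes(raw, "big"))                 # 3735928559  (whole value as one integer)
    print(list(raw))                                  # [222, 173, 190, 239]  (value of each byte)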