What is the max possible size of a 32x32px .ico file?

I'm making a favicon.ico script, and I need to know the maximum possible number of bits.

It depends on the number of colours you are using.
For 8-bit (256 colours):
32 * 32 * 8 = 8192 bits
8192 / 8 = 1024 bytes
1024 bytes = 1 KB
For 32-bit (24-bit colour, about 16.7 million colours, plus an 8-bit alpha channel):
32 * 32 * 32 = 32768 bits
32768 / 8 = 4096 bytes
4096 bytes = 4 KB
See Wikipedia.
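As a rough sanity check, here is a minimal C sketch of that arithmetic. It counts raw pixel data only and ignores the ICO header, directory entry and any mask or compression, so a real file will be somewhat larger; pixel_data_bytes is just an illustrative name.

#include <stdio.h>

/* Raw pixel-data size in bytes for a square icon at a given bit depth. */
static unsigned pixel_data_bytes(unsigned side, unsigned bits_per_pixel)
{
    return side * side * bits_per_pixel / 8;
}

int main(void)
{
    printf("32x32 @  8 bpp: %u bytes\n", pixel_data_bytes(32, 8));  /* 1024 */
    printf("32x32 @ 32 bpp: %u bytes\n", pixel_data_bytes(32, 32)); /* 4096 */
    return 0;
}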

It maxes out at 32 bits per pixel: 24-bit RGB plus 8 bits of alpha transparency. So that would be 32 x 32 x 32, or 32768 bits.
So 4096 bytes (4 KB).

In theory, a single .ico file can contain up to 65,535 images (see the header description). That would mean the total number of pixels could be as large as 65535 * 32 * 32 = 67,107,840, which at 4 bytes per pixel comes to 268,431,360 bytes (roughly 256 MB).
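The same kind of back-of-the-envelope check in C for that theoretical multi-image maximum (again raw pixel data only, assuming all 65,535 entries are 32x32 at 4 bytes per pixel):

#include <stdio.h>

int main(void)
{
    unsigned long long images = 65535;                 /* max entries the ICO header field allows */
    unsigned long long total  = images * 32 * 32 * 4;  /* 4 bytes per pixel at 32 bpp */
    printf("%llu bytes\n", total);                     /* prints 268431360 */
    return 0;
}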

Why is BIP32's max entropy size 512 bits, but BIP39's only 256 bits?

From BIP 32 (https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki):
Generate a seed byte sequence S of a chosen length (between 128 and 512 bits; 256 bits is advised) from a (P)RNG.
So the max entropy can be 512 bits.
From BIP 39 (https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki):
The allowed size of ENT is 128-256 bits.
So the max entropy can only be 256 bits (24 words).
Why can't we use 48 words to create 512 bits of entropy under BIP39?

Size of buffer to hold base58 encoded data

When trying to understand how base58check works, I looked at the implementation referenced by Bitcoin. When calculating the size needed to hold a base58 encoded string, it uses the following formula:
// https://github.com/bitcoin/libbase58/blob/master/base58.c#L155
size = (binsz - zcount) * 138 / 100 + 1;
where binsz is the size of the input buffer to encode, and zcount is the number of leading zero bytes in the buffer. Where do 138 and 100 come from, and why?
tl;dr
It's a formula to approximate the output size during base58 <-> base256 conversion, i.e. the encoding/decoding loops where you're multiplying and mod'ing by 256 and 58.
Encoding output is ~138% of the input size (+1, rounded up):
n * log(256) / log(58) + 1
(approximated as n * 138 / 100 + 1)
Decoding output is ~73% of the input size (+1, rounded up):
n * log(58) / log(256) + 1
(approximated as n * 733 / 1000 + 1)
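To make that concrete, here is a small C sketch of the same estimate. It is not libbase58's actual code, just the approximation applied to an input length; the function names are made up, and the zcount term in the encoder is simply how I account for the leading zero bytes, each of which becomes a single '1' character.

#include <stddef.h>

/* Upper bound on the base58-encoded length of binsz input bytes,
   zcount of which are leading zero bytes. */
static size_t b58_encoded_size(size_t binsz, size_t zcount)
{
    /* log(256)/log(58) ~= 1.3657, over-approximated as 138/100 */
    return zcount + (binsz - zcount) * 138 / 100 + 1;
}

/* Upper bound on the decoded (base256) length of b58sz base58 characters. */
static size_t b58_decoded_size(size_t b58sz)
{
    /* log(58)/log(256) ~= 0.7322, over-approximated as 733/1000 */
    return b58sz * 733 / 1000 + 1;
}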

What is the minimum size of an address register for a computer with 5TB of memory?

There is this question that I'm having a bit of difficulty answering.
Here it is:
An n-bit register can hold 2^n distinct bit patterns. As such, it can only be used to address a memory whose number of addressable units (typically, bytes) is less than or equal to 2^n. In this question, register sizes need not be a power of two. K = 2^10.
a) What is the minimum size of an address register for a computer with 5 TB of memory?
b) What is the minimum size of an address register for a computer with 7 TB of memory?
c) What is the minimum size of an address register for a computer with 2.5 PB of memory?
From the conversion, I know that:
1KB = $2^{10}$ bytes
1MB = $2^{20}$ bytes
1GB = $2^{30}$ bytes
1TB = $2^{40}$ bytes
If I convert 5 TB into bytes, I get 5,497,558,138,880 bytes.
What would be the next step, though? I know that 1 byte = 8 bits.
This is how I would proceed:
1 TB = 2^40 bytes, so 5 TB = 5,497,558,138,880 bytes (call this number n).
The minimum size of the address register is log2(n) rounded up to a whole number of bits: log2(5,497,558,138,880) = 42.321928095, which rounds up to 43 bits.
The same logic applies to the other questions.
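If it helps to check the arithmetic, here is a small C sketch of that calculation; min_register_bits is just an illustrative name for "the smallest n such that 2^n covers the given number of addressable bytes".

#include <stdio.h>

/* Smallest n such that 2^n >= the number of addressable bytes. */
static unsigned min_register_bits(unsigned long long bytes)
{
    unsigned bits = 0;
    unsigned long long capacity = 1;
    while (capacity < bytes) {
        capacity <<= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    unsigned long long TB = 1ULL << 40, PB = 1ULL << 50;
    printf("a) 5 TB   -> %u bits\n", min_register_bits(5 * TB));     /* 43 */
    printf("b) 7 TB   -> %u bits\n", min_register_bits(7 * TB));     /* 43 */
    printf("c) 2.5 PB -> %u bits\n", min_register_bits(5 * PB / 2)); /* 52 */
    return 0;
}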
I suggest you divide by 8:
5,497,558,138,880 / 8 = 687,194,767,360
Using logarithms, 2^n = 687,194,767,360, therefore n = log2(687,194,767,360) = 39.321928095, which you would round up to 40.
The same steps can be used for parts b and c.

TLS/PSK key length

Is there a maximum length for the key? I am using GnuTLS's psktool to create keys, and I need to create a key of 128 bits, but the maximum value it lets me use as the key length is 64. Is this impossible?
GnuTLS's psktool takes the key size in bytes, not in bits, so the maximum length is 64 bytes = 512 bits. For a 128-bit key you would specify a key size of 16.

Bitwise operators: How do I clear the most significant bit?

I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators, but I am unsure of how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can
convert an integer into a special text encoding
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383].
2. Pack that value into two bytes such that the most significant bit of each is cleared.
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
    // get the input value
    int decValue = [_inputValue.text intValue];
    char* bytes = (char*)&decValue;
    NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
    u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
    shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
    NSLog(@"%hu", shortNumber);
    NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
    return returnString;
}
I'm using the shift bitwise operator to clear the MSB and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(@"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (eg. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (eg. 0x8000)
~(mask) gives you the bitwise NOT of the mask (eg. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
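For reference, a minimal C sketch of the two-groups-of-seven-bits approach described above (plain C rather than the asker's Objective-C; encode_value is a made-up name and input validation is omitted):

#include <stdio.h>

/* Encode a value in [-8192..8191] as 4 hex characters, 7 data bits per byte. */
static void encode_value(int raw, char out[5])
{
    unsigned v    = (unsigned)(raw + 8192);  /* translate to [0..16383] */
    unsigned high = (v >> 7) & 0x7F;         /* upper 7 bits; MSB of the byte stays 0 */
    unsigned low  = v & 0x7F;                /* lower 7 bits; MSB of the byte stays 0 */
    sprintf(out, "%02X%02X", high, low);     /* e.g. 8191 -> "7F7F" */
}

int main(void)
{
    char buf[5];
    encode_value(0, buf);
    printf("%s\n", buf);   /* 4000, matching the sample table */
    encode_value(8191, buf);
    printf("%s\n", buf);   /* 7F7F */
    return 0;
}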