I am trying to encrypt hex using AES128-ECB using the Big Endian protocol.
I know that ECB is not secure but it is something that I need to use to connect with a Bluetooth application.
I am building a React-Native application that will connect to a Bluetooth peripheral.
I am using the aes-js npm module.
My code so far is:
const key = Buffer.from("20572F52364B3F473050415811632D2B", "hex")
const text = '0x060x010x010x01'
const textBytes = aesjs.utils.utf8.toBytes(text);
console.log('textBytes: ', textBytes)
const aesEcb = new aesjs.ModeOfOperation.ecb(key);
console.log('aesEcb: ', aesEcb)
const encryptedBytes = aesEcb.encrypt(textBytes);
console.log('encryptedBytes: ', encryptedBytes)
const encryptedHex = aesjs.utils.hex.fromBytes(encryptedBytes);
console.log('encryptedHex: ', encryptedHex);
I don't think that this is maintaining big-endian.
I would love some help please.
I am trying to encrypt hex using AES128-ECB using the Big Endian protocol.
Big endian is a method of mapping numbers to bits/bytes. With big endian, the most significant part is to the left (low index), while with little endian it is to the right (high index). Bytes are generally seen as atomic and don't have an explicit order (the highest bit is usually shown to the left, but the bit index runs from high to low instead of from low to high, so in that sense they are little endian, just to make things interesting). Characters only have an endianness assigned to them if they are encoded as a multi-byte number.
But UTF-8 and AES have an explicit byte order that doesn't involve any multi-byte number encoding. As such, big or little endianness doesn't come into play. AES may internally operate on 32-bit values, but that doesn't matter as long as the output is well defined; AES operates as AES, and that's the end of it. It only becomes a problem if a low-level routine returns 32- or 16-bit words instead of bytes.
The utf16 codec in CryptoJS seems to use UTF-16BE (big endian), and AES won't change that order after encryption/decryption. Note that I have not read this specifically, but since there is also a utf16le codec, I don't think there are many other options.
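If the intent is to encrypt the raw bytes 06 01 01 01 rather than the literal text '0x060x010x010x01', then a hex codec (not the UTF-8 one) is what you want. A minimal sketch with aes-js, assuming the peripheral expects the payload zero-padded to one 16-byte ECB block (use whatever padding it actually requires):

const aesjs = require('aes-js');

// Key and plaintext decoded from hex into raw bytes; no endianness is involved.
const key = aesjs.utils.hex.toBytes("20572F52364B3F473050415811632D2B");
// Assumption: payload bytes 06 01 01 01, zero-padded to a full 16-byte block.
const textBytes = aesjs.utils.hex.toBytes("06010101000000000000000000000000");

const aesEcb = new aesjs.ModeOfOperation.ecb(key);
const encryptedBytes = aesEcb.encrypt(textBytes);
console.log(aesjs.utils.hex.fromBytes(encryptedBytes));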
Related
I use AES128 crypto in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in a sent packet) is increased by one for every packet sent. The last byte is used as the block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specified that the counter should be increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this at least was a de facto standard. This also seems to be the case in the Sun crypto implementation.
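For concreteness, the layout could be built like this (JavaScript used purely for illustration; the real clients are Java and Objective-C, and buildIv is just a made-up name):

// 16-byte IV: 11-byte nonce | 4-byte big-endian packet counter | 1 zero byte (block counter).
function buildIv(nonce, packetCounter) {           // nonce: Uint8Array of length 11
  const iv = new Uint8Array(16);
  iv.set(nonce, 0);                                             // bytes 0..10: nonce
  new DataView(iv.buffer).setUint32(11, packetCounter, false);  // bytes 11..14: counter, big-endian
  iv[15] = 0;                                                   // byte 15: block counter, starts at 0
  return iv;
}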
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decode every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decoding block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR/IV mode when decoding multiple blocks in a single go, or can I expect interoperability problems when using different crypto libs? Is CommonCrypto bugged in this regard, or is it just a question of implementing CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation SP 800-38A, Appendix B. Note that NIST only specifies how to use CTR mode with regard to security; it does not define one standard algorithm for the counter.
To answer your question directly, whatever you do you should expect the counter to be incremented by one each time. The counter should represent a 128 bit big endian integer according to the NIST specifications. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce, set the remaining bytes to zero, and then let the CTR implementation do the increments. In that case you simply use a 96-bit / 12-byte random value at the start, and there is no need for a packet counter.
You are, however, limited to 2^32 * 16 bytes of plaintext before the counter uses up all the available bits. It is implementation specific whether the counter wraps to zero or whether the nonce itself is included in the counter, so you may want to limit yourself to messages of 68,719,476,736 bytes = ~68 GB (yes, that's base 10; giga means 1,000,000,000). Two further caveats:
- because of the birthday problem, after about 2^48 messages (48 = 96 / 2) you should expect a collision of the nonce (which must be unique per message, not per block), so you should limit the number of messages;
- if some attacker tricks you into decrypting 2^32 blocks for the same nonce, you run out of counter.
In case this is still incompatible (test!), use only the initial 8 bytes as nonce. Unfortunately, that does mean that you need to limit the number of messages because of the birthday problem.
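A minimal sketch of the big-endian increment described above, treating the whole 16-byte block as one big-endian integer that is bumped once per block (illustration only, not taken from any particular library):

// counter is a Uint8Array(16); increment it in place as a 128-bit big-endian integer.
function incrementCounterBE(counter) {
  for (let i = counter.length - 1; i >= 0; i--) {
    counter[i] = (counter[i] + 1) & 0xff;
    if (counter[i] !== 0) break;   // stop unless this byte wrapped around (carry)
  }
}

Resetting the IV for each packet and decoding block by block, as you already do, sidesteps whatever increment convention a given library uses internally.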
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is bugged in that the CCCryptorReset method does not in fact change the IV as it should; it keeps using the pre-existing IV. The behaviour in 6.0.1 is different from 5.x.
This is potentially a security risk if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting. This would lead to all your data being encrypted with the same (nulled) IV, and multiple streams (that perhaps should have different IVs but use the same key) would leak data via a simple XOR of packets with corresponding counters.
I've been banging my head for the last couple of hours with what seemed to be a very easy task.
My app is communicating with a server over TCP/IP. The protocol requires that the first 4 bytes of each request be the length of the stream, in reverse order. For example, if the length is 13, I need to supply (decimal) {0,0,0,13}; if it's 300, I need to supply {0,0,1,44}. Then the actual data follows.
Apparently this is something very straightforward to do in Java, and also in VB (e.g. BitConverter.GetBytes(sendString.Length).Reverse().ToArray()). But in Objective-C I just couldn't make it work; I've tried all sorts of conversions between NSString/NSData/NSArray, with no luck.
Thanks in advance!
The server is asking for the data in big-endian order (most significant byte first). Big-endian is the standard network byte order for Internet protocols (including IP, TCP, UDP, DNS, and lots more). It happens that you're compiling for a little-endian platform, so you need to swap the bytes.
However, you should not rely on being on a little-endian platform. Instead, you should make your code independent of the local (host) byte order, using the Core Foundation byte-swapping functions.
Specifically, you should use CFSwapInt32HostToBig to convert your 4-byte int to big-endian order. On a little-endian platform, this rearranges the bytes. On a big-endian platform, this does nothing.
Similarly, you should use CFSwapInt32BigToHost to convert the 4-byte ints you receive from the server to your host byte order.
Alternatively, you can use the standard POSIX byte-swapping functions. The htonl function stands for host-to-network-long, and converts a 32-bit int from host order to network (big-endian) order. The ntohl function converts a 32-bit int from network to host order. (Back when these functions were created, some popular operating systems had 16-bit ints and 32-bit longs. Can you believe it?)
uint32_t a = 300; // or 13
char *aa = (char *)&a;          // view the 4-byte integer as raw bytes
Byte b[] = {0, 0, 0, 0};
// Assumes a little-endian host: copy the bytes in reverse to get big-endian (network) order.
memcpy(&b[0], &aa[3], 1);
memcpy(&b[1], &aa[2], 1);
memcpy(&b[2], &aa[1], 1);
memcpy(&b[3], &aa[0], 1);
As indicated in the accepted answer for the duplicate question, Foundation provides functions for byte swapping. In this case, since you're dealing with a long, you probably want NSSwapLong.
I need to be able to write signed bytes to a serial port using the SerialPort.Write() method, except that method only takes byte[] arrays of unsigned bytes. How would I write a signed byte to the serial port?
For what I'm working on, the particular command takes values from -1700 to 1700.
thanks
nightmares
The serial communication channel has no concept of signed or unsigned, only a concept of 1's and 0's on the wire. It is your operating system (and ultimately your CPU architecture) that assigns a numeric value to those 1's and 0's, on both the sending and receiving side.
The value range you state cannot be represented in a single byte (per my comment and your reply). You need to understand what bit pattern the receiving device expects for a given number (is the other device big endian or little endian?), and then you can send an appropriate sequence of bytes to represent the number you want to transmit.
If both devices have the same endianness, you can set up an array of short and then copy it to an array of byte like this:
short[] sdata = new short[] { 1, -1 };
byte[] bdata = new byte[sdata.Length * 2];
Buffer.BlockCopy(sdata, 0, bdata, 0, bdata.Length);
However, be sure to test a range of values, especially if you are dealing with embedded devices; the numeric encoding may not be exactly as on an Intel PC.
When I call getwork on my bitcoind server, I get the following:
./bitcoind getwork
{
"midstate" : "695d56ae173bbd0fd5f51d8f7753438b940b7cdd61eb62039036acd1af5e51e3",
"data" : "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75d51d501f4deec2711a1d932f00000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000",
"hash1" : "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000",
"target" : "00000000000000000000000000000000000000000000002f931d000000000000"
}
This protocol does not seem to be documented. How do I compute the hash from this data? I think that this data is in little endian, so the first step is to convert everything to big endian? Once that is done, I calculate the SHA-256 of the data. The data can be divided into two chunks of 64 bytes each. The hash of the first chunk is given by midstate and therefore does not have to be computed.
I must therefore hash chunk #2 with SHA-256, using the midstate as the initial hash values. Once that is done, I end up with a hash of chunk 2, which is 32 bytes. I calculate the hash of this chunk one more time to get the final hash.
Then, do I convert everything to little endian and submit the work?
What is hash1 used for?
The hash calculation is documented at Block hashing algorithm.
Start there for the relatively simple basics. The basic data structures are documented in Protocol specification - Bitcoin Wiki. Note that the protocol definition (and the definition of work) more or less assumes that SHA-256 hashes are 256-bit little-endian values, rather than big-endian as the standard implies.
Getwork is more complicated and runs into more serious endian/byte ordering confusion.
First note that the getwork API is optimized to speed up the initial steps of mining.
The midstate and hash1 values are for these performance optimizations and can be ignored. Just look at the "data".
And when a standard sha256 implementation is used, only the first 80 bytes (160 hex characters) of the "data" are hashed.
Unfortunately, the JSON data presented in the getwork data structure has different endian characteristics than what is needed for hashing in the block example above.
They all say to go to the source for the answer, but the C++ source can be big and confusing. A simple alternative is the poold.py code. There is discussion of it here: New mining pool for testing. You only need to look at the first few lines of the "checkwork" routine, and the "bufreverse" and "bytereverse" functions, to get the byte ordering right. In the end it is just a matter of doing a reversal of the bytes in each 32-bit segment of the data. Yes - very odd. But endian issues are tricky and can end up that way....
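If it helps, here is a rough Node.js sketch of that per-32-bit-word byte reversal followed by the double SHA-256 (this follows my reading of poold.py rather than any official specification; the hex string is the "data" value from the question):

const crypto = require('crypto');

// Reverse the bytes within each 32-bit word of a buffer (getwork's odd ordering).
function reverseWords(buf) {
  const out = Buffer.from(buf);
  for (let i = 0; i < out.length; i += 4) out.subarray(i, i + 4).reverse();
  return out;
}

const dataHex = "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75d51d501f4deec2711a1d932f00000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000";
const header = reverseWords(Buffer.from(dataHex.slice(0, 160), "hex"));  // first 80 bytes only

// Double SHA-256; the result is conventionally displayed byte-reversed.
const once = crypto.createHash('sha256').update(header).digest();
const twice = crypto.createHash('sha256').update(once).digest();
console.log(Buffer.from(twice).reverse().toString('hex'));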
Some other helpful information on the way "getwork" works can be found in discussions at:
Do I understand header hashing?
Stupid newbie question about the nonce
Note that finding the signal to noise in the original Bitcoin forum is getting very hard, and there is currently an Area51 proposal for a StackExchange site for Bitcoin and Crypto Currency in general. Come join us!
That sounds right. There is a JavaScript script that does calculate the hash, but I do not fully understand it, so I don't know; maybe you will understand it better if you look:
this.tryHash = function(midstate, half, data, hash1, target, nonce){
    data[3] = nonce;
    this.sha.reset();
    var h0 = this.sha.update(midstate, data).state;    // compute first hash
    for (var i = 0; i < 8; i++) hash1[i] = h0[i];      // place it in the h1 holder
    this.sha.reset();                                  // reset to initial state
    var h = this.sha.update(hash1).state;              // compute final hash
    if (h[7] == 0) {
        var ret = [];
        for (var i = 0; i < half.length; i++)
            ret.push(half[i]);
        for (var i = 0; i < data.length; i++)
            ret.push(data[i]);
        return ret;
    } else return null;
};
SOURCE: https://github.com/jwhitehorn/jsMiner/blob/4fcdd9042a69b309035dfe9c9ddf716119831a16/engine.js#L149-165
Frankly speaking, the Bitcoin block hashing algorithm is not officially specified by any source.
"The hash calculation is documented at Block hashing algorithm."
should read
The hash calculation is "described" at Block hashing algorithm (en.bitcoin.it/wiki/Block_hashing_algorithm).
By the way, the example code in PHP there comes with a bug (a typo), and the example code in Python generates errors when run under Python 3.3 on 32-bit Windows XP (missing support for string.decode).
I'm trying to encrypt some data using a public key derived from the exchange key pair made with the CALG_RSA_KEYX key type. I determined the block size was 512 bits using CryptGetKeyParam with KP_BLOCKLEN. It seems the maximum number of bytes I can feed CryptEncrypt is 53 (424 bits), for which I get an encrypted length of 64 back. How can I determine how many bytes I can feed into CryptEncrypt? If I feed in more than 53 bytes, the call fails.
RSA using the usual PKCS#1 v1.5 mode can encrypt a message that is at most k-11 bytes, where k is the length of the modulus in bytes. So a 512-bit key can encrypt up to 53 bytes and a 1024-bit key can encrypt up to 117 bytes.
RSA using OAEP can encrypt a message of up to k-2*hLen-2 bytes, where k is the modulus byte-length and hLen is the length of the output of the underlying hash function. So using SHA-1, a 512-bit key can encrypt up to 22 bytes and a 1024-bit key can encrypt up to 86 bytes.
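Purely as arithmetic on those formulas (no crypto library involved), a quick sketch:

// Maximum plaintext bytes per RSA encryption, per the formulas above.
function maxPkcs1v15(modulusBytes) { return modulusBytes - 11; }
function maxOaep(modulusBytes, hashLenBytes) { return modulusBytes - 2 * hashLenBytes - 2; }

console.log(maxPkcs1v15(64));   // 512-bit key              -> 53
console.log(maxPkcs1v15(128));  // 1024-bit key             -> 117
console.log(maxOaep(64, 20));   // 512-bit key with SHA-1   -> 22
console.log(maxOaep(128, 20));  // 1024-bit key with SHA-1  -> 86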
You should not normally use an RSA key to encrypt your message directly. Instead you should generate a random symmetric key (e.g. an AES key), encrypt your message with the symmetric key, encrypt the symmetric key with the RSA key, and transmit both encryptions to the recipient. This is usually called hybrid encryption.
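A rough sketch of that hybrid approach, shown with Node's crypto module purely for illustration (the PEM key format and AES-256-CBC are assumptions; in CryptoAPI the analogous flow would, I believe, go through CryptGenKey and CryptExportKey):

const crypto = require('crypto');

function hybridEncrypt(publicKeyPem, message) {
  const aesKey = crypto.randomBytes(32);                          // random symmetric key
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', aesKey, iv);
  const ciphertext = Buffer.concat([cipher.update(message), cipher.final()]);
  const wrappedKey = crypto.publicEncrypt(publicKeyPem, aesKey);  // RSA only wraps the small key
  return { wrappedKey, iv, ciphertext };                          // send all three to the recipient
}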
EDIT: Although this response is marked as accepted by the OP, please see Rasmus Faber's response instead, as it is a much better one. Posted 24 hours later, Rasmus's response corrects factual errors, in particular a mischaracterization of OAEP as a block cipher; OAEP is in fact a scheme used atop PKCS#1's encryption primitive for the purpose of key encryption. OAEP is more secure and further limits the maximum message length; that limit is also tied to the hash algorithm and the key length.
Another shortcoming of the following reply is its failure to stress that CALG_RSA_KEYX should be used exclusively for the key exchange, after which transmission of messages of any length can take place with whatever symmetric-key encryption algorithm is desired. The OP was aware of this; he was merely trying to "play" with the PK, and I did cover that much, albeit deep in the long remarks thread.
For the time being, I'm leaving this response here, for the record, and also as Mike D may want to refer to it; but do let me know in the remarks if you think it would be better to remove it altogether. I don't mind doing so for the sake of clarity!
-mjv- Sept 29, 2009
Original reply:
Have you checked the error code from GetLastError() following CryptEncrypt()'s false return?
I suspect it might be NTE_BAD_LEN, unless there's some other issue.
Maybe you can post the code that surrounds your call to CryptEncrypt().
Bingo, upon seeing the CryptEncrypt() call.
You do not seem to be using the RSAES w/ OAEP scheme, since you do not have the CRYPT_OAEP flag on. This OAEP scheme is a block cipher based upon RSAES. The latter encryption algorithm, however, can only encrypt messages slightly shorter than its key size (expressed in bytes). This is due to the minimum padding size defined in PKCS#1; such padding helps protect the algorithm from some key attacks (I think the ones based on known cleartext).
Therefore you have three options:
- use CRYPT_OAEP in the flags parameter to CryptEncrypt();
- extend the key size to, say, 1024 bits (if you have control over it; beware that longer keys will increase the time to encrypt/decrypt...);
- limit yourself to clear-text messages shorter than 54 bytes.
For documentation purposes, I'd like to make note of a few online resources:
- The RSA Labs web site, which is very useful for all things crypto.
- Wikipedia articles on the subject, which are also quite informative, easier to read, and yet quite factual (I think).
When in doubt, however, do consult a real crypto specialist, not someone like me :-)