X509_NAME - how to get the actual buffer?

I am adding OpenSSL to my application. I can send and receive data, but so far the traffic is not encrypted and the certificates are not checked.
I get the server's certificate with:
X509 *gCert = NULL;
X509_NAME *gCertName = NULL;
gCert = SSL_get_peer_certificate(sslConnection.ssl);
gCertName = X509_NAME_new();
gCertName = X509_get_subject_name(gCert);
certNameLen = strlen(gCertName);
memcpy(write_buffer, gCertName, certNameLen);
Then I write write_buffer to the SSL socket, but what I receive on the other end is just gibberish.
How do I use X509_NAME? strlen does not seem to work on it; I get a length of 4, which I think is the pointer size, not the buffer size...
And what encoding is X509_NAME? It does not seem to be UTF-8...
I know sending downstream works: if I just put 0xAA or 0xBB in the buffer, it is received correctly on the other end.

You are fairly close. However, X509_get_subject_name returns an X509_NAME structure which you need to convert into a human-readable string (or use as-is for detailed and safer comparisons, since parsing/comparing plain ASCII is a bit risky: a clever adversary could put things like 'C=' in the actual text).
The easiest of the lot is
X509 *cert = SSL_get_peer_certificate(con);
// if NULL - error out
char *buf = X509_NAME_oneline(X509_get_subject_name(cert), 0, 0);
// if NULL - release memory and error out
printf("Client\t: %s\n", buf);
OPENSSL_free(buf);
X509_free(cert);
which nets you a one-line approximation as a C \0-terminated string. Note that the _oneline functions are deprecated - see the OpenSSL man pages for more elaborate ways to get a text rendition which is UTF-8 and multi-line safe. But the above is fine for simple things like pure-ASCII western-charset domain names without alternatives/variants.
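If you want the non-deprecated, UTF-8 and multi-line safe route, here is a hedged sketch using X509_NAME_print_ex into a memory BIO (the function name and flag choice here are illustrative, not the only option):

#include <stdio.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>

/* Sketch: print the peer's subject name, escaped per RFC 2253. */
void print_subject(SSL *ssl)
{
    X509 *cert = SSL_get_peer_certificate(ssl);
    if (cert == NULL)
        return;                               /* no peer certificate */

    BIO *bio = BIO_new(BIO_s_mem());
    if (bio == NULL) {
        X509_free(cert);
        return;
    }
    /* X509_get_subject_name returns an internal pointer - do not free it */
    X509_NAME_print_ex(bio, X509_get_subject_name(cert), 0, XN_FLAG_RFC2253);

    char *text = NULL;
    long len = BIO_get_mem_data(bio, &text);  /* text points into the BIO */
    printf("Subject: %.*s\n", (int)len, text);

    BIO_free(bio);
    X509_free(cert);
}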

Related

Ergo - Unable to Parse ReducedTransaction, getting Not enough bytes in buffer

If we have reducedTxBytes in String form, how can we sign it via appkit? I found that we can use
ctx.parseReducedTransaction
, however I'm getting this error:
requirement failed: Not enough bytes in the buffer
I'm doing:
{
  (Address.create(tx._1),
   client.getContext.parseReducedTransaction(tx._2.getBytes))
}
Where tx = (String, String) representing (WalletAddress, ReducedTxBytes)
For a sample tx like this:
3QIB_1JG5qTPntBexsiCikbQ_VE1uhtoBG5TyWttASKOkosAAAADg2FcoBCWIDoj4kbTFy2D_mMHmGvdRZRldiDJ0oWk0Dd85dhG5-wa8gfLDadilJjVz2_6zXtNKNCNzZXYicp3OvZjSvkKhjvNecY0PlaLGyzdPOvyMDYaEu87_luDcOdgA4CJegAIzQPULJZ0Jd77AX3zyVjLnTHR3SrYxe50EA_yE0yvp85wP62ZNAABBRTAhD0QBQQABAAONhACBKALCM0Ceb5mfvncu6xVoGKVzocLBwKb_NstzijZWfKBWxb4F5jqAtGSo5qMx6cBcwBzARABAgQC0ZaDAwGTo4zHsqVzAAABk8KypXMBAHRzAnMDgwEIze6sk7GlcwStmTQAAMC71dUEAAjNA9QslnQl3vsBffPJWMudMdHdKtjF7nQQD_ITTK-nznA_rZk0AwABAb_ckOgBAgEAzQPULJZ0Jd77AX3zyVjLnTHR3SrYxe50EA_yE0yvp85wP51P8Gw=
While trying to debug I found that it failed due to decoding: as it is a string that was encoded from a byte array, it has to be decoded the same way.
Here is the way it was encoded:
reducedTx = Base64.getUrlEncoder.encodeToString(reducedTx.toBytes)
Where reducedTx is a ReducedTransaction
Answer below
The problem happens due to a failure to decode it to the right bytes. Because it is encoded via Base64.getUrlEncoder, it has to be decoded the same way. Therefore the line below works:
val result = (Address.create(tx._1),
  client.getContext.parseReducedTransaction(
    Base64.getUrlDecoder.decode(tx._2)))
Note that we're using Base64.getUrlDecoder.decode rather than getBytes.

PyCrypto AES encryption for Server-Client comm

I'm developing a simple secure data exchange between Server-Client and having some problems at the time of implementing AES.
I've already implemented the Shared Key exchange (with public key crypto) and it works fine. The idea in my head was (pseudocode):
SERVER
ciphertext = AES.encrypt(sharedKey,data)
send(ciphertext)
CLIENT
ciphertext = receive()
plaintext = AES.decrypt(sharedKey,ciphertext)
And voilà. When I tried to implement that, I found that there was an IV. I first tried setting it to all zeros, like this:
self.cipher = AES.new(self.Kshared, AES.MODE_CFB, '0000000000000000')
while( there is data to send ):
    ciphertext = self.cipher.encrypt(data)
    self.sendData(ciphertext)
Then, in the Client:
cipher = AES.new(Ksecreta, AES.MODE_CFB, '0000000000000000')
while( there is data to receive ):
    plaintext = cipher.decrypt('0000000000000000'+data)[16:]
This works fine for the FIRST message, but not for the rest. I assume my problem might have something to do with the IV, but I have no idea. Plus, the first implementation I found used a salt to generate another key and also a random IV, but the problem is that the client has no idea which salt/IV the server is using. I guess you could send those via public-key crypto, but first I want a simple working AES crypto.
Thanks.
For your decryption code, there's no need to prepend the ciphertext with the IV. Have you tried just plaintext = cipher.decrypt(data)?
It's safe to transmit the IV in clear text. So you can just generate it randomly and send it along with the ciphertext. Something like self.sendData(iv + ciphertext) on the sending side, and later iv = data[:16] and ciphertext = data[16:] on the receiving side.
Another common thing to consider is encoding - some transport formats don't play well with sending raw bytes (which can include NULL characters). It's common to encode the ciphertext into base64 for transport. If you need that, look into base64.b64encode and base64.b64decode.
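If it helps to see the IV-in-front pattern end to end, here is a hedged sketch in C using OpenSSL's EVP API (names are illustrative; the same framing applies to the PyCrypto code above):

#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Sketch: AES-128-CFB encrypt, shipping a fresh random IV in front of
   the ciphertext so the receiver can split it off again. */
int encrypt_for_transport(const unsigned char key[16],
                          const unsigned char *plain, int plen,
                          unsigned char *out)  /* must hold 16 + plen bytes */
{
    unsigned char iv[16];
    if (RAND_bytes(iv, sizeof iv) != 1)
        return -1;
    memcpy(out, iv, sizeof iv);                /* the IV travels in the clear */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int outlen = 0;
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cfb(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out + 16, &outlen, plain, plen);
    EVP_CIPHER_CTX_free(ctx);
    return 16 + outlen;                        /* total bytes to send */
}

The receiver does the mirror image: take the first 16 bytes as the IV, initialise the cipher with it, and decrypt the rest - no prepending tricks needed.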

Xcode Openssl RSA decryption function needed

I am reading an encrypted string from an application in Xcode and I have to write a function that uses RSA decryption to decode and display the message.
I am completely lost on where to begin with this.
I have OpenSSL compiled in Xcode and I am using the openssl/rsa.h file.
I am trying to use the function:
RSA_private_decrypt(int flen, const unsigned char *from, unsigned char *to, RSA *rsa, int padding);
But then I read somewhere on the OpenSSL site that the function just returns a number and not the actual string. I also have no idea what parameters to pass to it.
The only reference I have found is the openssl/rsa.h file and looking at the functions it contains.
I've tried doing some research for the past couple of hours but I have not found any answers.
I was wondering if there is a simple function that I can pass my encrypted string and my private key (using a file or hardcoded) and it can return the decrypted string?
If not is there a guide on how to use Openssl with Objective C programming?
Please let me know if you need more information on the issue.
Thank you in advance.
You may want to look at Apple's example which uses security transforms (this avoids openssl) in their Security Overview.
With a bit of luck you can do things with Apple transforms and go with that programme.
If not - or if for some reason you really want to use openssl - then the openssl source contains the example file openssl-0.9.8t/apps/rsa.c which pretty much allows for selective cut-and-paste to make things work.
Running man RSA_private_decrypt from the command line will show you the manual page (you can also get to the man pages from within Xcode). Or see http://www.openssl.org/docs/crypto/RSA_public_encrypt.html.
Example use for the above:
unsigned char in[] = { 1, 2, ... };  // byte array to decrypt
// size of that byte array
int inlen = sizeof(in);
// output buffer size depends on the key size
unsigned char *out = malloc(RSA_size(rsa));
int e = RSA_private_decrypt(inlen, in, out, rsa, RSA_PKCS1_PADDING);
where padding is one of the values from the man-page.
The value of rsa is a bit more complex to initialise, as this is where you set up your keys and whatnot. Check the above rsa.c file for examples of the various ways of filling it - it normally boils down to something like:
EVP_PKEY *pkey = load_key( ... , password,... );
rsa = EVP_PKEY_get1_RSA(pkey);
where load_key is borrowed from the app examples of openssl.
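If you would rather not pull in the whole load_key machinery, a minimal hedged sketch for the common case of an unencrypted RSA key in a PEM file (the path handling is an assumption for illustration):

#include <stdio.h>
#include <openssl/pem.h>
#include <openssl/rsa.h>

/* Sketch: load an RSA private key from a PEM file. */
RSA *load_rsa_key(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;
    /* NULL callback/userdata: OpenSSL prompts on stdin if the key is encrypted */
    RSA *rsa = PEM_read_RSAPrivateKey(fp, NULL, NULL, NULL);
    fclose(fp);
    return rsa;   /* NULL on parse failure; release with RSA_free() */
}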

first 8 bytes of CCCryptor 3DES decryption are always damaged?

Recently I have been implementing a crypto algorithm which uses 3DES. However, I found that the first 8 bytes of each 4096-byte data block are always damaged, even though it is certain that the data can be decrypted correctly in Java. Following is my code:
+ (void) DecryptBy3DES:(NSInputStream*)strmSrc Output:(NSOutputStream*)strmDest CryptoRef:(CCCryptorRef)tdesCrypto
{
    size_t dataOutMoved;
    uint8_t inputBuf[BlockSize];
    uint8_t outputBuf[BlockSize];
    CCCryptorStatus cryptStatus;
    int iBytesRead = 0;

    while ( (iBytesRead = [strmSrc read:inputBuf maxLength:BlockSize]) > 0 )
    {
        // decrypt one buffer's worth of data
        cryptStatus = CCCryptorUpdate(tdesCrypto, inputBuf, iBytesRead, outputBuf, BlockSize, &dataOutMoved);
        assert(cryptStatus == kCCSuccess);
        [strmDest write:outputBuf maxLength:dataOutMoved];
    }
    CCCryptorReset(tdesCrypto, NULL);
}
where BlockSize is 4096.
I reused the CCCryptorRef tdesCrypto to decrypt several blocks. The first block decrypted correctly, but the following blocks all had damaged bytes at the beginning. I also tried resetting the CCCryptorRef, which seemed to be in vain.
I am really confused. Anyone has the same problem?
Forget my previous answer, I deleted it. The reason that you get the "wrong bytes" in the buffer is that they are the last 8 bytes of plain text of the buffer you tried to decrypt before.
You must call CCCryptorFinal() right after the last call to CCCryptorUpdate(). This will remove the padding bytes before writing the last few bytes of plain text. Because the cipher internally does not know whether the last block of the last buffer contains padding bytes, it cannot write that data to the output buffer just yet.
Please do not destroy or reset the CCCryptor within your while loop. Simply add the call to CCCryptorFinal() right after it, and don't forget to write the resulting output to the stream as well. You may reset the CCCryptor after that.
I'm presuming (guessing) DESede with CBC mode and PKCS#5 padding here. See Wikipedia to see what I am talking about.
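For concreteness, a hedged sketch of what the tail of the question's routine could look like with that change (same buffer names as above; this assumes the cryptor really was created with padding enabled):

// after the while loop that drains strmSrc with CCCryptorUpdate():
size_t finalMoved = 0;
// drain the cryptor; this emits the last block with the padding stripped
CCCryptorStatus status = CCCryptorFinal(tdesCrypto, outputBuf, sizeof(outputBuf), &finalMoved);
assert(status == kCCSuccess);
if (finalMoved > 0)
    [strmDest write:outputBuf maxLength:finalMoved];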
Here is my CryptoRef:
CCCryptorCreateWithMode(kCCEncrypt, kCCModeCBC, kCCAlgorithm3DES, ccNoPadding, [abIV bytes], [abKey bytes], [abKey length], nil, 0, 0, kCCModeOptionCTR_BE, &cryptRef);
Since I use CBC mode and ccNoPadding, there is no need to call CCCryptorFinal(). Instead, when I finish one operation (i.e. finish encrypting/decrypting one file, etc.), I should call CCCryptorReset() to reset the CryptorRef's IV to its initial state before the next operation. Otherwise the first block of the next batch of data will be defective.
Thanks for the comments and sorry for leaving this issue behind. I hope this helps people who encounter the same problem.
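In code, the reset described above is presumably just this, with abIV being the same NSData used at creation time (passing NULL instead appears to fall back to an all-zero IV):

// re-arm the cryptor with the ORIGINAL IV before the next file
CCCryptorStatus st = CCCryptorReset(cryptRef, [abIV bytes]);
assert(st == kCCSuccess);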

(bitcoin) Calculate hash from getwork function - how to do it?

when I call getwork on my bitcoind server, I get the following:
./bitcoind getwork
{
    "midstate" : "695d56ae173bbd0fd5f51d8f7753438b940b7cdd61eb62039036acd1af5e51e3",
    "data" : "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75d51d501f4deec2711a1d932f00000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000",
    "hash1" : "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000",
    "target" : "00000000000000000000000000000000000000000000002f931d000000000000"
}
This protocol does not seem to be documented. How do I compute the hash from this data? I think that this data is in little-endian, so the first step is to convert everything to big-endian? Once that is done, I calculate the SHA-256 of the data. The data can be divided into two chunks of 64 bytes each. The hash of the first chunk is given by midstate and therefore does not have to be computed.
I must therefore hash chunk #2 with SHA-256, using the midstate as the initial hash values. Once that is done, I end up with a hash of chunk 2, which is 32 bytes. I calculate the hash of this chunk one more time to get a final hash.
Then, do I convert everything to little-endian and submit the work?
Then, do I convert everything to little endian and submit the work?
What is hash1 used for?
The hash calculation is documented at Block hashing algorithm.
Start there for the relatively simple basics. The basic data structures are documented in Protocol specification - Bitcoin Wiki. Note that the protocol definition (and the definition of work) more or less assumes that SHA-256 hashes are 256-bit little-endian values, rather than big-endian as the standard implies.
Getwork is more complicated and runs into more serious endian/byte-ordering confusion.
First note that the getwork API is optimized to speed up the initial steps of mining.
The midstate and hash1 values are for these performance optimizations and can be ignored. Just look at the "data".
And when a standard sha256 implementation is used, only the first 80 bytes (160 hex characters) of the "data" are hashed.
Unfortunately, the JSON data presented in the getwork data structure has different endian characteristics than what is needed for hashing in the block example above.
They all say to go to the source for the answer, but the C++ source can be big and confusing. A simple alternative is the poold.py code. There is discussion of it here: New mining pool for testing. You only need to look at the first few lines of the "checkwork" routine, and the "bufreverse" and "bytereverse" functions, to get the byte ordering right. In the end it is just a matter of doing a reversal of the bytes in each 32-bit segment of the data. Yes - very odd. But endian issues are tricky and can end up that way....
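As a hedged sketch of that recipe in C (using OpenSSL's SHA256; variable names are illustrative, and hex-decoding the "data" field is assumed to have happened already):

#include <openssl/sha.h>

/* Swap the byte order of every 32-bit word in the buffer. */
static void swap_words(unsigned char *p, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i += 4) {
        unsigned char t;
        t = p[i];     p[i]     = p[i + 3]; p[i + 3] = t;
        t = p[i + 1]; p[i + 1] = p[i + 2]; p[i + 2] = t;
    }
}

/* header: the first 80 bytes of the hex-decoded "data" field.
   out: the 32-byte double SHA-256, compared against the target
   as a little-endian 256-bit value. */
void getwork_hash(unsigned char header[80], unsigned char out[32])
{
    unsigned char first[32];
    swap_words(header, 80);      /* undo the getwork word ordering */
    SHA256(header, 80, first);   /* first pass */
    SHA256(first, 32, out);      /* second pass: hash of the hash */
}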
Some other helpful information on the way "getwork" works can be found in discussions at:
Do I understand header hashing?
Stupid newbie question about the nonce
Note that finding the signal to noise in the original Bitcoin forum is getting very hard, and there is currently an Area51 proposal for a StackExchange site for Bitcoin and Crypto Currency in general. Come join us!
That sounds right. There is a script in JavaScript that does calculate the hash, but I do not fully understand it, so I don't know; maybe you will understand it better if you look.
this.tryHash = function(midstate, half, data, hash1, target, nonce){
    data[3] = nonce;
    this.sha.reset();
    var h0 = this.sha.update(midstate, data).state;  // compute first hash
    for (var i = 0; i < 8; i++) hash1[i] = h0[i];    // place it in the h1 holder
    this.sha.reset();                                // reset to initial state
    var h = this.sha.update(hash1).state;            // compute final hash
    if (h[7] == 0) {
        var ret = [];
        for (var i = 0; i < half.length; i++)
            ret.push(half[i]);
        for (var i = 0; i < data.length; i++)
            ret.push(data[i]);
        return ret;
    } else return null;
};
SOURCE: https://github.com/jwhitehorn/jsMiner/blob/4fcdd9042a69b309035dfe9c9ddf716119831a16/engine.js#L149-165
Frankly speaking, the Bitcoin block hashing algorithm is not officially described by any source.
"
The hash calculation is documented at Block hashing algorithm.
"
should read
The hash calculation is "described" at Block hashing algorithm:
en.bitcoin.it/wiki/Block_hashing_algorithm
By the way, the example code in PHP comes with a bug (a typo), and the example code in Python generates errors when run under Python 3.3 for Windows XP 32 (missing support for string.decode).