first 8 bytes of CCCryptor 3DES decryption are always damaged? - objective-c

Recently I have been implementing a crypto algorithm which uses 3DES. However, I found that the first 8 bytes of each 4096-byte data block are always damaged, even though the same data can be decrypted correctly in Java. The following is my code:
+ (void) DecryptBy3DES:(NSInputStream*)strmSrc Output:(NSOutputStream*)strmDest CryptoRef:(CCCryptorRef)tdesCrypto
{
    size_t dataOutMoved;
    uint8_t inputBuf[BlockSize];
    uint8_t outputBuf[BlockSize];
    CCCryptorStatus cryptStatus;
    int iBytesRead = 0;
    int iBuffUsed = 0;

    while ( (iBytesRead = [strmSrc read:inputBuf maxLength:BlockSize]) > 0 )
    {
        cryptStatus = CCCryptorUpdate(tdesCrypto, &inputBuf, iBytesRead, &outputBuf, BlockSize, &dataOutMoved);
        assert(cryptStatus == noErr);
        [strmDest write:outputBuf maxLength:dataOutMoved];
    }
    CCCryptorReset(tdesCrypto, nil);
}
where BlockSize is 4096.
I reuse the CCCryptorRef tdesCrypto to decrypt several blocks. The first block decrypts correctly, but the following blocks all have damaged bytes at the beginning. I also tried resetting the CCCryptorRef, which seems to be in vain.
I am really confused. Has anyone had the same problem?

Forget my previous answer, I deleted it. The reason you get the "wrong bytes" in the buffer is that they are the last 8 bytes of plaintext of the buffer you tried to decrypt before.
You must call CCCryptorFinal() right after the last call to CCCryptorUpdate(). This will remove the padding bytes before writing the last few bytes of plaintext. Because the cipher internally does not know whether the last block of the current buffer contains padding bytes, it cannot write that data to the output buffer just yet.
Please do not destroy or reset the CCCryptor within your while loop. Simply add the call to CCCryptorFinal() right after the loop, and don't forget to write the resulting output to the stream as well. You may reset the CCCryptor after that.
I'm presuming (guessing) DESede with CBC mode and PKCS#5 padding here. See Wikipedia to see what I am talking about.
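A minimal sketch of that update-then-final pattern at the CommonCrypto C level, assuming a cryptor created with padding enabled; the names inBuf, inLen, outBuf and cryptor are mine, and the stream handling from the question is reduced to plain buffers:
#include <CommonCrypto/CommonCryptor.h>

// Room for one extra block: CCCryptorUpdate() may hold back a partial block,
// and CCCryptorFinal() can emit it together with the unpadded remainder.
uint8_t outBuf[4096 + kCCBlockSize3DES];
size_t moved = 0;

// For every chunk read from the input stream:
CCCryptorStatus status = CCCryptorUpdate(cryptor, inBuf, inLen,
                                         outBuf, sizeof(outBuf), &moved);
// ... write `moved` bytes of outBuf to the output stream ...

// After the very last chunk only, flush the held-back block and strip the padding:
status = CCCryptorFinal(cryptor, outBuf, sizeof(outBuf), &moved);
// ... write the final `moved` bytes, then CCCryptorRelease() or CCCryptorReset() the cryptor.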

Here is how I create my CCCryptorRef:
CCCryptorCreateWithMode(kCCEncrypt, kCCModeCBC, kCCAlgorithm3DES, ccNoPadding,
                        [abIV bytes], [abKey bytes], [abKey length],
                        nil, 0, 0, kCCModeOptionCTR_BE, &cryptRef);
Since I use CBC mode and ccNoPadding, there is no need to call CCCryptorFinal(). Instead, when I finish one operation (i.e. finish encrypting/decrypting one file, etc.), I should call CCCryptorReset() to reset the cryptor's IV to its initial state before the next operation; otherwise the first block of the next operation will be corrupted.
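For what it's worth, the reset between operations is a single call; a sketch, assuming ivBytes points at the same 8-byte IV that was passed to CCCryptorCreateWithMode():
// With CBC + ccNoPadding there is no CCCryptorFinal() step; instead, rewind the
// cryptor's IV to its creation-time value before starting the next file.
CCCryptorStatus status = CCCryptorReset(cryptRef, ivBytes);
// Expect kCCSuccess before beginning the next encrypt/decrypt pass.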
Thanks for the comments and sorry for leaving this issue behind. I hope this helps anyone who encounters the same problem.

Related

Audio through CAN FD into headphones

I am trying to record audio using a 12-bit resolution ADC, take the sample buffer and send it through CAN FD to another device, which takes the samples of this audio, creates a .wav file and plays it. The problem is that I can see the data of the microphone being sent through CAN FD to the other device, but I am not able to transform this data into a .wav file properly and hear what I say through the microphone. I only hear beeps.
I'm creating a new .wav file every 4 CAN FD messages in order to get some kind of real-time communication and decrease the delay, but I'm not sure whether this is possible or whether I'm approaching it the right way.
In this thread I take the message sent over CAN FD and concatenate it into a buffer in order to put it into a .wav file. I have tried bigger buffers but it doesn't change the outcome.
How can I take the data from the CAN FD bus and hear it?
Clarification: I know using CAN FD to transmit audio isn't the proper way to do this, but it is for a master's project.
struct canfd_frame frame;
CAN_MSG msg;
int trama_can[72];
int nbytes;

// Busy-wait until status_libreria becomes non-zero.
while (status_libreria == 0)
    ;

unsigned char buffer[256];
// FILE *fPtr;
int i = 0, x = 0;
// fPtr = fopen("Test.txt", "w");

while (1) {
    do {
        nbytes = read(s, &frame, sizeof(struct canfd_frame));
    } while (nbytes == 0);

    msg.id.ext = frame.can_id;
    msg.dlc = frame.len;
    if (msg.dlc > 8)
        msg.dlc = 8; // Protection until AC3LIB is adapted to CAN FD
    Numas_memcpy(&(msg.data.bdata), &(frame.data), msg.dlc);
    can_frame_2_ac3lib(&msg, BUS_VERTICAL);

    for (x = 0; x < 64; x++) buffer[i * 64 + x] = frame.data[x];
    printf("%d \r\n", frame.data[x]);   // note: x == 64 here, one past the last data byte
    printf("i:%d \r\n", i);

    // Copy the data into the .wav file and play it at the same time.
    if (i == 3) {
        printf("Datos IN\r\n");
        write_wav("prueba.wav", 256, (short int *)buffer, 16000);
        // fwrite(buffer, 1, sizeof(buffer), fPtr);
        // fclose(fPtr);
        system("aplay prueba.wav -f cd");
        i = 0;
        system("rm prueba.wav");
    }
    i++;
}
First 32 bytes of the audio file being recorded
In the picture, as you can see, the data is being recorded. Moreover, this data is the same data as in the ADC, but when I play it, I only hear noise.
Simplify the problem first. Make sure you can transmit known data from one end to the other at low rates first. I'm sure the suggestions below will sound far too trivial, but until you are absolutely confident you understand it all, I predict you will have many struggles. A minimal sketch of the counter test follows the list.
Slowly - one frame per second, or even slower.
Learn to send one 0x55 byte from one end to the other and verify at the receiver.
Learn to send a few 0x55 in one frame and verify.
Learn to send 0x12345678 - verify it ends up with the bytes in the right order at the other end
Learn to send a counter. Check it at the receiver, make sure you do not drop any data.
Now do it all again but 10x faster.
Continue until you can send a counter at 10x the rate you need to for the audio without dropping any frames at all, for minutes and then hours.
Stress the rest of the system to make sure it still works under stress.
Only now can you start to learn about sending audio.
Trust me, you will learn a lot!
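To make the counter step concrete, here is a minimal sketch of the sending side, assuming Linux SocketCAN with CAN FD enabled on an interface named can0; the ID 0x123 and all names here are arbitrary, and error checking is omitted:
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    int enable_canfd = 1;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &enable_canfd, sizeof(enable_canfd));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = {0};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct canfd_frame frame = {0};
    frame.can_id = 0x123;
    frame.len = 8;                              // one 64-bit counter per frame
    for (uint64_t counter = 0; ; counter++) {
        memcpy(frame.data, &counter, sizeof(counter));
        write(s, &frame, sizeof(frame));        // CAN FD frames use the full struct size
        sleep(1);                               // start slow: one frame per second
    }
}
On the receiving side, read() into the same struct canfd_frame and check that each counter is exactly one greater than the previous one; only once that holds at well above the audio data rate is it worth going back to the microphone samples.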

How to read byte data from a StorageFile in cppwinrt?

A 2012 answer at StackOverflow (“How do I read a binary file in a Windows Store app”) suggests this method of reading byte data from a StorageFile in a Windows Store app:
IBuffer buffer = await FileIO.ReadBufferAsync(theStorageFile);
byte[] bytes = buffer.ToArray();
That looks simple enough. As I am working in cppwinrt I have translated that to the following, within the same IAsyncAction that produced a vector of StorageFiles. First I obtain a StorageFile from the VectorView using theFilesVector.GetAt(index);
//Then this line compiles without error:
IBuffer buffer = co_await FileIO::ReadBufferAsync(theStorageFile);
//But I can’t find a way to make the buffer call work.
byte[] bytes = buffer.ToArray();
“byte[]” can’t work, to begin with, so I change that to byte*, but then
the error is “class ‘winrt::Windows::Storage::Streams::IBuffer’ has no member ‘ToArray’”
And indeed Intellisense lists no such member for IBuffer. Yet IBuffer was specified as the return type for ReadBufferAsync. It appears the above sample code cannot function as it stands.
In the documentation for FileIO I find it recommended to use DataReader to read from the buffer, which in cppwinrt should look like
DataReader dataReader = DataReader::FromBuffer(buffer);
That compiles. It should then be possible to read bytes with the following DataReader method, which is fortunately supplied in the UWP docs in cppwinrt form:
void ReadBytes(Byte[] value) const;
However, that does not compile because the type Byte is not recognized in cppwinrt. If I create a byte array instead:
byte* fileBytes = new byte(buffer.Length());
that is not accepted. The error is
‘No suitable constructor exists to convert from “byte*” to “winrt::array_view<uint8_t>”’
uint8_t is of course a byte, so let’s try
uint8_t fileBytes = new uint8_t(buffer.Length());
That is wrong - clearly we really need to create a winrt::array_view. Yet a 2015 Reddit post says that array_view “died” and I’m not sure how to declare one, or if it will help. That original one-line method for reading bytes from a buffer is looking so beautiful in retrospect. This is a long post, but can anyone suggest the best current method for simply reading raw bytes from a StorageFile reference in cppwinrt? It would be so fine if there were simply GetFileBytes() and GetFileBytesAsync() methods on StorageFile.
---Update: here's a step forward. I found a comment from Kenny Kerr last year explaining that array_view should not be declared directly, but that std::vector or std::array can be used instead. And that is accepted as an argument for the ReadBytes method of DataReader:
std::vector<unsigned char> fileBytes;
dataReader.ReadBytes(fileBytes);
Only trouble now is that the std::vector is receiving no bytes, even though the size of the referenced file is correctly returned in buffer.Length() as 167,513 bytes. That seems to suggest the buffer is good, so I'm not sure why the ReadBytes method applied to that buffer would produce no data.
Update #2: Kenny suggests reserving space in the vector, which is something I had tried, this way:
m_file_bytes.reserve(buffer.Length());
But it didn't make a difference. Here is a sample of the code as it now stands, using DataReader.
buffer = co_await FileIO::ReadBufferAsync(nextFile);
dataReader = DataReader::FromBuffer(buffer);
//The following line may be needed, but crashes
//co_await dataReader.LoadAsync(buffer.Length());
if (buffer.Length())
{
m_file_bytes.reserve(buffer.Length());
dataReader.ReadBytes(m_file_bytes);
}
The crash, btw, is
throw hresult_error(result, hresult_error::from_abi);
Is it confirmed, then, that the original 2012 solution quoted above cannot work in today's world? But of course there must be some way to read bytes from a file, so I'm just missing something that may be obvious to another.
Final (I think) update: Kenny's suggestion that the vector needs a size has hit the mark. If the vector is first prepared with m_file_bytes.assign(buffer.Length(),0) then it does get filled with file data. Now my only worry is that I don't really understand the way IAsyncAction is working and maybe could have trouble looping this asynchronously, but we'll see.
The array_view bridges the gap between Windows APIs and C++ array types. In this example, the ReadBytes method expects the caller to provide some array that it can copy bytes into. The array_view forwards a pointer to the caller's array as well as its size. In this case, you're passing an empty vector. Try resizing the vector before calling ReadBytes.
When you know how many bytes to expect (in this case 2 bytes), this worked for me:
std::vector<unsigned char> fileBytes;
fileBytes.resize(2);
DataReader reader = DataReader::FromBuffer(buffer);
reader.ReadBytes(fileBytes);
cout << fileBytes[0] << endl;
cout << fileBytes[1] << endl;

X509_NAME - how to get the actual buffer?

I am adding OpenSSL to my application and I can send and receive data; however, so far it is not encrypted and does not check the certificates.
I get the server's certificate with:
X509 *gCert = NULL;
X509_NAME *gCertName = NULL;
gCert = SSL_get_peer_certificate(sslConnection.ssl);
gCertName = X509_NAME_new();
gCertName = X509_get_subject_name(gCert);
certNameLen = strlen(gCertName);
memcpy(write_buffer, gCertName, certNameLen);
Then I write write_buffer to the SSL socket, but what I receive at the other end is just gibberish.
How do I use X509_NAME? strlen does not seem to work on it; I get a length of 4, which I think is the pointer size, not the buffer size...
And what encoding is X509_NAME? It does not seem to be UTF-8...
I know sending downstream works; if I just put 0xAA or 0xBB in the buffer it is received correctly on the other end.
You are fairly close. However, X509_get_subject_name() returns an X509_NAME structure which you need to convert into a human-readable string (or use 'as-is' for detailed and safer comparisons, as parsing/comparing plain ASCII is a bit risky: a clever adversary could put things like 'C=' in the actual text).
The easiest of the lot is
X509 *gCert = SSL_get_peer_certificate(con);
// if NULL - error out
X509_NAME *gCertName = X509_get_subject_name(gCert);   // owned by gCert, do not free
char *buf = X509_NAME_oneline(gCertName, 0, 0);
// if NULL - release memory and error out
printf("Client\t: %s\n", buf);
OPENSSL_free(buf);
X509_free(gCert);
which nets you a one-line approximation as a C \0-terminated string. Note that the _oneline functions are deprecated - see the OpenSSL man pages for more elaborate ways to get a text rendition which is UTF-8 and multi-line safe. But the above is fine for simple things like pure-ASCII Western-charset domain names without things like alternatives/variants.
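For that more elaborate route, one hedged sketch is to render the name into a memory BIO with X509_NAME_print_ex(), which handles escaping and UTF-8 conversion; gCert is assumed to be the X509 * obtained above:
#include <openssl/bio.h>
#include <openssl/x509.h>

BIO *mem = BIO_new(BIO_s_mem());
X509_NAME *subj = X509_get_subject_name(gCert);     // owned by gCert, do not free
X509_NAME_print_ex(mem, subj, 0, XN_FLAG_RFC2253);  // RFC 2253 escaping, UTF-8 converted

char *text = NULL;
long textLen = BIO_get_mem_data(mem, &text);        // NOT NUL-terminated
// ... send text/textLen with an explicit length prefix, then:
BIO_free(mem);
Sending the length alongside the bytes also sidesteps the strlen() problem from the question, since the rendered name is not a C string.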

How to read variable length data from an asynchronous tcp socket?

I'm using CocoaAsyncSocket for an iOS project. I'm trying to read VarInts through an asynchronous interface. The problem is that, unlike something like a String, where I can prefix a length, I don't know the length of a VarInt beforehand. It needs to be processed one byte at a time, but since each read operation is asynchronous, other read calls may have been queued in between.
I considered reading into a buffer then processing it, say reading 5 bytes (the max length for a varint-32), and pushing extra bytes back, but that may hang unnecessarily if the varint is only 4 bytes and I'm waiting for a 5th byte to be available.
How can I do this? Also, I cannot change the protocol on the other end, to use fixed size ints.
Here's a snippet of code as Josh requested
- (void)readByte:(void (^)(int8_t))onComplete {
    NSUInteger size = 1;
    int32_t tag = OSAtomicAdd32(1, &_nextTag);
    dispatch_async(self.dispatchQueue, ^{
        [self.onCompleteHandlers setObject:(^void (NSData* data) {
            int8_t x = 0;
            [data getBytes:&x length:size];
            onComplete(x);
        }) forKey:[NSNumber numberWithInteger:((NSInteger) tag)]];
        [self.socket readDataToLength:size withTimeout:-1 tag:tag];
    });
}
A callback is saved in a dictionary, which is used in the delegate method socket:didReadData:withTag:.
Suppose I'm reading a VarInt byte by byte:
execute read first byte for varint
don't know if we need to read another byte for a varint or not; that depends on the result of the first read
(possible) read another byte for something else
read second byte for varint, but now it's actually the 3rd byte being read
I can imagine using a flag to indicate whether or not I'm in a multipart-read, and a queue to hold reads that should be executed after the multipart-read, and I've started writing it but it's quite messy. Just wondering if there is a standard/recommended/better way to approach this problem.
In short, there are 4 ways to know how much to read from a socket...
read some format that you can infer the length from, like the Content-Length header... this only works if the whole request can be put together before the body is sent.
read until some pattern: like \r\n\r\n at the end of the headers.
read until some timeout... after you get no bytes for n seconds you flush the buffers and close the connection.
read until the server closes the connection... actually used to be pretty common.
These each have problems, and in your case I would probably lean toward using some existing protocol.
of course there is overhead to doing it that way, and you may find that you don't want to use any of that application level stuff and your requests may be like:
client>"doMath(2+5)\0"
server>"(7)\0"
but it is hard to answer your general question specifically.
edit:
So I looked into the varint base-128 issue a little more, and I think really only a timeout or the server closing the connection will work if you are writing these right at the TCP level, which is horrible...
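For reference, a base-128 varint is actually self-delimiting: the high bit of each byte says whether another byte follows, so a byte-at-a-time reader can tell on its own when the value is complete. A minimal incremental decoder sketch in plain C (all names are mine):
#include <stdbool.h>
#include <stdint.h>

// Feed bytes one at a time; `done` becomes true on the byte whose high bit is clear.
typedef struct {
    uint32_t value;
    int shift;
    bool done;
} VarintDecoder;

static void varint_init(VarintDecoder *d) {
    d->value = 0;
    d->shift = 0;
    d->done = false;
}

// Returns true once the varint is complete; call varint_init() again before reuse.
static bool varint_feed(VarintDecoder *d, uint8_t byte) {
    d->value |= (uint32_t)(byte & 0x7F) << d->shift;
    d->shift += 7;
    if ((byte & 0x80) == 0 || d->shift >= 35)   // varint-32 is at most 5 bytes
        d->done = true;
    return d->done;
}
In the asker's setup this could sit next to the tag/callback dictionary: after each one-byte read completes, feed the byte to the decoder and only queue the next one-byte read if it is not done yet, which keeps unrelated reads from being interleaved into the middle of a varint.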

Padding at the AVCaptureVideoData with kCVPixelFormatType_32BGRA

I'm trying to send an image over TCP to a server: first I get the buffer from the camera, then I convert the buffer to grayscale, and finally I send the buffer to the server.
Everything is working, but the problem is that the image the server receives is not 100% okay; it looks like there is some padding that I didn't account for in the conversion. All the images look more or less like the next one.
I use the following code to get the image:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
uint8_t * baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
The image is here: http://s3.subirimagenes.com:81/imagen/previo/thump_6421684image001.png
The only padding you may get is per row of pixels — you should use something like:
/* ... */
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

for (/* interesting values of y */)
{
    uint8_t *pointerToThisLine = baseAddress + bytesPerRow * y;
}
Rather than assuming, one way or another, that one scanline ends somewhere in memory and then the next immediately starts.
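A fuller sketch of that row-by-row approach, assuming the kCVPixelFormatType_32BGRA buffer from the question has already been locked with CVPixelBufferLockBaseAddress(), and that gray is a caller-provided width * height byte buffer (the name is mine):
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);   // may be larger than width * 4
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

for (size_t y = 0; y < height; y++) {
    uint8_t *row = baseAddress + y * bytesPerRow;                // start of this scanline
    for (size_t x = 0; x < width; x++) {
        uint8_t b = row[x * 4 + 0];
        uint8_t g = row[x * 4 + 1];
        uint8_t r = row[x * 4 + 2];                              // byte 3 is alpha
        gray[y * width + x] = (uint8_t)((r * 77 + g * 151 + b * 28) >> 8);  // rough BT.601 luma
    }
}
If the stride does equal width * 4 on your device, this copy does exactly what the original code did, so it only helps when padding is actually present.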
That said, the top portion of your image is clearly correct and I've yet to see an instance where pitch wasn't equal to width*bytesPerPixel, so it'd be unlikely to be causing your problem in practice even if you haven't done that correctly.
Inspecting your image, it looks like the broken region contains copies of various fragments of the working region, so I don't think the problem is padding related — it's some sort of more obtuse memory management or transmission error. Have you checked that side of things?