Apache Commons Crypto - Getting IllegalBlockSizeException

I am trying to use the following method to handle both encryption and decryption of AES data using Apache Commons Crypto. Encryption works fine, but when I try to decrypt the data I just encrypted, I get the block-size error below, which I don't totally understand, since I'm setting the block size to 1024, which of course is a multiple of 16.
javax.crypto.IllegalBlockSizeException: Input length (with padding) not multiple of 16 bytes
Here is my code:
final int bufferSize = 1024;
try {
    this.cryptoCipher.init(cipherMode, this.secretKeySpec, this.ivParameterSpec);
    ByteBuffer inBuffer = ByteBuffer.allocateDirect(bufferSize);
    ByteBuffer outBuffer = ByteBuffer.allocateDirect(bufferSize);
    inBuffer.put(getUTF8Bytes(dataToBeEncrypted));
    inBuffer.flip();
    int updateBytes = this.cryptoCipher.update(inBuffer, outBuffer);
    int finalBytes = this.cryptoCipher.doFinal(inBuffer, outBuffer); // <<<< EXCEPTION HAPPENS HERE!!!
    byte[] encoded = new byte[updateBytes + finalBytes];
    outBuffer.flip();
    outBuffer.duplicate().get(encoded);
    encryptedDecryptedData = DatatypeConverter.printBase64Binary(encoded);
} catch (Exception exc) {
    LOGGER.logp(Level.SEVERE, MODULE_NAME, methodName, "encountered exception: {0}", exc);
}

AES has one block size: 16 bytes, so setting the block size to another value would be an error. But you are not setting the block size; you are just creating a buffer of (in this case) 1024 bytes.
Most implementations accept an array and will internally process the input one block at a time. But the input must be an exact multiple of the block size, and that is accomplished with padding, usually PKCS#7 padding, which an option in the cipher transformation handles automatically: the padding is added on encryption and removed on decryption.
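As an illustration with the standard JCE API (which Commons Crypto's CryptoCipher mirrors), here is a minimal sketch of an AES/CBC round trip where the "PKCS5Padding" option in the transformation string handles the padding automatically. The key and IV are fixed demo values only; in real code use a random IV.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class AesRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // 16-byte demo key
        byte[] iv  = new byte[16]; // all-zero IV for the demo only

        // "PKCS5Padding" pads the plaintext up to a multiple of the 16-byte block size
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ciphertext = enc.doFinal("hello world".getBytes(StandardCharsets.UTF_8));

        // The ciphertext is a multiple of 16 even though the plaintext was 11 bytes
        System.out.println(ciphertext.length); // 16

        // Decryption with the same transformation strips the padding again
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] plaintext = dec.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8)); // hello world
    }
}
```

The decryption side throws IllegalBlockSizeException exactly when the input it is given is not a whole number of 16-byte blocks, which is why the ciphertext must be carried to the decryptor byte-for-byte intact.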


SslStream<TcpStream> read does not return client's message

I am trying to implement a client-server application using TLS (openssl). I followed the example given in rust doc for my code's structure: example
Server Code
fn handle_client(mut stream: SslStream<TcpStream>) {
    println!("Passed in handling method");
    let mut data = vec![];
    let length = stream.read(&mut data).unwrap();
    println!("read successfully; size read:{}", length);
    stream.write(b"From server").unwrap();
    stream.flush().unwrap();
    println!("{}", String::from_utf8_lossy(&data));
}

fn main() {
    // remember: certificate should always be signed
    let mut acceptor = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    acceptor.set_private_key_file("src/keyfile/key.pem", SslFiletype::PEM).unwrap();
    acceptor.set_certificate_file("src/keyfile/certs.pem", SslFiletype::PEM).unwrap();
    acceptor.check_private_key().unwrap();
    let acceptor = Arc::new(acceptor.build());
    let listener = TcpListener::bind("127.0.0.1:9000").unwrap();
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                println!("a receiver is connected");
                let acceptor = acceptor.clone();
                //thread::spawn(move || {
                let stream = acceptor.accept(stream).unwrap();
                handle_client(stream);
                //});
            }
            Err(_e) => { println!("connection failed") }
        }
    }
    println!("Server");
}
Client Code
fn main() {
    let mut connector = SslConnector::builder(SslMethod::tls()).unwrap();
    connector.set_verify(SslVerifyMode::NONE); // Deactivated verification due to authentication error
    connector.set_ca_file("src/keyfile/certs.pem");
    let connector = connector.build();
    let stream = TcpStream::connect("127.0.0.1:9000").unwrap();
    let mut stream = connector.connect("127.0.0.1", stream).unwrap();
    stream.write(b"From Client").unwrap();
    stream.flush().unwrap();
    println!("client sent its message");
    let mut res = vec![];
    stream.read_to_end(&mut res).unwrap();
    println!("{}", String::from_utf8_lossy(&res));
    // stream.write_all(b"client").unwrap();
    println!("Client");
}
The server code and the client code both compile without issues, albeit with some warnings. The client is able to connect to the server, but when the client writes its message From Client to the stream, the stream.read call in handle_client() returns nothing. Conversely, when the server writes its message From server, the client is able to receive it.
So, is there an issue with the way I use SslStream, or with the way I configured my server?
I presume that when you say stream.read returns nothing, you mean that it returns a zero value, indicating that nothing was read.
The Read trait API says this:
This function does not provide any guarantees about whether it blocks
waiting for data, but if an object needs to block for a read and
cannot, it will typically signal this via an Err return value.
If n is 0, then it can indicate one of two scenarios:
1. This reader has reached its "end of file" and will likely no longer be able to produce bytes. Note that this does not mean that the reader will always no longer be able to produce bytes.
2. The buffer specified was 0 bytes in length.
It is not an error if the returned value n is smaller than the buffer
size, even when the reader is not at the end of the stream yet. This
may happen for example because fewer bytes are actually available
right now (e. g. being close to end-of-file) or because read() was
interrupted by a signal.
So there are two problems here. First, let mut data = vec![]; creates a zero-length buffer, and read into a zero-length buffer always returns 0 without reading anything: read only fills the space that already exists in the slice you pass it (this is the second scenario quoted above). Second, even with a properly sized buffer, you need to call read repeatedly until you have received all the bytes you expect, or until you get an error.
If you know exactly how much you want to read (as you do in this case) you can call read_exact, which reads exactly the number of bytes needed to fill the supplied buffer.
If you want to read up until a delimiter (such as a newline or other character) you can use a BufReader, which provides methods such as read_until or read_line.
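The "loop until you have everything" contract is the same in most languages. As a sketch (in Java here purely for illustration; InputStream.read has the same "may return fewer bytes than requested" behavior as Rust's Read::read), a loop that keeps reading until the buffer is full:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoop {
    // Keep calling read() until the buffer is full or the stream ends,
    // because any single read() may return fewer bytes than requested.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n == -1) break; // end of stream
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("From Client".getBytes());
        byte[] buf = new byte[11]; // we expect exactly 11 bytes
        int n = readFully(in, buf);
        System.out.println(n);                // 11
        System.out.println(new String(buf)); // From Client
    }
}
```

Rust's read_exact does essentially what readFully does above, with an error instead of a short count when the stream ends early.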

ESP32-CAM how to publish large binary payload to AWS IOT ssl mqtt topic, tested many libs without success :-(

I'm currently working on an ESP32-CAM project to publish some high-resolution (UXGA) captures from the camera to an AWS IoT topic.
I've managed to publish some short JSON payloads with attributes to different certificate-protected AWS IoT topics, but I'm facing an annoying issue doing the same with a large payload such as the capture's binary file.
I've browsed many sites and forums and tested different libs like MQTT, PubSubClient, AsyncMQTTClient... but I've not found a truly working solution for large payloads of around 100 KB.
For example, with the PubSubClient lib, I try to fragment my binary payload with the beginPublish, write, endPublish scheme as below:
bool publishBinary(const uint8_t *buffer, size_t len, const char *topicPubName)
{
    Serial.print("publishing binary [" + (String)len + "] ...");
    if (len == 0) {
        // Empty file
        Serial.println("Error : binary payload is empty!");
        return false;
    }
    if (!client.beginPublish(topicPubName, len, false)) {
        Serial.println("MQTT beginPublish failed.");
        return false;
    }
    size_t max_transfer_size = 80;
    size_t n = 0;
    size_t size_send;
    size_t offset = 0;
    while ((len - offset) > 0) {
        n = (len - offset);
        if (n > max_transfer_size)
            n = max_transfer_size;
        size_send = client.write((const uint8_t *)(buffer + offset), n);
        Serial.printf("%d/%d : %.02f %%\n", offset, len, (double)((100 * offset) / len));
        //Serial.println("n: " + (String)n + " - send: " + (String)size_send);
        if (size_send != n) {
            // error handling. this is triggered on write fail.
            Serial.println("Error during publishing... " + (String)size_send + " instead of " + (String)n);
            client.endPublish();
            return false;
        } else {
            offset += size_send;
        }
    }
    client.endPublish();
    Serial.println("ok");
    return true;
}
client is defined as PubSubClient client(net), where net is a WiFiClientSecure object with a validated CA cert, client cert, and private key.
The MQTT connection works well, but when I try to publish the large binary payload, the function fragments the buffer into chunks all the way to the end, yet there is almost always an error like UNKNOWN ERROR CODE (0050); or, when the publish does succeed, only a small part of the payload arrives at the destination. In that case, my JPEG file is truncated on the S3 bucket where the payload lands.
I have to say that sometimes I managed to publish a 65 K payload, but that felt like a stroke of luck... :-)
I've looked for examples on the web, but very often they are for small payloads. As mentioned in a post, I've tested Publish_P(...) from PubSubClient... but same result: it aborts during the transfer.
I'm beginning to wonder whether this is really possible over an MQTT topic, or whether I have to create an API Gateway with a Lambda to handle such a large payload. Tell me I'm wrong :-)
If you know a good solution for truly working large-payload publishing, I would be delighted to discuss it with you :-)
Thanks!
#include <PubSubClient.h>

void setup() {
    ...
    boolean res = mqttClient.setBufferSize(50 * 1024); // ok for 640*480
    if (res) Serial.println("Buffer resized."); else Serial.println("Buffer resizing failed");
    ...
}
I'm working with a 50 kB buffer and it works well; I haven't tried beyond that, but it should work with 100 kB as well.
After you've resized the buffer, publish as you normally would.
BTW, the setBufferSize function was only recently added, IIRC.
Hey, I had problems with the PubSubClient library and large files too. In the end I figured out that I have to update PubSubClient.h as follows:
//128000 = 128 kB is the maximum size for AWS I think..
#define MQTT_MAX_PACKET_SIZE 100000
// It takes a long time to transmit the large files
// maybe even more than 200 seconds...
#define MQTT_KEEPALIVE 200
#define MQTT_SOCKET_TIMEOUT 200
I had the same problem, and I found that PubSubClient's bufferSize is defined as uint16_t.
https://github.com/knolleary/pubsubclient/blob/v2.8/src/PubSubClient.h#L92
So, we can't extend the buffer size over 64 kB, and can't publish a large payload.
Michael's comment might help you.
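The silent truncation behind this limit can be illustrated with a quick sketch (in Java here for illustration; PubSubClient itself is C++): storing a ~100 kB request in a 16-bit unsigned field keeps only the low 16 bits of the requested size.

```java
public class Uint16Truncation {
    public static void main(String[] args) {
        int requested = 100_000;          // the buffer size we would like
        int stored = requested & 0xFFFF;  // what a uint16_t field actually keeps
        System.out.println(stored);       // 34464
    }
}
```

So a request for 100000 bytes silently becomes a 34464-byte buffer, which matches the symptom of payloads being cut off partway through.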

Objective-C AES: 128 Bit, CBC, IV Encryption?

I'm building an iOS client for generating a token for Shopify's Multipass: http://docs.shopify.com/api/tutorials/multipass-login
Our NodeJS code is working fine (using the library https://github.com/beaucoo/multipassify), so I'm using that as a reference. What I found out is that the length of the cipherText generated in NodeJS (208 bytes) is significantly shorter than the one from Objective-C (432 bytes). This is the function that performs the AES 128-bit, CBC, IV encryption:
NodeJS (correct)
multipassify.prototype.encrypt = function(plaintext) {
    // Use a random IV
    var iv = crypto.randomBytes(16);
    var cipher = crypto.createCipheriv('aes-128-cbc', this._encryptionKey, iv);
    // Use IV as first block of ciphertext
    var encrypted = iv.toString('binary') + cipher.update(plaintext, 'utf8', 'binary') + cipher.final('binary');
    return encrypted;
};
Objective C (incorrect?)
- (NSData *)encryptCustomerDict:(NSMutableDictionary *)customerDict {
    NSData *customerData = [NSKeyedArchiver archivedDataWithRootObject:customerDict];
    // Random initialization vector
    NSData *iv = [BBAES randomIV];
    // AES: 128 bit key length, CBC mode of operation, random IV
    NSData *cipherText = [BBAES encryptedDataFromData:customerData
                                                   IV:iv
                                                  key:self.encryptionKey
                                              options:BBAESEncryptionOptionsIncludeIV];
    return cipherText;
}
The NodeJS version takes plainText as an argument, which should be a stringified version of the JSON object customerDict. Ideally, the bytes returned by both functions should be the same length. I'm using the BBAES library for encryption; I have no idea how to do this with the CommonCrypto library. Am I implementing the Objective-C function correctly?
First I thought that the BBAES library would convert the result to hexadecimals, but that did not seem to be the case (I actually checked the source code).
So the only logical explanation seems to be that the input is roughly double the length. That could, for instance, be the case if UTF-16 (or another multi-byte character encoding) is used for the text being encrypted; note also that the Objective-C code archives the dictionary with NSKeyedArchiver, which produces considerably more bytes than the JSON string the NodeJS code encrypts.
Furthermore, I do see that BBAES prepends the IV to the ciphertext as well. That, and possibly some additional padding overhead, could make enough difference for the ciphertext to be over twice the size compared with NodeJS.
Hint: view the binary inputs to your functions in hexadecimal to make sure that there are no differences!
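The encoding hypothesis is easy to check. A small sketch (in Java here, purely for illustration; this is not the question's code) showing how the same text doubles in size under UTF-16, plus the suggested hex view:

```java
import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String text = "customer";
        byte[] utf8  = text.getBytes(StandardCharsets.UTF_8);
        byte[] utf16 = text.getBytes(StandardCharsets.UTF_16BE); // big-endian, no BOM
        System.out.println(utf8.length);  // 8
        System.out.println(utf16.length); // 16 - twice the size for ASCII text

        // Hex view, as the hint suggests, to compare inputs byte for byte
        StringBuilder hex = new StringBuilder();
        for (byte b : utf8) hex.append(String.format("%02x", b));
        System.out.println(hex); // 637573746f6d6572
    }
}
```

Dumping both platforms' pre-encryption bytes this way makes any length or content difference immediately visible.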

Apache MINA networking - How to get data from org.apache.mina.core.service.IoHandlerAdapter messageReceived(IoSession, Object)

public void messageReceived(IoSession session, Object message) throws Exception
{
    // do something
}
Can anyone tell me how to get data from the Object?
It's really quite simple: cast the message to an IoBuffer and pull out the bytes.
// cast message to io buffer
IoBuffer data = (IoBuffer) message;
// create a byte array to hold the bytes
byte[] buf = new byte[data.limit()];
// pull the bytes out
data.get(buf);
// look at the message as a string
System.out.println("Message: " + new String(buf));
Cast message to the object type you used in the client's session.write.

WCF: Message Framing and Custom Channels

I am trying to understand how I would implement message framing with WCF. The goal is to create a server in WCF that can handle proprietary formats over TCP. I can't use the net.tcp binding because that is only for SOAP.
I need to write a custom channel that receives messages consisting of a decimal length prefix, a space, and the payload. An example message would be "5 abcde". In particular I am not sure how to do the framing in my custom channel.
Here is some sample code
class CustomChannel : IDuplexSessionChannel
{
    private class PendingRead
    {
        public NetworkStream Stream = null;
        public byte[] Buffer = null;
        public bool IsReading = false;
    }

    private CommunicationState state = CommunicationState.Closed;
    private TcpClient tcpClient = null;
    private MessageEncoder encoder = null;
    private BufferManager bufferManager = null;
    private TransportBindingElement bindingElement = null;
    private Uri uri = null;
    private PendingRead pendingRead;

    public CustomChannel(Uri uri, TransportBindingElement bindingElement, MessageEncoderFactory encoderFactory, BufferManager bufferManager, TcpClient tcpClient)
    {
        this.uri = uri;
        this.bindingElement = bindingElement;
        this.tcpClient = tcpClient;
        this.bufferManager = bufferManager;
        state = CommunicationState.Created;
    }

    public IAsyncResult BeginTryReceive(TimeSpan timeout, AsyncCallback callback, object state)
    {
        if (this.state != CommunicationState.Opened) return null;
        byte[] buffer = bufferManager.TakeBuffer(tcpClient.Available);
        NetworkStream stream = tcpClient.GetStream();
        pendingRead = new PendingRead { Stream = stream, Buffer = buffer, IsReading = true };
        IAsyncResult result = stream.BeginRead(buffer, 0, buffer.Length, callback, state);
        return result;
    }

    public bool EndTryReceive(IAsyncResult result, out Message message)
    {
        int byteCount = tcpClient.Client.EndReceive(result);
        string content = Encoding.ASCII.GetString(pendingRead.Buffer, 0, byteCount);
        // framing logic here
        Message.CreateMessage( ... )
    }
}
So basically, the first time around, EndTryReceive might get only a piece of the message from the pending read buffer: "5 ab". The second time around it could get the rest. The problem is that when EndTryReceive is called the first time, I am forced to create a Message object, which means a partial Message goes up the channel stack.
What I really want to do is to make sure that I have my full message "5 abcde" in the buffer, so that when I construct the message in EndTryReceive it is a full message.
Does anyone have any examples of how they are doing custom framing with WCF?
Thanks,
Vadim
Framing at the wire level is not something that the WCF channel model really cares about; it's pretty much up to you to handle it.
What I mean by this is that it is your responsibility to ensure that your transport channel returns "entire" messages on a receive (streaming changes that a bit, but only up to a point).
In your case, it seems you're translating receive operations on your channel directly into receive operations on the underlying socket, and that just won't do, because that won't give you a chance to enforce your own framing rules.
So really, a single receive operation on your channel might very well translate to more than one receive operation on the underlying socket, and that's fine (and you can still do all that async, so it doesn't need to affect that part).
So basically the question becomes: what does your protocol framing model look like? Wild guess here, but it looks like messages are length-prefixed, with the length encoded as a decimal string? (Looks annoying.)
I think your best bet in that case would be to have your transport buffer incoming data (say, up to 64KB or whatever), and then on each receive operation check the buffer to see whether it contains enough bytes to extract the length of the incoming message. If so, then either read as many bytes as necessary from the buffer, or flush the buffer and read the remaining bytes from the socket. You'll have to be careful because, depending on how your protocol works, you might end up reading part of the next message before you actually need it.
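The buffering approach can be sketched language-neutrally; the following is illustrative Java rather than WCF code, and the FrameBuffer name and API are invented for the example. It accumulates incoming chunks and only yields a message once the decimal length prefix and the full payload have arrived:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class FrameBuffer {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    // Append whatever the socket produced, complete message or not.
    public void feed(byte[] chunk) {
        pending.write(chunk, 0, chunk.length);
    }

    // Return the next complete "<length> <payload>" message,
    // or null if the buffer does not yet hold a whole one.
    public String next() {
        byte[] buf = pending.toByteArray();
        String s = new String(buf, StandardCharsets.US_ASCII);
        int space = s.indexOf(' ');
        if (space < 0) return null;                // length prefix incomplete
        int length = Integer.parseInt(s.substring(0, space));
        int end = space + 1 + length;
        if (buf.length < end) return null;         // payload incomplete
        String payload = s.substring(space + 1, end);
        pending.reset();
        pending.write(buf, end, buf.length - end); // keep any leftover bytes
        return payload;
    }

    public static void main(String[] args) {
        FrameBuffer fb = new FrameBuffer();
        fb.feed("5 ab".getBytes(StandardCharsets.US_ASCII));  // first partial read
        System.out.println(fb.next());                        // null - not enough yet
        fb.feed("cde".getBytes(StandardCharsets.US_ASCII));   // rest arrives
        System.out.println(fb.next());                        // abcde
    }
}
```

In channel terms, EndTryReceive would only construct a Message when next() yields a complete payload, and otherwise issue another read on the socket.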
I agree with thomasr. You can find some basic inspiration in the Microsoft technology sample "ChunkingChannel".