Bulk transfer sending too much (multiple of USB packet?)

Problem I am trying to solve
I am sending data over usb with libusb_bulk_transfer, with something like this:
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, buffer,
                                       buffer_len, &sent, 5000);
and I receive those transfers on the other side in Kotlin (Android).
Most of the time, it works: I send a buffer of size, say, 289 bytes, and on the other side I receive 289 bytes.
Sometimes, however, I receive too much. Say I send 1536 bytes, and I receive 1719 bytes.
My solution that does not work
My understanding (e.g. from here) is that "A bulk transfer is considered complete when it has transferred the exact amount of data requested, transferred a packet less than the maximum endpoint size, or transferred a zero-length packet".
And because 1536 is a multiple of 64 (and all the wrong packets I receive are multiples of 64), I thought that this was my issue. So I went for sending a zero-length packet after I send a buffer that is a multiple of the maximum endpoint size. And I duly noted that the maximum endpoint size is not necessarily 64, so I wanted to detect it.
Here is my "solution":
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, buffer,
                                       buffer_len, &sent, 5000);
if (sent % get_usb_packet_size() == 0) {
    libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, nullptr, 0, &sent, 5000);
}
With the simple get_usb_packet_size() below, which happens to return 256:
int get_usb_packet_size() { return endpoint_out->wMaxPacketSize; }
Still, that does not seem to work! The return code of both libusb_bulk_transfer is 0 (success), the first one says it sent buffer_len bytes (as expected), and the second one says it sent 0 bytes (as expected).
But my receiver still receives packets that are longer than what is expected. I tried using 64 instead of 256 (therefore sending more zero-length packets), but I still get that same problem.
What am I missing?

The issue was due to concurrency: two threads were calling my code above, so sometimes one thread's zero-length packet would not go out immediately after its data packet, because the other thread's transfer slipped in between.
So this actually seems to work:
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, buffer,
                                       buffer_len, &sent, 5000);
if (sent % get_usb_packet_size() == 0) {
    libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, nullptr, 0, &sent, 5000);
}
with
int get_usb_packet_size() { return endpoint_out->wMaxPacketSize; }
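To make the fix concrete, here is a minimal sketch of how the data transfer and its zero-length packet can be kept together under a lock, so that a second thread cannot interleave its own transfer between them (the mutex and the wrapper function are my own illustration, not part of the original code):

#include <libusb-1.0/libusb.h>
#include <mutex>

// Serializes each data transfer with its zero-length packet (ZLP) so that
// concurrent callers cannot interleave transfers on the same endpoint.
static std::mutex usb_out_mutex;

int send_buffer(libusb_device_handle *handle,
                const libusb_endpoint_descriptor *endpoint_out,
                unsigned char *buffer, int buffer_len) {
    std::lock_guard<std::mutex> lock(usb_out_mutex);
    int sent = 0;
    int result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress,
                                      buffer, buffer_len, &sent, 5000);
    if (result == 0 && sent % endpoint_out->wMaxPacketSize == 0) {
        // The transfer was an exact multiple of the max packet size,
        // so terminate it with a zero-length packet.
        int zlp_sent = 0;
        result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress,
                                      nullptr, 0, &zlp_sent, 5000);
    }
    return result;
}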

Related

Winsock2, BitCoin Select() returns data to read, Recv() returns 0 bytes

I made a connection to a Bitcoin node via Winsock2 and sent a proper "getaddr" message. The server responds, and the reply is ready to read, because Select() says so, but when I call Recv(), 0 bytes are read.
My code works fine against a localhost test server. The Bitcoin node does not reply to an incomplete "getaddr" message (less than 24 bytes), only to a proper one, but I can't read the reply with Recv(). After Recv() returns 0 bytes, Select() still reports that there is data to read.
My code is divided into DLL which uses Winsock2 and the main() function.
Here are key fragments:
struct CMessageHeader
{
    uint32_t magic;
    char command[12];
    uint32_t payload;
    uint32_t checksum;
};
CSocket *sock = new CSocket();
int actual; /* Actually read/written bytes */
sock->connect("109.173.41.43", 8333);
CMessageHeader msg = { 0xf9beb4d9, "getaddr\0\0\0\0", 0, 0x5df6e0e2 }, rcv = { 0 };
actual = sock->send((const char *)&msg, sizeof(msg));
actual = sock->select(2, 0); /* Select read with 2 seconds waiting time */
actual = sock->receive((char *)&rcv, sizeof(rcv));
The key fragment of DLL code:
int CSocket::receive(char *buf, int len)
{
    int actual;
    if ((actual = ::recv(sock, buf, len, 0)) == SOCKET_ERROR) {
        std::ostringstream s;
        s << "Cannot receive " << len << " bytes.";
        throw(CError(s));
    }
    return(actual);
}
If select() reports the socket is readable, and then recv() returns 0 afterwards, that means the peer gracefully closed the connection on their end (i.e., sent a FIN packet to you), so you need to close your socket.
On a side note, recv() can return fewer bytes than requested, so your receive() function should call recv() in a loop until all of the expected bytes have actually been received, or an error occurs (same with send(), too).
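For illustration, a looping version of the question's receive() might look roughly like this (a sketch only; receiveAll is a hypothetical name, and CError and the sock member come from the question's class):

// Keeps calling recv() until `len` bytes have arrived, the peer closes
// the connection (recv() returns 0), or an error occurs.
int CSocket::receiveAll(char *buf, int len)
{
    int total = 0;
    while (total < len) {
        int actual = ::recv(sock, buf + total, len - total, 0);
        if (actual == SOCKET_ERROR) {
            std::ostringstream s;
            s << "Cannot receive " << len << " bytes.";
            throw(CError(s));
        }
        if (actual == 0)
            break; // peer sent FIN: graceful close, stop reading
        total += actual;
    }
    return total;
}

A send loop follows the same pattern, advancing the buffer pointer by however many bytes send() reports as actually written.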

STM32 USB Tx Busy

I have an application running on an STM32F429ZIT6 that uses the USB stack to communicate with a PC client.
The MCU receives one type of message of 686 bytes every second, followed by another type of message of 14 bytes, with 0.5 seconds of delay between the messages. The 14-byte message is a heartbeat, so the MCU needs to reply to it.
After 5 to 10 minutes of continuous operation, the MCU is no longer able to send data because hcdc->TxState is always busy. Reception works fine.
In the Rx interrupt, the application only adds data to a ring buffer; the buffer is later serialized and processed by the main function.
static int8_t CDC_Receive_HS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 11 */
    /* Message RX completed; push it into the ring buffer to be processed at FMC_Run() */
    for (uint16_t i = 0; i < *Len; i++) {
        ring_push(RMP_RXRingBuffer, (uint8_t *) &Buf[i]);
    }
    USBD_CDC_SetRxBuffer(&hUsbDeviceHS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceHS);
    return (USBD_OK);
    /* USER CODE END 11 */
}
USB TX is also kept as simple as possible:
uint8_t CDC_Transmit_HS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;
    /* USER CODE BEGIN 12 */
    USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*)hUsbDeviceHS.pClassData;
    if (hcdc->TxState != 0)
    {
        ZF_LOGE("Tx failed, resource busy\n\r");
        return USBD_BUSY;
    }
    USBD_CDC_SetTxBuffer(&hUsbDeviceHS, Buf, Len);
    result = USBD_CDC_TransmitPacket(&hUsbDeviceHS);
    ZF_LOGD("TX Message Result:%d\n\r", result);
    /* USER CODE END 12 */
    return result;
}
I'm using the latest HAL drivers and software from CubeIDE (1.27.1).
I have tried expanding the minimum heap size from 0x200 to larger values, but the result is the same.
The line coding is also set according to the recommended values:
case CDC_SET_LINE_CODING:
    LineCoding.bitrate = (uint32_t) (pbuf[0] | (pbuf[1] << 8) | (pbuf[2] << 16) | (pbuf[3] << 24));
    LineCoding.format = pbuf[4];
    LineCoding.paritytype = pbuf[5];
    LineCoding.datatype = pbuf[6];
    ZF_LOGD("Line Coding Set\n\r");
    break;
case CDC_GET_LINE_CODING:
    pbuf[0] = (uint8_t) (LineCoding.bitrate);
    pbuf[1] = (uint8_t) (LineCoding.bitrate >> 8);
    pbuf[2] = (uint8_t) (LineCoding.bitrate >> 16);
    pbuf[3] = (uint8_t) (LineCoding.bitrate >> 24);
    pbuf[4] = LineCoding.format;
    pbuf[5] = LineCoding.paritytype;
    pbuf[6] = LineCoding.datatype;
    ZF_LOGD("Line Coding Get\n\r");
    break;
Thanks in advance, any support is appreciated.
I don't know enough about the STM32 libraries to really check your code, but I suspect you are forgetting to read the bytes transmitted by the STM32 on PC side. Try opening a terminal program like PuTTY and connecting to the STM32's virtual serial port. Otherwise, the Windows USB-to-serial driver (usbser.sys) will eventually have its buffers filled with data from your device and it will stop requesting more, at which point the buffers on your device will fill up as well.
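If you want to verify this theory programmatically instead of with PuTTY, a bare-bones Win32 drain loop is enough (a sketch; "COM3" is a placeholder for whatever port the STM32 enumerates as):

#include <windows.h>
#include <cstdio>

int main()
{
    // Open the virtual COM port created by usbser.sys.
    HANDLE port = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                              0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (port == INVALID_HANDLE_VALUE)
        return 1;
    char buf[1024];
    DWORD bytesRead = 0;
    // Keep reading so the driver's buffers never fill up; if TxState on the
    // device stops sticking at busy while this runs, the theory is confirmed.
    for (;;) {
        if (!ReadFile(port, buf, sizeof(buf), &bytesRead, nullptr))
            break;
        fwrite(buf, 1, bytesRead, stdout);
    }
    CloseHandle(port);
    return 0;
}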

Sending and receiving UDP using the same port does not work with the asio library?

I'm trying to send and receive UDP packets through the same endpoint. As far as I know this should be possible, but I cannot get it to work with the asio library (version 1.20.0).
This is what I do:
asio::io_context io_context;
asio::ip::udp::socket* udpSendRecvSocket = new asio::ip::udp::socket(io_context,
    asio::ip::udp::endpoint(asio::ip::udp::v4(), 7782));
asio::error_code ec;
char data[1000];
//
// send packet
//
std::string ipAddress = "127.0.0.1";
asio::ip::address ip_address = asio::ip::address::from_string(ipAddress);
asio::ip::udp::endpoint remoteTarget_endpoint(ip_address, 5500);
udpSendRecvSocket->send_to(asio::buffer(data, 50), remoteTarget_endpoint, 0, ec);
if (ec) {
    return 0;
}
//
// receive packets
//
size_t avLen = udpSendRecvSocket->available(ec);
while (avLen) {
    asio::ip::udp::endpoint remote_endpoint;
    size_t length = udpSendRecvSocket->receive_from(asio::buffer(data, 1000), remote_endpoint, 0, ec);
    int p = remote_endpoint.port();
    if (ec) {
        return 0;
    }
    avLen -= length;
}
The receive does not work correctly. I do receive a packet that was sent (from some other app); I know because avLen gets the right value. But the receive_from() call fails, and the port number in p gets the value 5500, which is the target port of the preceding send_to() call.
The strange thing is that when I remove the send_to() call, the receive works correctly and p reflects the correct port number of the sending application.
Is this a bug?
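One thing worth ruling out (an assumption on my part, not something the question confirms): on Windows, a send_to() to a port with no listener can generate an ICMP "port unreachable" reply, which the stack can surface as a connection-reset error on a later receive_from() on the same socket. A sketch that tolerates it:

// Retry the receive when the error is connection_reset, which on Windows
// can be the delayed ICMP "port unreachable" answer to an earlier send_to()
// rather than a failure of the receive itself.
asio::ip::udp::endpoint remote_endpoint;
size_t length = 0;
for (;;) {
    length = udpSendRecvSocket->receive_from(
        asio::buffer(data, 1000), remote_endpoint, 0, ec);
    if (ec != asio::error::connection_reset)
        break; // real data or a genuine error; stop retrying
}

That would also explain why p reports 5500: the error is attributed to the endpoint of the earlier send_to().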

How to detect termination character in SChannel-based HTTPS client

I've searched StackOverflow trying to find a similar problem, but haven't come across it, so I am posting this question.
I am trying to write a C++ HTTPS client using Microsoft's SChannel libraries, and I'm getting stochastic errors with chunked transfer encoding. The issue only seems to occur on very long downloads; short ones generally work OK. Most of the time the code works properly, even for long downloads, but occasionally the recv() call times out and disconnects my TLS session, and other times I get an incomplete last packet. The stochastic errors appear to be the result of the varying chunk and encryption-block sizes the server uses. I know I need to handle this variation; it would be easy to solve on an unencrypted HTTP connection, but the encryption aspect is causing me problems.
First, the timeout problem, which occurs on about 5% of large HTTP requests (about 10 MB of data from a single HTTP GET request).
The timeout results because, on the last chunk, I specify a bigger receive buffer than the data remaining on a blocking socket. The obvious fix is to request exactly the number of bytes I need for the next chunk, and that is what I did. But for some reason the amount received from each request is less than what I request, yet no data appears to be missing after decryption. I'm guessing this must be due to some compression in the data stream, but I don't know. In any event, if compression is in use, I have no idea how to translate the size of the decrypted, uncompressed byte stream into the size of the compressed, encrypted byte stream (including the encryption headers and trailers) so I can request exactly the right number of bytes. Can anyone help me do that?
The alternative approach is for me to just look for two CR+LFs in a row, which would also signal the end of the HTTPS response. But because the data is encrypted, I can't figure out how to look byte by byte. SChannel's DecryptMessage() seems to do its decryptions in blocks, not byte by byte. Can anyone in this forum provide any advice on how to do byte-by-byte decryption to enable me to look for the end of the chunked output?
The second problem is DecryptMessage sometimes erroneously thinks it is done decrypting before I reach the actual end of the message. The resultant behavior is I go on to the next HTTP request, and I get the rest of the previous response where I am expecting to see the header of the new request.
The obvious solution to this is to check the contents of the decrypted message to see if we actually reached the end, and if not, try to receive more data before sending the next HTTP request. But when I do this, and try to decrypt, I get a decryption error message.
Any advice/help anyone can provide on a strategy would be appreciated. I've attached the relevant code for the read/decrypt process of the HTTP body; I'm not including the header read and parsing because that part works without any problems.
do
{
    // Note: this receives large files OK, but I can't tell when I hit the end
    // of the buffer, and this hangs. Need to consider a non-blocking socket?
    // numBytesReceived = recv(windowsSocket, (char*)inputBuffer, inputBufSize, 0);
    m_ErrorLog << "Next read size expected " << nextReadSize << endl;
    numBytesReceived = recv(windowsSocket, (char*)inputBuffer, nextReadSize, 0);
    m_ErrorLog << "NumBytesReceived = " << numBytesReceived << endl;
    if (m_BinaryBufLen + numBytesReceived > m_BinaryBufAllocatedSize)
        ::EnlargeBinaryBuffer(m_BinaryBuffer, m_BinaryBufAllocatedSize, m_BinaryBufLen, numBytesReceived + 1);
    memcpy(m_BinaryBuffer + m_BinaryBufLen, inputBuffer, numBytesReceived);
    m_BinaryBufLen += numBytesReceived;
    lenStartDecryptedChunk = decryptedBodyLen;
    do
    {
        // Decrypt the received data.
        Buffers[0].pvBuffer = m_BinaryBuffer;
        Buffers[0].cbBuffer = m_BinaryBufLen;
        Buffers[0].BufferType = SECBUFFER_DATA;   // Initial type of buffer 1
        Buffers[1].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 2
        Buffers[2].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 3
        Buffers[3].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 4
        Message.ulVersion = SECBUFFER_VERSION;    // Version number
        Message.cBuffers = 4;                     // Must contain four SecBuffer structures
        Message.pBuffers = Buffers;               // Pointer to array of buffers
        scRet = m_pSSPI->DecryptMessage(phContext, &Message, 0, NULL);
        if (scRet == SEC_E_INCOMPLETE_MESSAGE)
            break;
        if (scRet == SEC_I_CONTEXT_EXPIRED)
        {
            m_ErrorLog << "Server shut down connection before I finished reading" << endl;
            m_ErrorLog << "# of Bytes Requested = " << nextReadSize << endl;
            m_ErrorLog << "# of Bytes received = " << numBytesReceived << endl;
            m_ErrorLog << "Decrypted data to this point = " << endl;
            m_ErrorLog << decryptedBody << endl;
            m_ErrorLog << "BinaryData just decrypted: " << endl;
            m_ErrorLog << Buffers[0].pvBuffer << endl;
            break; // Server signalled end of session
        }
        if (scRet != SEC_E_OK &&
            scRet != SEC_I_RENEGOTIATE &&
            scRet != SEC_I_CONTEXT_EXPIRED)
        {
            DisplaySECError((DWORD)scRet, errmsg);
            m_ErrorLog << "CSISPDoc::ReadDecrypt(): " << "Failed to decrypt message--Error=" << errmsg;
            if (decryptedBody)
                m_ErrorLog << decryptedBody << endl;
            return scRet;
        }
        // Locate data and (optional) extra buffers.
        pDataBuffer = NULL;
        pExtraBuffer = NULL;
        for (i = 1; i < 4; i++)
        {
            if (pDataBuffer == NULL && Buffers[i].BufferType == SECBUFFER_DATA)
                pDataBuffer = &Buffers[i];
            if (pExtraBuffer == NULL && Buffers[i].BufferType == SECBUFFER_EXTRA)
                pExtraBuffer = &Buffers[i];
        }
        // Display the decrypted data.
        if (pDataBuffer)
        {
            length = pDataBuffer->cbBuffer;
            if (length) // check if last two chars are CR LF
            {
                buff = (PBYTE)pDataBuffer->pvBuffer; // printf("n-2= %d, n-1= %d \n", buff[length-2], buff[length-1]);
                if (decryptedBodyLen + length + 1 > decryptedBodyAllocatedSize)
                    ::EnlargeBuffer(decryptedBody, decryptedBodyAllocatedSize, decryptedBodyLen, length + 1);
                memcpy_s(decryptedBody + decryptedBodyLen, decryptedBodyAllocatedSize - decryptedBodyLen, buff, length);
                decryptedBodyLen += length;
                m_ErrorLog << buff << endl;
            }
        }
        // Move any "extra" data to the input buffer -- it has not yet been decrypted.
        if (pExtraBuffer)
        {
            MoveMemory(m_BinaryBuffer, pExtraBuffer->pvBuffer, pExtraBuffer->cbBuffer);
            m_BinaryBufLen = pExtraBuffer->cbBuffer; // printf("inputStrLen= %d \n", inputStrLen);
        }
    }
    while (pExtraBuffer);
    if (decryptedBody)
    {
        if (incompletePacket)
            p1 = decryptedBody + lenStartFragmentedPacket;
        else
            p1 = decryptedBody + lenStartDecryptedChunk;
        p2 = p1;
        pEndDecryptedBody = decryptedBody + decryptedBodyLen;
        if (lastDecryptRes != SEC_E_INCOMPLETE_MESSAGE)
            chunkSizeBlock = true;
        do
        {
            while (p2 < pEndDecryptedBody && (*p2 != '\r' || *(p2+1) != '\n'))
                p2++;
            // If we're here, we probably found the end of the current line. The pattern
            // we are reading is chunk length, chunk, chunk length, chunk, ..., chunk length (== 0).
            if (*p2 == '\r' && *(p2+1) == '\n') // newline found -- chunk-size line
            {
                if (chunkSizeBlock) // reading the size of the chunk
                {
                    pStartHexNum = SkipWhiteSpace(p1, p2);
                    pEndHexNum = SkipWhiteSpaceBackwards(p1, p2);
                    chunkSize = HexCharToInt(pStartHexNum, pEndHexNum);
                    p2 += 2; // skip past the newline characters
                    chunkSizeBlock = false;
                    if (!chunkSize) // chunk size of 0 means we're done
                    {
                        bulkReadDone = true;
                        p2 += 2; // skip past the final CR+LF
                    }
                    nextReadSize = chunkSize + 8; // chunk + CR/LF + next chunk size (4 hex digits) + CR/LF + encryption header/trailer
                }
                else // copy the actual chunk
                {
                    if (p2 - p1 != chunkSize)
                    {
                        m_ErrorLog << "Warning: Actual chunk size of " << p2 - p1 << " != stated chunk size = " << chunkSize << endl;
                    }
                    else
                    {
                        // copy over the actual chunk data
                        if (m_HTTPBodyLen + chunkSize > m_HTTPBodyAllocatedSize)
                            ::EnlargeBuffer(m_HTTPBody, m_HTTPBodyAllocatedSize, m_HTTPBodyLen, chunkSize + 1);
                        memcpy_s(m_HTTPBody + m_HTTPBodyLen, m_HTTPBodyAllocatedSize, p1, chunkSize);
                        m_HTTPBodyLen += chunkSize;
                        m_HTTPBody[m_HTTPBodyLen] = 0; // null-terminate
                        p2 += 2; // skip over chunk and end-of-line characters
                        chunkSizeBlock = true;
                        chunkSize = 0;
                        incompletePacket = false;
                        lenStartFragmentedPacket = 0;
                    }
                }
                p1 = p2; // move to start of next chunk field
            }
            else // reached end of decrypted body with no CR+LF found --> fragmented chunk, so read and decrypt at least one more block
            {
                incompletePacket = true;
                lenStartFragmentedPacket = p1 - decryptedBody;
            }
        }
        while (p2 < pEndDecryptedBody);
        lastDecryptRes = scRet;
    }
}
while (scRet == SEC_E_INCOMPLETE_MESSAGE && !bulkReadDone);
TLS does not support byte-by-byte decryption.
TLS 1.2 breaks its input into blocks of up to 16 kiB, then encrypts them into ciphertext blocks that are slightly larger due to the need for encryption IVs/nonces and integrity protection tags/MACs. It is not possible to decrypt a block until the entire block is available. You can find the full details at https://www.rfc-editor.org/rfc/rfc5246#section-6.2.
Since you're already able to decrypt the first few blocks (containing the headers), you should be able to read the HTTP length, so you at least know the plaintext length you're expecting, which you can compare to the number of bytes you've decrypted from the stream. That won't tell you how many bytes of ciphertext you need, though -- you can get an upper bound on the size of a fragment by calling m_pSSPI->QueryContextAttributes(), and then should read either at least that number of bytes or until end of stream before trying to decrypt.
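For reference, the fragment-size query mentioned above looks roughly like this (a sketch, reusing the m_pSSPI and phContext names from the question's code):

// cbHeader + cbMaximumMessage + cbTrailer bounds the ciphertext size of
// one full TLS record, so reading at least this much (or to end of stream)
// guarantees DecryptMessage() has a complete record to work on.
SecPkgContext_StreamSizes sizes = {};
SECURITY_STATUS status = m_pSSPI->QueryContextAttributes(
    phContext, SECPKG_ATTR_STREAM_SIZES, &sizes);
if (status == SEC_E_OK)
{
    DWORD maxRecordSize = sizes.cbHeader + sizes.cbMaximumMessage + sizes.cbTrailer;
    // Size the receive buffer and read loop around maxRecordSize.
}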
Have you tried looking at other examples? http://www.coastrd.com/c-schannel-smtp appears to contain a detailed example of an SChannel-based TLS client.
I was finally able to figure this out. I fixed it by decrypting each TCP/IP packet as it came in and checking for the CR+LF+CR+LF in the decrypted data, instead of what I had been doing -- trying to consolidate all of the encrypted packets into one buffer before decrypting.
On the "hang" problem, what I thought was happening was that recv() wasn't returning because the amount of data actually received was smaller than my expected receive size. But what actually happened was I had actually received the entire transmission, but I didn't realize it. Thus, I was making additional recv() calls when there was actually no more data to receive. The fact that there was no more data to receive was what caused the connection to time out (causing a "hang").
The truncation problem was occurring because I couldn't detect the CR+LF+CR+LF sequence in the encrypted stream, and I erroneously thought SChannel returned SEC_E_OK on DecryptMessage() only when the entire response was processed.
Both problems were eliminated once I was able to detect the true end of the message by decrypting in piecemeal fashion vs. in bulk.
In order to figure this out, I had to completely restructure the sample SChannel code from www.coastRD.com. While the www.coastRD.com code was very helpful in general, it was written for SMTP transfers, not chunked HTTP encoding. In addition, the way it was written, it was hard to follow the logic for processing variations in how messages were received and processed. Lastly, I spent a lot of time "hacking" Schannel to understand how it behaves and which codes are returned under which conditions, because unfortunately none of that is discussed in any of the Microsoft documentation (that I've seen).
The first thing I needed to understand was how SChannel tries to decrypt a message. In SChannel, the first 13 bytes of an encrypted message are the encryption header, and the last 16 bytes are the encryption trailer. I still don't know what the trailer does, but I did realize that the encryption header is never actually encrypted/decrypted. The first 5 bytes are just the TLS record header: one byte for "application data" (hex code 0x17), two bytes defining the TLS version used, and two bytes for the TLS record fragment size, followed by leading 0s and one byte which I still haven't figured out.
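Those first 5 bytes follow the standard TLS record format (RFC 5246, cited above), so they can be read directly without any decryption; for example:

// Parse the plaintext TLS record header at the front of a received record.
// byte 0:    content type (0x17 = application data, 0x16 = handshake, ...)
// bytes 1-2: protocol version (0x03 0x03 for TLS 1.2)
// bytes 3-4: length of the record fragment that follows, big-endian
unsigned char *rec = m_BinaryBuffer; // start of a received TLS record
bool isApplicationData = (rec[0] == 0x17);
unsigned int fragmentLen = ((unsigned int)rec[3] << 8) | rec[4];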
The reason this matters is that DecryptMessage() only works if the record type is "application data". For any other record type (such as a TLS handshake "Finished" message), DecryptMessage() won't even try to decrypt it -- it will just return a SEC_E_DECRYPT_FAILURE code.
In addition, I needed to understand that DecryptMessage() often can't decrypt the entire contents of the receive buffer in one pass when using chunked transfer encoding. In order to successfully process the entire contents of the receive buffer and the remainder of the server HTTPS response, I needed to understand two key return codes from DecryptMessage() -- SEC_E_OK and SEC_E_INCOMPLETE_MESSAGE.
When I received SEC_E_OK, it meant DecryptMessage() was able to successfully decrypt at least part of the receive buffer. When this occurred, the 1st 13 bytes (the encryption header) remained unchanged. However, the bytes immediately following the header were decrypted in-place, followed by the encryption trailer (which is also unchanged). Often, there will be additional encrypted data still in the receive buffer after the end of the encryption trailer, which is also unchanged.
Since I was using the SecBufferDesc output buffer structures and 4 SecBuffer structures described in www.coastRD.com's code, I needed to understand that these are not actually 4 separate buffers -- they are just pointers to different locations within the receive buffer. The first buffer is a pointer to the encryption header. The second buffer is a pointer to the beginning of the decrypted data. The 3rd buffer is a pointer to the beginning of the encryption trailer. Lastly, the 4th buffer is a pointer to the "extra" encrypted data that DecryptMessage() was not able to process on the last call.
Once I figured that out, I realized that I needed to copy the decrypted data (the pointer in the second buffer) into a separate buffer, because the receive buffer would probably be overwritten later.
If there was no "extra" data in the 4th buffer, I was done for the moment -- but this was the exception rather than the rule.
If there was extra data (the usual case), I needed to move that data forward to the very beginning of the receive buffer, and I needed to call DecryptMessage() again. This decrypted the next chunk, and I appended that data to the data I already copied to the separate buffer, and repeated this process until there was either no more data left in the receive buffer to decrypt, or I received a SEC_E_INCOMPLETE_MESSAGE.
If I received a SEC_E_INCOMPLETE_MESSAGE, the data remaining in the receive buffer was unchanged. It wasn't decrypted because it was an incomplete encryption block. Thus, I needed to call recv() again to get more encrypted data from the server to complete the encryption block.
Once that occurred, I appended newly received data to the receive buffer. I appended it to the contents of the receive buffer vs. overwriting it because the latter approach would have overwritten the beginning of the encryption block, producing a SEC_E_DECRYPT_FAILURE message the next time I called DecryptMessage().
Once I appended this new block of data to the receive buffer, I repeated the steps above to decrypt the contents of the receive buffer, and continued to repeat this whole process until I got a SEC_E_OK message on the last chunk of data left in the receive buffer.
But I wasn't necessarily done yet -- there may still be data being sent by the server. Stopping at this point is what caused the truncation issue I had occasionally encountered.
So I now checked the last 4 bytes of the decrypted data to look for CR+LF+CR+LF. If I found that sequence, I knew I had received and decrypted a complete HTTPS response.
But if I hadn't, I needed to call recv() again and repeat the process above until I saw the CR+LF+CR+LF sequence at the end of the data.
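Condensed into C++, the loop described above has roughly this shape (a sketch of the logic, not the actual code; DecryptAvailableRecords() and BodyEndsWithCrLfCrLf() are hypothetical helpers standing in for the steps just described):

// Keep appending ciphertext and decrypting in place until the decrypted
// body ends with CR+LF+CR+LF, the chunked-encoding terminator.
for (;;)
{
    int n = recv(windowsSocket, (char *)m_BinaryBuffer + m_BinaryBufLen,
                 m_BinaryBufAllocatedSize - m_BinaryBufLen, 0);
    if (n <= 0)
        break; // connection closed or recv() failed
    m_BinaryBufLen += n;

    // Decrypt every complete record now in the buffer; on SEC_E_OK append
    // the plaintext and move any SECBUFFER_EXTRA bytes to the front, on
    // SEC_E_INCOMPLETE_MESSAGE leave the partial record in place.
    SECURITY_STATUS scRet = DecryptAvailableRecords();

    if (scRet == SEC_E_INCOMPLETE_MESSAGE)
        continue; // need more ciphertext to finish the current record

    if (scRet == SEC_E_OK && BodyEndsWithCrLfCrLf())
        break; // full HTTPS response received; don't call recv() again
}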
Once I implemented this process, I was able to definitively identify the end of the encrypted HTTPS response, which prevented me from making an unnecessary recv() call when no data was remaining, preventing a "hang", as well as prematurely truncating the response.
I apologize for the long answer, but given the lack of documentation on SChannel and its functions like DecryptMessage(), I thought this description of what I learned might be helpful to others who may have also been struggling to use SChannel to process TLS HTTP responses.
Thank you again to user3553031 for trying to help me with this over 7 months ago -- those attempts helped me narrow down the problem.

Memory use increases when sending messages using ActiveMQ-cpp

When using ActiveMQ-cpp, every client that is created and sends messages through a cms::MessageProducer gradually increases its memory usage, currently by about 4 KB per message sent. Valgrind does not report any memory leaks, and the memory use keeps growing until the program is terminated or exhausts available system memory.
The memory increase happens even when no other ActiveMQ client consumes the messages, i.e., when the producer just sends with no consumer. The mere act of creating a producer also appears to increase memory. Here is example code of a call to Publish() that leads to the memory increase. I have also tried using a member session_ variable to create destinations and producers instead of creating a new session every time.
void ActiveMqClient::Publish(std::string type, void* input, size_t len) {
    if (type == "") {
        ead::eadwarn() << "ActiveMqClient::Publish() - Attempting to publish to "
                          "empty string topic. Please check your message topic." << std::endl;
    }
    cms::Session* session = connection_->createSession(cms::Session::AUTO_ACKNOWLEDGE);
    //creates a destination and producer
    cms::Destination* destination(session->createTopic(type));
    cms::MessageProducer* producer(session->createProducer(destination));
    producer->setDeliveryMode(cms::DeliveryMode::PERSISTENT);
    //creates message and sets properties
    std::unique_ptr<cms::BytesMessage> message(session->createBytesMessage());
    //gets byte array from input
    size_t size_to_write = 0;
    unsigned char* body = (unsigned char*) input;
    if (io_handler_ != nullptr) {
        body = io_handler_->ConvertBodyForPublish(type, input, &len, &size_to_write);
    }
    //writes the bytes of input
    message->writeBytes(const_cast<const unsigned char*>(body), 0, size_to_write);
    //gets byte array from input
    unsigned char* payload = (unsigned char*) input;
    if (io_handler_ != nullptr) {
        payload = io_handler_->ConvertPayloadForPublish(type, input, len, &size_to_write);
    }
    //writes the bytes of input
    if (size_to_write != 0) {
        message->writeBytes(payload, 0, size_to_write);
    }
    //sets the message type of the message
    message->setStringProperty("MsgType", type);
    //sets the message size
    message->setIntProperty("size", len);
    //sets the byte pointer to the beginning of the byte array
    message->reset();
    producer->send(message.get());
    //calls sent callback if it exists
    if (io_handler_ != nullptr) {
        io_handler_->HandleMessageSent(type, reinterpret_cast<char*>(body), len);
    }
    //clears memory
    delete producer;
    delete destination;
    delete session;
}
So, any ideas on why the memory steadily keeps increasing when utilizing the MessageProducer in this way? No matter how I use this pattern, it seems to keep increasing memory use. Thanks in advance for any help with this!