Memory use increases when sending messages using ActiveMQ-cpp - activemq

When using ActiveMQ-cpp, every client that sends messages through cms::MessageProducer gradually increases its memory usage — currently about 4 KB per message sent. Valgrind does not report any memory leaks, and the memory use keeps growing until the program is terminated or exhausts available system memory.
The increase happens even when the messages are only sent by the producer and never received by any other ActiveMQ client, i.e. with no consumer attached. The act of creating a producer alone also appears to increase memory. Here is an example of a Publish call that exhibits the growth. I have also tried using a member session_ variable to create destinations and producers, instead of creating a new session every time.
void ActiveMqClient::Publish(std::string type, void* input, size_t len) {
  if (type == "") {
    ead::eadwarn() << "ActiveMqClient::Publish() - Attempting to publish to "
                      "empty string topic. Please check your message topic."
                   << std::endl;
  }
  cms::Session* session =
      connection_->createSession(cms::Session::AUTO_ACKNOWLEDGE);
  // creates a destination and producer
  cms::Destination* destination(session->createTopic(type));
  cms::MessageProducer* producer(session->createProducer(destination));
  producer->setDeliveryMode(cms::DeliveryMode::PERSISTENT);
  // creates message and sets properties
  std::unique_ptr<cms::BytesMessage> message(session->createBytesMessage());
  // gets byte array from input
  size_t size_to_write = 0;
  unsigned char* body = (unsigned char*) input;
  if (io_handler_ != nullptr) {
    body = io_handler_->ConvertBodyForPublish(type, input, &len, &size_to_write);
  }
  // writes the bytes of input
  message->writeBytes(const_cast<const unsigned char*>(body), 0,
                      size_to_write);
  // gets byte array from input
  unsigned char* payload = (unsigned char*) input;
  if (io_handler_ != nullptr) {
    payload = io_handler_->ConvertPayloadForPublish(type, input, len,
                                                    &size_to_write);
  }
  // writes the bytes of input
  if (size_to_write != 0) {
    message->writeBytes(payload, 0, size_to_write);
  }
  // sets the message type of the message
  message->setStringProperty("MsgType", type);
  // sets the message size
  message->setIntProperty("size", len);
  // sets the byte pointer to the beginning of the byte array
  message->reset();
  producer->send(message.get());
  // calls the sent callback if it exists
  if (io_handler_ != nullptr) {
    io_handler_->HandleMessageSent(type, reinterpret_cast<char*>(body), len);
  }
  // clears memory
  delete producer;
  delete destination;
  delete session;
}
So, any ideas why the memory keeps steadily increasing when using a MessageProducer this way? No matter how I vary this pattern, memory use keeps growing. Thanks in advance for any help!
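For comparison, a commonly suggested CMS usage pattern is to create the session, destination, and producer once and reuse them for every send, rather than per call. A rough sketch in pseudocode (the member names are hypothetical; this extends the member-session_ variant the question already mentions to the destination and producer as well):

```
// once, e.g. in the constructor
session_     = connection_->createSession(cms::Session::AUTO_ACKNOWLEDGE)
destination_ = session_->createTopic(type)
producer_    = session_->createProducer(destination_)

// per Publish(): only build and send the message
message = session_->createBytesMessage()
// ... fill body and properties ...
producer_->send(message)

// once, in the destructor: delete producer_, destination_, session_
```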

Related

Winsock2, BitCoin Select() returns data to read, Recv() returns 0 bytes

I made a connection to a Bitcoin node via WinSock2. I sent a proper "getaddr" message and the server responds: Select() reports the reply data is ready to read, but when I call Recv() it reads 0 bytes.
My code works fine against a localhost test server. The Bitcoin node does not reply to an incomplete "getaddr" message (less than 24 bytes), only to a proper one, but I can't read the reply with Recv(). After Recv() returns 0 bytes, Select() still reports there is data to read.
My code is divided into a DLL which uses Winsock2, and the main() function.
Here are the key fragments:
struct CMessageHeader
{
    uint32_t magic;
    char command[12];
    uint32_t payload;
    uint32_t checksum;
};

CSocket *sock = new CSocket();
int actual; /* Actually read/written bytes */

sock->connect("109.173.41.43", 8333);
CMessageHeader msg = { 0xf9beb4d9, "getaddr\0\0\0\0", 0, 0x5df6e0e2 }, rcv = { 0 };
actual = sock->send((const char *)&msg, sizeof(msg));
actual = sock->select(2, 0); /* Select read with 2 seconds waiting time */
actual = sock->receive((char *)&rcv, sizeof(rcv));
The key fragment of the DLL code:
int CSocket::receive(char *buf, int len)
{
    int actual;
    if ((actual = ::recv(sock, buf, len, 0)) == SOCKET_ERROR) {
        std::ostringstream s;
        s << "Cannot receive " << len << " bytes.";
        throw(CError(s));
    }
    return(actual);
}
If select() reports the socket is readable and recv() then returns 0, it means the peer gracefully closed the connection on their end (i.e., it sent you a FIN packet), so you need to close your socket.
On a side note, recv() can return fewer bytes than requested, so your receive() function should call recv() in a loop until all of the expected bytes have actually been received, or an error occurs (the same applies to send()).

STM32 reading variables out of Received Buffer with variable size

I am not really familiar with STM32 programming. I am using the STM32F303RE microcontroller.
I am receiving data via a UART connection with DMA.
Code:
HAL_UARTEx_ReceiveToIdle_DMA(&huart2, RxBuf, RxBuf_SIZE);
__HAL_DMA_DISABLE_IT(&hdma_usart2_rx, DMA_IT_HT);
I write the received data into a receive buffer and then transfer it into a main buffer. The following function and declarations are placed before int main(void).
#define RxBuf_SIZE 100
#define MainBuf_Size 100

uint8_t RxBuf[RxBuf_SIZE];
uint8_t MainBuf[MainBuf_Size];

void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size) {
    if (huart->Instance == USART2) {
        memcpy(MainBuf, RxBuf, Size);
        HAL_UARTEx_ReceiveToIdle_DMA(&huart2, RxBuf, RxBuf_SIZE);
    }
    for (int i = 0; i < Size; i++) {
        if ((MainBuf[i] == 'G') && (MainBuf[i + 1] == 'O')) {
            RecieveData();
            HAL_UART_DMAStop(&huart2);
        }
    }
}
I now receive the data into a buffer, and reception stops as soon as "GO" is transmitted. Up to this point it works. The function RecieveData() should then convert this buffer into variables, but that isn't working for me.
Now I want to split the received data at the "breakpoints" into variables.
So I want to send: "S2000S1000S1S10S2GO".
There are always 5 variables (in this case: 2000, 1000, 1, 10, 2). I want to read the values out of the string and convert each one to a uint16_t for further processing. The size/length of each variable can change; that's why I tried to use the 'S' characters as breakpoints.
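One way to split such a frame, sketched under the assumption that every value is introduced by 'S' and the frame ends with "GO" (ParseValues is a hypothetical helper, not part of the HAL):

```cpp
#include <cstdint>

// Hypothetical helper: extracts up to 'max' numbers from a frame like
// "S2000S1000S1S10S2GO". Each value starts after an 'S' and ends at the
// next non-digit character. Returns how many values were found.
int ParseValues(const char *frame, uint16_t *out, int max) {
    int count = 0;
    for (const char *p = frame; *p != '\0' && count < max; ++p) {
        if (*p == 'G') break;          // "GO" terminates the frame
        if (*p != 'S') continue;       // values are introduced by 'S'
        uint16_t value = 0;
        while (p[1] >= '0' && p[1] <= '9') {
            value = (uint16_t)(value * 10u + (uint16_t)(p[1] - '0'));
            ++p;
        }
        out[count++] = value;
    }
    return count;
}
```

The callback would copy Size bytes into a NUL-terminated buffer and hand it to this parser once "GO" has been seen.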

Bulk transfer sending too much (multiple of USB packet?)

Problem I am trying to solve
I am sending data over USB with libusb_bulk_transfer, with something like this:
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress,
                                       buffer, buffer_len, &sent, 5000);
and I receive those transfers on the other side in Kotlin (Android).
Most of the time it works: I send a buffer of size, say, 289 bytes, and on the other side I receive 289 bytes.
Sometimes, however, I receive too much: say I send 1536 bytes, and I receive 1719 bytes.
My solution that does not work
My understanding (e.g. from here) is that "A bulk transfer is considered complete when it has transferred the exact amount of data requested, transferred a packet less than the maximum endpoint size, or transferred a zero-length packet".
And because 1536 is a multiple of 64 (and all the wrong packets I receive are multiples of 64), I thought that this was my issue. So I went for sending a zero-length packet after I send a buffer that is a multiple of the maximum endpoint size. And I duly noted that the maximum endpoint size is not necessarily 64, so I wanted to detect it.
Here is my "solution":
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress,
                                       buffer, buffer_len, &sent, 5000);
if (sent % get_usb_packet_size() == 0) {
    libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, nullptr, 0,
                         &sent, 5000);
}
With the simple get_usb_packet_size() below, which happens to be 256:
int get_usb_packet_size() { return endpoint_out->wMaxPacketSize; }
Still, that does not seem to work! The return code of both libusb_bulk_transfer calls is 0 (success); the first reports that it sent buffer_len bytes (as expected), and the second reports that it sent 0 bytes (as expected).
But my receiver still gets packets longer than expected. I tried using 64 instead of 256 (thereby sending more zero-length packets), but I still get the same problem.
What am I missing?
The issue was due to concurrency: two threads were calling the code above, so sometimes one thread did not get to send its zero-length packet immediately after its data packet.
So this actually seems to work:
int sent = 0;
int bulk_result = libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress,
                                       buffer, buffer_len, &sent, 5000);
if (sent % get_usb_packet_size() == 0) {
    libusb_bulk_transfer(handle, endpoint_out->bEndpointAddress, nullptr, 0,
                         &sent, 5000);
}
with
int get_usb_packet_size() { return endpoint_out->wMaxPacketSize; }
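The serializing fix can be sketched as below. The actual transfer is abstracted behind a callable so the locking pattern stands on its own; in the real code it would be the two libusb_bulk_transfer calls on the OUT endpoint:

```cpp
#include <functional>
#include <mutex>

// Sketch: one lock covers both the data transfer and the zero-length
// packet, so a second thread cannot slip a packet in between them.
// 'transfer' stands in for libusb_bulk_transfer on the OUT endpoint.
std::mutex out_endpoint_mutex;

void send_with_zlp(const std::function<void(const unsigned char *, int)> &transfer,
                   const unsigned char *buffer, int len, int max_packet) {
    std::lock_guard<std::mutex> lock(out_endpoint_mutex);
    transfer(buffer, len);
    if (len % max_packet == 0) {
        transfer(nullptr, 0);   // ZLP terminates an exact-multiple transfer
    }
}
```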

rapidJson: crashes in Release mode

I am using rapidJson to read JSON data. I can build my application in both Debug and Release mode, but it crashes in Release mode.
using namespace rapidjson;
...
char *buffer;
long fileSize;
size_t fileReadingResult;

// obtain file size
fseek(pFile, 0, SEEK_END);
fileSize = ftell(pFile);
if (fileSize <= 0) return false;
rewind(pFile);

// allocate memory to contain the whole file
buffer = (char *)malloc(sizeof(char) * fileSize);
if (buffer == NULL) return false;

// copy the file into the buffer
fileReadingResult = fread(buffer, 1, fileSize, pFile);
if (fileReadingResult != fileSize) return false;
buffer[fileSize] = 0;

Document document;
document.Parse(buffer);
When I run it in Release mode, I encounter an unhandled exception: "A heap has been corrupted."
The application breaks at res = _heap_alloc(size) in the malloc.c file:
void * __cdecl _malloc_base (size_t size)
{
    void *res = NULL;
    // validate size
    if (size <= _HEAP_MAXREQ) {
        for (;;) {
            // allocate memory block
            res = _heap_alloc(size);
            // if successful allocation, return pointer to memory
            // if new handling turned off altogether, return NULL
            if (res != NULL)
            {
                break;
            }
            if (_newmode == 0)
            {
                errno = ENOMEM;
                break;
            }
            // call installed new handler
            if (!_callnewh(size))
                break;
            // new handler was successful -- try to allocate again
        }
It runs fine in Debug mode.
Maybe it is a memory issue with your malloc, since it runs fine once in Debug mode but crashes when the application stays up longer.
Do you free your buffer after using it?
The reason is simple: you allocate a buffer of fileSize bytes, but after reading the file you write to the (fileSize+1)-th position with buffer[fileSize] = 0; — one byte past the end of the allocation.
Fix: make the allocation one byte larger.
buffer = (char *)malloc(fileSize + 1);
Debug builds pad memory allocations with additional bytes, which is why the overflow does not crash there.

Embedded: SDHC SPI write issue

I am currently working on a logger that uses an MSP430F2618 MCU and a SanDisk 4 GB SDHC card.
Card initialization works as expected; I can also read the MBR and the FAT table.
The problem is that I can't write any data to the card. I have checked whether it is write-protected by the notch, but it is not. Windows 7 has no problem reading from or writing to it.
However, I used a tool called "HxD" and tried to alter some sectors (under Windows). When I try to save the content to the SD card, the tool pops up a window telling me "Access denied!".
Then I came back to my code for writing to SD card:
uint8_t SdWriteBlock(uchar_t *blockData, const uint32_t address)
{
    uint8_t result = OP_ERROR;
    uint16_t count;
    uchar_t dataResp;
    uint8_t idx;

    for (idx = RWTIMEOUT; idx > 0; idx--)
    {
        CS_LOW();
        SdCommand(CMD24, address, 0xFF);
        dataResp = SdResponse();
        if (dataResp == 0x00)
        {
            break;
        }
        else
        {
            CS_HIGH();
            SdWrite(0xFF);
        }
    }
    if (0x00 == dataResp)
    {
        //command accepted; now send data, starting with DATA TOKEN = 0xFE
        SdWrite(0xFE);
        //send 512 bytes of data
        for (count = 0; count < 512; count++)
        {
            SdWrite(*blockData++);
        }
        //now send two CRC bytes; though they are not used in SPI mode,
        //they are still required by the transfer format
        SdWrite(0xFF);
        SdWrite(0xFF);
        //now read in the DATA RESPONSE TOKEN
        do
        {
            SdWrite(0xFF);
            dataResp = SdRead();
        }
        while (dataResp == 0x00);
        //following the DATA RESPONSE TOKEN are a number of BUSY bytes:
        //a zero byte indicates the SD/MMC is busy programming,
        //a non-zero byte indicates the SD/MMC is not busy
        dataResp = dataResp & 0x0F;
        if (0x05 == dataResp)
        {
            idx = RWTIMEOUT;
            do
            {
                SdWrite(0xFF);
                dataResp = SdRead();
                if (0x0 == dataResp)
                {
                    result = OP_OK;
                    break;
                }
                idx--;
            }
            while (idx != 0);
            CS_HIGH();
            SdWrite(0xFF);
        }
        else
        {
            CS_HIGH();
            SdWrite(0xFF);
        }
    }
    return result;
}
The problem seems to be when I am waiting for card status:
do
{
    SdWrite(0xFF);
    dataResp = SdRead();
}
while (dataResp == 0x00);
Here I am waiting for a response of the form 0xX5, where X is undefined.
But in most cases the response is 0x00, so I never get out of the loop; in a few cases the response is 0xFF.
I can't figure out what is the problem.
Can anyone help me? Thanks!
We need to see much more of your code. Many µC SPI codebases only support SD cards <= 2 GB, so using a smaller card might work.
You can check it yourself: SDHC requires a CMD8 and an ACMD41 after the CMD0 (GO_IDLE_STATE) command; otherwise you cannot read or write data to it.
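For reference, the SPI-mode initialization sequence for SDHC, in pseudocode (using the same helper names as the question; the exact argument encodings are defined in the SD Simplified Specification):

```
send CMD0  (GO_IDLE_STATE)          -> expect R1 = 0x01 (card idle)
send CMD8  (SEND_IF_COND, 0x1AA)    -> expect R7 echoing 0x1AA
repeat:
    send CMD55  (APP_CMD)
    send ACMD41 (SD_SEND_OP_COND, HCS bit set)
until R1 == 0x00                    -> card has left the idle state
send CMD58 (READ_OCR)               -> check CCS bit: 1 = SDHC
                                       (SDHC uses block addressing)
```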
Thank you for your answers, but I solved my problem. It was a timing problem: I had to insert delays at specific points.