Still reachable blocks in libmosquitto - ssl

I'm working on the example below, but there are memory leaks when I run it under Valgrind:
#include <stdio.h>
#include <stdlib.h>
#include <mosquitto.h>
#include <openssl/ssl.h>   /* for SSL_VERIFY_PEER */

static struct mosquitto *m = NULL;

int main(void)
{
    mosquitto_lib_init();
    printf("LIBMOSQUITTO %d\n", LIBMOSQUITTO_VERSION_NUMBER);

    if ((m = mosquitto_new("rtr", 1, NULL)) == NULL) {
        fprintf(stderr, "Out of memory.\n");
        exit(1);
    }

    int rc = mosquitto_tls_set(m,
            "/home/ca.crt",     /* cafile */
            NULL,               /* capath */
            "/home/client.crt", /* certfile */
            "/home/client.key", /* keyfile */
            NULL                /* pw_callback() */
            );
    if (rc != MOSQ_ERR_SUCCESS) {
        fprintf(stderr, "Cannot set TLS CA: %s (check path names)\n",
                mosquitto_strerror(rc));
        exit(3);
    }

#if 1
    mosquitto_tls_opts_set(m,
            SSL_VERIFY_PEER,
            NULL,   /* tls_version: "tlsv1.2", "tlsv1" */
            NULL    /* ciphers */
            );
    mosquitto_tls_insecure_set(m, 1);
#endif

    if ((rc = mosquitto_connect(m, "localhost", 8884, 20)) != MOSQ_ERR_SUCCESS) {
        fprintf(stderr, "%d: Unable to connect: %s\n", rc,
                mosquitto_strerror(rc));
        perror("");
        exit(2);
    }

    //mosquitto_loop_forever(m, -1, 1);

    mosquitto_destroy(m);
    mosquitto_lib_cleanup();
    return 0;
}
Valgrind output:
==4264== HEAP SUMMARY:
==4264== in use at exit: 64 bytes in 2 blocks
==4264== total heap usage: 4,913 allocs, 4,911 frees, 364,063 bytes allocated
==4264==
==4264== LEAK SUMMARY:
==4264== definitely lost: 0 bytes in 0 blocks
==4264== indirectly lost: 0 bytes in 0 blocks
==4264== possibly lost: 0 bytes in 0 blocks
==4264== still reachable: 64 bytes in 2 blocks
==4264== suppressed: 0 bytes in 0 blocks
==4264== Rerun with --leak-check=full to see details of leaked memory
==4264==
==4264== For counts of detected and suppressed errors, rerun with: -v
==4264== Use --track-origins=yes to see where uninitialised values come from
==4264== ERROR SUMMARY: 13582 errors from 542 contexts (suppressed: 0 from 0)
How do I fix these?

Related

Winsock2, BitCoin Select() returns data to read, Recv() returns 0 bytes

I made a connection to a Bitcoin node via WinSock2. I sent the proper "getaddr" message and the server responds; the reply data are ready to read, because Select() reports this, but when I call Recv() there are 0 bytes read.
My code works fine against a localhost test server. An incomplete "getaddr" message (less than 24 bytes) is NOT answered by the Bitcoin node, only the proper message is, but I can't read the reply with Recv(). After Recv() returns 0 bytes, Select() still reports there is data to read.
My code is divided into a DLL which uses Winsock2 and the main() function.
Here are the key fragments:
struct CMessageHeader
{
    uint32_t magic;
    char command[12];
    uint32_t payload;
    uint32_t checksum;
};

CSocket *sock = new CSocket();
int actual; /* Actually read/written bytes */

sock->connect("109.173.41.43", 8333);

CMessageHeader msg = { 0xf9beb4d9, "getaddr\0\0\0\0", 0, 0x5df6e0e2 }, rcv = { 0 };

actual = sock->send((const char *)&msg, sizeof(msg));
actual = sock->select(2, 0); /* Select read with 2 seconds waiting time */
actual = sock->receive((char *)&rcv, sizeof(rcv));
The key fragment of the DLL code:
int CSocket::receive(char *buf, int len)
{
    int actual;

    if ((actual = ::recv(sock, buf, len, 0)) == SOCKET_ERROR) {
        std::ostringstream s;
        s << "Nie mozna odebrac " << len << " bajtow."; /* "Cannot receive <len> bytes." */
        throw(CError(s));
    }
    return(actual);
}
If select() reports the socket is readable, and then recv() returns 0 afterwards, that means the peer gracefully closed the connection on their end (i.e., sent a FIN packet to you), so you need to close your socket.
On a side note, recv() can return fewer bytes than requested, so your receive() function should call recv() in a loop until all of the expected bytes have actually been received, or an error occurs (same with send(), too).
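To illustrate that side note, here is a minimal receive-all loop as a sketch in plain C with BSD-style sockets (on Winsock the descriptor type is SOCKET and errors come back as SOCKET_ERROR, but the looping logic is the same):
#include <sys/socket.h>

/* Read exactly len bytes, looping over short reads.
 * Returns len on success, 0 if the peer closed the connection, -1 on error. */
static int recv_all(int sock, char *buf, int len)
{
    int total = 0;
    while (total < len) {
        int n = recv(sock, buf + total, len - total, 0);
        if (n == 0)
            return 0;    /* peer sent FIN: connection closed */
        if (n < 0)
            return -1;   /* real error: check errno / WSAGetLastError() */
        total += n;
    }
    return total;
}
A caller would then treat a 0 return as the cue to close its own socket, exactly as described above.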

STM32 USB Tx Busy

I have an application running on an STM32F429ZIT6 that uses the USB stack to communicate with a PC client.
The MCU receives one type of message of 686 bytes every second, and another type of message of 14 bytes afterwards, with 0.5 seconds of delay between the messages. The 14-byte message is a heartbeat, so the MCU needs to reply to it.
It happens that after 5 to 10 minutes of continuous operation, the MCU is no longer able to send data because
hcdc->TxState is always busy. Reception works fine.
During the Rx interrupt, the application only adds the data to a ring buffer, so that the buffer is later serialized and processed by the main function.
static int8_t CDC_Receive_HS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 11 */
    /* Message RX Completed, Send it to Ring Buffer to be processed at FMC_Run() */
    for (uint16_t i = 0; i < *Len; i++) {
        ring_push(RMP_RXRingBuffer, (uint8_t *) &Buf[i]);
    }
    USBD_CDC_SetRxBuffer(&hUsbDeviceHS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceHS);
    return (USBD_OK);
    /* USER CODE END 11 */
}
USB TX is also kept as simple as possible:
uint8_t CDC_Transmit_HS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;
    /* USER CODE BEGIN 12 */
    USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*)hUsbDeviceHS.pClassData;
    if (hcdc->TxState != 0)
    {
        ZF_LOGE("Tx failed, resource busy\n\r");
        return USBD_BUSY;
    }
    USBD_CDC_SetTxBuffer(&hUsbDeviceHS, Buf, Len);
    result = USBD_CDC_TransmitPacket(&hUsbDeviceHS);
    ZF_LOGD("TX Message Result:%d\n\r", result);
    /* USER CODE END 12 */
    return result;
}
I'm using the latest HAL drivers and software from CubeIDE (1.27.1).
I have tried expanding the minimum heap size from 0x200 to larger values, but the result is the same.
Line Coding is also set according to the recommended values:
case CDC_SET_LINE_CODING:
    LineCoding.bitrate = (uint32_t) (pbuf[0] | (pbuf[1] << 8) | (pbuf[2] << 16) | (pbuf[3] << 24));
    LineCoding.format = pbuf[4];
    LineCoding.paritytype = pbuf[5];
    LineCoding.datatype = pbuf[6];
    ZF_LOGD("Line Coding Set\n\r");
    break;

case CDC_GET_LINE_CODING:
    pbuf[0] = (uint8_t) (LineCoding.bitrate);
    pbuf[1] = (uint8_t) (LineCoding.bitrate >> 8);
    pbuf[2] = (uint8_t) (LineCoding.bitrate >> 16);
    pbuf[3] = (uint8_t) (LineCoding.bitrate >> 24);
    pbuf[4] = LineCoding.format;
    pbuf[5] = LineCoding.paritytype;
    pbuf[6] = LineCoding.datatype;
    ZF_LOGD("Line Coding Get\n\r");
    break;
Thanks in advance, any support is appreciated.
I don't know enough about the STM32 libraries to really check your code, but I suspect you are forgetting to read the bytes transmitted by the STM32 on PC side. Try opening a terminal program like PuTTY and connecting to the STM32's virtual serial port. Otherwise, the Windows USB-to-serial driver (usbser.sys) will eventually have its buffers filled with data from your device and it will stop requesting more, at which point the buffers on your device will fill up as well.
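If you want to rule this out without installing a terminal program, a tiny host-side reader that just drains the virtual COM port is enough. This is only a POSIX sketch, and it assumes the device enumerates as /dev/ttyACM0; on Windows you would open the COM port with CreateFile()/ReadFile() instead:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Keep reading from the CDC ACM port so the host never stops requesting
 * data and the device-side buffers cannot back up. */
int main(void)
{
    char buf[1024];
    int fd = open("/dev/ttyACM0", O_RDONLY);   /* assumed device path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0)
            break;                       /* error or port went away */
        printf("read %zd bytes\n", n);   /* or process/store the data */
    }
    close(fd);
    return 0;
}
If TxState stops sticking while something like this is running, the data is simply not being consumed on the host side rather than anything being wrong in the STM32 firmware.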

How can I fix this "valgrind tests failed;"?

I got this error even though all the malloc'd nodes are freed when I run the Valgrind test:
in use at exit: 0 bytes in 0 blocks
total heap usage: 30 allocs, 30 frees, 7,520 bytes allocated
All heap blocks were freed -- no leaks are possible
ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
I also get this with the Valgrind -v test:
WARNING: new redirection conflicts with existing -- ignoring it
old: 0x04022e10 (strlen ) R-> (0000.0) 0x580c9ce2 ???
new: 0x04022e10 (strlen ) R-> (2007.0) 0x0483f060 strlen
And this is the error report:
Conditional jump or move depends on uninitialised value(s): (file: dictionary.c, line: 95)
// Represents a node in a hash table
typedef struct node
{
    char TEXT[48];
    struct node *next;
}
node;

// loop over hash buckets
for (int I = 0; I < N; I++)
{
    table[I] = malloc(sizeof(node));   // <--- line 37
    table[I]->next = NULL;
}
Here is the check function:
int x = hash(word);
node *check_ptr = table[x];
int m = strlen(word);

while (check_ptr != NULL)
{
    int n = strlen(check_ptr->TEXT);   // <----- line 91
    // "some code"
}
UPDATE - more detailed message
   by 0x401C57: check (dictionary.c:91)
   by 0x40160B: main (speller.c:113)
 Uninitialised value was created by a heap allocation
   at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
   by 0x4019C6: load (dictionary.c:37)
   by 0x4012CE: main (speller.c:40)

WORDS IN TEXT: 10

HEAP SUMMARY: in use at exit: 0 bytes in 0 blocks
   total heap usage: 143,122 allocs, 143,122 frees, 8,024,712 bytes allocated
All heap blocks were freed -- no leaks are possible
ERROR SUMMARY: 10 errors from 1 contexts (suppressed: 0 from 0)
Looping over the hash buckets to initialize all of the buckets' TEXT to a null value solves the problem.
It was because I never initialized the TEXT values, yet I was trying to access them later in the code.
for (int I = 0; I < N; I++)
{
    table[I] = malloc(sizeof(node));
    table[I]->next = NULL;
    for (int u = 0; u < 48; u++)
    {
        table[I]->TEXT[u] = '\0';   // fill with the null character
    }
}
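An equivalent but slightly shorter fix (just a sketch, assuming this loop lives in a function like the question's load() that can return false on failure) is to let calloc zero the whole node in one call:
for (int i = 0; i < N; i++)
{
    table[i] = calloc(1, sizeof(node));   // zero-fills TEXT along with the rest of the node
    if (table[i] == NULL)
    {
        return false;                     // handle the allocation failure
    }
    table[i]->next = NULL;                // explicit, so we don't rely on zero bits being a null pointer
}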

How to fix these memory leaks from strdup

Thanks for your help, the problem has been solved!
I am new to C, and I wondered why strdup can cause memory leaks, because I free the string after strdup.
Valgrind output:
==29885==
==29885== HEAP SUMMARY:
==29885== in use at exit: 37 bytes in 2 blocks
==29885== total heap usage: 28 allocs, 26 frees, 17,131 bytes allocated
==29885==
==29885== Searching for pointers to 2 not-freed blocks
==29885== Checked 124,824 bytes
==29885==
==29885== 5 bytes in 1 blocks are indirectly lost in loss record 1 of 2
==29885== at 0x40057A1: malloc (vg_replace_malloc.c:270)
==29885== by 0x2D6B4F: strdup (in /lib/tls/i686/nosegneg/libc-2.3.4.so)
==29885== by 0x804CD3C: new_node (parser.c:355)
==29885== by 0x804C263: identifier (parser.c:75)
==29885== by 0x804940D: yyparse (vtl4.y:111)
==29885== by 0x8049FD4: main (vtl4.y:225)
==29885==
==29885== 37 (32 direct, 5 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 2
==29885== at 0x40057A1: malloc (vg_replace_malloc.c:270)
==29885== by 0x804CCEA: new_node (parser.c:347)
==29885== by 0x804C263: identifier (parser.c:75)
==29885== by 0x804940D: yyparse (vtl4.y:111)
==29885== by 0x8049FD4: main (vtl4.y:225)
==29885==
==29885== LEAK SUMMARY:
==29885== definitely lost: 32 bytes in 1 blocks
==29885== indirectly lost: 5 bytes in 1 blocks
==29885== possibly lost: 0 bytes in 0 blocks
==29885== still reachable: 0 bytes in 0 blocks
==29885== suppressed: 0 bytes in 0 blocks
==29885==
==29885== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 12 from 8)
--29885--
--29885-- used_suppression: 12 Ubuntu-stripped-ld.so
==29885==
==29885== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 12 from 8)
new_node (parser.c:355):
struct simpleNode *new_node(JJT jjt, char *node_image) {
    struct simpleNode *a = malloc(sizeof(struct simpleNode));
    if (a == NULL) {
        yyerror("FUNC[%s]error:Init a new simple error,out of space!", __func__);
        exit(0);
    }
    a->info.astn = jjt;
    a->info.node_name = jjtNodeName[jjt];
    a->info.image = strdup(node_image);   /* <-- line 355 */
    a->info.already_rendered = cJSON_False;
    a->parent = NULL;
    a->firstChild = NULL;
    a->nextSibling = NULL;
    return a;
}
identifier (parser.c:75):
struct simpleNode* identifier(char *identifier) {
    printf("%s node!key:%s\n", __func__, identifier);
    struct simpleNode *a = new_node(JJTIDENTIFIER, identifier);   /* <-- line 75 */
    free(identifier);   /* I free it here, after new_node has strdup'd it */
    return a;
}
The struct simpleNode:
struct nodeInfo {
    char *image;
    int already_rendered;   // whether this node has already been rendered, to keep children from being rendered twice; defaults to cJSON_False
    JJT astn;               // node id
    const char *node_name;
    char *current_tpl_name;
};

struct simpleNode {
    struct nodeInfo info;
    struct simpleNode* firstChild;    // first child
    struct simpleNode* nextSibling;   // right sibling
    struct simpleNode* parent;        // parent
};
Finally, I free all the pointers:
void free_tree(struct simpleNode* n) {
    // traverse the tree
    struct simpleNode *x = n;
    if (x) {
        //printf("==========Begin to free %s, image=%s=============\n", x->info.node_name, cJSON_Print(x->info.image));
        //free_tree(x->firstChild);
        //__free(x);
        free_tree(x->firstChild);
        free_tree(x->nextSibling);
        __free(x);
        //__free(x);
    }
}
void free_nodeInfo(struct simpleNode* n) {
    if (n != NULL) {
        printf("==============begin delete %s node!\n", n->info.node_name);
        struct nodeInfo ni = n->info;
        free(ni.image);
        // if (ni.current_tpl_name != NULL) {
        //     free(ni.current_tpl_name);
        // }
        free(n);
        printf("==============delete node done!\n");
    }
}

void __free(struct simpleNode *n) {
    //printf("==========__free %s, image=%s=============\n", n->info.node_name, cJSON_Print(n->info.image));
    //dump_tree(n);
    if (n) {
        //printf("==============begin delete %s node!\n", n->info.node_name);
        free_nodeInfo(n);
    }
}
I checked more related questions, but they did not solve this. Please help me!
struct simpleNode* reference_index(struct simpleNode* identifier_n,
                                   struct simpleNode* index_n) {
-   struct simpleNode *a = new_node(JJTIDENTIFIER, identifier_n->info.image);   // the wrong code
+   struct simpleNode *a = identifier_n;                                        // the right code
    struct simpleNode *index = new_node(JJTINDEX, "index");
    addChild(index, index_n);
    addChild(a, index);
}
I should not have created a new node a again, because the parameter identifier_n is already a node. When I freed identifier_n, the info.image of node a had already been freed, so freeing a->info.image at that point caused problems.
I am sorry for my poor English.

Posix Error 14 (bad address) on open read stream in Cocoa. Hints?

I'm following the sample code in the CFNetwork Programming Guide, specifically the section on Preventing Blocking When Working with Streams. My code is nearly identical to theirs (below) but, when I connect to my server, I get POSIX error 14 (bad address). Is that a bad IP address (except it's not)? A bad memory address for some call I made? Or what?
I have no idea how to go about debugging this. I'm really pretty new to the whole CFNetworking thing, and was never particularly expert at networking in the first place (the one thing I really loved about Java: easy networking! :D)
Anyway, the log follows, with code below. Any hints would be greatly appreciated.
Log:
[6824:20b] [DEBUG] Compat version: 30000011
[6824:20b] [DEBUG] resovled host.
[6824:20b] [DEBUG] writestream opened.
[6824:20b] [DEBUG] readstream client assigned.
[6824:20b] [DEBUG] readstream opened.
[6824:20b] [DEBUG] *** Read stream reported kCFStreamEventErrorOccurred
[6824:20b] [DEBUG] *** POSIX error: 14 - Bad address
[6824:20b] Error closing readstream
[6824:20b] [DEBUG] Writing int: 0x09000000 (0x00000009)
Code:
+ (BOOL) connectToServerNamed:(NSString*)name atPort:(int)port {
    CFHostRef theHost = CFHostCreateWithName (NULL, (CFStringRef)name);
    CFStreamError error;
    if (CFHostStartInfoResolution (theHost, kCFHostReachability, &error))
    {
        NSLog (@"[DEBUG] resovled host.");
        CFStreamCreatePairWithSocketToCFHost (NULL, theHost, port, &readStream, &writeStream);
        if (CFWriteStreamOpen(writeStream))
        {
            NSLog (@"[DEBUG] writestream opened.");
            CFStreamClientContext myContext = { 0, self, NULL, NULL, NULL };
            CFOptionFlags registeredEvents = kCFStreamEventHasBytesAvailable |
                kCFStreamEventErrorOccurred | kCFStreamEventEndEncountered;
            if (CFReadStreamSetClient (readStream, registeredEvents, readCallBack, &myContext))
            {
                NSLog (@"[DEBUG] readstream client assigned.");
                CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(),
                    kCFRunLoopCommonModes);
                if (CFReadStreamOpen(readStream))
                {
                    NSLog (@"[DEBUG] readstream opened.");
                    CFRunLoopRun();
                    // Lots of error condition handling snipped.
                    [...]
                    return YES;
}
void readCallBack (CFReadStreamRef stream, CFStreamEventType event, void *myPtr)
{
    switch (event)
    {
        case kCFStreamEventHasBytesAvailable:
        {
            CFIndex bytesRead = CFReadStreamRead(stream, buffer, kNetworkyBitsBufferSize); // won't block
            if (bytesRead > 0) // <= 0 leads to additional events
            {
                if (listener)
                {
                    UInt8 *tmpBuffer = malloc (sizeof (UInt8) * bytesRead);
                    memcpy (buffer, tmpBuffer, bytesRead);
                    NSLog(@"[DEBUG] reveived %d bytes", bytesRead);
                    [listener networkDataArrived:tmpBuffer count:bytesRead];
                }
                NSLog(@"[DEBUG] reveived %d bytes; no listener", bytesRead);
            }
        }
        break;

        case kCFStreamEventErrorOccurred:
        {
            NSLog(@"[DEBUG] *** Read stream reported kCFStreamEventErrorOccurred");
            CFStreamError error = CFReadStreamGetError(stream);
            logError(error);
            [NetworkyBits shutDownRead];
        }
        break;

        case kCFStreamEventEndEncountered:
            NSLog(@"[DEBUG] *** Read stream reported kCFStreamEventEndEncountered");
            [NetworkyBits shutDownRead];
            break;
    }
}
void logError (CFStreamError error)
{
    if (error.domain == kCFStreamErrorDomainPOSIX) // Interpret error.error as a UNIX errno.
    {
        NSLog (@"[DEBUG] *** POSIX error: %d - %s", (int) error.error, strerror(error.error));
    }
    else if (error.domain == kCFStreamErrorDomainMacOSStatus)
    {
        NSLog (@"[DEBUG] *** MacOS error: %d", (int) error.error);
    }
    else
    {
        NSLog (@"[DEBUG] *** Stream error domain: %d, error: %d", (int) error.domain, (int) error.error);
    }
}
Olie, where does the buffer that you supply to CFReadStreamRead() come from? EFAULT is a bad buffer address... are you sure you've actually initialized this buffer to point to something valid? It's obviously a global or something... which itself is a pretty bad idea. You should allocate it in your function, or it should be an ivar (if you're using Obj-C).
I'm not familiar with Cocoa or Objective-C, but I can tell you that POSIX error code 14 is called EFAULT, and it means you made a system call with an invalid pointer value. This is almost certainly some user-supplied buffer pointer to a read or write system call of some sort. Check all of the buffer pointers and make sure they're not NULL. Check the return value from malloc() - you might be failing to allocate a buffer.
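Putting both suggestions together, a minimal sketch in plain C of what that check could look like around CFReadStreamRead() (the buffer size constant is the question's own name, and its value here is only an assumption for the sketch):
#include <CoreFoundation/CoreFoundation.h>
#include <stdlib.h>

enum { kNetworkyBitsBufferSize = 4096 };   /* assumed value, name taken from the question */
static UInt8 *buffer = NULL;

/* Make sure the buffer actually points somewhere before handing it to a
 * read call; a NULL or stale pointer here is what shows up as EFAULT. */
static CFIndex safeRead(CFReadStreamRef stream)
{
    if (buffer == NULL) {
        buffer = malloc(kNetworkyBitsBufferSize);
        if (buffer == NULL)
            return -1;   /* allocation failed: never call the read with NULL */
    }
    return CFReadStreamRead(stream, buffer, kNetworkyBitsBufferSize);
}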