How to get peer address in event handler on error during bufferevent_socket_connect? - libevent

I can't find a way to print the peer's address in the event handler when an error occurs during bufferevent_socket_connect(). Can anybody help?
What I tried:
Using getpeername() on bufferevent_getfd() fails because the connection is not established.
Passing the peer's address as the last parameter of the event handler fails too, because that pointer changes on the next connection attempt to another peer.
Code example:
void eventcb(struct bufferevent *bev, short events, void *ptr)
{
    if (events & BEV_EVENT_ERROR)
    {
        // I want to print the peer's address on error in bufferevent_socket_connect
        bufferevent_free(bev);
    }
}

int main()
{
    ...
    struct sockaddr_storage obj;
    ...
    while (get_next_obj(obj))
    {
        evutil_socket_t sock = socket(obj.ss_family, SOCK_STREAM, 0);
        evutil_make_socket_nonblocking(sock);
        evutil_make_listen_socket_reuseable(sock);
        evutil_make_socket_closeonexec(sock);
        struct bufferevent *evbev =
            bufferevent_socket_new(evbase, sock, 0);
        bufferevent_set_timeouts(evbev, &sec5, &sec5);
        bufferevent_setcb(evbev, readcb, NULL, eventcb, evbase);
        bufferevent_enable(evbev, EV_READ|EV_WRITE);
        bufferevent_write(evbev, some_data.data(), some_data_sz);
        if (bufferevent_socket_connect(evbev, (struct sockaddr *)&obj, sizeof(sockaddr_storage)) < 0) {
            continue;
        }
        ...
    }
    ...
}
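A sketch of one possible workaround (illustrative only, not part of the original post): heap-allocate a small per-connection context that holds a copy of the peer address and pass it as the callback argument, so each bufferevent keeps its own copy instead of sharing obj. The name conn_ctx is made up, and the snippet assumes <netdb.h>, <string.h>, <stdlib.h>, <stdio.h> plus <event2/bufferevent.h> and <event2/util.h>:

/* Illustrative sketch: give every connection its own copy of the peer address. */
struct conn_ctx {
    struct sockaddr_storage peer;   /* this connection's copy of the peer address */
};

void eventcb(struct bufferevent *bev, short events, void *ptr)
{
    struct conn_ctx *ctx = (struct conn_ctx *)ptr;
    if (events & BEV_EVENT_ERROR) {
        char host[NI_MAXHOST], serv[NI_MAXSERV];
        if (getnameinfo((struct sockaddr *)&ctx->peer, sizeof(ctx->peer),
                        host, sizeof(host), serv, sizeof(serv),
                        NI_NUMERICHOST | NI_NUMERICSERV) == 0)
            fprintf(stderr, "connect to %s:%s failed: %s\n", host, serv,
                    evutil_socket_error_to_string(EVUTIL_SOCKET_ERROR()));
        bufferevent_free(bev);
        free(ctx);                  /* the context lives exactly as long as the bufferevent */
    }
}

/* In the connect loop, instead of passing evbase as the callback argument: */
struct conn_ctx *ctx = (struct conn_ctx *)malloc(sizeof(*ctx));
memcpy(&ctx->peer, &obj, sizeof(obj));
bufferevent_setcb(evbev, readcb, NULL, eventcb, ctx);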

Related

UDP directed broadcast (WinSock2) failure

Let me start by saying this is my first foray into the world of C after 20+ years of assembly programming for PLCs and MicroControllers.
I'm trying to send a UDP datagram to the network broadcast address, in this particular case, 192.168.1.255.
The error I'm getting is a bind failure with error code 10049 (from WSAGetLastError()). As you can see from the attached code, I've created the socket, populated sockaddr_in, and setsockopt() to SO_BROADCAST.
For the life of me I can't figure out what I'm doing wrong and any pointers would be gratefully received.
iResult = WSAStartup(MAKEWORD(2, 2), &wsaTxData);
if (iResult != NO_ERROR)
{
WSAErrorString("WSAStartup for TX failed");
return(-1);
}
XPLMDebugString("UDP Server: WSAStartup TX complete.\n");
if ((BeaconSocket = socket(AF_INET, SOCK_DGRAM, 0)) == INVALID_SOCKET) {
WSAErrorString("UDP Server: Could not create BECN socket");
return(-1);
}
// setup the sockaddr_in structure
//
si_beacon.sin_family = AF_INET;
si_beacon.sin_addr.s_addr = inet_addr("192.168.1.255");
si_beacon.sin_port = htons(_UDP_TX_PORT);
// setup to broadcast
//
char so_broadcast_enabled = '1';
if (setsockopt(BeaconSocket, SOL_SOCKET, SO_BROADCAST, &so_broadcast_enabled, sizeof(so_broadcast_enabled)) == SOCKET_ERROR) {
WSAErrorString("Error in setting Broadcast option");
closesocket(BeaconSocket);
return(-1);
}
// bind our socket
//
if (bind(BeaconSocket, (struct sockaddr *)&si_beacon, sizeof(si_beacon)) == SOCKET_ERROR)
{
char buf[256];
WSAErrorString("Bind to socket for UDP beacon failed");
sprintf(buf, "Port %u, address %s\n", ntohs(si_beacon.sin_port), inet_ntoa(si_beacon.sin_addr));
XPLMDebugString(buf);
return(-1);
}
// start the UDP beacon
//
udp_becn_thread_id = CreateThread(NULL, 0, BeaconThread, NULL, 0, NULL);
if (!udp_becn_thread_id) {
WSAErrorString("UDP Server: Error starting UDP Beacon");
return (-1);
}
XPLMDebugString("UDP Server: bind complete. beacon ACTIVE.\n");
return(0);
The issue is the IP address itself.
I copied the code to my computer (changed it a bit to get it to compile) and I got the error:
UDP Server: WSAStartup TX complete.
Bind to socket for UDP beacon failed
Port 47977, address 192.168.1.255
I then changed the line:
si_beacon.sin_addr.s_addr = inet_addr("192.168.1.255");
To
si_beacon.sin_addr.s_addr = inet_addr("192.168.0.127");
And when I ran it again, everything worked:
UDP Server: WSAStartup TX complete.
Done successfully
The issue is that the bind address needs to be your computer's address on the local network, not the remote client's.
Another alternative is to use the address:
si_beacon.sin_addr.s_addr = inet_addr("0.0.0.0");
which binds to all network interfaces on the computer at once.
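To make the distinction concrete, here is a minimal sketch (mine, not from the answer above): bind() takes a local address, while the broadcast address is only ever used as the destination in sendto(). payload and payload_len are placeholders:

// bind to a local address (or all interfaces), then send TO the broadcast address
struct sockaddr_in local;
memset(&local, 0, sizeof(local));
local.sin_family = AF_INET;
local.sin_addr.s_addr = htonl(INADDR_ANY);          // same effect as "0.0.0.0"
local.sin_port = htons(_UDP_TX_PORT);
bind(BeaconSocket, (struct sockaddr *)&local, sizeof(local));

struct sockaddr_in dest;
memset(&dest, 0, sizeof(dest));
dest.sin_family = AF_INET;
dest.sin_addr.s_addr = inet_addr("192.168.1.255");  // the broadcast address is the *destination*
dest.sin_port = htons(_UDP_TX_PORT);
sendto(BeaconSocket, payload, payload_len, 0, (struct sockaddr *)&dest, sizeof(dest));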
For reference, here's the version of the code that I used:
#define _WINSOCK_DEPRECATED_NO_WARNINGS
#include <stdio.h>
#include <string.h>
#include <WinSock2.h>
#include <WS2tcpip.h> // For inet_pton
#pragma comment(lib, "ws2_32.lib")
int main()
{
{
WSADATA wsaTxData;
memset(&wsaTxData, 0, sizeof(WSADATA));
const int iResult = WSAStartup(MAKEWORD(2, 2), &wsaTxData);
if (iResult != NO_ERROR)
{
printf("%s", "WSAStartup for TX failed.\n");
return -1;
}
printf("%s", "UDP Server: WSAStartup TX complete.\n");
}
SOCKET BeaconSocket;
memset(&BeaconSocket, 0, sizeof(SOCKET));
if ((BeaconSocket = socket(AF_INET, SOCK_DGRAM, 0)) == INVALID_SOCKET) {
printf("%s", "UDP Server: Could not create BECN socket\n");
return -1;
}
// setup the sockaddr_in structure
//
sockaddr_in si_beacon;
memset(&si_beacon, 0, sizeof(sockaddr_in));
si_beacon.sin_family = AF_INET;
si_beacon.sin_addr.s_addr = inet_addr("0.0.0.0");
const unsigned short port_num = 0xbb69;
si_beacon.sin_port = htons(port_num);
// setup to broadcast
//
char so_broadcast_enabled = '1';
if (setsockopt(BeaconSocket, SOL_SOCKET, SO_BROADCAST, &so_broadcast_enabled, sizeof(so_broadcast_enabled)) == SOCKET_ERROR) {
printf("%s", "Error in setting Broadcast option\n");
closesocket(BeaconSocket);
return(-1);
}
// bind our socket
//
if (bind(BeaconSocket, (struct sockaddr*)&si_beacon, sizeof(si_beacon)) == SOCKET_ERROR)
{
char buf[256];
printf("%s", "Bind to socket for UDP beacon failed\n");
sprintf_s(buf, "Port %u, address %s\n", ntohs(si_beacon.sin_port), inet_ntoa(si_beacon.sin_addr));
printf("%s", buf);
return(-1);
}
printf("%s", "Done successfully");
return 0;
}

OpenSSL 1.0.1 SSL_read() function return 0 byte on certain https Websites

I'm trying to build an HTTPS client with OpenSSL 1.0.1u that can visit websites over SSL/TLS.
When visiting most HTTPS websites (like google.com, yahoo.com, facebook.com, ...), it works well and the home page content is returned. However, for certain (relatively small) websites the server returns 0 bytes. Here are some details:
I use SSLv23_method() to create my openssl context:
this->_sslContext = SSL_CTX_new(SSLv23_method()); // SSLv23_method: Negotiate highest available SSL/TLS version
Then I traced the following call sequence (listed top-down):
(ssl_lib.c) SSL_read(SSL *s, void *buf, int num) ---->
(s3_lib.c) ssl3_read(SSL *s, void *buf, int len) ---->
(s3_lib.c) ssl3_read_internal(SSL *s, void *buf, int len, int peek) ---->
(s3_pkt.c) int ssl3_read_bytes(SSL *s, int type, unsigned char *buf, int len, int peek)
For some websites (the failing cases), SSL_read() returns 0 bytes because inside ssl3_read_bytes() alert_descr is set to SSL_AD_CLOSE_NOTIFY and the function simply returns 0. Here is the source code:
...
if (alert_level == SSL3_AL_WARNING)
{
s->s3->warn_alert = alert_descr;
if (alert_descr == SSL_AD_CLOSE_NOTIFY) {
s->shutdown |= SSL_RECEIVED_SHUTDOWN;
return (0);
}
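(For context, my understanding is that SSL_read() returning 0 together with SSL_get_error() == SSL_ERROR_ZERO_RETURN simply means the peer sent close_notify, i.e. a clean TLS shutdown rather than a transport error. A quick diagnostic sketch, assuming an existing ssl handle and buf buffer:)

int n = SSL_read(ssl, buf, sizeof(buf));
if (n <= 0) {
    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_ZERO_RETURN:
        /* peer sent close_notify: connection was closed cleanly */
        break;
    case SSL_ERROR_SYSCALL:
    case SSL_ERROR_SSL:
        /* real transport or protocol error */
        break;
    }
}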
Can anyone give me a hint on how to fix this problem? Thanks.
=== UPDATE ===
Upon Steffen Ullrich's suggestion, I'm posting the source code that sends the request and gets the response. My small experimental HTTPS client is composed of Socket and SSLSocket classes plus a helper WebpageFetcher class. The function WebpageFetcher::fetchPage() sends the HTTPS request and gets the response from the private function WebpageFetcher::_getResponse():
wchar_t * WebpageFetcher::fetchPage(wchar_t * url, int port, bool useSSL)
{
wchar_t * response = NULL;
Socket * socket = Socket::createSocket(false, useSSL);
if (socket == nullptr)
{
response = String(L"Connection failed. Unable to create a SSLSocket!\n").toCharArray();
return response;
}
if (!socket->connect(url, port))//Connection failed
{
response = String(L"Connection failed. Possible reason: Wrong server URL or port.\n").toCharArray();
}
else //Connection succeeded
{
//Send request to server socket
static const char * REQUEST = "GET / \r\n\r\n";
static const int REQUEST_LEN = (const int)strlen(REQUEST);
socket->send((void *)REQUEST, REQUEST_LEN);
//Get the response from server
response = _getResponse(socket);
socket->shutDown();
socket->close();
}
delete socket;
return response;
}
// ============================================================================
wchar_t * WebpageFetcher::_getResponse(Socket * socket)
{
static const int READSIZE = 1024; //Reading buffer size, the larger the better performance
int responseBufferSize = READSIZE + 1;
char * readBuf = new char[READSIZE];
char * responseBuf = new char[responseBufferSize];
int bytesReceived;
int totalBytesReceived = 0;
while ((bytesReceived = socket->recv(readBuf, READSIZE)) > 0)
{
// Check if need to expand responseBuf size
if (totalBytesReceived + bytesReceived >= responseBufferSize)//No enough capacity, expand the response buffer
{
responseBufferSize += READSIZE;
char * tempBuf = new char[responseBufferSize];
memcpy(tempBuf, responseBuf, totalBytesReceived);
delete[] responseBuf;
responseBuf = tempBuf; //Response buffer expanded
}
// Append data from readBuf
memcpy(responseBuf + totalBytesReceived, readBuf, bytesReceived);
totalBytesReceived += bytesReceived;
responseBuf[totalBytesReceived] = '\0';
}
wchar_t * response = (wchar_t *)(totalBytesReceived == 0 ? //Generate the response as a C wide string
String(L"Received nothing from server. Possible reason: Wrong port.\n").toCharArray() :
StringUtil::charsToWchars(responseBuf));
delete[] readBuf;
delete[] responseBuf;
return response;
}
I pass useSSL = true when calling the factory function Socket::createSocket(), so the socket I get is an SSLSocket instance, which overrides the default connect(), _send() and _recv() functions to let OpenSSL do the actual work. Here is the constructor of my SSLSocket class, which derives from Socket:
SSLSocket::SSLSocket(bool isServerSocket, int port, int socketType, int socketProtocol, int uOptions, wchar_t * strBindingAddress, wchar_t * cerPath, wchar_t * keyPath, wchar_t * keyPass) :
Socket(isServerSocket, port, socketType, socketProtocol, uOptions, strBindingAddress)
{
// Register the error strings
SSL_load_error_strings();
// Register the available ciphers and digests
SSL_library_init();
// Create an SSL_CTX structure by choosing a SSL/TLS protocol version
this->_sslContext = SSL_CTX_new(SSLv23_method()); // Use SSL 2 or SSL 3
// Create an SSL struct (client only, server does not need one)
this->_sslHandle = (this->_isServer ? NULL : SSL_new(this->_sslContext));
bool success = false;
if (!this->_isServer) // is Client socket
{
success = (this->_sslHandle != NULL);
}
else if (cerPath != NULL && keyPath != NULL) // is Server socket
{
success = ......
}
if (!success)
this->close();
}
The following functions override the virtual functions in the parent class Socket and let OpenSSL do the relevant work:
bool SSLSocket::connect(wchar_t * strDestination, int port, int timeout)
{
SocketAddress socketAddress(strDestination, port);
return this->connect(&socketAddress, timeout);
}
bool SSLSocket::connect(SocketAddress * sockAddress, int timeout)
{
bool success =
(this->_sslHandle != NULL &&
Socket::connect(sockAddress, timeout) && // Regular TCP connection
SSL_set_fd(this->_sslHandle, (int)this->_hSocket) == 1 && // Connect the SSL struct to our connection
SSL_connect(this->_sslHandle) == 1); // Initiate SSL handshake
if (!success)
this->close();
return success;
}
int SSLSocket::_recv(void * lpBuffer, int size, int flags)
{
MonitorLock cs(&_mutex);
return SSL_read(this->_sslHandle, lpBuffer, size);
}
int SSLSocket::_send(const void * lpBuffer, int size, int flags)
{
return SSL_write(this->_sslHandle, lpBuffer, size);
}

Usage difference between SSL_add0_chain_cert and SSL_add1_chain_cert?

The OpenSSL documentation says:
All these functions are implemented as macros. Those containing a 1 increment the reference count of the supplied certificate or chain so it must be freed at some point after the operation. Those containing a 0 do not increment reference counts and the supplied certificate or chain MUST NOT be freed after the operation.
But when I looked at examples of where each one should be used, I got confused.
First OpenSSL:
It uses SSL_add0_chain_cert itself in the SSL_CTX_use_certificate_chain_file function of ssl_rsa.c. Here is the source:
static int use_certificate_chain_file(SSL_CTX *ctx, SSL *ssl, const char *file) {
if (ctx)
ret = SSL_CTX_use_certificate(ctx, x);
else
ret = SSL_use_certificate(ssl, x);
......
while ((ca = PEM_read_bio_X509(in, NULL, passwd_callback,
passwd_callback_userdata))
!= NULL) {
if (ctx)
r = SSL_CTX_add0_chain_cert(ctx, ca);
else
r = SSL_add0_chain_cert(ssl, ca);
......
}
The second usage I see is in OpenResty's Lua module:
It uses SSL_add0_chain_cert in one way of setting the certificate (ngx_http_lua_ffi_ssl_set_der_certificate), see here:
int ngx_http_lua_ffi_ssl_set_der_certificate(ngx_http_request_t *r,
const char *data, size_t len, char **err) {
......
if (SSL_use_certificate(ssl_conn, x509) == 0) {
*err = "SSL_use_certificate() failed";
goto failed;
}
......
while (!BIO_eof(bio)) {
x509 = d2i_X509_bio(bio, NULL);
if (x509 == NULL) {
*err = "d2i_X509_bio() failed";
goto failed;
}
if (SSL_add0_chain_cert(ssl_conn, x509) == 0) {
*err = "SSL_add0_chain_cert() failed";
goto failed;
}
}
BIO_free(bio);
*err = NULL;
return NGX_OK;
failed:
.......
}
Yet it uses SSL_add1_chain_cert in another code path (ngx_http_lua_ffi_set_cert), see here:
int ngx_http_lua_ffi_set_cert(ngx_http_request_t *r,
void *cdata, char **err) {
......
if (SSL_use_certificate(ssl_conn, x509) == 0) {
*err = "SSL_use_certificate() failed";
goto failed;
}
x509 = NULL;
/* read rest of the chain */
for (i = 1; i < sk_X509_num(chain); i++) {
x509 = sk_X509_value(chain, i);
if (x509 == NULL) {
*err = "sk_X509_value() failed";
goto failed;
}
if (SSL_add1_chain_cert(ssl_conn, x509) == 0) {
*err = "SSL_add1_chain_cert() failed";
goto failed;
}
}
*err = NULL;
return NGX_OK; /* No free of x509 here */
failed:
......
}
Yet I don't see a clear difference between the two calls in the Lua code, and it doesn't seem like the X509 cert, once set successfully, gets freed in either case. From my understanding of the OpenSSL docs, I would expect X509_free(x509) to be called somewhere after SSL_add1_chain_cert() is called on that x509. Is that the correct understanding?
Last, the OpenSSL implementation of ssl_cert_add1_chain_cert (what the SSL_add1_chain_cert macro boils down to) does indeed show that it is just a wrapper around ssl_cert_add0_chain_cert with the reference count incremented on the cert, but how should that be reflected in the calling code?
int ssl_cert_add1_chain_cert(SSL *s, SSL_CTX *ctx, X509 *x)
{
if (!ssl_cert_add0_chain_cert(s, ctx, x))
return 0;
X509_up_ref(x);
return 1;
}
Now, Nginx itself only uses another function, SSL_CTX_add_extra_chain_cert, which sidesteps this choice because it does not switch certificates on a per-SSL-connection basis. In my case I need to patch Nginx with this capability, switching the cert per connection (but without using Lua).
So which one should I be using, SSL_add0_chain_cert or SSL_add1_chain_cert? And what is the correct freeing practice here?
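For illustration, this is how I currently read the ownership rules from the documentation (an assumption on my part, which is part of what I would like confirmed): with the 1-variant the caller keeps its own reference and releases it afterwards, while with the 0-variant ownership transfers to the SSL object on success:

/* Alternative A - add1: the chain takes its own reference, the caller still owns x509 */
if (SSL_add1_chain_cert(ssl_conn, x509) != 1)
    goto failed;
X509_free(x509);   /* release the caller's reference; the chain keeps its own */

/* Alternative B - add0: on success the chain owns x509, so the caller must NOT free it */
if (SSL_add0_chain_cert(ssl_conn, x509) != 1)
    goto failed;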

How to check forwarded Packets in UDPBasicApp in Omnet

How can I modify UDPBasicApp to find duplicates among the messages received?
I made the changes below to UDPBasicApp.cc, adding an extra step that checks received UDP data packets, but I see no effect in the .sca/.vec files and it does not even show the bubbles.
Where could the error be?
void UDPBasicApp::handleMessageWhenUp(cMessage *msg)
{
if (msg->isSelfMessage()) {
ASSERT(msg == selfMsg);
switch (selfMsg->getKind()) {
case START:
processStart();
break;
case SEND:
processSend();
break;
case STOP:
processStop();
break;
default:
throw cRuntimeError("Invalid kind %d in self message", (int)selfMsg->getKind());
}
}
else if (msg->getKind() == UDP_I_DATA) {
// process incoming packet
//-----------------------------------------------------Added step
//std::string currentMsg= "" + msg->getTreeId();
std::string currentPacket= PK(msg)->getName();
if( BF->CheckBloom(currentPacket) == 1) {
numReplayed++;
getParentModule()->bubble("Replayed!!");
EV<<"----------------------WSNode "<<getParentModule()->getIndex() <<": REPLAYED! Dropping Packet\n";
delete msg;
return;
}
else
{
BF->AddToBloom(currentPacket);
numLegit++;
getParentModule()->bubble("Legit.");
EV<<"----------------------WSNode "<<getParentModule()->getIndex() <<":OK. Pass.\n";
}
//-----------------------------------------------------------------------------
processPacket(PK(msg));
}
else if (msg->getKind() == UDP_I_ERROR) {
EV_WARN << "Ignoring UDP error report\n";
delete msg;
}
else {
throw cRuntimeError("Unrecognized message (%s)%s", msg->getClassName(), msg->getName());
}
if (hasGUI()) {
char buf[40];
sprintf(buf, "rcvd: %d pks\nsent: %d pks", numReceived, numSent);
getDisplayString().setTagArg("t", 0, buf);
}
}
Since I don't have enough context about the entities participating in your overall system, I will provide the following idea:
You can add a unique ID to each message of your application by adding the following line to your applications *.msg:
int messageID = simulation.getUniqueNumber();
Now on the receiver side you can have a std::map<int, int> myMap where you store the <id, number-of-occurrences>.
Each time you receive a message you add its ID to the std::map and increment the number of occurrences:
if (this->myMap.count(myMessage->getUniqueID()) == 0) /* check whether this ID exists in the map */
{
    this->myMap.insert(std::make_pair(myMessage->getUniqueID(), 1)); /* add this ID to the map and set the counter to 1 */
}
else
{
    this->myMap.at(myMessage->getUniqueID())++; /* the ID is already in the map, increment the counter */
}
This will allow you to track whether the same message has been forwarded twice, simply by doing:
if(this->myMap.at(myMessage->getUniqueID()) != 1 ) /* the counter is not 1, message has been "seen" more than once */
The tricky part for you is how you define whether a message has been seen twice (or more).
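Putting this together, a minimal sketch of a receiver-side helper (the name seenBefore is a placeholder; myMap would be a std::map<long, int> member of the app module):

// Returns true if this message ID was already received; counts every occurrence.
bool UDPBasicApp::seenBefore(long id)
{
    auto it = myMap.find(id);
    if (it == myMap.end()) {
        myMap[id] = 1;      // first occurrence of this ID
        return false;
    }
    it->second++;           // duplicate: bump the occurrence counter
    return true;
}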

Calling methods and receiving signals using low-level APIs

I am trying to call the method ReadLocalBdAddrReq and receive its signal ReadLocalBdAddrCfm on dbus using the dbus low level APIs.
I have written the following code with the help of some forum posts and a dbus tutorial.
The thing is, I am not able to receive the signals back. The code is incomplete in some places because I didn't know what should be done there.
So please help me so that I can receive the signals for the methods called.
Here is the code I have written. Please correct any mistakes I've made.
#include <stdlib.h>
#include <stdio.h>
#include <dbus/dbus.h>
#define OBJ_PATH "/bt/cm"
static dbus_bool_t add_watch(DBusWatch *watch, void *data)
{
if (!dbus_watch_get_enabled(watch))
return TRUE;
int fd = dbus_watch_get_unix_fd(watch);
unsigned int flags = dbus_watch_get_flags(watch);
int f = 0;
if (flags & DBUS_WATCH_READABLE) {
f |= DBUS_WATCH_READABLE;
printf("Readable\n");
}
if (flags & DBUS_WATCH_WRITABLE) {
printf("Writeable\n");
f |= DBUS_WATCH_WRITABLE;
}
/* this should not be here */
if (dbus_watch_handle(watch, f) == FALSE)
printf("dbus_watch_handle() failed\n");
return TRUE;
}
static void remove_watch(DBusWatch *watch, void *data)
{
printf("In remove watch with fd = [%d]\n",dbus_watch_get_unix_fd(watch));
}
static void toggel_watch(DBusWatch *watch, void *data)
{
printf("In toggel watch\n");
/*
if (dbus_watch_get_enabled(watch))
add_watch(watch, data);
else
remove_watch(watch, data);
*/
}
/* timeout functions */
static dbus_bool_t add_time(DBusTimeout *timeout, void *data)
{
/* Incomplete */
printf("In add_time\n");
if (!dbus_timeout_get_enabled(timeout))
return TRUE;
//dbus_timeout_handle(timeout);
return 0;
}
static void remove_time(DBusTimeout *timeout, void *data)
{
/* Incomplete */
printf("In remove_time\n");
}
static void toggel_time(DBusTimeout *timeout, void *data)
{
/* Incomplete */
printf("In toggel_time\n");
/*
if (dbus_timeout_get_enabled(timeout))
add_timeout(timeout, data);
else
remove_timeout(timeout, data);
*/
}
/* message filter -- handlers to run on all incoming messages*/
static DBusHandlerResult filter (DBusConnection *connection, DBusMessage *message, void *user_data)
{
printf("In filter\n");
char *deviceaddr;
if (dbus_message_is_signal(message, "com.bluegiga.v2.bt.cm", "ReadLocalBdAddrCfm")) {
printf("Signal received is ReadLocalBdAddrCfm\n");
if ((dbus_message_get_args(message,NULL,DBUS_TYPE_STRING, &deviceaddr,DBUS_TYPE_INVALID) == FALSE))
{
printf("Could not get the arguments from the message received\n");
return -2;
}
printf("Got Signal and device address is [%s]\n", deviceaddr);
}
return 0;
}
/* dispatch function-- simply save an indication that messages should be dispatched later, when the main loop is re-entered*/
static void dispatch_status(DBusConnection *connection, DBusDispatchStatus new_status, void *data)
{
printf("In dispatch_status\n");
if (new_status == DBUS_DISPATCH_DATA_REMAINS)
{
printf("new dbus dispatch status: DBUS_DISPATCH_DATA_REMAINS [%d]",new_status);
}
}
/* unregister function */
void unregister_func(DBusConnection *connection, void *user_data)
{
}
/* message function - Called when a message is sent to a registered object path. */
static DBusHandlerResult message_func(DBusConnection *connection, DBusMessage *message, void *data)
{
printf("Message [%s] is sent to [%s] from interface [%s] on path [%s]\n",
       dbus_message_get_member(message), dbus_message_get_destination(message),
       dbus_message_get_interface(message), dbus_message_get_path(message));
return 0;
}
DBusObjectPathVTable table = {
.unregister_function = unregister_func,
.message_function = message_func,
};
int main(void) {
DBusMessage* msg;
DBusMessageIter args;
DBusConnection* conn;
DBusError err;
DBusPendingCall* pending;
int ret;
//unsigned int level;
char* appHandle = NULL;
//int *context;
int msg_serial;
int open;
char *deviceaddr;
dbus_error_init(&err);
// connect to the system bus and check for errors
conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
if (dbus_error_is_set(&err)) {
fprintf(stderr, "Connection Error (%s)\n", err.message);
dbus_error_free(&err);
}
if (NULL == conn) {
exit(1);
}
if (!dbus_connection_set_watch_functions(conn, add_watch, remove_watch, toggel_watch, NULL, NULL))
{
printf("Error in dbus_set_watch_functions\n");
dbus_connection_unref(conn);
return -1;
}
/* These functions are responsible for making the application's main loop aware of timeouts */
if (!dbus_connection_set_timeout_functions(conn, add_time, remove_time, toggel_time, NULL, NULL))
{
printf("Error in dbus_set_timeout_functions\n");
dbus_connection_unref(conn);
return -1;
}
/* Used to register the handler functions run on incoming messages*/
if (!dbus_connection_add_filter(conn, filter, NULL, NULL))
{
printf("Error in adding filter\n");
dbus_connection_unref(conn);
return -1;
}
/* Filter added for incoming messages */
/* Set a function to be invoked when the dispatch status changes */
dbus_connection_set_dispatch_status_function(conn, dispatch_status, NULL ,NULL);
/* Register a handler for messages sent to a given path */
if(!dbus_connection_register_object_path(conn, OBJ_PATH, &table, NULL))
{
printf("Error in registering object\n");
return -1;
}
/* sending messages to the outgoing queue */
msg = dbus_message_new_method_call("com.bluegiga.v2.bt.cm", // target for the method call
OBJ_PATH, // object to call on
"com.bluegiga.v2.bt.cm", // interface to call on
"ReadLocalBdAddrReq"); // method name
if (NULL == msg) {
fprintf(stderr, "Message Null\n");
exit(1);
}
dbus_message_iter_init_append(msg, &args);
if (!dbus_message_iter_append_basic(&args, DBUS_TYPE_UINT16,&appHandle)) {
fprintf(stderr, "Out Of Memory!\n");
exit(1);
}
fprintf(stderr, "Sending the connections\n");
// send message and get a handle for a reply
if (!dbus_connection_send (conn, msg, &msg_serial)) {
fprintf(stderr, "Out Of Memory!\n");
exit(1);
}
fprintf(stderr, "Connection sent and the msg serial is %d\n",msg_serial);
/* Message sent over */
/* not sure whether this should be here or above watch */
while (dbus_connection_get_dispatch_status(conn) == DBUS_DISPATCH_DATA_REMAINS)
{
//printf("Entered in dispatch\n");
/* Processes any incoming data. will call the filters registered by add_filer*/
dbus_connection_dispatch(conn);
}
return 0;
}
After I run this program it has the following output:
Readable
Sending the connections
Connection sent and the msg serial is 2 (DBUS_MESSAGE_TYPE_METHOD_RETURN)
If the connection was sent to the object path then message_func should have been called correctly, but it never is called. Have I made any mistake in sending the method call?
You are missing the event loop, which you otherwise get by default if you go with one of the bindings. When add_watch is called, libdbus expects the application to attach an I/O handler to it. The I/O handler added by the application watches for activity on the fd (file descriptor) queried from the watch. Whenever there is activity on that file descriptor, the I/O handler triggers a callback with the appropriate flags, which you need to convert to D-Bus flags before calling dbus_watch_handle().
I suggest you use GLib if you don't know how to use event loops. I was able to get this working using libuv or libev as a low-footprint event loop.
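As a rough sketch of what that means with plain poll() (single watch only, error handling omitted; g_watch is a made-up global that add_watch() should save instead of calling dbus_watch_handle() there):

#include <poll.h>
#include <dbus/dbus.h>

static DBusWatch *g_watch;   /* remembered in add_watch(); do NOT call dbus_watch_handle() there */

static void run_loop(DBusConnection *conn)
{
    for (;;) {
        struct pollfd pfd;
        pfd.fd = dbus_watch_get_unix_fd(g_watch);
        pfd.events = 0;
        pfd.revents = 0;
        unsigned int flags = dbus_watch_get_flags(g_watch);
        if (dbus_watch_get_enabled(g_watch)) {
            if (flags & DBUS_WATCH_READABLE) pfd.events |= POLLIN;
            if (flags & DBUS_WATCH_WRITABLE) pfd.events |= POLLOUT;
        }
        if (poll(&pfd, 1, -1) <= 0)
            continue;
        unsigned int dbus_flags = 0;   /* convert the poll(2) result back to D-Bus flags */
        if (pfd.revents & POLLIN)  dbus_flags |= DBUS_WATCH_READABLE;
        if (pfd.revents & POLLOUT) dbus_flags |= DBUS_WATCH_WRITABLE;
        dbus_watch_handle(g_watch, dbus_flags);
        /* now the filter and object-path handlers get their turn */
        while (dbus_connection_get_dispatch_status(conn) == DBUS_DISPATCH_DATA_REMAINS)
            dbus_connection_dispatch(conn);
    }
}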