How to register for a packet-in-message change-event-notification? - SDN

I am trying to get a notification (over a REST connection) whenever any host tries to communicate in my network. I registered for the packet-in-message change-event-notification that I found in the packet-processing module, but when I start listening with my WebSocket client, I receive nothing.
I was expecting a notification for each packet-in reaching the controller. Am I misunderstanding the use of this module? What is the purpose of this packet-in-message?
Is there a way to get a notification (over REST) when any host tries to establish communication with another host?
I built my topology in Mininet; it contains some OpenFlow switches and hosts. The OpenDaylight controller has the l2switch, restconf, openflowplugin and dlux features enabled.
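For reference, the only way I currently know to consume packet-ins is from Java inside the controller. Below is a minimal sketch assuming the PacketProcessingListener generated from the packet-processing model and an injected MD-SAL NotificationService (package names are from the openflowplugin bindings as I recall them and may differ between releases); what I am after is an equivalent notification pushed over REST/WebSocket:

import org.opendaylight.controller.sal.binding.api.NotificationService;
import org.opendaylight.yang.gen.v1.urn.opendaylight.packet.service.rev130709.PacketProcessingListener;
import org.opendaylight.yang.gen.v1.urn.opendaylight.packet.service.rev130709.PacketReceived;

// Sketch: the packet-processing model publishes a packet-received notification
// for every packet-in that the switches forward to the controller.
public class PacketInLogger implements PacketProcessingListener {

    public PacketInLogger(NotificationService notificationService) {
        // Register this listener so onPacketReceived() fires for each packet-in.
        notificationService.registerNotificationListener(this);
    }

    @Override
    public void onPacketReceived(PacketReceived notification) {
        // The ingress reference identifies the switch port that saw the packet.
        System.out.println("packet-in on " + notification.getIngress());
    }
}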

Related

For UDP (multicast), multiple applications can subscribe to the same port. If a packet arrives from a client, which application receives it?

Nowadays, mDNS discovery is very popular in LAN device-to-device (D2D) use cases. Different applications may bundle their own mDNS discovery modules, which means multiple applications end up listening on the same port. When a packet arrives, which application receives it? Or do they all get notified?
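To make the question concrete, here is a small Java sketch of the situation I mean: two multicast sockets (standing in for two applications) bound to the same port and joined to the same group. My understanding is that, unlike unicast UDP, a multicast datagram is delivered to every socket that has joined the group, so both receivers should print it, but I would like this confirmed. The group address and port below are arbitrary examples:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastDemo {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.255.0.1"); // example group address
        int port = 45454; // arbitrary port for the demo

        // Two sockets in one process stand in for two independent applications.
        // MulticastSocket enables SO_REUSEADDR, so both can bind the same port.
        MulticastSocket appA = new MulticastSocket(port);
        MulticastSocket appB = new MulticastSocket(port);
        appA.joinGroup(group);
        appB.joinGroup(group);

        new Thread(() -> receive(appA, "app A")).start();
        new Thread(() -> receive(appB, "app B")).start();

        // Send a single multicast datagram; each joined socket gets its own copy.
        byte[] payload = "hello".getBytes();
        try (DatagramSocket sender = new DatagramSocket()) {
            sender.send(new DatagramPacket(payload, payload.length, group, port));
        }
    }

    static void receive(MulticastSocket socket, String name) {
        try {
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            System.out.println(name + " got: " + new String(packet.getData(), 0, packet.getLength()));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}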

Load balancing WebSocket with Redis and RabbitMQ

Consider a small chat server. In this server, the actual processing of messages is done by nodes of a service called "chat". Communication with this service, along with a "user" service, is aggregated by a "gateway" service in front, which is the only service that actually communicates with the users and is in charge of passing the requests it receives to the other services via the RabbitMQ channel they share.
In a system designed like this, each user is connected to one of the instances of the "gateway" service and, when sending and receiving messages, indirectly communicates with the private "chat" or "user" services behind it. To load-balance this, we have an Nginx reverse proxy on the edge that distributes requests across the different "gateway" instances. But since the WebSocket connection is real-time, "chat" instances should also be able to send messages to the right "gateway" instance (the one in charge of a specific user) for user-specific messages, and to all "gateway" instances for site-wide messages. This is a problem, since with RabbitMQ I don't believe we can target a specific subscriber, and even if we could, we don't know which instance a specific user is connected to right now.
Therefore, since we are using Socket.io for the WebSocket connection, I am thinking of adding a Redis node to the stack to allow this communication between the different instances of the "gateway" service. This is directly supported by Socket.io, works well, and removes the limitations imposed by RabbitMQ. However, we are still using RabbitMQ to route a message from a "chat" instance to a "gateway" instance; the message then propagates through the Redis service, and once the right "gateway" instance holding the user's connection is found, it is delivered to them.
This adds unnecessary lag to user-specific outbound messages. So here I am asking if anyone has a better idea of how this problem should be approached and how to reduce this lag.
Personally, I have the idea of adding Socket.io to the "chat" services (with no client access) and using its backend to send the message directly to the Redis store, so that the "gateway" instance connected to that user can route it straight to them, bypassing the whole RabbitMQ hop for this type of message.
It might be important to mention that none of these services exist just to do this specific thing: RabbitMQ is heavily used as the message broker for communication between different services, and the "gateway" service works with multiple other services for data aggregation, authentication, data validation and transformation. The above example is a simplified version of the problem at hand, with the minimum number of moving parts that I could easily describe here.
Edit: To send messages directly to the Socket.io Redis store without loading the whole Socket.io library, the following library can apparently be used:
https://github.com/socketio/socket.io-redis-emitter

SignalR Hub Acts as TCP Client

Here is my situation:
I have a Windows service that runs constantly and receives data from connected hardware on a TCP/IP port. This data needs to be pushed to an ASP.NET website for real-time display, as well as stored in a database. The Windows service, ASP.NET website and database are all located on the same server.
For the Windows service: I can't use WCF because it only supports net.tcp, which is not going to work with raw socket communication over TCP. So I have to use TCP socket communication for the TCP server/client.
For real-time updates to the website: I am planning to use the SignalR library. I can create a hub which should send new data to clients whenever it becomes available.
My problem is: what's the best way for the SignalR hub to retrieve data from the TCP server/client located in the Windows service? One simple solution is to first store the data in the database and retrieve it from there, but I am not sure if that is going to slow down the whole system, since new data arrives every second.
Please advise the best solution for this problem.
Thanks.

Integrating a UDP server in Eclipse RCP

I want to make a tool that monitors a couple of TCP and UDP ports, which will then be visualized in different views in an Eclipse RCP application.
How should one go about doing this?
I am having some trouble figuring out how to attach the TCP and UDP servers to the Eclipse framework so that multiple views can listen to them and handle the information accordingly.
Each view can register itself as a listener to your network monitor using one of these methods:
Access the network monitor singleton instance directly (like you've done):
NetworkMonitor.getInstance().addMonitorListener(this);
Make an OSGi service out of your network monitor, then access it from your view using:
nmServiceTracker = new ServiceTracker(bundleContext, NetworkMonitor.class.getName(), null);
nmServiceTracker.open();
((NetworkMonitor) nmServiceTracker.getService()).addMonitorListener(this);
See a simple OSGi service tutorial for more info.
Create an extension point for "Network Monitor Listeners". For more on creating extension points, refer to this great article.
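Whichever registration mechanism you pick, the monitor itself can be a plain class that owns the receive loop and fans incoming data out to its listeners. Here is a rough sketch: the NetworkMonitor and addMonitorListener names are taken from your snippet, while the listener interface and the UDP-only loop are my assumptions, and in a view you would wrap the callback in Display.asyncExec() before touching any SWT widgets:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class NetworkMonitor {

    // Hypothetical callback interface implemented by each view.
    public interface MonitorListener {
        void packetReceived(int port, byte[] data);
    }

    private static final NetworkMonitor INSTANCE = new NetworkMonitor();
    private final List<MonitorListener> listeners = new CopyOnWriteArrayList<>();

    public static NetworkMonitor getInstance() {
        return INSTANCE;
    }

    public void addMonitorListener(MonitorListener listener) {
        listeners.add(listener);
    }

    public void removeMonitorListener(MonitorListener listener) {
        listeners.remove(listener);
    }

    // Starts a daemon thread that listens on one UDP port and notifies every registered view.
    public void monitorUdpPort(int port) {
        Thread thread = new Thread(() -> {
            try (DatagramSocket socket = new DatagramSocket(port)) {
                byte[] buffer = new byte[2048];
                while (!Thread.currentThread().isInterrupted()) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
                    for (MonitorListener listener : listeners) {
                        listener.packetReceived(port, data);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "udp-monitor-" + port);
        thread.setDaemon(true);
        thread.start();
    }
}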

WCF Intermediary to enable calls between 2 endpoints behind routers without router configuration

I'm developing a synchronization service using WCF and Sync Framework, and I have it working when the endpoints can communicate directly.
The next step I need to implement is to synchronize two endpoints that are both behind routers whose IP addresses change constantly. I am thinking about a publicly available intermediary that would forward the calls between the two endpoints. My biggest problem is that I cannot rely on the users to configure port forwarding on their routers, so I cannot directly open a connection from the other endpoint or from the intermediary.
My idea is based on FogCreek's CoPilot and other remote assistance solutions (LogMeIn, TeamViewer, etc.), which work without any router configuration.
How would you implement it?
You need something like the relay in Azure. I would try to implement it this way:
Your intermediary will provide two operations:
Push - the client calls this operation when publishing new data for synchronization. The data is stored on the service until the other client downloads it.
Pull - the client calls this operation regularly to download any published data stored on the intermediary.
Routers with changing IPs should not be a problem, because the client is always the one initiating the connection.
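The service contract itself would live in WCF, but the store-and-forward logic behind Push and Pull is small. Here is a rough, language-neutral sketch of it (written in Java only for illustration; the endpoint IDs and the in-memory map are my assumptions, and a real intermediary would persist the queues and authenticate callers):

import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Store-and-forward mailbox: push() parks data for a recipient, pull() drains it.
public class SyncRelay {

    private final ConcurrentHashMap<String, Queue<byte[]>> mailboxes = new ConcurrentHashMap<>();

    // Called by an endpoint that has new data for its peer.
    public void push(String recipientId, byte[] payload) {
        mailboxes.computeIfAbsent(recipientId, id -> new ConcurrentLinkedQueue<>()).add(payload);
    }

    // Called regularly by each endpoint; returns and removes everything queued for it.
    public List<byte[]> pull(String recipientId) {
        List<byte[]> result = new ArrayList<>();
        Queue<byte[]> queue = mailboxes.get(recipientId);
        if (queue != null) {
            for (byte[] item = queue.poll(); item != null; item = queue.poll()) {
                result.add(item);
            }
        }
        return result;
    }
}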
If you are not limited to the HTTP protocol, you can implement this with the Net.Tcp binding and use duplex communication. In that case your intermediary will be able to forward synchronized data immediately, but this solution can add complexity when dealing with sessions and connections.