Integrating a UDP server in Eclipse RCP - eclipse-plugin

I want to make a tool that monitors a couple of TCP and UDP ports, which will then be visualized in different views in an Eclipse RCP application.
How should one go about doing this?
I am having trouble figuring out how to attach the TCP and UDP servers to the Eclipse framework so that multiple views can listen to them and handle the information accordingly.

Each view can register itself as a listener to your network monitor using one of these methods:
Accessing the network monitor singleton instance directly (as you have done) - see the first sketch below:
NetworkMonitor.getInstance().addMonitorListener(this)
Making an OSGi service out of your network monitor (a registration sketch follows below), then accessing it from your view using:
nmServiceTracker = new ServiceTracker(bundleContext, NetworkMonitor.class.getName(), null);
nmServiceTracker.open();
((NetworkMonitor) nmServiceTracker.getService()).addMonitorListener(this)
See a simple OSGi service tutorial for more info.
Creating an extension point for "Network Monitor Listeners"; a sketch of reading the contributions follows below. For more on creating extension points, refer to this great article.
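For the first option, here is a minimal sketch of what such a singleton could look like, assuming a hand-rolled NetworkMonitor that runs a UDP listener on a background thread and forwards received datagrams to registered views (the class, interface and method names are illustrative, not part of any Eclipse API):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative singleton; the listener interface and method names are assumptions.
public final class NetworkMonitor {

    public interface MonitorListener {
        void onDatagram(byte[] data, int length);
    }

    private static final NetworkMonitor INSTANCE = new NetworkMonitor();
    private final List<MonitorListener> listeners = new CopyOnWriteArrayList<>();
    private volatile boolean running;

    private NetworkMonitor() {
    }

    public static NetworkMonitor getInstance() {
        return INSTANCE;
    }

    public void addMonitorListener(MonitorListener listener) {
        listeners.add(listener);
    }

    public void removeMonitorListener(MonitorListener listener) {
        listeners.remove(listener);
    }

    // Listen on the given UDP port in a background thread; one thread per monitored port.
    public void startUdp(final int port) {
        running = true;
        new Thread(() -> {
            try (DatagramSocket socket = new DatagramSocket(port)) {
                byte[] buffer = new byte[4096];
                while (running) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    for (MonitorListener listener : listeners) {
                        listener.onDatagram(packet.getData(), packet.getLength());
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, "udp-monitor-" + port).start();
    }

    public void stop() {
        running = false;
    }
}

Note that the callbacks fire on the listener thread, so a view must hand the update over to the UI thread (for example via Display.getDefault().asyncExec(...)) before touching any widgets.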
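For the OSGi option, the monitor first has to be registered as a service, typically in the plug-in's Activator; a rough sketch, reusing the hypothetical NetworkMonitor from the previous sketch:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Registers the (hypothetical) NetworkMonitor as an OSGi service when the bundle starts.
public class Activator implements BundleActivator {

    private ServiceRegistration<?> registration;

    @Override
    public void start(BundleContext context) throws Exception {
        registration = context.registerService(
                NetworkMonitor.class.getName(), NetworkMonitor.getInstance(), null);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        if (registration != null) {
            registration.unregister();
        }
    }
}

On the consuming side, remember to close the ServiceTracker again in the view's dispose() method.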
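For the extension-point option, the plug-in that owns the monitor reads the contributions at startup and registers each contributed listener itself; a sketch, assuming a hypothetical extension point com.example.networkMonitorListeners with a class attribute:

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.Platform;

// Reads contributions to a hypothetical "networkMonitorListeners" extension point
// and registers each contributed class with the monitor.
public final class ListenerExtensionLoader {

    private static final String EXTENSION_POINT_ID = "com.example.networkMonitorListeners";

    public static void loadListeners() {
        IConfigurationElement[] elements =
                Platform.getExtensionRegistry().getConfigurationElementsFor(EXTENSION_POINT_ID);
        for (IConfigurationElement element : elements) {
            try {
                Object contributed = element.createExecutableExtension("class");
                if (contributed instanceof NetworkMonitor.MonitorListener) {
                    NetworkMonitor.getInstance()
                            .addMonitorListener((NetworkMonitor.MonitorListener) contributed);
                }
            } catch (CoreException e) {
                e.printStackTrace();
            }
        }
    }
}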

Related

Scout Eclipse Notification for server

I asked earlier about notifications on the client here.
Now I am interested in notifications on the server side. In particular, I am interested in having a notification inform all servers.
My problem is a cluster of servers. I have some database elements cached on all servers. If a user on any server updates a database element, the cache needs to be refreshed everywhere. A notification could do the job.
Or is there another way to deal with a cluster of servers?
Marko
There is no complete tutorial on this topic, I'm afraid.
Scout, however, does have the functionality you are looking for in the form of the IClusterSynchronizationService.
You can use it to register listeners and send messages between Eclipse Scout servers.
For it to work, you'll need a message-passing system (a message queue) such as ActiveMQ or RabbitMQ. You simply have to install the necessary connector from the Eclipse Marketplace and register it for integration in your application. A detailed explanation is in this tutorial. (You need to add the new connector as a dependency to your product files, register the cluster synchronization service, and configure it with properties for host, port, and so on.)
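Scout's IClusterSynchronizationService hides the messaging plumbing, but purely to illustrate the publish/subscribe mechanism it builds on, here is roughly what a cache-invalidation broadcast over a plain ActiveMQ/JMS topic looks like (the broker URL and topic name are made up; Scout does this for you once the connector is configured):

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

// Illustration only: every server in the cluster subscribes to a topic and refreshes
// its cache when another server publishes an invalidation message.
public class CacheInvalidationBroadcast {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("cluster.cache.invalidation");

        // Subscriber side: refresh the cached element named in the message.
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                String elementId = ((TextMessage) message).getText();
                System.out.println("Refresh cached element " + elementId);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });

        // Publisher side: the server that updated the database element broadcasts the id.
        MessageProducer producer = session.createProducer(topic);
        producer.send(session.createTextMessage("42"));
    }
}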
The "BahBah" Demo chat application on GitHub has an implementation of these listeners and how to register them.
The (unofficial) fork of the BahBah chat demo already has some of these changes built in.

SignalR Hub Acts as TCP Client

Here is my situation:
I have a Windows service that constantly runs and receives data from connected hardware on a TCP/IP port. This data needs to be pushed to an ASP.NET website for real-time display as well as stored in a database. The Windows service, ASP.NET website and database are all located on the same server.
For the Windows service: I can't use WCF because it only supports netTCP, which is not going to work with plain socket communication over TCP. So I have to use TCP sockets for the TCP server/client.
For real-time updates to the website: I am planning to use the SignalR library. I can create a hub which should send new data to clients whenever it becomes available.
My problem is: what's the best way for the SignalR hub to retrieve data from the TCP server/client located in the Windows service? One simple solution is to first store the data in the database and retrieve it from there. But I am not sure whether that will slow down the whole system, as data arrives every second.
Please advise on the best solution for this problem.
Thanks.

Mule Inter-App communication in same instance

I have explored the web on Mule and came to understand that for apps to communicate among themselves, even if they are deployed in the same Mule instance, they have to use the TCP, HTTP or JMS transports.
VM isn't supported.
However, I find this a bit contradictory to ESB principles. Shouldn't we ideally be able to define endpoints in an ESB and connect to them using any transport? I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on a transaction-less HTTP protocol, or on TCP, where the number of connections one can make depends on server resources. Even for JMS we need to define and manage yet another queue, and under heavy usage that may have an impact on performance. Though I agree that if we have distributed and clustered systems, HTTP or JMS may be the only options.
Is there any plan to incorporate VM as an inter-app communication protocol, or is there any other way one flow can communicate with a flow endpoint in a different app?
EDIT: Answer from MuleSoft
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
It is still not clear when we are going to do it, but we have a couple of ideas about how we want this feature to behave. We may create a server-level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps on the same server.
As I said, this is just an idea.
Regarding the usage of VM for inter-app communication, only MuleSoft can answer whether VM will get such a feature in the future or not.
I don't think it's contradictory to the ESB principle. The "container" feature is pretty well defined in chapter 6 of David A. Chappell's "Enterprise Service Bus" book. The container should try its best to keep the applications isolated.
This provides benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communication as if it were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed on the same server, but sharing a VM connector is only available in the Enterprise edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/

AMQP AmqpBinding IIS/WAS problems?

The setup at the current employer has one set of back office functions on a Java platform and another group of functions on two separate .NET-based platforms. There is no overall architect.
The Java guys decided to go for Apache QPID and AMQP for messaging, presumably amongst themselves, with the .NET systems and other external systems.
.NET architecture involves WCF services hosted in IIS/WAS and Windows Server AppFabric.
Does anyone have any experience with AmqpBinding and IIS/WAS, and are there any possible pitfalls?
I think your first problem will be IIS/WAS/AppFabric, because non-HTTP services hosted in WAS have additional infrastructure requirements: an extra listener process, usually running as a Windows service, that communicates with the worker process. This process is responsible for receiving and sending messages and enables service activation in WAS. I don't think the QPID project provides such a listener process, so you will most probably have to implement it yourselves - check this sample for a custom UDP activator.

How to connect to ActiveMQ on startup with WCF and IIS

What is the best way to host a single-instance WCF service that uses ActiveMQ within IIS/AppFabric?
Our services need to support both HTTP transports and ActiveMQ (listening for and sending messages). We've elected not to use MSMQ and will use Spring.Net.NMS. The fundamental issue I have now is that ActiveMQ needs to connect to the queue(s) at startup and remain connected, but WAS is getting in the way with its message-activation feature. If the service is not activated until a message arrives (HTTP, MSMQ, etc.), there is no trigger for the connection to ActiveMQ to be made.
I know I can disable the recycling behavior, and I know I can do self-hosting with a Windows service. But I want to take advantage of the monitoring and other features in AppFabric. I've already been down that route with IServiceBehavior and will use it for other nice things, but that interface is not called until a (non-ActiveMQ) message arrives, so it won't work for this. What I was hoping for was something along the lines of how ServletContextListeners work in Java, where you get both the start-up and shutdown events. But it seems no such thing exists in WAS... it is driven only by arriving messages.
I've scoured every inch of web info for three days, and the only thing I came across was using a static class constructor (C#) trick as the trigger. That's a hack, but I can live with it. It still leaves the issue of shutting down cleanly, which I can figure out later.
Anyone have a solid solution to this?
The direct WCF support for ActiveMQ that Ladislav mentions is still being supported. There just hasn't been an official release for the module in a while. However, you can still get the latest version of it from the 1.5.x branch or trunk and compile it yourself.
1.5.x branch for use with Apache.NMS 1.5.0:
https://svn.apache.org/repos/asf/activemq/activemq-dotnet/Apache.NMS.WCF/branches/1.5.x/
Check out instructions:
http://activemq.apache.org/nms/source.html
There was direct WCF support for ActiveMQ, but I guess it is not developed anymore. Your problem really is the IIS/WAS hosting architecture (WAS provides hosting for non-HTTP protocols). Services in WAS are always activated when a message arrives - there is no global startup. The reason is that WAS hosting expects a separate process (a Windows service) running the listener all the time; that process has an adapter which calls into WAS and uses message-level activation. I guess you don't have such a process for ActiveMQ, and because of that you will have trouble using an ActiveMQ endpoint hosted in WAS. Developing such a listener can be a challenging task (example for UDP).
Creating a custom listener can probably be avoided by using the IIS 7.5 / AppFabric auto-start feature. There is also a not very well documented way to run code when the application starts.