Is there an equivalent to Apache Camel in Rails?
I'm creating an application that needs to "listen" for messages from one source (for example: email over POP3) and send them to another source (for example: a logfile, or email over SMTP).
Any ideas?
I am not sure about a complete equivalent to Apache Camel. But to just listen for mail from a POP3 server and send it on to another source, try the mailman gem.
EDIT: You should also look at the mailcatcher gem.
I am pretty sure there are no ports of Apache Camel to other languages, Ruby included (a similar question came up recently about .NET). However, you can use Apache Camel alongside your application. Treat Camel as an independent daemon that you configure, which you can do conveniently via XML. If you need some of your Ruby code to be invoked during processing, you can use Camel's org.jruby:jruby support. It may be less than ideal, but it works well. For talking to external systems, Camel already supports a large number of protocols (including the ones you mentioned), and you can plug in your own.
Given Camel's support for many languages, protocols and data formats, I doubt anybody will go through the significant effort of porting it to other languages, but you never know.
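For a sense of what that daemon might look like, here is a minimal sketch of a Camel route in the Java DSL that polls a POP3 mailbox and forwards every message over SMTP (and to a log). The host names, credentials and addresses are placeholders, and you'd need camel-core plus camel-mail on the classpath.

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class MailBridge {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Poll a POP3 mailbox and forward every message over SMTP,
                    // logging it along the way. All hosts, users and addresses
                    // below are placeholders.
                    from("pop3://user@pop.example.com?password=secret&delete=false")
                        .to("log:incoming-mail")
                        .to("smtp://user@smtp.example.com?password=secret&to=archive@example.com");
                }
            });
            context.start();
            Thread.sleep(60000); // keep the daemon alive long enough to poll
            context.stop();
        }
    }

In a real deployment you would typically run the route via Camel's Main class or declare it in Spring XML instead of sleeping in main().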
You should definitely look at Llama.
It is at an early stage, but the goal seems to be "an integration-framework on top of EventMachine that helps with tying together various backend services", which is essentially what Camel is.
I would like to know whether they are the same or different feature-wise. Could you also mention the pros and cons of each? Please also mention real-world use cases for an embedded BrokerService versus an installed ActiveMQ broker. Thanks in advance!
ActiveMQ is just a Java application, and the embedded version offers essentially the same features as the stand-alone version. In fact, you can configure an embedded broker to take its configuration from an XML file, in which case it will look very similar to the stand-alone broker.
Embedding a broker is a reasonable thing to do if you need the benefit of programmatic configuration; that is, you want to configure things according to rules which are hard to implement in an XML file. It also makes sense if you want close-coupled operation between the broker and the application components, with message data being passed in memory. This might be the situation if you're using JMS as an inter-module communication mechanism within the application.
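As a rough sketch of that close-coupled setup, the snippet below starts a broker inside the JVM and connects to it over the in-memory vm:// transport; the broker name, connector URL and persistence settings are illustrative, not prescriptive.

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.broker.BrokerService;

    public class EmbeddedBrokerExample {
        public static void main(String[] args) throws Exception {
            // Start a broker inside this JVM.
            BrokerService broker = new BrokerService();
            broker.setBrokerName("embedded");
            broker.setPersistent(false);                   // keep everything in memory
            broker.addConnector("tcp://localhost:61616");  // optional: also accept external clients
            broker.start();

            // Application components talk to it over the in-JVM vm:// transport.
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://embedded?create=false");
            Connection connection = factory.createConnection();
            connection.start();
            // ... create sessions, producers and consumers as usual ...
            connection.close();
            broker.stop();
        }
    }

With a stand-alone installed broker you would drop the BrokerService code entirely and just point the connection factory at tcp://broker-host:61616.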
Embedding a broker has the disadvantage -- and it can be a profound one -- of making it difficult to disentangle problems in the broker from problems in your application. Figuring out the cause of, say, runaway memory consumption could be very difficult. You can get commercial support for ActiveMQ, should you need it, but it will be hard for any commercial organization to support a hybrid broker+application installation.
I have some subscribers listening to "mytopic". I send messages to them through the browser UI, just like in the attached image,
but now I want to do the same thing from code.
What should I do, and what code and methods do I need?
[attached image]
Thanks
You should have mentioned which programming language you want to use; every language may offer a different API and specification.
If you are using Java, you can simply program against the JMS specification:
https://docs.oracle.com/javaee/7/api/javax/jms/package-summary.html
Also, the Apache ActiveMQ installation comes with a few examples that you can use as a reference. E.g. see
examples/openwire/swissarmy/src/TopicPublisher.java
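For reference, a minimal publisher along the lines of that example might look like the sketch below; the broker URL is the ActiveMQ default and the topic name matches the "mytopic" in your question, so adjust both to your setup.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TopicSender {
        public static void main(String[] args) throws Exception {
            // Default ActiveMQ broker URL; adjust host, port and credentials as needed.
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("mytopic");
            MessageProducer producer = session.createProducer(topic);

            TextMessage message = session.createTextMessage("hello from code");
            producer.send(message);

            connection.close();
        }
    }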
You should have a look at ActiveMQ Protocols; for example, you can use the ActiveMQ REST API to POST messages to the topic, or use JMS.
Then you can look at how to use these from a "client" (presumably your central server) perspective.
For REST from Java (say), you could look at this article.
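To illustrate the REST option, here is a rough sketch that POSTs a message to a topic through the web console's REST endpoint; the port, the admin/admin credentials and the topic name are the defaults of a stock ActiveMQ install and are assumptions on my part.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class RestTopicPost {
        public static void main(String[] args) throws Exception {
            // Default web console location of a stock ActiveMQ install; adjust to your setup.
            URL url = new URL("http://localhost:8161/api/message/mytopic?type=topic");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            String auth = Base64.getEncoder()
                    .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setDoOutput(true);

            // The message text is sent as the "body" form parameter.
            String form = "body=" + URLEncoder.encode("hello from code", "UTF-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(form.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }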
I am working on a project which has the following requirements:
Perform sticky load balancing (based on SOAP session ID) across multiple backend servers.
The possibility to plug in my own custom load balancer.
Easy to write and deploy.
A central configuration file (possibly XML) to keep track of all the backend servers.
Easy extraction of a node from this configuration file (possibly with XPath).
I tried working with Camel for a while but wasn't able to perform certain tasks with it.
So I thought of giving Akka a try.
Will Akka be able to satisfy the above requirements?
If so, is there a load balancing or proxy example in Akka?
I would really appreciate some feedback.
You can do everything you've described with Akka.
You don't mention what language you're working with, Scala or Java. I've included links to the Scala documentation.
Before you do anything with Akka you HAVE TO read the documentation and understand how Akka works.
http://doc.akka.io/docs/akka/2.0.3/
Doing so, you'll find Akka is perfect for the project you've described with some minor caveats.
Once you read the documentation the following answers should make a lot of sense.
Perform sticky load balancing (based on SOAP session ID) across multiple backend servers.
Load balancing is already part of the framework (it's called Routing in Akka http://doc.akka.io/docs/akka/2.0.3/scala/routing.html) and Remoting (http://doc.akka.io/docs/akka/2.0.3/scala/remoting.html) will take care of the backend servers. You can easily combine the two.
To my knowledge the idea of sticky load balancing is not part of Akka itself, but I can envision accomplishing it with a Map using the session ID as the key and the actor name (or path) as the value. A quick actorFor will take care of the rest. It's not well thought out, but it should give you a good idea of where to start.
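To make that idea concrete, here is a rough sketch in Java against the classic untyped-actor API. The Request message type, its sessionId field and the way the backend routees are handed in are my own inventions for illustration; Akka does not ship a sticky router like this out of the box.

    import java.util.HashMap;
    import java.util.Map;
    import akka.actor.ActorRef;
    import akka.actor.UntypedActor;

    // Sketch of a sticky router: sessions are pinned to the backend that served them first.
    public class StickyRouter extends UntypedActor {

        // Hypothetical message type carrying the SOAP session id.
        public static class Request {
            public final String sessionId;
            public final Object payload;
            public Request(String sessionId, Object payload) {
                this.sessionId = sessionId;
                this.payload = payload;
            }
        }

        private final ActorRef[] backends;  // routees for the backend servers
        private final Map<String, ActorRef> sessions = new HashMap<String, ActorRef>();
        private int next = 0;

        public StickyRouter(ActorRef[] backends) {
            this.backends = backends;
        }

        @Override
        public void onReceive(Object message) {
            if (message instanceof Request) {
                Request request = (Request) message;
                ActorRef target = sessions.get(request.sessionId);
                if (target == null) {
                    // First time we see this session: pick a backend round-robin and remember it.
                    target = backends[next++ % backends.length];
                    sessions.put(request.sessionId, target);
                }
                // forward() keeps the original sender, so the backend can reply directly.
                target.forward(request, getContext());
            } else {
                unhandled(message);
            }
        }
    }

A more complete version would also evict entries when a session ends, otherwise the map grows without bound.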
The possibility to plug in my own custom load balancer.
Refer to the Routing documentation.
Easy to write and deploy.
This depends on your aptitude and effort, but after you read certain parts of the documentation you should be able to build a proof of concept in a couple of hours.
Deployment can be a bit frustrating mostly because the documentation isn't really great with respect to deploying Akka networks with remote components. However, there are enough examples on the web that you can figure out how to get it done...eventually. Once you do it once it's no big deal.
A central configuration file (possibly XML) to keep track of all the backend servers.
Akka uses Typesafe Config (https://github.com/typesafehub/config), which is a lot easier to work with than XML (but I hate XML, so take that with a grain of salt). As far as central configuration goes, I'm not sure what you're trying to accomplish, but it sounds like something that can be solved using remote actor creation. Again, see the Remoting documentation.
Easy extraction of a node from this configuration file (possibly with XPath).
Akka provides a lookup method .actorFor. There's no need to go to the configuration file once the system is up and running.
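For example, with the 2.0.x API linked above, a lookup by path looks roughly like this (the system name, host, port and actor name are purely illustrative):

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;

    public class Lookup {
        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("frontend");
            // Look up a (possibly remote) actor by its path; no configuration file needed here.
            ActorRef backend = system.actorFor("akka://backend@10.0.0.1:2552/user/worker");
            backend.tell("ping");
        }
    }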
If so, is there a load balancing or proxy example in Akka?
Google is your friend.
We would like to evaluate whether the SVN protocol works better for our team than HTTP, but we don't want to commit to a full switch just yet.
Right now we have an Apache server serving up our main repository. Can we safely use svnserve.exe with the same repository so that a few of our developers can test it? My initial guess is that we can, but we don't want to risk corrupting our repository.
Yes, it's possible. The official SVN book has a chapter devoted to this situation:
http://svnbook.red-bean.com/en/1.5/svn.serverconfig.multimethod.html. There are some pitfalls, but they mostly have to do with permission settings.
Exactly: Subversion is designed to support concurrent access via multiple protocols, something which causes major problems with CVS. Not only can you use http:// and svn://, but also file:// (if you happen to be working locally on the machine, for example with a continuous integration tool or other post-commit hook), https://, svn+ssh://, etc.
In my experience, one method hasn't proven to be objectively "better" than the other, but there are certain benefits to each. For example, Apache is extremely adept at handling lots of accesses at once. On the other hand, if you're not already using Apache, or don't want to make it handle SVN traffic, the svnserve daemon is lightweight and quite performant. On my Macs, I set up svnserve using launchd to start up only when a request comes in, so it doesn't use any resources when there is no repository activity. What works best will largely be a factor of the access patterns you see in practice.
I'm looking for an LDAP library in C or C++ that allows me to specify a list of LDAP hostnames instead of a single hostname. The library should then use the first one it can connect to, in case one or more of the servers is down. I'm sure it'd be easy to wrap an existing library to do this, but why reinvent the wheel?
Use multiple A records, each with a different IP.
ldapserver.example.com. IN A 1.2.3.4
ldapserver.example.com. IN A 2.3.4.5
The OpenLDAP client libs will try each host in turn. Failover is (unfortunately) as slow as your TCP connection timeout...
The Novell CLDAP libraries (and Java libraries) support a space-separated list of hosts when connecting. They'll try each one in turn, as noted in the ldap_init() page.
The OpenLDAP libldap library also supports a space-separated list of hosts passed to ldap_open(), or a comma-separated list passed to ldap_initialize().
The only catch is to make sure you handle the LDAP_SERVER_DOWN error that gets returned after a connection goes away. I usually write a wrapper function that tries an operation (e.g. a search), reconnects if LDAP_SERVER_DOWN occurs, and then retries the operation.
I can't say I've ever heard of one. Furthermore, most LDAP-capable software I've used supported failover poorly or not at all. You might be better off trying to implement the failover at the server, by putting it behind a load balancer or similar.