Where should we place RabbitMQ?

Suppose we are building a project in PHP and plan to use RabbitMQ for queuing. Which of the following is the better option?
Putting `Apache` and `RabbitMQ` on the same server?
Putting `Apache` and `RabbitMQ` on different servers?

The second option, putting Apache and RabbitMQ on different servers, is surely better.
You get two separate servers (and processors), so if one is overloaded it will not affect the other, and all queue-related load stays on its own server.
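For illustration, the placement choice only changes the host the application connects to. A rough sketch with the Python pika client, assuming a made-up broker host `mq.example.com`, made-up credentials, and a queue named `jobs` (a PHP client such as php-amqplib takes the same connection details):

```python
# Sketch: the web app publishes a job to RabbitMQ running on its own server.
# Hostname, credentials and queue name are hypothetical examples.
import pika

# Point the connection at the dedicated RabbitMQ host instead of localhost.
params = pika.ConnectionParameters(
    host="mq.example.com",
    credentials=pika.PlainCredentials("app_user", "app_password"),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# A durable queue so queued jobs survive a broker restart.
channel.queue_declare(queue="jobs", durable=True)

# delivery_mode=2 marks the message itself as persistent.
channel.basic_publish(
    exchange="",
    routing_key="jobs",
    body='{"task": "send_email", "user_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```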

Related

IBM MQ Consume from one queue in a Gateway load balanced setup

I tried looking for a solution but could not find one in any forum. I wouldn't call this a problem, but I am checking whether there is a better way of connecting to two different QMs (gateway QM, load balanced) using one queue. Our IBM MQ setup is exactly as in the link Gateway loadbalancer.
This setup works well for us, but especially for production we have to make sure to deploy two consumers to consume from two different local queues (on QM1 and QM2), which is an overhead. Is it possible to create something like an alias so we have just one consumer pointing to one queue? That would make maintenance much easier, considering the number of services we have. If anyone has accomplished this, I would appreciate it if you could point me in the right direction.
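For context, the overhead described above amounts to one consumer per queue manager, each reading its own local queue. A minimal sketch with the pymqi client, where the queue manager names, channel, connection names, and queue name are all made up:

```python
# Sketch of the current setup: one consumer per queue manager, each reading
# its own local queue. All connection details below are hypothetical.
import pymqi

CONSUMERS = [
    # (queue manager, channel, connection name, local queue)
    ("QM1", "APP.SVRCONN", "mqhost1(1414)", "APP.LOCAL.QUEUE"),
    ("QM2", "APP.SVRCONN", "mqhost2(1414)", "APP.LOCAL.QUEUE"),
]

def drain(qmgr_name, channel, conn_info, queue_name):
    """Connect to one queue manager and read whatever is on its local queue."""
    qmgr = pymqi.connect(qmgr_name, channel, conn_info)
    queue = pymqi.Queue(qmgr, queue_name)
    try:
        while True:
            try:
                message = queue.get()
                print(qmgr_name, message)
            except pymqi.MQMIError as err:
                # Reason 2033: no more messages on this queue.
                if err.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
                    break
                raise
    finally:
        queue.close()
        qmgr.disconnect()

# Two deployments (or at least two connections) are needed, one per QM.
for consumer in CONSUMERS:
    drain(*consumer)
```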

Bottleneck in using ActiveMQ

I am working on a project that uses ActiveMQ as its broker.
My problem is that there are many requests and a lot of data to put on the ActiveMQ queue. Is there a way to have more than one ActiveMQ instance? I know that we can have multiple instances, but I don't know how to manage them so that when one broker is busy, we use the other instance.
Yes, there are multiple ways you can scale; which is best is hard to tell with so little information about your case.
Adding more resources to the broker server may be one solution.
Another is to create multiple instances and connect them in a network of brokers. Make sure you do not simply duplicate all messages to both brokers; instead, distribute consumers among the brokers and only pipe published messages between brokers as needed (see the sketch below). Your mileage may vary.
You may find the rebalanceClusterClients option on the transport connector convenient for automatically distributing clients across your cluster. However, there is no magic: you need to optimize for your own scenario.
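To make the "distribute consumers among the brokers" point concrete, here is a rough sketch with the Python stomp.py client in which each consumer deployment is pointed at a different broker of the network (broker hostnames, credentials, and the queue name are made up; the networkConnector and rebalanceClusterClients settings themselves live in each broker's activemq.xml):

```python
# Sketch: consumers are spread across two brokers joined in a network of
# brokers, instead of duplicating every message to both. Broker hosts,
# credentials and the queue name are hypothetical.
import stomp

class JobListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # Handle the message delivered by whichever broker this consumer uses.
        print("received:", frame.body)

def start_consumer(broker_host):
    """Attach one consumer to a single broker in the cluster."""
    conn = stomp.Connection([(broker_host, 61613)])
    conn.set_listener("", JobListener())
    conn.connect("admin", "admin", wait=True)
    conn.subscribe(destination="/queue/jobs", id="1", ack="auto")
    return conn

# Deployment A consumes from broker-a, deployment B from broker-b; the
# network of brokers forwards messages to where the consumers actually are.
consumer = start_consumer("broker-a.example.com")
```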

Replicate SQL Server table using ActiveMQ

I hope you can help me with this:
I have two database tables on separate servers, and I want them to be synchronized; I mean that when one of them is modified (insert, delete, update), the other one is modified too. I've been searching for a while now and I've found that this can be accomplished with ActiveMQ, but I haven't found how to do it. Can anybody give me a clue, a tutorial, or something?
I really appreciate your help.
Thanks in advance.
Is there any particular reason you want to mix in ActiveMQ for the task?
ActiveMQ is a message broker for sending event messages around. There is no out-of-the-box synchronization of database events with ActiveMQ. You would probably have to use Apache Camel (or custom code, sketched below) to read and write the databases. That would nonetheless be a non-trivial task, since there are things such as transactions and table locks you need to take into account.
If replication is all that is needed for HA or backup, you should really look at the built-in mechanisms of SQL Server.
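If you still want to go the custom-code route, the general shape is a poller that picks up changed rows on the source server and publishes them as messages for a consumer on the other server to apply. A rough sketch with pyodbc and stomp.py; the table, columns, DSN, and broker host are all hypothetical, and it deliberately ignores the transaction, delete, and locking concerns mentioned above:

```python
# Rough sketch of the "custom code" route: poll changed rows on the source
# SQL Server and publish them to ActiveMQ for the other side to apply.
# Table/column names, DSN and broker host are hypothetical.
import json
import pyodbc
import stomp

SOURCE_DSN = ("DRIVER={ODBC Driver 17 for SQL Server};"
              "SERVER=srv1;DATABASE=app;UID=sync;PWD=secret")

broker = stomp.Connection([("broker.example.com", 61613)])
broker.connect("admin", "admin", wait=True)

db = pyodbc.connect(SOURCE_DSN)
cursor = db.cursor()

# Pick up rows modified since the last sync; assumes an updated_at column
# and a watermark kept elsewhere (hard-coded here for brevity).
cursor.execute(
    "SELECT id, name, updated_at FROM customers WHERE updated_at > ?",
    "2024-01-01 00:00:00",
)
for row in cursor.fetchall():
    event = {"id": row.id, "name": row.name, "updated_at": str(row.updated_at)}
    # One message per changed row; a consumer on the other server applies it.
    broker.send(destination="/queue/customers.changes", body=json.dumps(event))

db.close()
broker.disconnect()
```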

Consideration before creating a single Redis instance

I currently have several projects that each work with their own Redis instance (consider the example where I have three different ASP.NET applications on different servers, each with its own Redis server).
We've been asked to virtualize and to remove unnecessary instances, so I was wondering what happens if I have only one Redis server and all three ASP.NET applications point to the same Redis instance.
For application keys I think there's no problem: I can prefix my own keys with the application name, for example "fi-agents", "ga-agents", and so on. But I was wondering what happens with the auth sessions?
As far as I've read, that prefix is used internally and can't be used by the end user to separate them. Is it enough to just use different DBs?
Thanks
Generally, and unless there are truly compelling reasons, you don't want to mix different applications and their data in the same database. Yes, it does lower ops costs initially, but it can quickly deteriorate into a scaling and performance nightmare. This, I believe, is true for any database.
Specifically with Redis, technically yes, you could use a key prefix or the shared/numbered database approach. I'm not sure what you meant by "auth" sessions, but you can probably apply the same approach to them. But you really shouldn't: since Redis is a single-threaded process, you can end up in a situation where one of the apps blocks the other two. Since Redis by itself is so lightweight, just spin up dedicated servers, one per app, even in the same VM if you must. You can read more background on why you don't want to opt for the shared approach here: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances
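If you do end up on a shared instance anyway, the two approaches mentioned (a key prefix per application, or a separate numbered database per application) would look roughly like this with the redis-py client; the host, application prefixes, and key names are made up:

```python
# Sketch of the two separation approaches on one shared Redis server.
# Host, application prefixes and key names are hypothetical.
import redis

# Option 1: one logical database, per-application key prefixes.
shared = redis.Redis(host="redis.example.com", port=6379, db=0)
shared.set("fi-agents:session:abc123", "serialized-session-for-fi-agents")
shared.set("ga-agents:session:abc123", "serialized-session-for-ga-agents")

# Option 2: one numbered database per application (db 0, 1, 2, ...).
fi_agents = redis.Redis(host="redis.example.com", port=6379, db=0)
ga_agents = redis.Redis(host="redis.example.com", port=6379, db=1)
fi_agents.set("session:abc123", "fi-agents session")
ga_agents.set("session:abc123", "ga-agents session")  # same key, no collision
```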

Running the same web app on 2 or more physically separate servers?

I am not sure if I should be posting this question here or over at ServerFault so apologies if it is in the wrong place.
I have a small web app that is starting to get some more business.
Currently I have a single dedicated LAMP server for this, and this has worked well - the single server is able to handle all of our traffic.
However... Recently I have been approached by some potential customers who are interested in using the app, but only if their data can be stored on a server in the same province as they are (legal reasons).
I could migrate the server, but I am reluctant to do this. I like where it is now.
So, I am wondering what is involved in having multiple servers in physically separate datacentres far apart, running the same web app? Data between the servers would not need to stay synced, necessarily.
I have never done anything like this before, and am not sure how complicated a job it is. Any suggestions on how and where to start looking into this would be much appreciated.
Thanks (in advance) for your advice.
As long as each customer has their own set of data, you can just install another copy of the application in the other datacenter (sketched below). It will require you to put some structure into your source control and deployment process, but it works. This option gives you two separate databases.
If you have to have one common database for all the customers (e.g. some kind of booking/reservation system for shared resources), then you're up against a completely different level of complexity, with database replication and so on. It's doable, but it's hard.
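For the first option (a separate copy of the app and its own database per datacenter), deployment mostly comes down to the same codebase with per-location configuration. A minimal sketch of environment-driven configuration; the variable names and hosts are hypothetical:

```python
# Sketch: the same codebase deployed in two datacenters, each copy pointed
# at its own local database via environment configuration. Variable names
# and hosts are hypothetical.
import os

# Set per deployment, e.g. DATACENTER=eu-west or DATACENTER=ca-quebec.
DATACENTER = os.environ.get("DATACENTER", "eu-west")

DATABASES = {
    "eu-west": {"host": "db.eu-west.example.com", "name": "app"},
    "ca-quebec": {"host": "db.ca-quebec.example.com", "name": "app"},
}

db_config = DATABASES[DATACENTER]
print(f"Deployed in {DATACENTER}, using database at {db_config['host']}")
```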