RabbitMQ - Basic newbie questions

Our scenario: dozens of Windows laptops which are occasionally connected to the network. We need to store simple data records on each laptop, then have these reliably transferred to a service running on the network once a connection is available. We are considering RabbitMQ on each laptop, feeding data to a "main" RabbitMQ on the network. This is a Fortune 100 company, and packaging etc. is a concern.
Question 1: In general, does Rabbit make sense here? If not, any suggestions for an approach?
Question 2: When I installed on Windows I had to manually install Erlang first. Are there packaging/deployment options which are simpler/more friendly? (Their IT people can do all the normal deployment stuff, including creating a Windows service, but installing Erlang on user machines might raise eyebrows...)
Thanks for any help from those of you who've been there, done that with Rabbit.

Question 1: What you need is a store-and-forward mechanism. RabbitMQ can be used for that by adding the Shovel plugin to take care of moving messages from the local Rabbit to the remote one (handling reconnection, retries, etc. for you).
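For illustration, a static Shovel definition in the classic rabbitmq.config format might look roughly like the following on each laptop. The shovel name, the outbound queue and the central hostname are all assumptions to adapt; treat this as a sketch, not a drop-in config:

%% Sketch of a static shovel: relay the local "outbound" queue to the
%% central broker, retrying the connection every 5 seconds while offline.
[
  {rabbitmq_shovel,
    [{shovels,
      [{laptop_to_hq,
        [{sources,      [{broker, "amqp://"}]},                    %% local node
         {destinations, [{broker, "amqp://central.example.com"}]}, %% assumed central host
         {queue,        <<"outbound">>},                           %% assumed local buffer queue
         {reconnect_delay, 5}]}
      ]}
    ]}
].

The nice property for your laptops is that producers only ever talk to localhost; the Shovel worries about whether the network is actually there.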
Question 2: The answer is related to question 1. RabbitMQ+Shovel is conceptually suitable for your store-and-forward needs, but if it is, alas, not technologically acceptable, you may want to consider simpler/cruder approaches like... SMTP!

If the Windows laptops are backed by a Windows infrastructure, the most logical choice is MSMQ, which offers this out of the box, i.e. store and forward from clients to server(s). It is easy to install by policy and to administer.
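As a hedged sketch only (the MSMQ PowerShell cmdlets ship with recent Windows versions; on older clients you'd enable the feature via the optional-features dialog or Group Policy instead), enabling MSMQ and creating a local private queue can be scripted like:

# Enable the MSMQ feature and create a private queue for outbound records.
# Feature/cmdlet availability depends on the Windows version - verify first.
Enable-WindowsOptionalFeature -Online -FeatureName "MSMQ-Server"
New-MsmqQueue -Name "outbound" -QueueType Private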

Related

oVirt Multipathing MPIO Fibre Channel how to?

I have a question regarding oVirt and multipathing.
I have a cluster with 4 hosts and a storage system (Dell EMC) connected via Fibre Channel. At the moment I have a SAN switch between the hosts and the storage system, but I want to attach the hosts and the storage system directly via two Fibre Channel paths on each host.
Therefore, I need multipathing. The hosts run CentOS 7 minimal, and multipath is installed and active. Do I need to change the multipath.conf file, or does CentOS recognize the two paths automatically? Is it active/passive, or active/active with load balancing? The oVirt documentation explains very little about this, and mostly covers iSCSI.
I am new to this topic so bear with me please. :)
Why don't you want to set up another SAN switch and configure a second fabric instead of scrapping the existing one? A SAN with redundant fabrics (a so-called dual-fabric configuration) is preferable to direct attachment because of scalability, flexibility, manageability, etc. Multipathing must be configured on the hosts as well.
What is the model of your Dell EMC storage? Most modern storage systems that can run in FC SAN environments are active/active, or at least support Asymmetric Logical Unit Access (ALUA). So yes, again, multipathing is on the list of best practices.
And obviously this is not a complete answer, because I know nothing about the oVirt virtualization platform, but I have too few reputation points to post a comment.
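That said, to make the multipath.conf question concrete, a minimal sketch for an ALUA-capable array could look like the one below. The vendor/product strings are assumptions; take the real values from the output of multipath -ll and from Dell EMC's host connectivity guide for your exact model:

# /etc/multipath.conf - illustrative only, not tuned for a specific array
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        vendor               "DellEMC"   # assumption: match what your array reports
        product              "Unity"     # assumption: match the reported product string
        path_grouping_policy group_by_prio
        prio                 alua        # group paths by ALUA priority if supported
        path_checker         tur
        failback             immediate
    }
}

After restarting multipathd, multipath -ll should show the paths grouped by priority (optimized vs. non-optimized) if ALUA is in effect.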

Filter incoming TCP packets in a web service on a PaaS environment

Advanced Attacks Detection in a Platform-as-a-Service (PaaS) Environment
In the first part of this project, I'm supposed to monitor incoming packets in a web service, accept only HTTP & HTTPS (TCP) packets for later analysis, and drop the rest.
I was thinking of doing this in Java, because I think it's a very flexible and complete language, and it's present in every PaaS environment! So my idea is to build a simple web page in JSP/JSF with a bean to handle this first step of the project.
This is where I need some guidance! I've started considering libpcap Java wrappers like jNetPcap, Jpcap and Pcap4J, but none of them is able to drop packets!
Setting Java aside, I have also read about other libraries: libnet, libdnet and libcrafter.
libnet cannot handle the task!
libdnet has network firewall rule manipulation capabilities, but it's a very old library, and I'm not sure it can handle integration with iptables!
libcrafter is the best, because it's an actively updated project and it allows the use of iptables rules in the code.
And, of course, working directly with netfilter would be the ideal scenario!
But to work with libcrafter or netfilter while keeping my simple idea of a web service with a Java bean, I would have to write my own Java wrapper via JNI, which I assume is NOT a simple task!
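Just to make the goal concrete, the netfilter/iptables policy I have in mind (assuming I could even get root on the PaaS machine, which is exactly the problem) would be something like:

# Default-deny inbound; allow replies to existing flows, then HTTP and HTTPS.
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80  -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT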
Now, what is raising many doubts in my mind is the fact that this has to be done in a PaaS environment! No two PaaS providers seem to have the same restrictions. Some, like AWS and Microsoft Azure, are more flexible and let you choose and manage a VM with the OS distro you want. Others, like OpenShift, Bluemix or Cloud Foundry, only let you define the programming language and application server for a project, and that's it! So one might not have permission to install libraries or to control the network and transport layers to manage the packets, since the whole OS administration is handled by the provider.
Considering only the main purpose of this project, which is managing the packet flow pointed at a domain located in a PaaS environment, without the help of other servers like TCP proxies, I am desperately in need of someone pointing me in a direction to start from! With that, I can dig as deep as needed to get a solution. Please HELP!
Thank you very much for your time and consideration.

Setting up Apache cluster

I am new to web development, so my question might be silly.
I want to set up an Apache cluster. I have four hardware machines, and I want to distribute the HTTP request load between them. Every machine currently has Fedora installed.
For now it can be a simple load-balancing cluster without any recovery techniques (in case of hardware errors on some servers). And of course I need open-source (free for commercial use) software.
Any suggestions on what software/tutorials/books I should look at to learn how to set up an environment like this?
The O'Reilly book "Server Load Balancing" by Tony Bourke (who now runs lbdigest.com) is the classic. Unfortunately it's a little dated. Maybe Tony will consider an update.
If you really want "no recovery techniques", basic DNS round-robin might work for you, but it's pretty crude. There's an open source project called UltraMonkey, but I haven't had a chance to mess with it. A lot of development has gone into commercial solutions which offer load balancing, high availability, etc., including the Coyote Point product (which I helped write). The appliance-based products are actually quite affordable today.
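If you'd rather stay with plain Apache for now, a minimal mod_proxy_balancer front end is sketched below. The hostnames are placeholders, you need mod_proxy, mod_proxy_http and mod_proxy_balancer loaded, and this is a starting point rather than a tuned configuration:

# httpd.conf on the balancer box: fan requests out to the four Fedora machines
<Proxy balancer://mycluster>
    BalancerMember http://web1.example.com
    BalancerMember http://web2.example.com
    BalancerMember http://web3.example.com
    BalancerMember http://web4.example.com
</Proxy>
ProxyPass        / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/

Note this makes the balancer box itself a single point of failure, which is where the commercial appliances and heartbeat/failover setups come back into the picture.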

What's the best way to monitor rabbitmq to make sure everything is running smoothly?

Many times, I get:
- Frozen; load goes to 5.0 and I can't use my box.
- It just doesn't work.
Follow these steps:
1. rabbitmq-plugins enable rabbitmq_management
2. service rabbitmq-server restart
3. Browse to http://rabbitmq-server-ip:15672
4. Log in with username guest and password guest.
Don't forget to change your password later.
As sheki notes, rabbitmqctl is your first port of call for diagnostics, and for building monitoring on top of, but being a manual command-line tool it's not suitable for actual monitoring directly.
I've found DataDog very good for monitoring both the MQ details and the host platform in parallel. For example, you can watch the queue levels and set alerts on queues backing up, while also watching the CPU/memory/IO inflicted by these queue levels. It really helps to get ratios of resource usage, and the alerts are good. Having a uniform platform for both infrastructure-level and application-level monitoring is surprisingly rare, but it speeds up diagnosis of production issues hugely.
NewRelic is similar and also has a RabbitMQ plugin. Although I've not used this plugin specifically, I've used NR for years and found it invaluable in diagnosing operational issues.
AppDynamics is another example. Similarly this allows you to drill down into your app from a high-level dashboard, and visually navigate from problems to causes. It's especially good with visualising the network of a distributed application across various services/servers. I've used this, for example, to find complex problems in .NET applications and SQL Server clusters using 3rd party Web Services (e.g. latency and its consequences to your app over chatty protocols). These things are very difficult to diagnose, especially for developers who are limited to checking their code. Diagnosing operational issues requires a much broader picture.
I gave up trying to even install and configure Nagios. I know it's the 'best' but it's the best of an old breed of self-configured beasts which we don't have time to manage. I didn't even get it going... and eventually turned to the more 'modern' cloud approach. Once you get over the trust factor, it's pretty liberating.
I'm using these APM platforms together* to aggregate data from:
Windows O/S level Event Logs/Services
Linux O/S level
AWS console level
RDS, EC2
Apache
MySQL
App integrations / custom NR plugins I've written
Rabbit MQ
*NewRelic can feed into Datadog! So if you are already using NR you don't need to install DD on those hosts as well.
Being able to view all these levels together gives you a view on the publishers, middleware, MQ servers, workers and front-end app - all in one dashboard.
I would highly recommend an approach like this, because just looking at one server alone leads you to a lot of head-scratching. Seeing an entire stack in one customisable dashboard is just so illuminating it takes most of the guesswork out of it.
Worried about installing these things? I found New Relic to be especially light-weight and unobtrusive. AppDynamics seemed to stress the host a bit more, but mostly that's because you had to run the visualisation tools on the host! (this may have changed). DataDog seems performant, but creates a lot of control panels/icons on the target host (perhaps just a visual impression).
To a four-year-old question: this answer probably wasn't available in 2011, but in 2015 these once 'startup'-style APM services cost just tens or hundreds of dollars a month for an unbelievably rich enterprise-level solution.
There are a bunch of RabbitMQ monitoring plugins available for different monitoring systems like Nagios, Zabbix, etc.
Look at http://www.rabbitmq.com/how.html#management
Using rabbitmqctl is the most straightforward solution to check the status of the node.
$ rabbitmqctl status
This should tell you the status of the RabbitMQ node.
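Beyond status, a couple of other rabbitmqctl listings are handy for spotting backed-up queues and stuck connections (the column names below are the standard ones, but check rabbitmqctl help on your version):

$ rabbitmqctl list_queues name messages consumers
$ rabbitmqctl list_connections user peer_host state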
If you have PRTG (or any probe system with an HTTP sensor check), you can check the server status as described on the following page:
https://blog.cdemi.io/monitoring-rabbitmq-in-prtg/
In particular, you have to enable the management plugin.
The rabbitmq-management plugin provides an HTTP-based API for management and monitoring of your RabbitMQ server, along with a browser-based UI and a command line tool, rabbitmqadmin. The management plugin is included in the RabbitMQ distribution. To enable it, we need to run rabbitmq-plugins enable rabbitmq_management on the RabbitMQ nodes. For more details on the management plugin, refer to the RabbitMQ documentation.
The web UI is located at http://server-name:15672/. The HTTP API and its documentation are both located at http://server-name:15672/api/.
Once done, you can check the overview of your server with the API:
http://server-name:15672/api/overview
This returns a JSON document with all the details about the server: active connections, queues, etc.
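For example, from a shell (guest/guest are the default credentials, which newer releases only accept from localhost):

$ curl -u guest:guest http://server-name:15672/api/overview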
This command will help you: service rabbitmq-server status
Or try service rabbitmq-server stop and service rabbitmq-server start, then service rabbitmq-server status.

Best way to simulate a WAN network

Simplified: I have an application where data is intended to flow over the internet between two servers. Ideally, I'd like to test at what point the software ceases to function, i.e. at what lower-bound limit (bandwidth, latency, dropped packets) things stop working, to test the reliability of the software.
What I thought I would do was the following:
Set up 3 machines (VMware instances)
Install the 2 applications on two of the servers.
Set up the 3rd server to sit between the two machines by doing some sort of magic with Routing and Remote Access on Windows 2003
Install either Traffic Shaper XP or NetLimiter to limit the bandwidth
Run something like TMnetSim Network Simulator to simulate a bad connection.
Does this sound like a good idea, or are there easier/better ways of doing this? I'm not that comfortable on Linux, and my teammates are even less so.
WANem does exactly this. We have used it both in a virtual machine on the desktop and on a dedicated old PC, and it worked great. It can simulate all sorts of broken connectivity.
FreeBSD's ipfw has provisions to simulate links with a given bandwidth, latency or error rate. You could use that FreeBSD machine as your machine "in the middle" in your setup above.
You can probably also run at least one of the endpoints on the same machine if you want to reduce the number of servers involved.
Someone actually packaged up the settings and whatnot necessary for the FreeBSD solution to this problem, and they call it DUMMYNET.
It simulates/enforces queue and bandwidth limitations, delays, packet losses, and multipath effects. It also implements a variant of Weighted Fair Queueing called WF2Q+. It can be used on users' workstations, or on FreeBSD machines acting as routers or bridges.
It can simulate exactly what you want, and it's free and will boot onto commodity hardware. They even have a canned install of it that is small enough to put on a floppy disk (!) that you can download at that link.
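As a sketch (the interface name and the numbers are illustrative), squeezing all traffic through a dummynet pipe on the FreeBSD box looks like:

# 1 Mbit/s, 80 ms of delay and 1% packet loss on everything crossing em0
ipfw pipe 1 config bw 1Mbit/s delay 80ms plr 0.01
ipfw add pipe 1 ip from any to any via em0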
Maybe it is time to learn a bit about Linux, because adding a 50 ms delay to every outgoing packet can be done by typing just one line:
tc qdisc add dev eth0 root netem delay 50ms
For more, see the Linux Traffic Control HOWTO.
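A few more one-liners in the same spirit (eth0 and the numbers are placeholders; netem options can be combined):

tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%   # 100ms +/- 20ms jitter plus 1% loss
tc qdisc change dev eth0 root netem delay 200ms             # adjust the rule in place
tc qdisc del dev eth0 root                                  # remove the emulation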
We had a similar requirement some ten years ago - I'll see if I can recall how we managed it.
If I remember correctly, we wrote a socket proxy program which was controlled by inetd on a UNIX box. This proxy would accept connections from a client and open equivalent sessions through to the server. It would then loop, passing messages in both directions.
The way we achieved WAN characteristics was to introduce random delays (with upper and lower limits) in both the connection establishment and the passing of data once the link was up.
It also had the feature to drop the link occasionally as WAN links were less reliable for us than local traffic.
I recall we had to make it threaded to stop the delays from affecting reverse traffic on the link.
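A hedged sketch of that idea is below, in Java purely for illustration (the original was a UNIX program; the host, ports and delay range are assumptions). It forwards each direction in its own thread, sleeping a random interval per chunk, which is exactly the threading lesson above:

// DelayProxy.java - toy WAN-delay proxy: listen locally, forward to a target,
// and inject random latency on every chunk in both directions.
import java.io.*;
import java.net.*;
import java.util.Random;

public class DelayProxy {
    static final String TARGET_HOST = "server.example.com"; // assumption
    static final int TARGET_PORT = 8080, LISTEN_PORT = 9000; // assumptions

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(LISTEN_PORT)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket(TARGET_HOST, TARGET_PORT);
                // One thread per direction, so delays on one leg
                // don't stall reverse traffic on the link.
                pump(client, server);
                pump(server, client);
            }
        }
    }

    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            Random rnd = new Random();
            byte[] buf = new byte[4096];
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    Thread.sleep(20 + rnd.nextInt(180)); // simulated WAN latency
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException | InterruptedException e) {
                // Fall through and drop the link - crude, but WAN-like.
            } finally {
                try { from.close(); to.close(); } catch (IOException ignored) {}
            }
        }).start();
    }
}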
There is a very good (and free) Microsoft solution for that; we have used it for quite some time and it works great. It can very easily simulate everything (packet loss, low bandwidth, disconnection, latency...).
This is the best solution I found for a Windows environment.
More information and a download link can be found here: MARCO blog post
This product has gone through some evolution and is now integrated into Visual Studio as part of the automation testing tools, but I found the standalone version (which is quite hard to find, so keep a local copy) to work much better. Keep in mind that you need at least two computers (or VMs), since you need to pass through a network adapter in order for the application to work its magic.