Disable "ping" response in Apache - apache

I've noticed that many sites don't respond to ping, and I read that it is possible to disable this in the Apache server.
My question is very simple:
- Is it a good choice? If yes, why?
Thanks

This has nothing to do with the Apache software. It is the operating system's network stack that answers ICMP echo requests, which is what the ping command sends; Apache never sees those packets.
There are tutorials on how to do this on Linux (e.g. http://www.tech-recipes.com/rx/40/).
There are also tutorials on how to do this on Windows (e.g. http://technet.microsoft.com/en-us/library/cc786463(v=ws.10).aspx).
There are several reasons to disable it; to give an example, it can lower the threat of ping flooding.
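On Linux, for instance, this is a one-line kernel setting (a minimal sketch; run as root, and note that it makes the host ignore all ICMP echo requests):
# Ignore ICMP echo requests until the next reboot
sysctl -w net.ipv4.icmp_echo_ignore_all=1
# Make the setting survive reboots
echo "net.ipv4.icmp_echo_ignore_all = 1" >> /etc/sysctl.conf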

How to capture packets and put them into a database in real time?

I have a project on DNS reflection attack prevention, and I need a way to capture incoming and outgoing packets in real time. I am working on Debian 8.0. I have looked at many websites and watched many tutorials, but they were confusing and didn't help at all. Could you please tell me how to do this?
Thank you
You can use Wireshark to listen to network traffic and capture packets. Its command-line version, TShark, can output structured XML (PDML), which you can parse and store in a database using the programming language / tools of your choice.
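As a rough sketch (the interface name and capture filter are just examples), capturing DNS traffic as PDML looks like this:
# Capture DNS packets on eth0 and emit them as structured XML (PDML)
tshark -i eth0 -f "udp port 53" -T pdml > capture.xml
From there you can parse the XML and insert the fields you care about into your database; for real-time operation, pipe TShark's output straight into your import script instead of a file.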

Setting up Apache cluster

I am new to web development, so my question might be silly.
I want to set up an Apache cluster. I have four hardware machines, and I want to distribute the HTTP request load between them. All four machines currently run Fedora.
For now it can be a simple load-balancing cluster without any recovery techniques (in case of a hardware error on some servers). And of course I need open-source software (free for commercial use).
Any suggestions on what software/tutorials/books I should look at to learn how to set up an environment like this?
The O'Reilly book "Server Load Balancing" by Tony Bourke (who now runs lbdigest.com) is the classic. Unfortunately it's a little dated; maybe Tony will consider an update.
If you really want "no recovery techniques", basic DNS round-robin might work for you, but it's pretty crude. There's an open-source project called Ultramonkey, but I haven't had a chance to mess with it. A lot of development has gone into commercial solutions which offer load balancing, high availability, etc., including the Coyote Point product (which I helped write). The appliance-based products are actually quite affordable today.
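If you'd rather start with what you already have, Apache itself can act as the balancer via mod_proxy_balancer. A minimal sketch, assuming one of your four machines fronts the other three (the IP addresses are placeholders):
# On the frontend, with mod_proxy, mod_proxy_http and
# mod_proxy_balancer enabled:
<Proxy balancer://mycluster>
    BalancerMember http://192.168.0.11
    BalancerMember http://192.168.0.12
    BalancerMember http://192.168.0.13
</Proxy>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
The obvious caveat is that the frontend machine becomes a single point of failure, which matches your "no recovery techniques" constraint.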

Server Setup: Based on Apache and Tomcat needs

I'm trying to set up a server based on our needs for a new website. Basically, I need to build a website based on SocialEngine, and according to the platform's requirements (found here: http://www.socialengine.net/support/documentation/article?q=152&question=SocialEngine-Requirements) it requires the web server to be Apache based.
My issue comes with the addition of a web application that needs to be included in the site. The web application requires the server to be capable of Asynchronous Request Processing, which is currently only supported by Tomcat or GlassFish.
I found a couple of tutorials, such as this one: http://www.serverwatch.com/tutorials/article.php/2203891/Integrating-Tomcat-with-Apache.htm, that explain how to "integrate" Tomcat into Apache. Would a server running Tomcat alone be able to handle the applet needs as well as serve the Apache (assuming HTTP) needs of the SocialEngine platform? Are there any hosting providers any of you would recommend?
Although I've done a lot of front-end work before, this is the first time I have to deal with any of the back-end details, so my knowledge of server-side functionality is really garbage. Please let me know if I'm not asking the right questions.
Thanks
You wouldn't really be able to use Tomcat for both apps, since the other one needs PHP. It's pretty common to have both Tomcat and Apache running on the same server. You might want to look up more recent documentation on mixing them, even this, but definitely have a look at mod_proxy_ajp.
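Once mod_proxy_ajp is enabled, wiring Apache to Tomcat is a one-liner. A sketch, assuming Tomcat's AJP connector is on its default port 8009 and the Java app lives under /app (both placeholders):
# Hand everything under /app to Tomcat over AJP
ProxyPass /app ajp://localhost:8009/app
Apache then serves the PHP site itself and forwards only the Java application's requests to Tomcat.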
What's the other application? It's a little tricky to set up Asynchronous Request Processing if you are new to server apps, but there is also a lot of documentation, so if you're game, you can probably figure it out OK. You might also want to see if that app would work with node.js (hosting info here).
If you want to set it all up yourself, you could get a virtual private server from Rackspace Cloud or a similar host. Alternatively, you could get a shared host that has the required apps already set up; that would limit your ability to customize the environment and may require two hosting plans, but it would be easier to set up. It also somewhat depends on whether both apps need to be on the same machine and/or the same domain for any reason.
A regular LAMP stack will run SE4 just fine; however, you will need to do some tuning to get page loads under 3 seconds. You will want to remove any Apache modules that you aren't using with a2dismod. For instance, if you're not using any Ruby on the site: a2dismod ruby. This will help get memory usage under control. APC is a must.
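For example (a sketch for a Debian/Ubuntu-style Apache; which modules you can safely drop depends on what the site actually uses):
# List the modules that are currently enabled
apache2ctl -M
# Disable the ones you don't need, then restart Apache
a2dismod ruby
service apache2 restart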
For a much more in depth read on tuning php/apache, please read this: Performance tuning on Apache, PHP, MySQL, WordPress v1.1 – Updated

RabbitMQ - Basic newbie questions

Our scenario: dozens of Windows laptops which are occasionally connected to the network. Need to store simple data records on each laptop, then have these reliably transferred to a service running on the network once a connection is available. Considering RabbitMQ on each laptop, feeding data to a "main" RabbitMQ on the network. This is a Fortune 100 company, and packaging etc. is a concern.
Question 1: In general, does Rabbit make sense here? If not, any suggestions for an approach?
Question 2: When I installed on Windows, I had to manually install Erlang first. Are there packaging/deployment options which are simpler/more friendly? (Their IT people can do all the normal deployment stuff, including creating a Windows service, but installing Erlang on user machines might raise eyebrows...)
Thanks for any help from those of you who've been there, done that with Rabbit.
Question 1: What you need is a store-and-forward mechanism. RabbitMQ can be used for that, specifically by using the Shovel plugin to take care of moving messages from the local Rabbit to the remote one (it handles reconnection, retries, etc. for you).
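Setting that up is mostly configuration. A sketch of a dynamic shovel on each laptop's local broker, for reasonably recent RabbitMQ versions (the shovel, queue, and host names are placeholders):
# Enable the plugin on the laptop's local RabbitMQ
rabbitmq-plugins enable rabbitmq_shovel
# Declare a shovel that drains the local queue to the central broker
rabbitmqctl set_parameter shovel laptop-to-hq \
  '{"src-uri": "amqp://", "src-queue": "laptop_data", "dest-uri": "amqp://central.example.com", "dest-queue": "laptop_data"}'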
Question 2: The answer is related to question 1. RabbitMQ + Shovel is conceptually suitable for your store-and-forward needs, but if, alas, it is not technologically acceptable, you may want to consider simpler/cruder approaches like... SMTP!
If the Windows laptops are backed by a Windows infrastructure, the most logical choice is MSMQ, which offers exactly this (store and forward from clients to server(s)) out of the box. It is easy to install by policy and to administer.

Best way to simulate a WAN network

Simplified: I have an application where data is intended to flow over the internet between two servers. To test the software's reliability, I'd like to find the point at which it ceases to function: at what lower bound on bandwidth, latency, or packet loss do things stop working?
What I thought I would do was the following:
Set up 3 machines (VMware instances)
Install the 2 applications on two of the servers.
Set up the 3rd server to sit between the two machines by doing some sort of magic with Routing and Remote Access on Windows 2003
Install either Traffic Shaper XP or NetLimiter to limit the bandwidth
Run something like TMnetSim Network Simulator to simulate a bad connection.
Does this sound like a good idea, or are there easier/better ways of doing this? I'm not that comfortable on Linux, and my teammates are even less so.
WANem does exactly this. We have used it both in a virtual machine on the desktop and on a dedicated old PC, and it worked great. It can simulate all sorts of broken connectivity.
FreeBSD's ipfw has provisions to simulate links with a given bandwidth, latency, or error rate. You could use that FreeBSD machine as your machine "in the middle" in your setup above.
You can probably also run at least one of the endpoints on the same machine if you want to reduce the number of servers involved.
Someone actually packaged up the settings necessary for the FreeBSD solution to this problem, and it is called dummynet.
It simulates/enforces queue and bandwidth limitations, delays, packet losses, and multipath effects. It also implements a variant of Weighted Fair Queueing called WF2Q+. It can be used on users' workstations or on FreeBSD machines acting as routers or bridges.
It can simulate exactly what you want, and it's free and will boot on commodity hardware. They even have a canned install of it that is small enough to put on a floppy disk (!), which you can download at that link.
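To give a feel for the configuration, here is a rough ipfw/dummynet sketch (all the numbers are arbitrary examples):
# Create a pipe that caps bandwidth at 1 Mbit/s, adds 50 ms of delay,
# and drops 1% of packets (plr = packet loss rate)
ipfw pipe 1 config bw 1Mbit/s delay 50ms plr 0.01
# Send all IP traffic through the pipe
ipfw add pipe 1 ip from any to any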
Maybe it is time to learn a bit about Linux, because adding a 50 ms delay to every outgoing packet takes just one line:
tc qdisc add dev eth0 root netem delay 50ms
For more see the Linux Traffic Control HOWTO
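Under the same assumptions (interface eth0, the netem discipline available), adding packet loss is just as short:
# Change the rule to 50 ms delay plus 1% packet loss
tc qdisc change dev eth0 root netem delay 50ms loss 1%
# Remove the emulation when you're done
tc qdisc del dev eth0 root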
We had a similar requirement some ten years ago - I'll see if I can recall how we managed it.
If I remember correctly, we wrote a socket proxy program which was controlled by inetd on a UNIX box. This proxy would accept connections from a client and open an equivalent session through to the server. It would then loop, passing messages in both directions.
The way we achieved WAN characteristics was to introduce random delays (with upper and lower limits) in both the connection establishment and the passing of data once the link was up.
It could also drop the link occasionally, since WAN links were less reliable for us than local traffic.
I recall we had to make it threaded to stop the delays from affecting reverse traffic on the link.
There is a very good (and free) Microsoft solution for this. We have used it for quite some time and it works great; it can very easily simulate everything (packet loss, low bandwidth, disconnection, latency, ...).
This is the best solution I have found for a Windows environment.
More information and a download link can be found here: MARCO blog post.
The product has evolved and is now integrated into Visual Studio as part of the automated testing tools, but I found the standalone version (which is quite hard to find, so keep a local copy) to work much better. Keep in mind that you need at least two computers (or VMs), since the traffic needs to pass through a network adapter for the application to work its magic.