How can you test your web server speed? - testing

Our website seems to be slower than it used to be. How can I test that? And is there a way to find the cause (e.g. too many visitors)?
Thanks.

There is a rather good tool for performance benchmarking of web servers: Apache JMeter (formerly Jakarta JMeter). It's an Apache project, so it's rather well supported and tested.
The key to being able to pinpoint the cause is to benchmark regularly, so you can match changes in your benchmark results with events on your server: upgrades, code changes, variations in the number of visitors...
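Regular benchmarking doesn't have to mean a full JMeter setup; a small script that samples response times and logs a dated summary is enough to spot trends. Here is a minimal sketch (the URL is a placeholder, and the percentile choices are illustrative):

```python
import time
import urllib.request
from statistics import median, quantiles

def time_url(url, samples=10):
    """Fetch `url` repeatedly and return a list of response times in seconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        timings.append(time.perf_counter() - start)
    return timings

def summarize(timings):
    """Reduce raw timings to the few numbers worth comparing between runs."""
    return {
        "median": median(timings),
        "p95": quantiles(timings, n=20)[-1],  # 95th percentile cut point
        "max": max(timings),
    }

# Example: benchmark a (hypothetical) URL and log the summary with a date,
# so later runs can be matched against server events:
# print(time.strftime("%Y-%m-%d"), summarize(time_url("http://example.com/")))
```

Saving one summary line per day gives you exactly the history needed to correlate a slowdown with an upgrade or a traffic spike.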

The Firebug add-on for Firefox has a Net tab which is useful for debugging issues and testing. Fiddler on Windows is also nice. And then there is the age-old tradition of checking your server error logs for any problems.

A good first step is to make sure you are keeping fairly complete server logs and feed them into a log analyser. This is helpful for giving you a general idea of how long things take and which pages are slowest. It's also a good idea to check your error logs to make sure things are working properly.
Beyond that, things get more complicated, as you may need to isolate your web server, code and database to see if one of these is the bottleneck. Also, Jeff's blog, Coding Horror, had a recent entry on server optimization.
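If your access log records request durations, a few lines of scripting will surface the slowest pages without a full log analyser. This sketch assumes Apache's combined log format with the request duration (`%D`, microseconds) appended as the last field; adjust the regex to match your own `LogFormat` directive:

```python
import re
from collections import defaultdict

# Matches a combined-format line with %D (microseconds) as the final field.
LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" '
                  r'(?P<status>\d{3}) \S+ ".*?" ".*?" (?P<usec>\d+)$')

def slowest_pages(lines, top=5):
    """Return the `top` paths with the highest average response time (usec)."""
    totals = defaultdict(lambda: [0, 0])  # path -> [total_usec, hits]
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        t = totals[m.group("path")]
        t[0] += int(m.group("usec"))
        t[1] += 1
    averages = {path: total / hits for path, (total, hits) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

Run weekly over the same log window and the ranking tells you whether a specific page regressed or everything got uniformly slower.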

Use Google Analytics to track your site's visitors over time to find out if you are getting more traffic.
You tagged your question with shared-hosting; being on a shared host means that someone else's code running on the same machine as yours may be affecting your site's performance.
I'd suggest going with Varkhan's and apphacker's suggestions to make sure your site's code is reasonably quick. Use Analytics to get some stats, and then possibly, depending on how many visitors you are getting and how slow the site is, consider moving away from a shared host.

Try the server speed checker at Bitcatcha.com. The tool pings your website's server and records the time needed to get a response, from 8 different nodes. That way you can at least find out whether it's your server that is slowing your website down.

Related

ColdFusion 11 to 2018 Upgrade -- Server Locking Up, How to Test Better?

We are currently testing an upgrade from CF11 to CF2018 for my company's intranet. To give you an idea how long this site has been running, our first version of CF was 3.1! It is still using application.cfm, and there is code from 1998, when I started writing this thing. Yes, 21 years -- I'm astonished, too. It is a hodgepodge of all kinds of older frameworks, too, including Fusebox.
Anyway, we're running a Win 2012 VM connected to a SQL 2016 farm. Everything looked OK initially, but in the week I've been testing, the server slowed down once (a page took more than 5 seconds to run, something that usually takes 100ms, with no DB involvement), and another time, the server came to a grinding halt. The only way I could restart the CF App service was by connecting from another server via Services, because doing it via Remote Desktop was so slow.
Now keep in mind -- it's just me testing. This is a site that doesn't have a ton of users, but still, having 5 concurrent connections is normal and there are upwards of 200-400 users hitting this thing every day.
I have FusionReactor running on this thing now, so the next time a lockup happens, I will be able to take a closer look, but what do you think is the best way I can test this? Our site is mostly transactional, users going and filling out forms to put internal orders through. We also connect to XML web services and REST services; we also provide REST services, too. Obviously there's no way to completely replicate a production server's requests onto a test server, but I need to do more thorough testing. Any advice would be hugely appreciated.
I realize your focus for now is trying to recreate the problem on test. That may not be as easy as hoped. Instead, you should be able to understand and resolve it in production. FusionReactor can help, but the answer may well be in the cf logs.
You don't mention assessing the logs at the time of the hang-up. See especially the coldfusion-error log, for OutOfMemory conditions.
You mention raising the heap, but the problem may be with the metaspace instead. If so, consider simply removing the maxmetaspace setting in the jvm args. That may be the sole and likely cause of such new and unexpected outages.
Or if it's not, and there's nothing in the logs at the time, THEN do consider FR. Does IT show anything happening at the time?
If not then consider a need to tune the cf/web server connector. I assume you're using iis. How many sites do you have? And how many connectors (folders in the cf config/wsconfig folder)? What are the settings in their workers.properties file? Are they optimized for the number of sites using that connector?
Also, have you updated cf2018? Are there any errors in the update error log? Did you update the web server connector also?
Are you running the cf2018 pmt (performance monitoring tool set)? Have you updated it?
There could be still more to consider, but let's see how it goes with those. I have blog posts on these and many more topics that would elaborate on things, both at my site (carehart.org) and the Adobe cf portal (coldfusion.adobe.com).
But let's hear if any of this gets you going.

Is there any way to determine visitor statistics from Rails 3 log files?

Recently a Rails 3 app we built and host had some issues with the Google Analytics tracker installed. This resulted in vastly diminished statistics being tracked during the last month. We have our production logs from the app and I'm wondering if anyone knows of any way to parse these to produce visitor statistics (similar to what web analytics packages would provide). We need to deliver a stats report this week and would like to have some account for the missing visitors. Any suggestions or help would be greatly appreciated!
Probably the better place to look would be your web server logs. 5 or 10 years ago all the popular analytics software gobbled up web server logs, and there are a few free ones out there. Google "web log analytics" and see if there's anything suitable.
The problem is, web logs contain all traffic, and for many websites, this can be from all sorts of sources you don't care about, like GoogleBot and others that crawl your site to add to search indexes ... and many more. Look for software that will try to filter these out, and will also know to ignore assets (JS, CSS, images, etc.). Google Analytics doesn't have to worry about this kind of stuff, since it's based on cookies and JavaScript running in a real visitor's browser.
No matter how good these programs are, there are two things you'll need to take into account.
1. Numbers will not align with GA, and you'll go crazy if you try to make them add up -- the differences can be astonishingly large, as much as 20% or more.
2. It may be more work than it's worth to get the software configured -- and even if you do, the level of detail pales in comparison to GA.
If you're handy with grep, the Rails log might help you get some quick-and-dirty counts (although they also record all traffic, unless users need to log in, in which case logs may be a little less noisy).
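For quick-and-dirty counts, a short script can do what grep does while also skipping crawlers and static assets. This sketch assumes a combined-format web server access log; the bot and asset patterns are illustrative, not exhaustive:

```python
import re
from collections import Counter

# Illustrative filters: common crawler UA substrings and static-asset requests.
BOTS = re.compile(r"Googlebot|bingbot|Slurp|crawler|spider", re.IGNORECASE)
ASSETS = re.compile(r"\.(?:js|css|png|jpe?g|gif|ico)(?:\?|\s)", re.IGNORECASE)
# Client IP, then the date portion (before the first colon) of the timestamp.
LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<day>[^:]+)')

def daily_visitors(lines):
    """Return a Counter mapping day -> number of distinct client IPs."""
    seen = set()
    for line in lines:
        if BOTS.search(line) or ASSETS.search(line):
            continue
        m = LINE.match(line)
        if m:
            seen.add((m.group("day"), m.group("ip")))
    return Counter(day for day, _ in seen)
```

Unique-IP-per-day is only a rough proxy for "visitors" (NAT and shared IPs undercount, dynamic IPs overcount), but it gives a defensible daily trend when the GA data is missing.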
A different approach might be to look in your database -- is there anything you can track that acts as a proxy for a visit or any other goal you have been tracking? How useful this is depends entirely on your app and what you store in the database.
Some combination of the above may be the best way to get at something, but I hate to be the bearer of bad news -- it's very likely that what you're able to glean from logs creates more confusion than it's worth. Been there, tried that :-(
Tom

serverside vs client

First let me say I am only a novice programmer, and by no means an SQL guru. We have an app at work that is and has been under heavy development from the vendor for some time (2+ years). It runs as an MSSQL instance on one of our servers, and there is a client install for the desktops. The client software makes direct SQL calls to the database (it also has a local MySQL instance to handle the client settings); 6-12 ports had to be opened up for the communication. Looking at the SQL manager, I can see direct SQL calls from various clients.
Seems to me this is entirely the wrong approach. The closest thing I have done to this was a webpage + PHP + MySQL. The webpage would make requests, all the processing would happen server-side, and the results would simply be displayed. The sluggishness my users feel comes, I think, from the client-side requests plus processing of the SQL data.
PS: I realize that if they have not done it by now, switching to another paradigm seems out of the question. I just want to know if I am way off base.
You are way off base.
The client side has much more processing power.
Consider the case of one server and 5 clients. Even if the server has 3 times the power of a client, the clients as a whole are still 5:3 more powerful.
If the application is sluggish it was probably poorly written. You need to investigate the root cause. Client/server is a leading design practice, so I'm guessing it is not the root cause. It might be badly implemented, or there might be other reasons. Your comment about having a local MySQL instance sounds very fishy to me -- there should be no need for that.

How can I protect my server from multiple queries on port 80?

I have a very simple server running WAMP on a Windows machine, with PHP code that acts as a simple API for my clients and returns XML. The thing is that the hardware is very modest, and if a user calls the API link and hits F5 many times (calls the link repeatedly), the server's performance drops a little (response time goes up). Is there a way to limit the queries on port 80?
I know how to limit this in the PHP code, but I think that is not good practice, because even if you limit the queries in the PHP code, the query has already been made, and I'm consuming resources by checking with PHP whether the user is making many queries.
Well, if you want to catch it before it reaches PHP, an Apache module would be one approach, e.g. mod_cband. Other than that, your firewall might help you, but I don't know if the default Windows one is up for that.
Other than that, handling it in your PHP code wouldn't be that bad. Yes, checking a DB consumes time, but it's still faster than collecting and returning XML.
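The application-level check the question describes is just a per-IP counter over a time window. Here is a minimal fixed-window sketch of that logic; in PHP you'd keep the same bookkeeping in APC/memcached or a small DB table, this Python version just shows the shape (the clock parameter exists only to make it testable):

```python
import time

class RateLimiter:
    """Fixed-window rate limiter keyed by client IP."""

    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock          # injectable for testing
        self.hits = {}              # ip -> (window_start, count)

    def allow(self, ip):
        """Return True if `ip` may make another request in the current window."""
        now = self.clock()
        start, count = self.hits.get(ip, (now, 0))
        if now - start >= self.window:
            start, count = now, 0   # window expired, reset the counter
        if count >= self.limit:
            return False
        self.hits[ip] = (start, count + 1)
        return True
```

A fixed window is the cheapest variant (one dict lookup per request, so the "checking costs resources" objection mostly goes away); a sliding window or token bucket smooths out bursts at the window boundary if that matters to you.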
Implement access control to the resources, keep track of active sessions and don't initiate heavy tasks while that particular user has a task open...?

Top & httpd - demystifying what is actually running

I often use the "top" command to see what is taking up resources. Mostly it comes up with a long list of Apache httpd processes, which is not very useful. Is there any way to see a similar list, but such that I could see which PHP scripts etc. those httpd processes are actually running?
If you're concerned about long running processes (i.e. requests that take more than a second or two to execute), you'll be able to get an idea of them using Apache's mod_status. See the documentation, and an example of the output (from www.apache.org). This isn't unique to PHP, but applies to anything running inside an apache process.
Note that the www.apache.org status output is publicly available presumably for demonstration purposes -- you'd want to restrict access to yours so that not everyone can see it.
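mod_status also has a machine-readable mode (`?auto`) that emits plain `Key: Value` lines, which is handy if you want to poll it from a script rather than eyeball the HTML page. A small sketch, assuming mod_status is enabled and access from your host is allowed:

```python
def parse_status(text):
    """Parse mod_status ?auto output (plain 'Key: Value' lines) into a dict."""
    status = {}
    for line in text.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            status[key] = value
    return status

# Usage (against your own server, with access permitted):
# import urllib.request
# raw = urllib.request.urlopen("http://localhost/server-status?auto").read()
# print(parse_status(raw.decode()))
```

Polling `BusyWorkers` and `IdleWorkers` over time tells you whether slowdowns coincide with worker exhaustion, without needing to watch top at all.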
There's a top-like ncurses-based utility called apachetop which provides real-time log analysis for Apache. Unfortunately, the project has been abandoned and the code suffers from some bugs, but it's still very much usable. Just don't run it as root; run it as any user with access to the web server log files and you should be fine.
PHP scripts finish so fast that top wouldn't show you very much; they would zip by quite quickly. Most web requests are quick.
I think your best bet would be some type of real-time log processor that keeps an eye on your access logs and updates stats for you: average run time, memory usage and things like that.
You could make your PHP pages time themselves and write their path and execution time to a file or database. Note that this would slow everything down while you were monitoring, but it would serve as a good measuring method.
It wouldn't be that interactive though. You'd be able to get daily or weekly results from it, but it'd be hard to see something meaningful within minutes or hours.
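The "time yourself and log it" idea amounts to wrapping each request in a timer and recording (path, elapsed) somewhere. A minimal sketch of that wrapper (the `sink` callable stands in for your file or database writer, and the clock parameter exists only to make it testable):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(path, sink, clock=time.perf_counter):
    """Record (path, elapsed_seconds) via `sink` around a block of work."""
    start = clock()
    try:
        yield
    finally:
        sink((path, clock() - start))

# Usage: in a real setup `sink` would append to a log file or DB table.
records = []
with timed("/index.php", records.append):
    pass  # render the page here
```

Because the timing runs in `finally`, pages that raise errors still get recorded, which is exactly the case you care about when hunting slowdowns.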