My web application architecture is: Apache 2 (as load balancer) + JBoss 3.2.3 + MySQL 5.0.19.
I want to measure the request processing time (for every individual request) spent on the JBoss server only (i.e., excluding time spent on the web and database servers).
I've been researching how to log request processing time on the application tier only. I found three possible methods: *mod_jk logging*, *Apache's mod_log_config*, and Tomcat's AccessLogValve.
Using *mod_jk logging*: my understanding is that mod_jk logging provides the request processing time for each request, calculated as the difference between the time a request leaves the Apache server and the time the corresponding response is received by the Apache server. Please correct me if this is not accurate.
Using Apache's mod_log_config module (http://www.lifeenv.gov.sk/tomcat-docs/jk/config/apache.html): add "%{JK_REQUEST_DURATION}n" to the LogFormat construct (see the JkLogFile construct at the above link). "JK_REQUEST_DURATION" captures the overall Tomcat processing time from Apache's perspective.
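For reference, the httpd.conf fragment would look roughly like this (a sketch based on the linked docs; the format name and log path are placeholders I chose, and the duration is reported in microseconds via an Apache "note" set by mod_jk):

```apache
# Log mod_jk's measured request duration alongside the standard fields.
LogFormat "%h %l %u %t \"%r\" %>s %b %{JK_REQUEST_DURATION}n" jk_duration
CustomLog logs/jk_duration.log jk_duration
```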
The times (in both of the above cases) include Tomcat/JBoss plus MySQL processing time. That won't help in my case, since it includes the MySQL processing time, and I want to record the request processing time on JBoss only. Any suggestions/ideas are appreciated.
Using AccessLogValve: it can log the "time taken to process the request, in millis" by setting %D in the pattern attribute of the AccessLogValve XML construct. It is not very clear whether this time is:

- the time required by Tomcat/JBoss to accept the request (e.g., allocate a worker thread to handle it),
- the time taken to process the request, including the call to the database server (i.e., overall time on the Tomcat/JBoss server), or
- the time taken by Tomcat/JBoss to process the request and send the response back to the web server/client.
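For what it's worth, the valve is configured roughly like this in the container's server.xml (a sketch; the directory/prefix values are examples, and %D requires a Tomcat version that supports that pattern code):

```xml
<!-- %D logs the time the container took to process the request, in ms -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="access_log." suffix=".txt"
       pattern='%h %l %u %t "%r" %s %b %D' />
```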
Any idea/clue?
This is the experience/research I wanted to share. It would be appreciated if anyone who has had a similar problem, or knows a way to do this, could share their experience/pointers/thoughts on where a better solution can be found.
Looking forward to your thoughts/suggestions.
Why do you want to exclude the database time? Time spent on the database is time your application is waiting, exactly as it could be waiting for other resources, e.g. Lucene indexing to finish, a remote HTTP request to complete, etc.
If you really want to exclude the DB access time, you need to instrument your application with timer start/stop instructions. This will definitely need to go inside your application (either "cleanly" via AOP, or manually via start/stop statements at critical points in the app) and cannot simply be a configuration from the outside world (e.g. an Apache module).
So, you'll need to start the timer when you receive the request at the very start of the processing chain (a filter works well here) and stop it every time you send a query, then start it again exactly after the query returns. This of course cannot be 100% complete, especially if you use a transparent ORM such as Hibernate, because you can end up executing queries indirectly, i.e. when traversing a collection or an association via plain Java.
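A minimal sketch of that timer (all names here are mine, not from any framework): a ThreadLocal accumulator that a servlet filter would begin/end, with pause()/resume() wrapped around each query, e.g. in a DAO base class or a Hibernate Interceptor:

```java
// Per-thread request timer that accumulates "active" (application) time
// and excludes intervals explicitly marked as waiting on the database.
public class RequestTimer {
    private static final ThreadLocal<RequestTimer> CURRENT = new ThreadLocal<>();
    private long accumulatedNanos; // active time collected so far
    private long lastStart;        // start of the current active interval
    private boolean running;

    // Call at the very start of the processing chain (e.g. in a Filter).
    public static void begin() {
        RequestTimer t = new RequestTimer();
        t.lastStart = System.nanoTime();
        t.running = true;
        CURRENT.set(t);
    }

    // Call just before sending a query to the database.
    public static void pause() {
        RequestTimer t = CURRENT.get();
        if (t != null && t.running) {
            t.accumulatedNanos += System.nanoTime() - t.lastStart;
            t.running = false;
        }
    }

    // Call right after the query returns.
    public static void resume() {
        RequestTimer t = CURRENT.get();
        if (t != null && !t.running) {
            t.lastStart = System.nanoTime();
            t.running = true;
        }
    }

    // Call when the response is sent; returns application-only time in ms.
    public static long endMillis() {
        pause();
        RequestTimer t = CURRENT.get();
        CURRENT.remove();
        return t == null ? 0 : t.accumulatedNanos / 1_000_000;
    }
}
```

As noted above, with lazy loading some queries may still slip through untimed unless pause()/resume() is also hooked into the JDBC layer (e.g. a DataSource wrapper).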
I think you are looking for profiling tools, like Java VisualVM, JProfiler, or others.
I have a VPS on Hetzner. The server is located in Germany.
It has 256 GB RAM and 6 CPUs (12 threads).
I have a file which, since yesterday, has been requested about 30 times per second. The file runs 2 SELECT, 2 UPDATE and 2 INSERT queries, so I assumed (not sure how this works) the server gets about 180 database queries per second from this file. Right after these requests started, all the websites on the server began loading poorly. I made the file run just one SELECT query and then die; this didn't help. In WHM the load is about 0.02.
I've checked the error logs and there is no max_user_connections or any other error there.
I have enabled the slow query log and checked the log file; there is nothing (I tested it with SELECT SLEEP(10), and that query was logged).
These are the visit statistics; please pay attention to May 30th:
Bandwidth stats for last 24 hours:
There are many errors like this in the ssl_log (different IPs, of course):
188.121.206.150 - - [30/May/2018:19:50:03 +0200] "-" 408 - "-" "-"
I've searched the web a lot and couldn't find any solution. Could anyone at least tell me what I should monitor, or where? I have full access to everything inside the server. Any help is appreciated.
UPDATE 1
I have a subdomain, banners.analyticson.com (access allowed for now), where I keep all the images and HTML5 files that are requested.
Take one image for example: https://banners.analyticson.com/img/suy8G1S6RU.jpg
It takes too long to load. As I noticed, this subdomain has some issue.
The script I mentioned earlier (with 6 queries) just tries to return one of those banners to the user, so the result of that script is one banner from banners.analyticson.com.
UPDATE 2
I've checked my script; it is fine. It takes less than 1 second to complete.
I've also checked the top command, and here is the result. I'm not sure if the %MEM value is fine.
You're going to have to narrow the problem down...
There are multiple potential issues.
The first thing to eliminate is the performance of your new script on a development laptop. I assume you're using PHP, so use its profiling tools to work out what is going on; if it's a database query, you'll see which one in the profiler.
If your PHP script and database queries are fine, the next thing to look at: it sounds like you've hit some bottleneck resource on your infrastructure. In these cases, scripts that run fine as a single request start queueing for the bottleneck resource, and every new request adds to the queue until the whole server starts to crawl. This can be a bit of a puzzle - start with top and keep digging.
Next, I'd look at the Apache configuration to make sure everything is squeaky clean. Apache used to default to doing a reverse DNS lookup for every request, which slows a production server down rather impressively. You may also want to look at your SSL configuration - the 408 error you report is linked to a load balancer issue.
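The reverse-DNS behaviour mentioned above is controlled by a single directive, so it is worth confirming it is off:

```apache
# Reverse DNS lookups per request; should always be Off in production
# (Off is the default on modern Apache, but check it hasn't been enabled).
HostnameLookups Off
```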
If it's not as simple as memory, CPU etc., you're into more esoteric issues. You may need to ramp up a load testing rig so you can experiment without affecting the live site - typically, I do this on a machine as similar to live as possible, using Apache JMeter to generate load, and find the "inflection point". Typically, you see response times increase linearly with the number of concurrent requests, until you hit the bottleneck resource, at which point the response time increases rapidly. As a simple example, if you have 10 database connections available, response time should increase linearly up to 10 concurrent connections, and then become much larger from 11 up.
Knowing where the inflection point is and being able to recreate it allows you to use PHP profiling tools under load. This is a lot of work.
UPDATE
You're using php-cgi; this is easily the most inefficient way of running PHP scripts, and your server is barely breaking a sweat - CPU and memory are basically idle. Here's a comparison of ways to run PHP; consider changing to mod_php.
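A minimal sketch of what switching to mod_php looks like in raw Apache configuration (the module path and PHP version are assumptions; on a WHM/cPanel box this is normally done through EasyApache rather than by editing the config by hand):

```apache
# Load PHP into the Apache worker processes instead of forking php-cgi
LoadModule php7_module modules/libphp7.so
<FilesMatch "\.php$">
    # Hand requests for .php files to the in-process interpreter
    SetHandler application/x-httpd-php
</FilesMatch>
```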
The application is using Apache Server as a web server and Tomcat as an application server.
Operations/requests can be triggered from the UI, and these can take time to return from the server, as it does processing like fetching data from the database and performing calculations on that data. The time depends on the amount of data in the database and the time span of the data being processed; it could be as long as 30 minutes to an hour, or as short as 2 minutes, depending on the parameters.
Apart from this, there are some other calls which fetch a small amount of data from the database and return immediately.
Now, when 4 or 5 of these long, heavy calls are running on the server and I make a call that is supposed to be small and return immediately, that call also hangs - it never reaches my controller.
I am unable to find a way to debug this issue or find a resolution. Please let me know if you happen to know how to proceed.
I am using Spring, with c3p0 connection pooling and Hibernate.
So I figured out what was wrong with the application, and thought I'd share it in case someone somewhere faces the same issue. It turns out nothing was wrong with the application server or the web server; technically speaking, it was the browser's fault.
I found out that a browser can only have a limited number of open concurrent connections to a domain. For the latest version of Chrome at the time of writing, it is 6. This is something all browsers do, to prevent DDoS-like behaviour.
As the HTTP calls in my application take a long time to return (until the calculations are complete), several HTTP calls accumulate concurrently, and as a result the browser stops sending any further calls after the 6th concurrent one, making the application feel unresponsive. You can read about the maximum number of concurrent calls per browser on SO.
A possible solution I have thought of is polling, or better, long polling. I would have used WebSockets, but that would have required a lot of changes.
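A rough illustration of the long-polling mechanism using only the JDK's built-in HTTP server and client (the class and endpoint names are invented for this sketch, and the slow calculation is simulated with a sleep): the server parks the request until the result is ready, so the client makes one call that simply waits, instead of tying up several of the browser's 6 connection slots with repeated requests:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LongPollDemo {
    // Starts a server whose /result endpoint blocks until the (simulated)
    // computation finishes, then answers. Returns the chosen port.
    public static int startServer() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/result", exchange -> {
            try { Thread.sleep(200); }                 // simulated slow work
            catch (InterruptedException ignored) { }
            byte[] body = "done".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server.getAddress().getPort();
    }

    // One long-poll round trip: the call just waits until the server replies.
    public static String poll(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest
                .newBuilder(URI.create("http://localhost:" + port + "/result"))
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

In a Spring application the same idea is usually expressed with an async controller return type so the servlet thread is not blocked while the request is parked.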
Well, as the question largely explains itself, let me just clear it up a little more.
I am running MongoDB primarily for read-only purposes at the back-end. My crons do the writes, and they don't really push it hard when they are triggered: some updates, some new documents, etc.
The thing is, requests usually don't even hit the application level, because entire-page caching is handled in Memcached by Nginx. So the application doesn't query the database again for another hour per page.
But as far as I can see in my process list, there are 21 MongoDB worker processes that are using none of the CPU but a reasonably large amount of memory, because of the previous queries.
I checked the configuration settings and googled around, but couldn't find any answer. So, is there any way to limit those processes, or at least to tell MongoDB to reduce/empty its memory usage after a while?
Workers are used for talking to the config server and the other replica-set members as well, apart from just serving user requests. This is documented here.
You can limit net.maxIncomingConnections, as per the recommendation on that page, to limit the number of workers processing user requests. But this should be used with caution: setting the number too low and then sending more concurrent calls will result in some calls being queued.
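In a YAML mongod.conf, that setting looks like this (the value 200 is an arbitrary example; size it to your expected client concurrency plus the replication/config-server overhead mentioned above):

```yaml
net:
  maxIncomingConnections: 200
```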
This project uses HTTP client libraries to poll an HTTP server for an XML file containing data gathered from hardware. Polling happens relatively fast. The data changes over time. Only one XML file is ever polled.
Is there a testing method/tool that can act as the HTTP server and feed the client an XML file based on when it is polled?
Basically, what I'm trying to do is send xml data that may change on each poll. Each version of data is pre-determined for testing.
One idea I've considered is having a log-rotator script cron'ed at the polling frequency to check out and swap each version of the data into /var/log/www, and let Apache handle the rest. However, this does not tightly control which version is served on which poll, as network delay may cause files to be replaced before the data is served. Each version of the data must be served, and no version may be skipped.
Any solutions/thoughts/methods/ideas will be appreciated.
Thanks
If you are attempting to perform unit tests of specific functionality, I would suggest mocking the HTTP response and going from there. Relatively easy to set up, and then very easy to modify.
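One way to build such a mock in-process (a sketch; the class and path names are invented) is a tiny HTTP server that serves a pre-determined sequence of XML payloads, advancing exactly one version per poll. Because the version counter moves on the request itself, no version can be skipped or served out of order no matter how the polling is timed:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicInteger;

public class SequencedXmlServer {
    // Serves versions[0], versions[1], ... one per poll, in order.
    // After the last version is reached, it is served repeatedly.
    public static HttpServer start(String[] versions) throws Exception {
        AtomicInteger next = new AtomicInteger(0);
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/data.xml", exchange -> {
            // Advance the sequence atomically, once per incoming poll.
            int i = Math.min(next.getAndIncrement(), versions.length - 1);
            byte[] body = versions[i].getBytes();
            exchange.getResponseHeaders().set("Content-Type", "application/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Pointing the client under test at http://localhost:&lt;port&gt;/data.xml then gives it exactly the pre-determined version per poll, with no cron or file-swapping race.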
I have a very simple server running WAMP on a Windows machine, with PHP code that is a simple API for my clients and returns XML. The thing is, the hardware is very modest, and if a user calls the API link and hits F5 many times (calls the link repeatedly), the server performance degrades a little (response time goes up). Is there a way to limit the queries on port 80?
I know how to limit this in the PHP code, but I think that is not good practice, because even if you limit the queries in the PHP code, the request has already been made, and I'm consuming resources by checking with PHP whether the user is making many queries.
Well, if you want to catch it before it reaches PHP, an Apache module would be one approach, e.g. mod_cband. Other than that, your firewall might help you, but I don't know if the default Windows one is up to that.
Other than that, handling it in your PHP code wouldn't be that bad. Yes, checking a DB consumes time, but it's still faster than collecting and returning XML.
Implement access control to the resources, keep track of active sessions and don't initiate heavy tasks while that particular user has a task open...?