Please let me know whether it's possible to configure WebLogic to log response times. As of now we have this configured on HTTPD, but due to performance issues on the WebLogic/Java side we would like to get response-time info from WebLogic itself.
You can use WebLogic's HTTP access logs to achieve this. You just have to update your servers' configuration to set up access logs in extended format, which includes the time-taken field. Have a look at the product documentation: https://docs.oracle.com/cd/E24329_01/web.1211/e24432/web_server.htm#CNFGD204
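As an illustrative sketch only (the element names follow the extended-log settings described in the linked documentation, but you should verify them against your WebLogic version, and most people set this through the admin console rather than by hand), the relevant fragment of a server's config.xml might look like:

```xml
<!-- Per-server web-server log configuration: switch the access log to
     W3C extended format and include the time-taken field. -->
<web-server>
  <web-server-log>
    <log-file-format>extended</log-file-format>
    <elf-fields>date time time-taken cs-method cs-uri sc-status</elf-fields>
  </web-server-log>
</web-server>
```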
We are running multiple Tomcat JVMs behind a single Apache cluster. If we shut down all the JVMs except one, we sometimes get 503s. If we increase the retry interval to 180 (from retry=10), the problem goes away. That brings me to this question: how does Apache detect a stopped Tomcat JVM? If I have a cluster which contains multiple JVMs and some of them are down, how does Apache find that out? Somewhere I read that Apache uses a real request to determine the health of a backend JVM. In that case, will that request fail (with a 5xx) if the JVM is stopped? Why does a higher retry value make the difference? Do you think introducing ping might help?
If someone can explain a bit or point me to some docs, that would be awesome.
We are using Apache 2.4.10, mod_proxy, the byrequests LB algorithm, sticky sessions, keepalive on, and ttl=300 for all balancer members.
Thanks!
Well, let's examine a little what your configuration actually does in action, and then move on to what might help.
[docs]
retry - Whether you set it to 10 or 180, what you specify is how long Apache will consider your backend server down and thus won't send it requests. The higher the value, the more time you give your backend to come up completely, but you also put more load on the other members, since you are one server short for longer.
stickysession - If you lose a backend server for whatever reason, all the sessions that are on it get an error.
All right, now that we have described the relevant variables for your situation, let's make clear that Apache mod_proxy does not have a health-check mechanism built in: it updates the status of your backends based on the responses to real requests.
So your current configuration works as following:
A request arrives at Apache.
Apache sends it to an alive backend.
If the request gets an error HTTP code in response, or doesn't get a response at all, Apache puts that backend in the ERROR state.
After the retry time has passed, Apache sends requests to that backend server again.
So, reading the above, you understand that the first request that reaches a backend server which is down will get an error page.
One of the things you can do is indeed ping, which according to the docs will check the backend before sending any request. Consider, of course, the overhead this produces.
I would also suggest you configure mod_proxy_ajp, which offers extra functionality (and extra configuration, of course) for detecting Tomcat backend failures.
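As a sketch of how retry and ping fit together over AJP, a minimal balancer definition might look like the following (the hostnames, ports, routes, and the /app path are illustrative assumptions, not your actual setup):

```apache
# Balancer over two Tomcat AJP connectors. ping=3 sends a CPING before
# each forwarded request and waits up to 3 seconds for the CPONG, so a
# dead JVM is noticed before a real request is sacrificed to it.
<Proxy "balancer://tomcats">
    BalancerMember "ajp://tomcat1:8009" retry=10 ping=3 ttl=300 route=jvm1
    BalancerMember "ajp://tomcat2:8009" retry=10 ping=3 ttl=300 route=jvm2
    ProxySet stickysession=JSESSIONID lbmethod=byrequests
</Proxy>
ProxyPass "/app" "balancer://tomcats/app"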
My understanding of AWS X-Ray is that X-Ray is similar to Dynatrace, and I am trying to use X-Ray for monitoring Apache performance. I do not see any documentation related to X-Ray with Apache except the link below.
https://mvnrepository.com/artifact/com.amazonaws/aws-xray-recorder-sdk-apache-http
Can anyone please suggest whether it is possible to use AWS X-Ray with Apache, and if so, can you also point me to some documentation related to it? Thanks.
I assume that by "apache" you mean the Apache Tomcat servlet container, since you are referring to a Maven artifact, and Maven is a Java build tool.
Disclaimer: I don't know what "dynatrace" is, and I don't know which logging you specifically want.
But as far as the Apache Tomcat servlet container and X-Ray goes - here is the link to get started:
http://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java.html
Start by adding AWSXRayServletFilter as a servlet filter to trace incoming requests. A servlet filter creates a segment. While the segment is open you can use the SDK client's methods to add information to the segment and create subsegments to trace downstream calls. The SDK also automatically records exceptions that your application throws while the segment is open.
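For reference, registering that filter in web.xml might look like this (the filter class comes from the X-Ray SDK for Java as described in the linked guide; the segment name "MyServiceName" is a placeholder you would replace):

```xml
<!-- Trace all incoming requests under a fixed segment name. -->
<filter>
  <filter-name>AWSXRayServletFilter</filter-name>
  <filter-class>com.amazonaws.xray.javax.servlet.AWSXRayServletFilter</filter-class>
  <init-param>
    <param-name>fixedName</param-name>
    <param-value>MyServiceName</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>AWSXRayServletFilter</filter-name>
  <url-pattern>*</url-pattern>
</filter-mapping>
```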
As for the mentioned maven artifact:
aws-xray-recorder-sdk-apache-http – Instruments outbound HTTP calls made with Apache HTTP clients
So you'll need this artifact if, let's say, a client makes a request to your Tomcat server and your Tomcat server in turn makes a request to another server, thus acting as a client itself in that case.
How would you monitor server performance in the sense of:
Count requests that have timed out without being processed at all (the client was starved)
Count requests that have timed out while being processed
Count requests that failed because of an error, at least at the Apache level
Thanks
Count requests that have timed out without being processed at all (the client was starved)
It depends on what platform you are operating and what the Apache server is used for. If the Apache server is used as a back end for some website, you could add timestamps to each request made by the client (the website user), or let the client keep track of the requests it performed together with their associated timestamps. Send this data to the server and let the server compare it to its own logs.
Thus I would advise keeping track, both client-side and server-side, of all requests received and sent, with their status (success or failure) and timestamp.
For more specific advice, I think more context on the actual implementation is a must.
As far as I know, Apache does not support this kind of feature beyond server-status, and that doesn't include enough metrics to match your requirements.
But nginx provides more metrics, which almost cover what you need.
The nginx open-source version supports the following metrics:
accepts / accepted
handled
dropped
active
requests / total
Please refer to this article. If you are hosting a PHP web app, you could move to nginx in that case.
I am not an expert on this, but here is my take.
A request timeout generates a 408 error in the logs, which is countable, and Apache provides the %D format directive to measure processing time.
Count requests that have timed out without being processed at all (the client was starved)
If there is no processing time, or a minimal one, you can assume the request was not processed at all.
Count requests that have timed out while being processed
The opposite of the previous point applies: you will see some time logged for processing.
Count requests that failed because of an error, at least at the Apache level
You will surely get an error-log entry for any error Apache has encountered.
What role keep-alive plays in this case is another matter.
Logging methods differ between Apache 2.2 and 2.4, keep that in mind, but the Common Log Format will lead you to a result.
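As a sketch, a custom log format using %D (request duration in microseconds) could look like this; the format nickname "timed" and the log path are illustrative choices, not required names:

```apache
# Common Log Format plus the request duration in microseconds at the
# end, so 408s and zero/near-zero-duration requests can be counted
# straight from the access log.
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_log" timed
```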
Edit:
If you are looking for tools to give you some insight, try the ones below; the Apache httpd server does provide all the necessary data that nginx and other servers out there can provide.
https://logz.io/
http://goaccess.prosoftcorp.com/
http://awstats.sourceforge.net/
References:
http://httpd.apache.org/docs/current/mod/mod_log_config.html
https://httpd.apache.org/docs/2.4/mod/mod_reqtimeout.html
https://httpd.apache.org/docs/2.4/logs.html
I'm working to enable Spring Cloud Config and have everything working. However, I'm seeing INFO messages in my service app logs showing that the Cloud Config client is reloading the configuration from the server about every 30 seconds. I cannot find anything in the docs, or even in the code, to suggest why this is happening. I really don't want my services polling the config server nearly that often, and ideally I'd like to turn it off so I have more control over when a config refresh happens.
Anyone have any ideas?
When the /health endpoint is called on a config client, it reaches out to the config server to pull new configuration. Service discovery tools like Eureka or Consul poll that endpoint, causing this behavior.
You can stop the config client from reaching out to the config server by setting this property in your bootstrap.yml/bootstrap.properties:
health.config.enabled=false
Taken from here
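The equivalent in bootstrap.yml form (same property, YAML syntax):

```yaml
# Disables the config-server health indicator, so /health no longer
# triggers a configuration pull from the config server.
health:
  config:
    enabled: false
```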
I am looking for a simple HTTP server which accepts (only GET) commands, queries the redis DB with a key and sends the reply (value) back in text format. My only requirement is that the server is very lightweight and can access a backend DB. Thanks in advance.
Try Webdis.
Webdis is a simple HTTP server which forwards commands to Redis and sends the reply back using a format of your choice.
I recently looked for the same kind of HTTP server.
As mentioned by Evandro, you can try Webdis, or you can go for Nginx with some modules.
In your case, for GET requests only, you can install Nginx with the HttpRedis module.
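As a hedged sketch of what an HttpRedis setup could look like (the directive names come from that module, but the port, location, and the choice of taking the key from a ?key=... query parameter are assumptions):

```nginx
server {
    listen 8080;

    location /get {
        default_type text/plain;
        set $redis_key $arg_key;    # key taken from the ?key=... parameter
        redis_pass 127.0.0.1:6379;  # forward the lookup to the local Redis
        error_page 404 = @not_found;
    }

    location @not_found {
        return 404 "key not found";
    }
}
```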
If your requirements evolve later, you can always go for the HttpRedis2Module, which supports all the Redis commands.
I personally use the HttpLuaModule with the lua-resty-redis module and lua-cjson.
Once you have the HttpLuaModule running, it's really easy to add new Lua modules and extend the capabilities of Nginx. The resty-redis module lets you add logic between the HTTP request handling and your Redis queries using Lua. You also have a large number of examples of the module setup and usage on GitHub.
Adding cjson lets you return JSON instead of raw text.
Use Webdis as suggested, or mod_redis (a Redis module) with an nginx or apache2 server, as per your requirements.