How to know if images are really cached in memory in OHS 11.1.1.7 (based on Apache 2.2)

We have configured the mod_file_cache module; example configuration:
...
LoadModule file_cache_module "${ORACLE_HOME}/ohs/modules/mod_file_cache.so"
...
include "/opt/oraas/appweb/estaticos/media/conf/mmap_media.conf"
...
Example of content of mmap_media.conf:
...
MMapFile /opt/oraas/appweb/estaticos/media/img_fmj_cu1.jpg
MMapFile /opt/oraas/appweb/estaticos/media/img_fmj_cu2.jpg
...
The total number of images to be cached is 48,100, which is around 3.5 GB in total.
The OHS instance takes around 50 seconds to start; we see that the server's memory usage increases and there are no errors in the logs, but we do not know how to confirm that all those images are really cached in memory.
How can we explicitly confirm that those images are cached in the memory of the OHS server?
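A rough way to check, assuming the OHS instance runs on Linux and its worker processes show up as httpd (the process name and paths below may need adjusting for your installation): mod_file_cache maps each MMapFile entry into the httpd parent process at startup, so the mappings should be visible in /proc/<pid>/maps or via pmap.

# oldest httpd process is normally the parent that performed the MMapFile mappings
PID=$(pgrep -o httpd)
# count how many of the cached images show up as memory mappings
grep -c '/opt/oraas/appweb/estaticos/media/' /proc/$PID/maps
# compare against the number of MMapFile directives in the generated conf
grep -c '^MMapFile' /opt/oraas/appweb/estaticos/media/conf/mmap_media.conf
# or inspect individual mappings (and their resident size) with pmap
pmap -x $PID | grep 'estaticos/media' | head

If the two counts match, every listed image was mapped; a large gap suggests some MMapFile directives failed silently or the file list is stale.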

Related

Apache Archiva file upload size

If I upload an artifact (~23 MB file size) to Apache Archiva 2.2.1, I get an upload error in the UI:
fileupload.errors.Request Entity Too Large
This occurs also with mvn deploy:
Return code is: 413, ReasonPhrase: Request Entity Too Large
Archiva is running on Tomcat 9.0.0.M21, deployed as a WAR.
So, how can I increase the upload file size limit in Apache Archiva? I can't find any appropriate properties to set in archiva.xml.
So I was finally able to resolve this problem. It was not an Archiva setting: setting client_max_body_size 64M; in nginx.conf fixed it (Tomcat is running behind an SSL proxy).
Thanks for reading.
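For reference, a minimal sketch of where that directive can sit in nginx.conf (the server name and proxy target are placeholders, not taken from the setup above); it is valid at the http, server, or location level:

http {
    # raise the 1M default so ~23 MB artifacts are not rejected with 413
    client_max_body_size 64M;

    server {
        listen 443 ssl;
        # ssl_certificate directives omitted for brevity
        server_name archiva.example.com;       # placeholder
        location / {
            proxy_pass http://127.0.0.1:8080;  # Tomcat/Archiva behind the proxy (placeholder)
        }
    }
}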

apache2 processes stuck in sending reply - W

I am hosting multiple sites on a server with 7.5 GB RAM, using apache2 with mpm_prefork.
The following command gives me a value of 200-300 in production:
ps aux|grep -c 'apache2'
Using top I see that only a few hundred megabytes of RAM are free. The error log shows nothing unusual. Is this many apache2 processes normal?
MaxRequestWorkers is set to 512
Update:
Now I am using mod_status to check Apache activity.
I have a row like this:
Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
0-0 29342 2/2/70 W 0.07 5702 0 3.0 0.00 1.67 XXX XXX /someurl
If I check again after some time, the PID has not changed and the SS value is greater than the previous time. The M column for this request shows 'W' (sending reply). Does that mean the apache2 process is stuck on that request?
On my VPS and root servers, the situation is partially similar. AFAIK the OS tries to distribute most of the processing power/RAM to running processes and frees the resources for other processes as the need arises.
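A sketch of how one could check whether such a worker is genuinely stuck rather than just busy (the PID is the one from the mod_status row above; strace and ss being available on the host is an assumption):

# in the Apache config, make sure mod_status shows per-request detail
# ExtendedStatus On
# attach to the suspect child and watch what it is doing
strace -p 29342 -e trace=read,write,poll,select
# check the state of the client connection owned by that process
ss -tnp | grep 'pid=29342'
# a child that keeps looping in write()/poll() on the same connection while
# the SS column grows is effectively stuck until Timeout expires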

apache + mod_perl + couchbase = occasional connection problems

We use Couchbase as session storage for mod_perl scripts. To avoid delays on clients caused by waiting for a new connection, we preconnect to Couchbase at the Apache child_init stage. So during an Apache restart / new child creation, each child connects to Couchbase automatically and then uses that connection for the Apache child's lifetime.
Generally everything works fine, but sometimes we get the following error during that preconnection:
Couldn't connect: 0x13 (Operation not supported) at /perl/lib64/perl5/Couchbase/Bucket.pm line 38.
It usually appears during an Apache restart, on several (or dozens of) children and almost never on a single child only. Restarting Apache again usually solves the problem.
What can cause such problems? Is it a problem with the code, the server configuration, or the Couchbase server itself?
Could it be caused by many reconnections happening at the same time? Some ulimit settings or SELinux restrictions?
Update: versions
OS:
CentOS 6, 2.6.32-358.2.1.el6.x86_64
libcouchbase:
libcouchbase-devel.x86_64 2.4.7-1.el6
libcouchbase2-core.x86_64 2.4.7-1.el6
libcouchbase2-libevent.x86_64 2.4.7-1.el6
couchbase server:
2.2.0 community edition (build-837)
SDK:
perl (Couchbase::Core v2.0.2)
connection code (isolated & simplified):
# in mod_perl environment
use Couchbase;
use Couchbase::Bucket;
use Couchbase::Document;
use Apache2::ServerUtil ();
# needed so Apache2::Const::OK is defined
use Apache2::Const -compile => qw(OK);
my $cb = undef;
# connection handler, initialized once, used during apache child lifetime
sub connect_couchbase_on_child_init {
my ($child_pool, $s) = @_;
my $dsn = 'couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1';
eval { $cb = Couchbase::Bucket->new($dsn); };
# here we get the occasional warnings during apache restarts
if ($@) { warn "COUCHBASE CONNECTION ERROR! $@"; $cb = undef; }
return Apache2::Const::OK;
}
Apache2::ServerUtil->server->push_handlers(PerlChildInitHandler => \&connect_couchbase_on_child_init);
# in request handlers it used with the following calls (only if connected):
# $doc = Couchbase::Document->new($key);
# $cb->get($doc);
# ...
# $cb->replace($doc);
# ...
# $cb->insert($doc);
# ...
# $cb->remove($doc);
Because you are using server 2.2.0 and because this seems to happen when you are connecting many clients at once, my theory is that you are receiving the last error from the server. The current client bootstrap process first attempts bootstrap over memcached (which is only supported from version >= 2.5.0 of the server); that fails, so it attempts 'terse' bootstrapping (again, only supported on >= 2.5.0 of the server), and finally 'classic' HTTP (which is available on all versions).
Add the following options to your DSN/connection string to cut out some of the steps for your server. Note that should you ever upgrade to >= 2.5 these options should be removed:
bootstrap_on=http: does not try memcached bootstrap
http_urlmode=2: uses the pre-2.5 style of bootstrapping by default
These two options will not necessarily fix your issue, but they will at least cut out some of the initial connection time, and perhaps show a clearer reason for the error (you can also set LCB_LOGLEVEL=5 in the environment to get actual logging).
In your case, the connection string would be:
couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1&bootstrap_on=http&http_urlmode=2
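As a follow-up on the LCB_LOGLEVEL hint, a sketch of one way to get that logging out of the mod_perl children (the sysconfig path matches a CentOS 6 init-script setup and is an assumption; adjust to however your httpd is started):

# /etc/sysconfig/httpd is sourced by the CentOS init script before httpd starts
echo 'export LCB_LOGLEVEL=5' >> /etc/sysconfig/httpd
service httpd restart
# libcouchbase then writes its bootstrap/connection trace to stderr,
# which ends up in the Apache error log
tail -f /var/log/httpd/error_log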

Apache configuration fine tuning

I run a very simple website (basically a redirect driven by a PHP database lookup) which gets on average 5 visits per second throughout the day, but at peak times (usually 2-3 times a day) this may go up to 300 visits/s or more. I've modified the default Apache settings as follows (based on various information found online, as I'm not an expert):
Start Servers: 5 (default) / 25 (now)
Minimum Spare Servers: 5 (default) / 50 (now)
Maximum Spare Servers: 10 (default) / 100 (now)
Server Limit: 256 (default) / 512 (now)
Max Clients: 150 (default) / 450 (now)
Max Requests Per Child: 10000 (default)
Keep-Alive: On (default) / Off (now)
Timeout: 300 (default)
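Written out as the equivalent Apache 2.2 prefork directives in httpd.conf, those current values would look roughly like this (only a transcription of the list above, not a recommendation):

<IfModule prefork.c>
    StartServers          25
    MinSpareServers       50
    MaxSpareServers      100
    ServerLimit          512
    MaxClients           450
    MaxRequestsPerChild 10000
</IfModule>
KeepAlive Off
Timeout 300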
Server (VPS) specs:
4x 3199.998 MHz, 512 KB Cache, QEMU Virtual CPU version (cpu64-rhel6)
8GB RAM (Memory: 8042676k/8912896k available (5223k kernel code, 524700k absent, 345520k reserved, 7119k data, 1264k init)
70GB SSD disk
CENTOS 6.5 x86_64 kvm – server
Under average load the server handles traffic just fine. Problems occur almost every day during peak traffic times, in the form of HTTP timeouts or extremely long response/load times.
The question is: do I need to get a better server, or can I improve response times during peak traffic by further tuning the Apache config? Any help would be appreciated. Thank you!
Maybe you need to enable mod_cache with mod_mem_cache. Another parameter I always configure is ulimits:
nofile, to get more sockets
nproc, to get more processes
http://www.webperformance.com/load-testing/blog/2012/12/setting-apache2-ulimit-for-maximum-prefork-performance/
Finally, TCP and network tuning: check all the net.core and net.ipv4 parameters to reduce latency.
http://fasterdata.es.net/host-tuning/linux/
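A rough sketch of both suggestions on Apache 2.2 / CentOS 6 (the cache sizes and limit values are illustrative placeholders, not tuned for this site):

# httpd.conf: in-memory caching with mod_cache + mod_mem_cache (Apache 2.2)
LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
<IfModule mod_mem_cache.c>
    CacheEnable mem /
    # cache size in KBytes and per-object limits (placeholder values)
    MCacheSize 65536
    MCacheMaxObjectCount 10000
    MCacheMaxObjectSize 1048576
</IfModule>

# /etc/security/limits.conf: raise nofile/nproc for the apache user (placeholder values)
# apache  soft  nofile  65535
# apache  hard  nofile  65535
# apache  soft  nproc   4096
# apache  hard  nproc   4096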

Apache + Tomcat with mod_jk: maxThread setting upon load balancing

I have an Apache + Tomcat setup with mod_jk on 2 servers. Each server has its own Apache + Tomcat pair, and every request is served by Tomcat load-balancing workers on the 2 servers.
I have a question about how Apache's MaxClients and Tomcat's maxThreads should be set.
The default numbers are:
Apache: MaxClients=150, Tomcat: maxThreads=200
In this configuration, with only 1 server, it would work just fine, since the Tomcat worker never receives more than 150 incoming connections at once. However, if we are load balancing between 2 servers, could it be possible that a Tomcat worker receives 150 + (some number from the other server) and overflows maxThreads, with SEVERE: All threads (200) are currently busy?
If so, should I set Tomcat's maxThreads=300 in this case?
Thanks
Setting maxThreads to 300 should be fine - there are no fixed rules. It depends on whether you see any connections being refused.
Increasing it too much causes high memory consumption, but production Tomcats are known to run with 750 threads. See here as well: http://java-monitor.com/forum/showthread.php?t=235
Have you actually gotten the SEVERE error? I've tested on our Tomcat 6.0.20 and it throws an INFO message when maxThreads is crossed.
INFO: Maximum number of threads (200) created for connector with address null and port 8080
It does not refuse connections until the acceptCount value is crossed. The default is 100.
From the Tomcat docs http://tomcat.apache.org/tomcat-5.5-doc/config/http.html
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
The way it works is:
1) As the number of simultaneous requests increases, threads will be created up to the configured maximum (the value of the maxThreads attribute).
So in your case, the message "Maximum number of threads (200) created" will appear at this point. However, requests will still be queued for service.
2) If still more simultaneous requests are received, they are queued up to the configured maximum (the value of the acceptCount attribute).
Thus a total of 300 requests can be accepted without failure (assuming your acceptCount is at its default of 100).
3) Crossing this number throws Connection Refused errors until resources are available to process them.
So you should be fine until you hit step 3.
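To make those numbers concrete, a sketch of the relevant AJP Connector in Tomcat's conf/server.xml (port and protocol are the usual mod_jk defaults; the values simply mirror the steps above):

<!-- maxThreads: request-processing threads (step 1);
     acceptCount: queue once all threads are busy (step 2);
     beyond maxThreads + acceptCount, connections are refused (step 3) -->
<Connector port="8009" protocol="AJP/1.3"
           maxThreads="300"
           acceptCount="100" />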