I have this Flask application running on top of Apache using mod_wsgi.
import time

from flask import Flask

app = Flask(__name__)

@app.route('/<int:a>')
def mytest(a):
    print('starting test:', a)
    time.sleep(60)  # stand-in for a minute-long, low-intensity task
    print('ending test:', a)
    return 'done'
(This is a MWE.)
When the user accesses the URL /<int>, it performs a low-intensity task for a minute, then returns.
I then open 7 tabs, each at a different URL: /1, /2, ..., /7. When I watch what is going on (sudo tail -f /var/log/apache2/error.log), I see the following messages:
starting test: 1
starting test: 2
starting test: 3
starting test: 4
starting test: 5
starting test: 6
ending test: 1
starting test: 7
...
Clearly, it only supports up to 6 concurrent requests. How do I increase this limit?
I am using the default options of the Apache2 that ships with Ubuntu 18.04, and the default options of mod_wsgi. I have already gone through /etc/apache2/apache2.conf and see no limit of 6 anywhere. My mod_wsgi daemon process is configured as:
WSGIDaemonProcess app user=rpcruz group=app home=/var/www/app processes=25 restart-interval=86400 graceful-timeout=3600
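For reference, daemon-mode capacity is roughly processes × threads, and WSGIDaemonProcess defaults to 15 threads per process when threads is omitted, so a directive like the following sketch should already allow far more than 6 concurrent requests (the explicit threads=15 is illustrative, not from the original config):
WSGIDaemonProcess app user=rpcruz group=app home=/var/www/app processes=25 threads=15 restart-interval=86400 graceful-timeout=3600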
I found the issue: the limit was not on the server side. I was testing with Chrome, which has a limit of 6 concurrent connections per hostname.
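One way to confirm this is to drive the requests from outside the browser. A minimal sketch, assuming the app is served at http://localhost/ (the base URL is an assumption; adjust it to your setup):
# Fires 10 requests concurrently, sidestepping the browser's
# per-hostname connection cap (6 in Chrome).
import concurrent.futures
import urllib.request

def hit(i):
    # base URL is an assumption; point it at your own deployment
    with urllib.request.urlopen(f'http://localhost/{i}') as resp:
        return i, resp.status

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    for i, status in pool.map(hit, range(1, 11)):
        print('request', i, '->', status)
With the daemon config above, the error log should then show all 10 tests starting at roughly the same time.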
Currently I have a Selenium Grid running on AWS Fargate that autoscales based on the desired sessions on the hub. I have a service that runs the hub task and a service for the node tasks. I use a one-session-per-node approach because of the resources each node requires, and because overall execution speed is not the primary goal of this test suite. I also always keep at least one node running.
The autoscaling itself works: the hub sees it needs more nodes and scales the node service up to the needed count. The hub holds the session until a node is available and correctly places it there when one is.
The tests work perfectly when I run just one at a time. The problem is that when I run a group in parallel and the grid needs to scale up, I get a 504 Gateway Time-out 30 seconds after calling the WebDriver. I've tried changing every setting I could find to bump this timeout, to no avail.
My hub config looks like
browserTimeout: 0
debug: false
jettyMaxThreads: -1
host: XXXXXXXXX
port: 4444
role: hub
timeout: 180000
cleanUpCycle: 5000
maxSession: 5
hubConfig: /opt/selenium/config.json
capabilityMatcher: org.openqa.grid.internal.utils.DefaultCapabilityMatcher
newSessionWaitTimeout: -1
throwOnCapabilityNotPresent: true
registry: org.openqa.grid.internal.DefaultGridRegistry
The node config looks like
browserTimeout: 0
debug: false
jettyMaxThreads: -1
host: XXXXXXXXX
port: 5555
role: node
timeout: 1800
cleanUpCycle: 5000
maxSession: 1
capabilities: Capabilities {applicationName: , browserName: chrome, maxInstances: 1, platform: LINUX, platformName: LINUX, seleniumProtocol: WebDriver, server:CONFIG_UUID: ..., version: 66.0.3359.170}
downPollingLimit: 2
hub: http://XXXXXXXXX:4444/grid/register
id: http://XXXXXXXXX:5555
nodePolling: 5000
nodeStatusCheckTimeout: 5000
proxy: org.openqa.grid.selenium.proxy.DefaultRemoteProxy
register: true
registerCycle: 5000
remoteHost: http://XXXXXXXXX:5555
unregisterIfStillDownAfter: 10000
I'm calling my Selenium tests via JRuby for certain business reasons, and the basic configuration looks like:
co = Java::OrgOpenqaSeleniumChrome::ChromeOptions.new
co.add_arguments(["--disable-extensions"].to_java(:string))
co.add_arguments(["--no-sandbox"].to_java(:string))
co.add_arguments("--headless")
chrome_prefs = {}
chrome_prefs["profile.default_content_settings.popups"] = 0.to_s
chrome_prefs["safebrowsing.enabled"] = "true"
co.set_experimental_option("prefs", chrome_prefs)
cap = Java::OrgOpenqaSeleniumRemote::DesiredCapabilities.chrome
# use the ChromeOptions capability key so the options are actually applied
cap.set_capability(Java::OrgOpenqaSeleniumChrome::ChromeOptions::CAPABILITY, co)
$grid_url = ENV['GRID_URL']
$driver = Java::OrgOpenqaSeleniumRemote::RemoteWebDriver.new(Java::JavaNet::URL.new($grid_url), cap)
# the 504 arrives roughly 30 seconds after the RemoteWebDriver.new call
Does anyone have any idea how to change the timeout here?
This had absolutely nothing to do with the Selenium setup. If anyone else runs into this specifically when using Fargate, or ECS in general, while running the hub behind a load balancer:
If you based your CloudFormation on the AWS Fargate examples on their GitHub, make sure you change the idle_timeout.timeout_seconds value they set on the Load Balancer.
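For illustration, that attribute lives on the AWS::ElasticLoadBalancingV2::LoadBalancer resource; a sketch (the 180-second value is an example, pick one longer than your slowest scale-up):
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    LoadBalancerAttributes:
      # if this is shorter than the grid's scale-up time, the pending
      # new-session request gets killed with a 504 Gateway Time-out
      - Key: idle_timeout.timeout_seconds
        Value: "180"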
We have a namespace configured to store data in memory only, with a default TTL of a couple of minutes. After we start putting data into it and expiration kicks in, we get these messages in the log (a lot of them, for ~30% of expired records):
WARNING (namespace): (namespace.c::762) set_id 1 - n_bytes_memory went negative!
I have a simple client app with a server config that can reproduce this: https://github.com/akkomar/aerospike-test (it's based on Docker and is very easy to start)
Any advice on what might be the reason?
Edit:
I checked this on versions 3.6.4, 3.7.0.1 and 3.7.4
Configuration file used for testing (from https://github.com/akkomar/aerospike-test/blob/master/etc/aerospike.conf):
service {
    user root
    group root
    paxos-single-replica-limit 1
    pidfile /var/run/aerospike/asd.pid
    service-threads 4
    transaction-queues 4
    transaction-threads-per-queue 4
    proto-fd-max 1024
}

logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }
    console {
        context any info
        context namespace detail
    }
}

network {
    service {
        address any
        port 3000
    }
    heartbeat {
        mode mesh
        port 3002
        mesh-port 3002
        interval 150
        timeout 10
    }
    fabric {
        port 3001
    }
    info {
        port 3003
    }
}

namespace test_ns {
    replication-factor 2
    memory-size 1G
    default-ttl 10S
    storage-engine memory
}
Edit2:
It seems to happen only if I update records via a UDF. The simplest one that reproduces it:
local VAL_KEY = "v"

-- note: the ttl_to_set argument is currently unused
function add_data(rec, val_to_add, ttl_to_set)
    if aerospike:exists(rec) then
        rec[VAL_KEY] = val_to_add
        aerospike:update(rec)
    else
        rec[VAL_KEY] = val_to_add
        aerospike:create(rec)
    end
end
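For context, the UDF is applied per record. A minimal sketch of such a call using the Aerospike Python client (the module name test_udf and the key are hypothetical; the linked repo drives it from Java):
# assumes the Lua module above was registered on the server as test_udf.lua
import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test_ns', 'demo_set', 'some-key')  # (namespace, set, primary key)
client.apply(key, 'test_udf', 'add_data', ['some-value', 120])
client.close()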
When I execute the same operation via the Java API, everything seems to work fine (the example GitHub repo mentioned earlier has been updated with a Java API example).
The meaning of the error message is that the memory we have accounted for the set went to a negative number, which should not be possible.
This has been logged in our internal bug-tracking system for resolution in a future release.
It turned out to be a bug in Aerospike.
It's fixed in version 3.7.4.1 (detailed explanation in https://discuss.aerospike.com/t/problem-with-expiring-records-in-memory-only-namespace-n-bytes-memory-went-negative/2560/6)
We use Couchbase as session storage for mod_perl scripts. To avoid client-facing delays caused by waiting for a new connection, we preconnect to Couchbase at the Apache child_init stage. So on an Apache restart or new child creation, each child connects to Couchbase automatically and later uses that connection for the child's lifetime.
Generally everything works fine, but sometimes we get the following error during that preconnection:
Couldn't connect: 0x13 (Operation not supported) at /perl/lib64/perl5/Couchbase/Bucket.pm line 38.
It usually appears during an Apache restart, on several (or dozens of) children at once, and almost never on a single child. Restarting Apache again usually solves the problem.
What can cause such a problem? Is it an issue with the code, the server configuration, or the Couchbase server itself?
Could it be caused by many simultaneous reconnections? Some ulimit or SELinux restriction?
UPD: versions
OS:
CentOS 6, 2.6.32-358.2.1.el6.x86_64
libcouchbase:
libcouchbase-devel.x86_64 2.4.7-1.el6
libcouchbase2-core.x86_64 2.4.7-1.el6
libcouchbase2-libevent.x86_64 2.4.7-1.el6
couchbase server:
2.2.0 community edition (build-837)
SDK:
perl (Couchbase::Core v2.0.2)
connection code (isolated & simplified):
# in mod_perl environment
use Couchbase;
use Couchbase::Bucket;
use Couchbase::Document;
use Apache2::ServerUtil ();
use Apache2::Const -compile => qw(OK);

my $cb = undef;

# connection handler, initialized once, used during apache child lifetime
sub connect_couchbase_on_child_init {
    my ($child_pool, $s) = @_;
    my $dsn = 'couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1';
    eval { $cb = Couchbase::Bucket->new($dsn); };
    # here we get the occasional warnings during apache restarts
    if ($@) { warn "COUCHBASE CONNECTION ERROR! $@"; $cb = undef; }
    return Apache2::Const::OK;
}
Apache2::ServerUtil->server->push_handlers(PerlChildInitHandler => \&connect_couchbase_on_child_init);
# in request handlers it used with the following calls (only if connected):
# $doc = Couchbase::Document->new($key);
# $cb->get($doc);
# ...
# $cb->replace($doc);
# ...
# $cb->insert($doc);
# ...
# $cb->remove($doc);
Because you are using server 2.2.0, and because this seems to happen when many clients connect at once, my theory is that you are receiving the last error from the bootstrap process. The current client bootstrap first attempts bootstrap over memcached (only supported from server version >= 2.5.0), which fails; it then attempts 'terse' bootstrapping (again, only supported on >= 2.5.0 of the server), and finally falls back to 'classic' HTTP (available on all versions).
Add the following options to your DSN/connection string to cut out some of the steps for your server. Note that should you ever upgrade to >= 2.5 these options should be removed:
bootstrap_on=http: does not try memcached bootstrap
http_urlmode=2: uses the pre-2.5 style of bootstrapping by default
These two options will not necessarily fix your issue, but they will at least cut out some of the initial connection time, and perhaps show a clearer reason for the error (you can also set LCB_LOGLEVEL=5 in the environment to get actual logging).
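On CentOS 6, one place to set that variable so the Apache children inherit it is the file sourced by the init script; a sketch, assuming the stock httpd init script reads /etc/sysconfig/httpd:
# /etc/sysconfig/httpd
export LCB_LOGLEVEL=5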
In your case, the connection string would be:
couchbase://192.168.0.1,192.168.0.2/my_bucket_name?detailed_errcodes=1&bootstrap_on=http&http_urlmode=2
I'm trying to set up automated testing with PhantomJS, Behat and Sahi on my vagrant machine.
I'm getting the following output when trying to run a test with Behat:
[Behat\SahiClient\Exception\ConnectionException]
Exception has been thrown in "afterStep" hook, defined in FeatureContext::afterStep()
Connection time limit reached
Here is my userdata.properties:
# dirs. Relative paths are relative to userdata dir. Separate directories with semi-colon
scripts.dir=scripts;
# default log directory.
logs.dir=logs
# Directory where auto generated ssl certificates are stored
certs.dir=certs
# Use external proxy server for http
ext.http.proxy.enable=false
ext.http.proxy.host=
ext.http.proxy.port=
ext.http.proxy.auth.enable=false
ext.http.proxy.auth.name=kamlesh
ext.http.proxy.auth.password=password
# Use external proxy server for https
ext.https.proxy.enable=false
ext.https.proxy.host=
ext.https.proxy.port=
ext.https.proxy.auth.enable=false
ext.https.proxy.auth.name=kamlesh
ext.https.proxy.auth.password=password
# There is only one bypass list for both secure and insecure.
ext.http.both.proxy.bypass_hosts=localhost|127.0.0.1|*.internaldomain.com
# Mark this property true to disable the proxy alert
proxy_alert.disabled=false
And my browser_types.xml:
<browserTypes>
    <browserType>
        <name>phantomjs</name>
        <displayName>PhantomJS</displayName>
        <icon>safari.png</icon>
        <path>/usr/bin/phantomjs</path>
        <options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
        <processName>phantomjs</processName>
        <capacity>100</capacity>
        <force>true</force>
    </browserType>
</browserTypes>
behat.yml:
default:
  extensions:
    Behat\MinkExtension\Extension:
      javascript_session: sahi
      browser_name: phantomjs
      goutte: ~
      sahi:
        host: localhost
        port: 9999
Sahi run output:
--------
SAHI_HOME: ..
SAHI_USERDATA_DIR: ../userdata
SAHI_EXT_CLASS_PATH:
--------
Sahi properties file = /usr/local/sahi/config/sahi.properties
Sahi user properties file = /usr/local/sahi/userdata/config/userdata.properties
Added shutdown hook.
>>>> Sahi OS v5.0 started. Listening on port: 9999
>>>> Configure your browser to use this server and port as its proxy
>>>> Browse any page and CTRL-ALT-DblClick on the page to bring up the Sahi Controller
-----
Reading browser types from: /usr/local/sahi/userdata/config/browser_types.xml
-----
I've tried reinstalling a bunch of things and playing around with the ports, processes, and proxy settings; nothing helps.
Your Vagrant box comes with an empty database, or none at all. So when you try to use your app, e.g. log in with some known user, it will crash because it won't find the user!
All the best ;)
Since version 4.3.2 of Sahi Pro, the browserType settings have changed; there is no force tag anymore. Please check:
https://sahipro.com/docs/using-sahi/sahi-headless-execution-with-phantomjs.html#Documentation (since Sahi Pro V4.3.2)
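If that applies here, the entry quoted in the question would presumably drop the force tag, i.e. (a hypothetical, untested sketch; consult the linked docs for the exact format of your version):
<browserTypes>
    <browserType>
        <name>phantomjs</name>
        <displayName>PhantomJS</displayName>
        <icon>safari.png</icon>
        <path>/usr/bin/phantomjs</path>
        <options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
        <processName>phantomjs</processName>
        <capacity>100</capacity>
    </browserType>
</browserTypes>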
I'm getting this error from mod_wsgi: Exception occurred processing WSGI script '/usr/share/graphite-web/graphite.wsgi'
I copied only apache-graphite.conf to /etc/apache/sites-available, so why does it complain about graphite.wsgi?
Content of apache-graphite.conf:
import os, sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'graphite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
from graphite.logger import log
log.info("graphite.wsgi - pid %d - reloading search index" % os.getpid())
import graphite.metrics.search
graphite.wsgi is the WSGI application called by your Apache web server to answer incoming requests.
The apache-graphite.conf site defines a WSGI application running Django, which processes requests using Graphite's code. I guess it looks more like this: https://github.com/graphite-project/graphite-web/blob/0.9.x/examples/example-graphite-vhost.conf
graphite.wsgi usually looks like this: https://github.com/graphite-project/graphite-web/blob/0.9.x/conf/graphite.wsgi.example
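In other words, the conf file is what points Apache at the .wsgi script, which is why the error names graphite.wsgi even though you only copied the conf. The linked vhost example wires them together roughly like this (a sketch; the path is taken from the error message above):
<VirtualHost *:80>
    WSGIDaemonProcess graphite processes=5 threads=5 inactivity-timeout=120
    WSGIProcessGroup graphite
    # this directive is what makes Apache load graphite.wsgi
    WSGIScriptAlias / /usr/share/graphite-web/graphite.wsgi
</VirtualHost>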