I have been load testing an Openfire server over BOSH, but after a few minutes of the run I get the following error:
1)
11/4/11 3:49:33 PM (thread 3 run 0 test 601): Aborted run due to Java exception calling TestRunner
Java exception calling TestRunner
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 144, in changePresence
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 208, in __call__
Caused by: java.net.BindException: Address already in use: connect
2) I have also been getting 404 Invalid SID errors.
Initially I set up Openfire on Windows Server 2003, but later I moved it to Ubuntu 11.10 (2.0 GiB RAM, Intel Core Duo T2400 @ 1.83 GHz).
1) First, I ran a PHP cURL script against the userservice plugin to add around 10,000 users (during which I got a lot of blank responses, so this may be related to the problem, but I will not focus on that misbehaviour now).
2) I needed to test with 400 concurrent users, so I set the following in grinder.properties:
grinder.processes=4
grinder.threads=100
grinder.runs=1
grinder.consoleHost=192.168.1.205
grinder.consolePort=6372
grinder.logDirectory=../logs
grinder.numberOfOldLogs=0
grinder.jvm.arguments=-Dpython.cachedir=../tmp
grinder.script=../tests/one2one.py
(strangely, this ended up starting only 103 concurrent users)
(I have tried testing this using one agent)
3) I did a bit of research and found that I could configure Openfire for BOSH, so I added the following system properties:
xmpp.httpbind.client.idle 360
xmpp.httpbind.client.requests.max 400
I badly need help! Does anyone have an insight into how I can resolve this?
The "Address already in use" problem is odd. You may want to try with
grinder.processes=1
grinder.threads=400
As for only seeing 103 concurrent users: how long does a single Grinder run take to execute? My thinking is that the earliest threads the JVM starts are completing before the final threads have a chance to fully initialize and do work. If you try this:
grinder.runs=100
You will be more likely to achieve the full level of concurrency you are looking for.
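Putting both suggestions together, a sketch of the changed lines in grinder.properties (the remaining settings stay as in the question; 100 runs is an illustrative value):
grinder.processes=1
grinder.threads=400
grinder.runs=100
Repeating each run keeps the early threads busy until the late ones have started, which should raise the observed concurrency above 103.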
Our app services are experiencing a problem: they cannot be restarted by the hosting environment (ANCM).
The user is getting the following screen in that case:
HTTP Error 500.37
Our production subscription consists of up to 8 different app services, and the problem can randomly affect one or several of them.
The problem can occur several times a week, or just once a month.
The bootstrapping procedure of our app services is not time-consuming.
The last occurrence of the problem left these entries in the event log:
Failed to gracefully shutdown application 'MACHINE/WEBROOT/APPHOST/XXXXXXXXX'.
followed by:
Application '/LM/W3SVC/815681839/ROOT' with physical root 'D:\home\site\wwwroot' failed to load coreclr. Exception message: Managed server didn't initialize after 120000 ms
In most cases the problem can be resolved by manually stopping and starting the app service. In some cases we had to do that twice.
We are not able to reproduce that behavior locally.
The App Service Plan is S2 and we actually use just one instance.
The documentation for HTTP error 500.37 recommends:
"You may need to stagger the startup process of multiple apps."
But there is no hint of how to do that.
How can we ensure that our app services are restarted without errors?
HTTP Error 500.37 - ANCM Failed to Start Within Startup Time Limit
You can try the following approaches:
Approach 1: If possible, try moving one app into a new App Service with a separate App Service plan, then check whether it starts as expected.
Please note that a separate App Service plan incurs additional charges.
Approach 2: Increase the startupTimeLimit attribute of the aspNetCore element.
For more information about the startupTimeLimit attribute, please check: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/aspnet-core-module?view=aspnetcore-3.1#attributes-of-the-aspnetcore-element
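As a sketch, the attribute lives on the aspNetCore element in web.config; processPath and arguments are placeholders for your app, and 240 seconds is an illustrative value (the default is 120, which matches the 120000 ms in the log above):
<system.webServer>
  <aspNetCore processPath="dotnet"
              arguments=".\YourApp.dll"
              startupTimeLimit="240" />
</system.webServer>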
With a new project we encountered some strange behaviour in our ColdFusion application. Whenever a single request is initiated from the browser, the code of the CFML templates is executed multiple times. The corresponding log files confirmed that the same request triggers evaluation in our application multiple times: one request generates several log entries. This is especially the case for long-running requests, such as database imports.
The ColdFusion application implements a REST service, but even when manually requesting a resource, such as a plain CFML page, in the same application, the code gets executed an unknown number of times (variable initializations, database write operations etc. take place), and if the request runs too long (cap at around 4-6 seconds) there is no response to the browser.
About the infrastructure:
The application runs on ColdFusion 2018 Standard Edition with Tomcat.
The web server is Apache (2.4.6).
Everything runs on a Linux machine with CentOS 7.7.
The corresponding Java version is 11.0.4.
Our best guess is that there is some miscommunication between the ColdFusion connector and the Apache web server. We searched for configuration parameters that could cause the problem, without success. On a Windows installation we did not encounter this error.
Anyone got any idea?
We just found our answer in the following post:
Link to Solution
I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1722 ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it from Oracle APEX instead of directly on the database.
The other thing I've tried is to create a package of my own that does the HTTP request using the longer utl_http.begin_request() approach instead of the utl_http.request() shortcut. This gives the exact same problem (works sometimes but mostly errors, with the same error).
The pattern I'm seeing is that if I wait a while and then try, it works for the first 2-10 requests, and then begins erroring. When it works, I get the response instantly; when it errors, there is always the delay before the error.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout as below, but it doesn't have any effect: instead of timing out after 3 seconds, it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACL because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons
-> You may have a problem with your TNS listener.
From a command prompt, try running TNSPING service_name several times in quick succession and check whether some of the attempts fail.
I once had a similar problem; try reconfiguring your TNS listener.
There is also an option to specify an IP address in the TNS listener definition; that sometimes solves these kinds of problems.
-> IIS problem.
Read about SET_PERSISTENT_CONN_SUPPORT Procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
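If unclosed connections are the cause, a minimal sketch of the long-form request that always releases the connection might look like this (using the test URL from the question; error handling is deliberately simplified):
DECLARE
  l_req  UTL_HTTP.req;
  l_resp UTL_HTTP.resp;
  l_line VARCHAR2(32767);
BEGIN
  l_req  := UTL_HTTP.begin_request('http://domain.com/test.txt');
  l_resp := UTL_HTTP.get_response(l_req);
  BEGIN
    LOOP
      UTL_HTTP.read_line(l_resp, l_line, TRUE);
      DBMS_OUTPUT.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      UTL_HTTP.end_response(l_resp); -- releases the connection
  END;
END;
Without the end_response call, each request can leak a connection until a limit is hit, which would match the "works 2-10 times, then errors" pattern.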
I am load testing a server with 340 concurrent users using JMeter.
In most cases JMeter hangs and won't return; even if I try to stop the test it just hangs, and eventually I have to kill the application.
Any idea how to check what is holding the requests, how to inspect the requests sent by JMeter, and how to find the bottleneck?
I got the following message when closing the threads:
Shutting down thread, please be patient
I've hit this several times over the past few years. In each of my cases (it may not be so in yours) the issue was with the load balancer (F5) I was sending my traffic through: a feature called OneConnect was holding the connections in a TIME_WAIT state and never killing them.
Run a packet-capture tool like Wireshark and see what's happening with the requests.
Try distributed testing; 340 concurrent users is not a big deal, but it may still reduce your pain. Also take a look at the following link:
http://jmeter.apache.org/usermanual/best-practices.html#lean_mean
First, check that your script is OK with one user.
Ensure you use assertions.
Then run your test following JMeter best practices:
no GUI
no costly listeners
You should then be able to see the longest requests in the CSV output and fix your issue.
I also encountered this problem when I ran JMeter on my laptop (Core 2 Duo 1.5 GHz); it always hung in the middle of processing. I tried the same test on a more powerful PC and it works smoothly there, so JMeter runs more effectively on a machine with better specs.
Note: it is also advisable to run JMeter in non-GUI mode.
Example of running JMeter on a Linux box:
$ ./jmeter -n -t test.jmx -l /Users/home/test.jtl
I had the
one or more test threads won't exit
message because a firewall was blocking some requests. I had to wait out the firewall's timeout for all the blocked requests... then it returned.
You are probably getting this error because the JVM cannot create that many threads. If you take a look at your terminal, you will see the exception:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread. See log file for details.
You can solve this by doing remote (distributed) testing, with multiple load generators running instead of one, as sketched below.
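As a sketch, assuming jmeter-server is already running on the remote machines (host1 and host2 are placeholders), the test can then be driven from the controller in non-GUI mode:
$ ./jmeter -n -t test.jmx -R host1,host2 -l results.jtl
Each host listed after -R runs the full thread group, so with two hosts the plan only needs 170 threads to reach 340 users.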
I have a CakePHP app which I have been developing on a remote server. Everything is working fine on the remote server.
I'm now trying to install it on a machine with a fresh install of XAMPP. Cake is timing out. Apache is working - other things, such as phpMyAdmin, work fine. I am running Apache through port 8000, as IIS is using port 80. The OS is Windows Server 2003.
When trying to access the app, it times out, with the following error:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\cake\libs\debugger.php on line 247
I have turned URL rewriting off, but that hasn't solved the problem.
I have been trying to track down the source of the problem by echoing things and then exiting the script in the cake core. I found that the script was timing out (with the same error) when the components were trying to load. I commented out my components array in app_controller, and the script ran a little further.
Now, I have tracked it down this far:
Dispatcher::dispatch();
Dispatcher::_invoke();
Controller::constructClasses();
Controller::loadModel();
ClassRegistry::init(); //called on line 635 of Controller in the "else" block of the if (PHP5) statement
In ClassRegistry::init(), the script times out on line 141, which is as follows:
${$class} =& new $class($settings);
I have no idea where to go from here! Help much appreciated.
I see your problem.
To solve this you should do two things:
1) Change the debug level to 2 in config/core.php, and once things work, change it back to 0.
2) Enable this line in core.php (line 57):
Configure::write('App.baseUrl', env('SCRIPT_NAME'));
Regards,
archit.
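For reference, the two edits in app/config/core.php would look roughly like this (the exact line number varies between CakePHP 1.x versions):
// raise while debugging so errors are shown; set back to 0 afterwards
Configure::write('debug', 2);
// uncomment so Cake derives the base URL from the script path:
Configure::write('App.baseUrl', env('SCRIPT_NAME'));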
Well, after following the problem a bit further, I found that it was coming up in Model::__construct(). It turns out that the hostname in my database.php file was wrong! I would have expected a "could not connect to the database..." error.