I have a CakePHP app which I have been developing on a remote server. Everything is working fine on the remote server.
I'm now trying to install it on a machine with a fresh install of XAMPP. Cake is timing out. Apache is working - other things, such as phpMyAdmin, work fine. I am running Apache through port 8000, as IIS is using port 80. The OS is Windows Server 2003.
When trying to access the app, it times out, with the following error:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\cake\libs\debugger.php on line 247
I have turned URL rewriting off, but that hasn't solved the problem.
I have been trying to track down the source of the problem by echoing things and then exiting the script in the cake core. I found that the script was timing out (with the same error) when the components were trying to load. I commented out my components array in app_controller, and the script ran a little further.
Now, I have tracked it down this far:
Dispatcher::dispatch();
Dispatcher::_invoke();
Controller::constructClasses();
Controller::loadModel();
ClassRegistry::init(); //called on line 635 of Controller in the "else" block of the if (PHP5) statement
In ClassRegistry::init(), the script times out on line 141, which is as follows:
${$class} =& new $class($settings);
I have no idea where to go from here! Help much appreciated.
I think I see your problem.
To solve this you should do two things:
1) Change the debug level to 2 in config/core.php so you can see the underlying error, then change it back to 0.
2) Enable this line in config/core.php (around line 57):
Configure::write('App.baseUrl', env('SCRIPT_NAME'));
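Put together, the two edits in app/config/core.php look like this (a sketch; the exact line numbers vary between Cake versions):

Configure::write('debug', 2); // temporarily raise to 2 to surface the real error, then set back to 0
Configure::write('App.baseUrl', env('SCRIPT_NAME')); // use this when mod_rewrite / URL rewriting is off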
regards,
archit.
Well, after following the problem a bit further, I found that the problem was coming up in Model::construct. It turns out that the hostname in my database.php file was wrong! I would have expected a "could not connect to the database..." error.
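For reference, the offending setting was just the 'host' key in app/config/database.php. A minimal sketch of that file for CakePHP 1.x (the credentials below are placeholders, not the real ones):

class DATABASE_CONFIG {
    var $default = array(
        'driver'   => 'mysql',
        'host'     => 'localhost',  // this still pointed at the remote server's hostname
        'login'    => 'cake_user',  // placeholder
        'password' => 'secret',     // placeholder
        'database' => 'cake_app',   // placeholder
    );
}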
I have made a Flask application on my local computer in debug mode and it runs fine. But in production, the website gives me a 500 Internal Server Error, and I have no idea what the bug is. I am fairly new to Flask production and this has been blocking me for quite a few days.
My questions are:
1> In my local development environment I can always print things out. But how can I see those prints in production?
2> Do I see them through the Apache2 log? Where is the Apache2 log?
For production, I followed the tutorials from pythonprogramming.net. The YouTube link is here:
https://www.youtube.com/watch?v=qZNL4Ku1UQg&list=PLQVvvaa0QuDc_owjTbIY4rbgXOFkUYOUB&index=2
To use a very simple example, if the code imports a package which wasn't installed, where can we see the errors?
Thanks in advance.
I've tried using a try ... except block in every Flask view function, so whenever there is an exception it can be returned to the front-end. But what about other errors?
I found out:
Use the logging module (a minimal sketch is below)
Read the Apache2 log from /var/log/apache2
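For the import-error example: assuming the usual Apache + mod_wsgi setup from that tutorial, an exception raised while the application is being imported ends up in Apache's error log (typically /var/log/apache2/error.log), so that is the first place to look for a 500. For application-level messages, a minimal logging sketch (the log path and messages here are assumptions for illustration, not code from the question):

import logging
from flask import Flask

app = Flask(__name__)

# Assumption: this path exists and is writable by the WSGI user (e.g. www-data).
handler = logging.FileHandler('/var/log/flask/app.log')
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s'))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)

@app.route('/')
def index():
    app.logger.info('index was hit')  # replaces print(); shows up in app.log in production
    return 'ok'

Anything logged this way survives in production, where output from print() is not normally visible.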
I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1722 ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it in Oracle Apex instead of direct on the db.
The other thing I've tried is to create a package of my own that does the HTTP request using the long utl_http.begin_request() method, instead of the utl_http.request() shortcut. This gives the exact same problem (works sometimes, but errors mostly, with the same error).
The pattern I'm seeing is if I wait a while and then try, it works for the first 2-10 times, and then begins erroring. When it does work, I get the response instantly and when it errors, there is always the delay before the error.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout, like below, but it doesn't have any effect. For example, instead of timing out after 3 seconds, it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACL because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons
-> You may have a problem with your TNS listener.
From a command prompt window, try to run TNSPING service_name. Run it quickly several times and check whether it fails on some of them.
I once had a similar problem. Try re-configuring your TNS listener.
There is also an option to specify an IP address in the TNS listener definition; that sometimes solves this kind of problem.
-> IIS problem.
Read about SET_PERSISTENT_CONN_SUPPORT Procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
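A minimal PL/SQL sketch of the long-form call with the response always closed (same test URL as above; the nested handler just makes sure the connection is released even when the request errors):

DECLARE
  l_req  utl_http.req;
  l_resp utl_http.resp;
  l_line VARCHAR2(32767);
BEGIN
  utl_http.set_transfer_timeout(3);
  l_req  := utl_http.begin_request('http://domain.com/test.txt');
  l_resp := utl_http.get_response(l_req);
  BEGIN
    LOOP
      utl_http.read_line(l_resp, l_line, TRUE);
      dbms_output.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN
      NULL;
  END;
  utl_http.end_response(l_resp);  -- release the connection
EXCEPTION
  WHEN OTHERS THEN
    BEGIN
      utl_http.end_response(l_resp);  -- best effort: do not leak the connection on error
    EXCEPTION
      WHEN OTHERS THEN NULL;
    END;
    RAISE;
END;
/

If connections were being leaked, that would fit the "works for the first few calls, then errors" pattern described in the question.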
I want to set up a script which will restart the server automatically.
I wrote the following in my setenv.sh file:
JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError=/usr/local/apache-tomcat-5.5.30/bin/shutdown.sh;/usr/local/apache-tomcat-5.5.30/bin/startup.sh;"
It is not working properly.
I am using Tomcat 5.
To answer your question - I don't think the error handler can run several commands. If you want to do this, write a small restart script and have -XX:OnOutOfMemoryError run that custom script (a sketch is below).
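For example, a minimal restart script (the Tomcat path is taken from the question; the script name is made up here):

#!/bin/sh
# restart-tomcat.sh - a single command for -XX:OnOutOfMemoryError to call
/usr/local/apache-tomcat-5.5.30/bin/shutdown.sh
sleep 30
/usr/local/apache-tomcat-5.5.30/bin/startup.sh

and then in setenv.sh:

JAVA_OPTS="$JAVA_OPTS -XX:OnOutOfMemoryError=/usr/local/apache-tomcat-5.5.30/bin/restart-tomcat.sh"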
However, I would think about this again. Not every OutOfMemoryError means the server should be restarted automatically. You would be better off getting a notification and then profiling the server to find the cause of the OOME, so you can get rid of it.
P.S.
Any reason to use the old Tomcat 5.5? Tomcat 7 is quite mature.
Use CATALINA_OPTS instead
CATALINA_OPTS="${CATALINA_OPTS} -XX:OnOutOfMemoryError=\"/bin/sleep 30;/bin/kill %p; /bin/sleep 60; /bin/kill -9 %p\""
I have been trying to load test an Openfire server over BOSH, but I have been getting the following error after a few minutes of the run.
1)
11/4/11 3:49:33 PM (thread 3 run 0 test 601): Aborted run due to Java exception calling TestRunner
Java exception calling TestRunner
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 144, in changePresence
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 208, in __call__
Caused by: java.net.BindException: Address already in use: connect
2) I have also been getting 404 Invalid SID errors.
Initially I had set up Openfire on Windows Server 2003, but later I set it up on Ubuntu 11.10 (RAM 2.0 GiB, Intel Core Duo T2400 @ 1.83GHz).
1) First, I ran a PHP cURL script using the userservice plugin to add around 10,000 users (during which I got a lot of blank responses, so maybe this is related to the problem, but I will not focus on that misbehaviour now).
2) I needed to test this with 400 users, so I had the following grinder.properties set:
grinder.processes=4
grinder.threads=100
grinder.runs=1
grinder.consoleHost=192.168.1.205
grinder.consolePort=6372
grinder.logDirectory=../logs
grinder.numberOfOldLogs=0
grinder.jvm.arguments=-Dpython.cachedir=../tmp
grinder.script=../tests/one2one.py
(this strangely ended up starting only 103 concurrent users)
(I have tried testing this using one agent)
3) I did a bit of research and found that I could configure Openfire for BOSH, so I added the following system properties:
xmpp.httpbind.client.idle 360
xmpp.httpbind.client.requests.max 400
I badly need help! Does anyone have an insight into how I can resolve this?
The "Address already in use" problem is odd. You may want to try with
grinder.processes=1
grinder.threads=400
As far as only seeing 103 concurrent users: how long does a single one of your Grinder runs take to execute? My thinking is that the earliest threads the JVM executes are completing before the final threads have a chance to fully initialize and do work. If you try this:
grinder.runs=100
You will be more likely to achieve the full level of concurrency you are looking for.
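Putting that together, a sketch of the revised grinder.properties (the other settings are kept exactly as in the question):

grinder.processes=1
grinder.threads=400
grinder.runs=100
grinder.consoleHost=192.168.1.205
grinder.consolePort=6372
grinder.logDirectory=../logs
grinder.numberOfOldLogs=0
grinder.jvm.arguments=-Dpython.cachedir=../tmp
grinder.script=../tests/one2one.py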
When I am trying to browse the .svc file I am getting an error:
WebDev.WebServer.exe stopped working properly...
Can anyone suggest solutions to solve the problem?
You can get this if two things on the same machine are trying to use the same port. Try changing the port number in your solution.
See: http://jberke.blogspot.com/2008/07/webdevwebserverexe-has-stopped-working.html