This error is occurring on every *.asp page in the application, but the *.aspx pages work just fine.
I did not develop the application, and the person who did has long since left the company. We have about 20 customers, all with the program working just fine. One of the customers had their server crash, and we had to set the application up again on their new server. Everything is working fine now, except for this error on the *.asp pages.
The same connection string is used for all of our customers, so I know the problem is not there. There are no system or user DSNs defined on any of our customers' servers.
This is a 32-bit application on a 32-bit server (Windows Server 2003 with SQL Server 2005).
I'm pretty sure it's a permissions or configuration error, but I have checked absolutely everything I can think of.
Please help.
A company with 1400 employees can't use this program until I get it back up and running. I have no idea what else to do.
Thanks everyone for your help.
Sorry it took me a little while to respond. I heard back from the original developer: he had hidden a second connection string deep within the program, specifically to trip up anyone trying to install it without authorization. He gave me the info I needed, and it worked beautifully.
Thanks again!
I am working on a VB.Net web application. If I place a breakpoint in the page's Page_Init event and hit reload in the browser, it takes around 5-6 seconds to hit that breakpoint, whereas in my other application the breakpoint is hit almost instantly.
Any help on this would be greatly appreciated. Thanks in advance.
I found the problem. My application uses a Microsoft Access database. The db files (.mdb) were placed inside the bin folder, because one of them is required by a third-party dll to sit in the same location as the dll. I placed a copy of the .mdb file in App_Data and changed the connection string accordingly, and got a tremendous speed improvement. Presumably ASP.NET watches the bin folder for changes and restarts the application whenever the .mdb there is written to, but I have not confirmed that.
This behavior was really strange: it was not the db call itself that was slow, it was simply hitting the server method.
Anyway, this solution made my day. I am posting it here in case someone else faces the same issue and this helps.
I am facing an issue with my XPages application. It works perfectly fine with a small number of concurrent users, but when more concurrent users, say more than 1000, try to access the application, it becomes very slow. I have looked through the code and removed some redundant code.
But I am not sure that this is the real issue. Is there any way in Lotus Notes/Domino to simulate load testing with 1000 users?
Please help me if there is any workaround.
Agree with Oliver about using JMeter.
But then what you really want is to find out where you have "expensive" code. For an agent you can just "profile" it. However, that is a little less straightforward for an XPage. You can try the XPages Toolbox from OpenNTF.org. I have not tried it on Domino 9.0.x, but I would think you could use it.
Another simple (and quick) way to get an idea is to print some time info on the console of the server when you load the pages in your application. You can use a phase listener to add this information - or put it in another more specific location - it really depends on the way that your application is structured. But this way you can get a very quick idea of where the bottlenecks are before you dive into something like the toolbox :-)
/John
We used JMeter to get an idea of what will happen if X users access our app in Y threads, etc. http://jmeter.apache.org/
I recently upgraded from 2005 to 2012 (about 10 months ago). When I first started using SSMS 2012 it worked great; I was able to open multiple windows in a particular database. For about the last 2-3 months, SSMS hangs when I try to open more than one query window or open a file. It is random but happens the majority of the time. I have been able to interrupt the hang a few times: when I interrupt it, I get the login screen, and it sits there trying to log in, apparently unsuccessfully. If I cancel the login and try again with the same entries, it connects just fine and things are great, but I have to do this for every window, and I often cannot break into that screen before it stops responding.
I have searched extensively and have not found an answer to this problem. It only appears to be happening to a particular instance. The instance does not show any signs of issues, has been rebooted and configuration checked for inconsistencies, etc. I am at a loss. If anyone has experienced this and has been able to resolve it I would appreciate a response.
Again, this is a fresh install of 2012 Standard with the 2005 databases imported. All indications show that it is working fine. The compatibility level for a majority of the databases is still 90, until I can clear them for 110 with the software they house. I have a test environment with the restored versions on a different server, and no issues result from that instance - I can open multiples without incident. My belief is that it is something with that particular instance, but I am not sure where else to look.
Thank you in advance.
I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
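For reference, a minimal reproduction of the kind of test script described here might look like the sketch below; the heartbeat file and the ten-second interval are assumptions, since the original test script isn't shown:

<?php
// Hypothetical reconstruction of the simple infinite-loop test.
set_time_limit(0); // ask PHP for no execution limit
$start = time();
while (true) {
    // record how long the script has survived so far
    file_put_contents('heartbeat.txt', (time() - $start) . " seconds\n");
    sleep(10);
}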
I've been over every single possible thing I could think of:
set_time_limit() (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point at which the script stops is not consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test this, open up the command line and run the code there. The script should run indefinitely.
Have you considered just running the script from the command line, eg:
php script.php
and have the script flush out a message every so often that it's still running:
<?php
// doWork() is a placeholder for the actual long-running task.
while (true) {
    doWork();
    echo "still alive...\n"; // newline so each heartbeat shows on its own line
    flush();                 // push the output out immediately
}
In such cases, I turn on all the development settings in php.ini (on a development server, of course). This displays many more messages, including deprecation warnings.
In my experience of debugging long running php scripts, the most common cause was memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...)
I think what you need to find out is the exact time that it stops (you can record an initial time and keep dumping out the current time minus the initial). Something on the server side is stopping the script. Also, consider calling ini_get() to make sure the execution time is actually 0. If you want, set the time limit to 30 and then, on EVERY loop iteration, set it to 30 again: every time you call set_time_limit(), the counter resets, which might let you bypass the actual limit. If this still isn't working, there is something on 1and1's servers that might kill the script.
Also, did you try ignore_user_abort()?
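Putting those suggestions together, a minimal sketch might look like this; the loop body and the one-second pacing are placeholders, not part of the original advice:

<?php
ignore_user_abort(true); // keep running even if the browser disconnects
var_dump(ini_get('max_execution_time')); // confirm the limit actually in effect

$start = microtime(true);
while (true) {
    set_time_limit(30); // each call resets PHP's execution-time counter
    // ... the real work would go here ...
    error_log(sprintf("elapsed: %.1f seconds", microtime(true) - $start));
    sleep(1); // placeholder pacing
}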
I appreciate everyone's comments, especially James Hartig's; you were very helpful and sent me on the right path.
I still don't know what the problem was. I got it to run on the server over SSH, just by using the exec() command as well as ignore_user_abort(). But it would still time out.
So I just had to break it into small pieces that each run for only about 2 minutes, and use session variables/arrays to store where I left off, as sketched below.
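That chunking approach could look roughly like the sketch below; loadItems(), processItem(), and the 2-minute budget are hypothetical stand-ins for the real work:

<?php
// Do up to ~2 minutes of work per request, remember the position in the
// session, and let the next request pick up from there.
session_start();

$offset = isset($_SESSION['offset']) ? $_SESSION['offset'] : 0;
$items  = loadItems(); // placeholder for building the real work list
$start  = time();

while ($offset < count($items) && (time() - $start) < 120) {
    processItem($items[$offset]); // placeholder for the real work
    $offset++;
}

$_SESSION['offset'] = $offset; // resume point for the next request
echo ($offset < count($items)) ? "more to do..." : "done";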
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
I think this is caused by some process monitor killing off "zombie processes" in order to free up resources for other users.
Run the exec using "2>&1" to log anything including stderr.
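In PHP, that might look like the sketch below; the command line comes from the output that follows, and logging via error_log() is an assumption:

<?php
// Redirect stderr into stdout (2>&1) so shell messages such as "Killed"
// end up in $output along with the script's normal output.
exec('php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php 2>&1', $output);
error_log(implode("\n", $output));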
In my output I managed to catch this:
...
script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace (which is excellent, BTW), but I think most shared hosting providers impose this kind of resource/process control mechanism just to be sane.