XAMPP/Apache error: VirtualAlloc() failed: [0x00000008] Not enough storage

While using Apache on XAMPP, my localhost keeps failing to load. It'll work for a bit and then just stop. At first I got an error about increasing my ThreadsPerChild setting (which I did, to 1500), but now I keep getting these errors:
"VirtualAlloc() failed: [0x00000008] Not enough storage is available to process this command.
VirtualFree() failed: [0x000001e7] Attempt to access invalid address.
Can't initialize heap: [0x000001e7] Attempt to access invalid address."
Either that, or (depending on the value I choose for the ThreadsPerChild setting, e.g. 800, 400, 1500, etc.) I only get: "AH01909: www.example.com:443:0 server certificate does NOT include an ID which matches the server name"
In either case, localhost won't load. I'm using Mozilla Firefox to reach it. It used to work fine, but then XAMPP/Apache just started giving me all these errors.
Does anyone know how to solve this? Is there an ideal number for the ThreadsPerChild setting, or is there something else in the config files that needs to be changed? It's really frustrating, because the localhost will serve a test file with no problem, and then the next file I try to load won't come up and the server becomes unresponsive. It must be a simple fix, but I just can't seem to find it.
Any help would be greatly appreciated.
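
For anyone who hits the same errors: on 32-bit Windows builds of Apache, each worker thread reserves stack address space, so a very large ThreadsPerChild can exhaust the process address space and produce exactly these VirtualAlloc()/heap failures. A minimal httpd.conf sketch of the mpm_winnt section (the value is illustrative, not a tuned recommendation):

# httpd.conf (mpm_winnt) -- sketch only
<IfModule mpm_winnt_module>
    # Each thread reserves stack address space; at 1500 threads a
    # 32-bit process can run out of address space entirely.
    # The compiled-in default is 64.
    ThreadsPerChild 150
</IfModule>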

Related

URL works fine from browser but from Google+ we get an internal server error

The website URL works fine. The link from Google Maps on the phone works fine. For some reason, when you google it in a browser (mobile or desktop), you get an Internal Server Error. Google said there was nothing to be done on their end.
Has anyone encountered this, or have an idea of what's causing it? The error received is below.
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator and inform them of the time the error occurred, and the actions you performed just before this error.
More information about this error may be available in the server error log.
Apache Server at www.thetechbuyer.com Port 80
As the error suggests, an Internal Server Error is internal to your webapp or web service on your host. It usually means that your program, whatever it is doing, has crashed. This could be because of misconfiguration, or a bug in your code, or something else - without knowing more (possibly a lot more) about your server environment, it is impossible to know what.
Check your server logs - they should have more information.
(It also isn't clear what this has to do with Google+.)

Forcing a DNS failure

I need to test a change in our application's DNS retry behavior.
It previously switched into another mode to report the issue to the end user, but we found a bug: when the retry attempt succeeded, the application would proceed to load the now-found far-end service while still in that "error reporting" mode.
To fix this, we have disabled the switch to the error reporting mode, and expect that on a successful retry we will load into the expected mode.
Thus, I need DNS (rndc/named) to fail once, and only once, and provide a successful result on the second attempt.
The only thing I can think of is to run a large load test and hope DNS fails like this at some point, but I'm hoping someone here knows of a better solution.
Maybe there is a way to block the connection attempt just once? The DNS server is part of the application, though, so that would mean blocking a connection to localhost.
You can certainly use Docker, a VM, or a dedicated OS, change its DNS settings, and use it as the DNS resolver. It would probably be a lot of work to script, but it seems possible. Before that, though, I would look for a DNS mock service/server.
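
One way to mock it, as a sketch in Python (the listen port 5353 and the upstream 8.8.8.8 are assumptions; point the application's resolver at the proxy): a tiny UDP proxy that answers the first query with SERVFAIL and relays every later query to a real resolver, giving you exactly one failure followed by successes.

import socket

LISTEN = ("127.0.0.1", 5353)   # assumed test port; the app's resolver points here
UPSTREAM = ("8.8.8.8", 53)     # assumed upstream; any reachable resolver works

def servfail(query):
    # Echo the query back with QR=1 (this is a response) and RCODE=2 (SERVFAIL).
    flags_hi = query[2] | 0x80
    flags_lo = (query[3] & 0xF0) | 0x02
    return query[:2] + bytes([flags_hi, flags_lo]) + query[4:]

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN)
    failed_once = False
    while True:
        query, client = sock.recvfrom(4096)
        if not failed_once:
            failed_once = True                    # fail exactly once
            sock.sendto(servfail(query), client)
            continue
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as up:
            up.settimeout(5)
            up.sendto(query, UPSTREAM)            # relay later queries upstream
            answer, _ = up.recvfrom(4096)
        sock.sendto(answer, client)

if __name__ == "__main__":
    main()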

ORA-29273: HTTP request failed intermittent error using the utl_http package

I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1722
ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it in Oracle APEX instead of directly on the database.
The other thing I've tried is creating a package of my own that makes the HTTP request using the longer utl_http.begin_request() method instead of the utl_http.request() shortcut. This gives the exact same problem (works sometimes, but mostly errors - the same error).
The pattern I'm seeing is that if I wait a while and then try, it works for the first 2-10 requests and then begins erroring. When it works, I get the response instantly; when it errors, there is always the delay before the error appears.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout as below, but it doesn't have any effect; for example, instead of timing out after 3 seconds, it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACL because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons
-> You may have a problem with your TNS listener.
From a command prompt window, try running tnsping service_name. Run it quickly several times and check whether some of the attempts fail.
I once had a similar problem; try reconfiguring your TNS listener.
There should also be an option to give an IP address instead of a hostname in the listener definition; that sometimes solves this kind of problem.
-> IIS problem.
Read about the SET_PERSISTENT_CONN_SUPPORT procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
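
If that's the cause, here is a minimal PL/SQL sketch of the long-form request with the response always closed (same test URL as above; end_response is what releases the connection):

DECLARE
  l_req  utl_http.req;
  l_resp utl_http.resp;
  l_buf  VARCHAR2(32767);
BEGIN
  utl_http.set_transfer_timeout(3);
  l_req  := utl_http.begin_request('http://domain.com/test.txt');
  l_resp := utl_http.get_response(l_req);
  BEGIN
    LOOP
      utl_http.read_text(l_resp, l_buf, 32767);
      dbms_output.put_line(l_buf);
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN
      NULL;  -- body fully read
  END;
  -- Always end the response; leaked connections pile up and can
  -- surface later as ORA-29273 on otherwise-valid requests.
  utl_http.end_response(l_resp);
END;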

CakePHP app working remotely but not on local machine

I have a CakePHP app which I have been developing on a remote server. Everything is working fine on the remote server.
I'm now trying to install it on a machine with a fresh install of XAMPP. Cake is timing out. Apache is working - other things, such as phpMyAdmin, work fine. I am running Apache through port 8000, as IIS is using port 80. The OS is Windows Server 2003.
When trying to access the app, it times out, with the following error:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\htdocs\cake\libs\debugger.php on line 247
I have turned URL rewriting off; that hasn't solved the problem.
I have been trying to track down the source of the problem by echoing things and then exiting the script in the cake core. I found that the script was timing out (with the same error) when the components were trying to load. I commented out my components array in app_controller, and the script ran a little further.
Now, I have tracked it down this far:
Dispatcher::dispatch();
Dispatcher::_invoke();
Controller::constructClasses();
Controller::loadModel();
ClassRegistry::init(); //called on line 635 of Controller in the "else" block of the if (PHP5) statement
In ClassRegistry::init(), the script times out on line 141, which is as follows:
${$class} =& new $class($settings);
I have no idea where to go from here! Help much appreciated.
I see your problem.
To solve it, you should do two things:
1) Try setting the debug level to 2 in core.php in the config, and then change it back to 0.
2) Please enable this line in the config, line 57 in core.php:
Configure::write('App.baseUrl', env('SCRIPT_NAME'));
Regards,
archit.
Well, after following the problem a bit further, I found that the problem was coming up in Model::construct. It turns out that the hostname in my database.php file was wrong! I would have expected a "could not connect to the database..." error.
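
For anyone else chasing the same symptom: the setting lives in app/config/database.php (CakePHP 1.x layout). A minimal sketch, with placeholder values rather than the asker's real ones:

<?php
// app/config/database.php (CakePHP 1.x) -- placeholder values
class DATABASE_CONFIG {
    var $default = array(
        'driver'   => 'mysql',
        'host'     => 'localhost',  // a wrong hostname here hung the app instead of erroring
        'login'    => 'user',
        'password' => 'secret',
        'database' => 'cake_app',
    );
}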

Apache upload fails when file size is over 100k

Below is some information about my problem.
Our Apache 2.2 is on Windows Server 2008.
Basically, the problem is that users fail to upload files bigger than 100k to our server.
The error in Apache log is: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. : Error reading request entity data, referer: ......
There were a few times (not always) when I could upload larger files (100k-800k; it failed for 20m) in Chrome. In FF4 it always fails for files over 100k. IE8 behaves much like FF4.
It seems Apache fails to get the request from the client, so I reset TimeOut in the Apache settings to the default value (300), which did not help at all.
I do not have the LimitRequestBody directive set, and I am not using PHP. Has anyone seen a similar error before? I am not sure what to try next. Any advice would be appreciated!
Edit:
I just tried using remote desktop to upload files on the server itself, and it worked fine. My first thought was the firewall, but that has been off the whole time; an HTTP proxy is in place, though.
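
For reference, the two directives mentioned above live in httpd.conf; a sketch with illustrative values (not a confirmed fix for this case):

# httpd.conf -- request-body and network-timeout directives
LimitRequestBody 33554432   # max request body in bytes; 0 means unlimited
TimeOut 300                 # seconds Apache waits on network I/O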