Selenium 2 (PHPUnit) - connection error - Selenium

I use Selenium 2 with PHPUnit. When I run a script, I get this error during the run:
PHPUnit_Extensions_Selenium2TestCase_NoSeleniumException: Error connection[28] to localhost:4444/wd/hub/session/edf323b4-c6ba-471a-9966-f2b9f3718084/url: Operation timed out after 60000 milliseconds with 0 bytes received
Sometimes it happens after several seconds and sometimes after 20+ minutes (memory: 48 MB).
The script takes a long time to execute (it goes over roughly 100 news items on different pages), but I don't believe that is the problem (sometimes it crashes after just seconds).
I already tried updating Selenium and the PHPUnit framework to the latest versions, but it didn't help.
Is there any option to continue the script after the connection crashes, or to avoid the crash altogether?
I know I can try to increase the connection timeout, but I am looking for a different solution, or an explanation of why this happens. Any ideas?
Thanks.

The connection problem is caused by cURL. I tried to change php.ini and increase the timeout, but it didn't work, so I understand that Selenium probably sets the timeout on the fly or something. After a short grep I found this file:
phpunit/phpunit-selenium/PHPUnit/Extensions/Selenium2TestCase/Driver.php
In its constructor it is possible to change the default of the 'timeout' variable. To make sure this parameter is never changed elsewhere (if you are too lazy to find all the places where Selenium calls this class), set your default timeout on the 'seleniumServerRequestsTimeout' property (not recommended).
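Rather than patching Driver.php, a less invasive sketch (assuming your phpunit-selenium version exposes setSeleniumServerRequestsTimeout(); the test class name here is made up) is to raise the timeout from the test case itself:

<?php
// Sketch: raise the cURL/server request timeout well above the 60-second default
// reported in the error message. NewsCrawlTest is a hypothetical test case;
// only the setUp() calls matter here.
class NewsCrawlTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://localhost/');
        $this->setSeleniumServerRequestsTimeout(300); // seconds, instead of the default 60
    }
}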

I had the same issue, and after days of trying everything (including the solutions in this question) without success, I decided to change the browser.
I downloaded ChromeDriver and everything started to work without any problem,
which leads me to believe it may be a version conflict or something similar.
I was using Selenium 2.53.0 and PhantomJS 2.1.1.
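For reference, switching the browser in a PHPUnit Selenium2 test is a one-line change in setUp(); this sketch assumes a local Selenium server that can find the chromedriver binary, and the class name is hypothetical:

<?php
// Sketch: point the same test at Chrome instead of PhantomJS.
class CrawlerTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setHost('localhost');
        $this->setPort(4444);
        $this->setBrowser('chrome');        // was 'phantomjs'
        $this->setBrowserUrl('http://localhost/');
    }
}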

I encountered the same problem.
It was caused by accessing sessions and using session_id(). You should use session_write_close() to fix the issue.
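The reason this helps (a general PHP behaviour, not specific to Selenium) is that PHP keeps the session file locked while a request holds it open; session_write_close() writes the data and releases the lock before the long-running part. A minimal sketch:

<?php
// Sketch: read/write the session first, then release the lock
// before starting the long-running Selenium work.
session_start();
$sessionId = session_id();             // use the session id while it is still open
$_SESSION['crawl_started'] = time();
session_write_close();                 // write the data and unlock the session file

// ... long-running Selenium / crawling work goes here ...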

Related

Your connection was interrupted. ERR_NETWORK_CHANGED when parsing with Selenium

I wrote a parser in Selenium which clicks through different links and parses data, but from time to time I get this error:
Your connection was interrupted. ERR_NETWORK_CHANGED
Perhaps this error is not related to Selenium.
But if I wait a little (about 1 second), the connection comes back and I can continue parsing.
Is there a way to solve this problem programmatically? Maybe, in this case, I could somehow reload the page until the connection appears again?
For me, using a Mac and Chrome on a daily basis, I run into this problem now and then, and I finally found it can help to turn off the virtual machines or Docker containers, either of which can change your network configuration.
That may not fit your case, but it could be a hint: turn them off for a while if you have them running.
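If turning those off is not an option, the reload-and-retry idea from the question can be sketched as a small wrapper; loadWithRetry() and the callable it receives are hypothetical, and the sketch is in PHP even though the original parser's language isn't stated:

<?php
// Sketch: retry a flaky page load a few times before giving up, since
// ERR_NETWORK_CHANGED usually clears after roughly a second.
function loadWithRetry(callable $loadPage, $attempts = 5, $pauseSeconds = 1)
{
    for ($i = 1; $i <= $attempts; $i++) {
        try {
            return $loadPage();   // e.g. a call that navigates to / refreshes the page
        } catch (Exception $e) {
            if ($i === $attempts) {
                throw $e;         // still failing after all retries
            }
            sleep($pauseSeconds); // wait for the network to settle, then retry
        }
    }
}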

PhantomJS - set time limit on page.open()? Or workaround?

Using PhantomJS and bash, I'm working on a little piece of anti-malware that reads a web page, grabs all the domains that are delivering assets to the browser, then prints each server's country of origin. It works fine except for one site that has a... uh... 'suboptimal' piece of javascript that calls to an external server every 5 seconds. PhantomJS just loads the resource over and over and over, page.open() never finishes, and page.onLoadFinished() is never called.
Is there a way around this? Can I set a time limit on page.open()? Or, as a workaround, can I set a time limit on the Linux process?
Thanks in advance, and if anyone is interested in a copy of this script let me know and I'll post it somewhere public.
I solved this problem using the solution given here to set an execution time limit on the phantomjs command and kill it if needed:
Command line command to auto-kill a command after a certain amount of time
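What that linked answer boils down to is the coreutils timeout wrapper; a one-line shell sketch with a placeholder script name and limit:

# Kill phantomjs if it has not finished within 60 seconds.
timeout 60s phantomjs grab-domains.js http://example.com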

Running GLORP tests

I am trying to get GLORP into the Pharo 2.0 image. I managed to load GLORP and the PostgresV2 driver, and then changed the GlorpDatabaseLoginResource default login parameters. After that, I started running the tests, beginning with the PostgresV2 tests; in TestPGConnection I got two failures, testFieldConverter2 and testFieldConverter3.
After that I ran GlorpTest, and here only 353 out of 674 tests passed. Is this normal? I am running the tests using the TestRunner. Any idea where I could have taken a wrong step?
Thanks in advance.
I got all tests passing now. The problem was in the image I was using: the DateAndTime offset: method had been modified somehow (maybe installing some other packages did that), and that was causing my DateAndTime-related functions to fail. After I loaded everything into a fresh image and ran the tests, it was a piece of cake. :)

Cucumber + Capybara: when running a scenario in my feature file, only the background steps run while the scenario steps are ignored

After quite a few hours of searching for answers to this to no avail, along with trying to track down the issue myself within RubyMine, I am now resigning myself to asking a question about it...
When I run one of the scenarios in my feature file, or all of them, it only processes the background steps and then ignores all the other steps within my scenario.
The stats at the end then report:
1 Scenario (1 Failed)
4 Steps (3 Skipped, 1 Passed)
So no steps failed! I have verified that the scenario works on another machine and passes successfully. Does anyone have an idea why it would just be ignoring my scenario steps?
Thank you in advance
I have actually managed to fix this problem myself!!! :)
In the javascript_emulation.rb file there is a known issue around Capybara and RackTest; the workaround and easy fix is to remove ::Driver after Capybara for the JavaScript emulation bits.
If none of the ::Driver entries are removed, the following error is returned:
undefined method 'click' for class 'Capybara::Driver::RackTest::Node' (NameError)
followed by a list of the problem areas in different files.
If ::Driver is removed only from the class Capybara::Driver::RackTest::Node,
then the tests will run but only process the background steps.
All instances of ::Driver must be removed in this file; for me there were four in total.
Hope this helps others :)

PHP script stops running arbitrarily with no errors

I have a PHP script that seemed to stop running after about 20 minutes.
To try to figure out why, I made a very simple script to see how long it would run without any complex code to confuse me.
I found that the same thing was happening with this simple infinite loop. At some point between 15 and 25 minutes of running, it stops without any message or error. The browser says "Done".
I've been over every single possible thing I could think of:
set_time_limit (and session.gc_maxlifetime in php.ini)
memory_limit
max_execution_time
The point that the script is stopped is not consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes.
Please, any help would be greatly appreciated.
It is hosted on a 1and1 server. I contacted them and they don't provide support for bugs caused by developers.
At some point your browser times out and stops loading the page. If you want to test, open up the command line and run the code in there. The script should run indefinitely.
Have you considered just running the script from the command line, e.g.:
php script.php
and have the script flush out a message every so often to show that it's still running:
<?php
// Loop forever, printing a heartbeat so you can see the script is still alive.
while (true) {
    doWork();                   // placeholder for the real work
    echo "still alive...\n";
    flush();                    // push the output out immediately
}
In such cases, I turn on all the development settings in php.ini (on a development server, of course). This displays many more messages, including deprecation warnings.
In my experience of debugging long-running PHP scripts, the most common cause was a memory allocation failure (Fatal error: Allowed memory size of xxxx bytes exhausted...).
I think what you need to find out is the exact time at which it stops (you can record a start time and keep dumping out the current time minus the start time). Something on the server side is stopping the script. Also, consider doing an ini_get() to make sure the execution time limit is actually 0. If you want, set the time limit to 30 and then, on EVERY loop iteration, set it to 30 again; every time you call set_time_limit() the counter resets, which might let you get past the actual limit. If this still isn't working, there is something on 1and1's servers that is killing the script.
Also, did you try ignore_user_abort()?
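A minimal sketch of those two suggestions combined (doWork() is a placeholder for the real job):

<?php
// Sketch: keep resetting the execution-time counter on every iteration and
// keep running even if the browser gives up on the request.
ignore_user_abort(true);      // don't stop when the client disconnects
$start = time();

while (true) {
    set_time_limit(30);       // reset the execution-time counter each iteration
    doWork();                 // placeholder for the real work
    echo 'elapsed: ' . (time() - $start) . "s\n";
    flush();
}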
I appreciate everyone's comments, especially James Hartig's; you were very helpful and sent me down the right path.
I still don't know what the problem was. I got it to run on the server over SSH, just by using the exec() command as well as ignore_user_abort(), but it would still time out.
So I just had to break it into small pieces that each run for only about 2 minutes, and use session variables/arrays to store where I left off.
I'm glad to be done with this fairly simple project now, and am supremely pissed at 1and1. Oh well...
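A rough sketch of that chunked approach, assuming the work can be addressed by a numeric offset; loadItems() and processItem() are hypothetical placeholders:

<?php
// Sketch: do a small batch per request and store the position in the session,
// so each run stays well under the host's time limit.
session_start();
$offset    = isset($_SESSION['offset']) ? $_SESSION['offset'] : 0;
$batchSize = 50;
$items     = loadItems($offset, $batchSize);   // hypothetical: fetch the next slice of work

foreach ($items as $item) {
    processItem($item);                        // hypothetical: handle one item
    $offset++;
}

$_SESSION['offset'] = $offset;                 // resume from here on the next run
session_write_close();

echo count($items) === $batchSize
    ? "More to do, run again (offset: $offset)\n"
    : "Finished.\n";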
I think this is caused by some process monitor killing off "zombie processes" in order to allow resources for other users.
Run the exec using "2>&1" to log everything, including stderr.
In my output I managed to catch this:
...
script.sh: line 4: 15932 Killed php5-cli -d max_execution_time=0 -d memory_limit=128M myscript.php
So something (an external force, not PHP itself) is killing my process!
I use IdWebSpace (which is excellent, BTW), but I think most shared hosting providers impose this kind of resource/process control mechanism just to stay sane.
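A minimal sketch of capturing that kind of output from PHP itself, with the stderr redirect described above (the script name is taken from the log line; the exact php binary and ini flags will vary):

<?php
// Sketch: merge stderr into stdout so messages like "Killed" land in $output.
exec('php -d max_execution_time=0 -d memory_limit=128M myscript.php 2>&1', $output, $exitCode);

echo "exit code: $exitCode\n";
echo implode("\n", $output), "\n";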