PhantomJS command prompt (cmd) runs forever without HAR data

I am using PhantomJS 2.0.0 to capture netsniff data from a website.
I've tested with a few example sites like google.com and stackoverflow.com, and it spits out HAR JSON data almost instantly.
I'm having an issue with one of our sites, though: the command prompt just hangs forever. I've waited around 15 minutes. (The page size is ~2.5 MB.)
Command used: phantomjs C:\phantomjs-2.0.0-windows\examples\netsniff.js http://www.testwebsitexyz.com > c:\out.txt
Any help is most appreciated.
Regards,
SatP
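One way to narrow down a hang like this is to open the page with a hard timeout so PhantomJS exits instead of waiting forever, and to log resource requests to see which one stalls. A minimal sketch, separate from netsniff.js; the URL and the 60-second limit are placeholders:
// hang-check.js - exit with a message instead of hanging forever
var page = require('webpage').create();
var url = 'http://www.testwebsitexyz.com';

// Watchdog: force an exit if the load event never fires.
window.setTimeout(function () {
    console.log('Timed out waiting for ' + url);
    phantom.exit(1);
}, 60000);

// Log each request so the stalling resource is visible.
page.onResourceRequested = function (requestData) {
    console.log('Requested: ' + requestData.url);
};

page.open(url, function (status) {
    console.log('Load finished with status: ' + status);
    phantom.exit();
});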

Related

Screenshots are not generated for failure cases using Robot Framework when executing through Jenkins on a Linux slave

In Robot Framework, screenshots are generated for failure cases by default. I tried using the Capture Page Screenshot keyword, but I still can't see any screenshots. With the same script, I can see screenshots when I execute it on my local machine.
I am executing with headless Chrome on a Linux slave via Jenkins. The same scripts work locally but fail in Jenkins. I want to see screenshots for the failures, but none are generated.
Input Text    ${login_password}    ${password}
Capture Page Screenshot    password.png
Click On Next    ${password_next}    ${login_password}
Capture Page Screenshot    next.png
It produces a broken image in the report.
I tried the script below to store the screenshots. It works locally: the screenshots are generated, but they don't appear in the HTML report when I execute on the Jenkins server hosted on Linux.
Capture Image
    [Arguments]    ${imagename}
    ${path}=    Catenate    SEPARATOR=    ${EXECDIR}    /Screenshots/    ${imagename}    .png
    Capture Page Screenshot    ${path}
Issue resolved. I am using the Robot plugin in Jenkins for results. In the post-build configuration I had so far allowed only the log and HTML report, but I have now updated it to also copy .png files (the plugin's option for copying other output files). With that, it shows screenshots for failures by default.

Protractor test times out randomly in Docker on Jenkins, works fine in Docker locally

When using the APIs defined by Protractor & Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when they run on the Jenkins CI server they fail, despite running in the same Docker containers on both hosts, which was wildly frustrating.
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every subsequent test waits the full 60 seconds and times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
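For reference, here is where those settings live in a Protractor config. A minimal sketch with the values tried above; the file name, spec pattern and the rest of the config are assumed:
// protractor.conf.js - sketch of the timeout settings discussed above
exports.config = {
    framework: 'jasmine',
    specs: ['specs/*.spec.js'],        // placeholder spec pattern
    getPageTimeout: 30000,             // 30 s, tried above with no effect
    allScriptsTimeout: 30000,          // 30 s, tried above with no effect
    jasmineNodeOpts: {
        defaultTimeoutInterval: 60000  // jasmine.DEFAULT_TIMEOUT_INTERVAL, 60 s
    }
};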
Even stranger still, it seemed like some kind of buffer issue - at first the tests would always fail on one particular spec, and nothing about that spec looked wrong. While debugging I upgraded the Selenium Docker container from 2.53.1-beryllium to 3.4.0-einsteinium; the tests still failed, but a couple of specs further down - suggesting that some optimization in the update let the run get further before giving out.
I confirmed that by rearranging the order of the specs: the specs that had failed consistently before now passed, and a test that had previously passed began to fail (at around the same point in the test run as the earlier failures before the reorder).
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple - I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only talked about timeouts, which was quite misleading.)
Chrome uses /dev/shm, which is fairly small in a Docker container by default. The workarounds for Chrome and Firefox linked from the docker-selenium README explain how to resolve the issue (typically by mounting the host's shared memory with -v /dev/shm:/dev/shm or by enlarging it with --shm-size).
I had a couple of test suites fail after applying the fix, but all the suites have been running and passing for the last day, so I think that really was the problem and this solution works. Hope this helps!

Selenium doesn't display IE when run via Task Scheduler

It's not a problem and is actually a nice side effect, but it is confusing me.
When I run the test suite via the command line, I see IE pop up and the tests run.
When I run it with the exact same arguments from Task Scheduler, though, it doesn't display IE. The tests seem to run correctly (I'm getting the expected TestResults.xml, so it all looks OK).
Why is this happening, though?
The command is:
"C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" "Path_to_test_assembly"
P.S. I'm using the .NET version of Selenium with the IE web driver.
OK, it seems the problem is with your access to the remote machine. Your IE tests are running as a background process on that machine, or under the wrong session ID. That is, there can be several users/accounts, and your test is running under the wrong one.
I'm not sure exactly how you are running this, but you can check the session IDs by typing qwinsta at the command line on that machine.
If you want it to run properly (and visibly), pass this session ID as a parameter when connecting to the remote machine. For example, if you are using psexec and your session ID is 2, pass "-i 2" when starting it, so that it interacts with the user in session 2 on that machine.
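A hypothetical invocation along those lines; the machine name, session ID and credentials are placeholders:
psexec \\remote-machine -i 2 -u DOMAIN\user -p password "C:\Program Files (x86)\NUnit.org\nunit-console\nunit3-console.exe" "Path_to_test_assembly"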

Unable to create screenshot of Craigslist with PhantomJS on Digital Ocean

I have a very strange problem. For the last 1-2 weeks I have been unable to create screenshots of Craigslist pages with PhantomJS on Digital Ocean. Does anybody have any idea what the issue could be and why it no longer works?
It always worked fine before and still works totally fine when I run it locally on my notebook, creating the screenshot within 2-4 seconds. However, running the same setup on Digital Ocean (no matter whether in a Docker container like the one used locally or installed directly on the host) keeps loading forever, and if I am lucky and wait long enough I get the screenshot after around 7-10+ minutes.
I tried it on different Droplets (existing & completely new) in different zones (SF & Frankfurt) but always hit the same issue. I have already contacted Digital Ocean about it. They were able to reproduce the issue, but according to them nothing has changed on their side, so they have no idea what could cause it either. They blame PhantomJS or Craigslist.
It can be reproduced very easily. On a new Droplet (Ubuntu 14.04) the following code will install PhantomJS:
# Install dependencies
sudo apt-get install -y libicu52 libjpeg8 libfontconfig libwebp5
# Install PhantomJS
cd /usr/local/share && \
curl -L -O https://github.com/bprodoehl/phantomjs/releases/download/v2.0.0-20150528/phantomjs-2.0.0-20150528-u1404-x86_64.zip && \
unzip phantomjs-2.0.0-20150528-u1404-x86_64.zip && \
ln -s /usr/local/share/phantomjs-2.0.0-20150528/bin/phantomjs /usr/local/bin/phantomjs
A very basic example script creates a screenshot of a product listing on Craigslist. Save a file called "test-screenshot.js" with this content:
var page = require('webpage').create();
var url = 'http://vancouver.craigslist.ca/van/ctd/5148995470.html';
page.open(url, function (status) {
    // Render only if the page actually loaded; exit either way.
    if (status === 'success') {
        page.render('craigslist.png');
    } else {
        console.log('Failed to load ' + url);
    }
    phantom.exit();
});
To run the script: "phantomjs test-screenshot.js".
Thanks!
It appears that Craigslist is deliberately slowing traffic from Digital Ocean IPs. In the DC area, the CL homepage takes > 55 seconds to load when I use my VPN hosted at Digital Ocean. Without the VPN, load time is normal (< 1000 ms). Other sites work properly through the VPN.
I assume this is a technical response to their recent lawsuits involving scrapers of their site.
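If you want to verify this from a Droplet yourself, a quick PhantomJS timing sketch (the URL is the one from the question; any page works):
// loadtime.js - print how long a page takes to load from this host
var page = require('webpage').create();
var url = 'http://vancouver.craigslist.ca/';
var start = Date.now();
page.open(url, function (status) {
    console.log(status + ' after ' + (Date.now() - start) + ' ms');
    phantom.exit();
});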

Waiting for file to download on Selenium Grid

I have a test using WebDriver and C# which downloads a file from a website.
When running this test on my local machine it works fine, but when I run it on Selenium Grid it looks for the file I'm downloading on the hub and not on the node.
Is there any way of accessing the node's file structure to monitor when the file has been downloaded?
Sorry if this is unclear.
Thanks
Aidan
It seems there is no such possibility. I have also tried to find such functionality, but failed.
But one way to check still exists: "upload the downloaded". Of course it is a workaround, and you won't always have an upload function available.
Anyway, you may:
download the test data
delete the test data from the page
upload the test data to the page again
check that it really appeared
It is a quite dirty way, but on the other hand it lets you test not only the download function but also the upload function - see the sketch below.
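The question uses C#, but the flow is the same in any binding; here is a sketch with the JavaScript selenium-webdriver bindings. All selectors, URLs and file paths are hypothetical - the key point is that sendKeys on a file input resolves the path on the node running the browser, so the upload only succeeds if the download actually landed there:
// Round-trip check: download, delete from the page, re-upload, verify.
const { Builder, By, until } = require('selenium-webdriver');

async function downloadRoundTrip() {
    const driver = await new Builder()
        .forBrowser('chrome')
        .usingServer('http://my-grid-hub:4444/wd/hub')  // hypothetical hub URL
        .build();
    try {
        await driver.get('http://example.com/files');               // hypothetical page
        await driver.findElement(By.id('download-link')).click();   // 1. download test data
        await driver.findElement(By.id('delete-button')).click();   // 2. delete it from the page
        await driver.findElement(By.css('input[type="file"]'))      // 3. upload the downloaded copy
            .sendKeys('/home/seluser/Downloads/testdata.bin');      //    (hypothetical path on the node)
        await driver.wait(                                          // 4. check it reappeared
            until.elementLocated(By.linkText('testdata.bin')), 10000);
    } finally {
        await driver.quit();
    }
}

downloadRoundTrip();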