Can't open HTTPS site using SlimerJS, CasperJS, PhantomJS - automation

This is the first time I can't open a website using a headless browser such as PhantomJS, SlimerJS, or CasperJS. I just want to open the website. I created a very basic script that opens the site and takes a screenshot, but all three of them give me a blank picture.
I tried using:
--debug=true
--ssl-protocol=TLSv1.2 (I tried each of the available protocols)
--ignore-ssl-errors=true
Here are my scripts:
SlimerJS

var page = require("webpage").create();
page.open("https://domain/")
    .then(function (status) {
        if (status == "success") {
            page.viewportSize = { width: 1024, height: 768 };
            page.render('screenshot.png');
        } else {
            console.log("Sorry, the page is not loaded");
        }
        page.close();
        phantom.exit();
    });
PhantomJS

var page = require('webpage').create();
page.open('https://domain/', function () {
    page.render('screenshot.png');
    phantom.exit();
});
CasperJS

var casper = require('casper').create({
    viewportSize: { width: 950, height: 950 }
});
casper.start('https://domain/', function () {
    this.capture('screenshot.png');
});
casper.run();
I even tried a screen-capture service to check whether it could open the site, but it gave me nothing either.
Am I missing something?

The issue is not PhantomJS as such. The site you are checking is protected by F5's network protection:
https://devcentral.f5.com/articles/these-are-not-the-scrapes-youre-looking-for-session-anomalies
So it's not that the page doesn't load; it's that the protection mechanism detects that PhantomJS is a bot, based on checks they have implemented.
The easiest fix is to use Chrome instead of PhantomJS. Otherwise it means a decent amount of investigation time.
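As a sketch of the Chrome route, the same screenshot task can be done with headless Chrome via Puppeteer. This assumes `npm install puppeteer` and keeps the placeholder domain from the question; the waitUntil value is one reasonable choice, not the only one:

```javascript
// Sketch: take a screenshot with headless Chrome through Puppeteer.
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setViewport({ width: 1024, height: 768 });
    // Wait until the network is mostly idle so JS-driven protection
    // pages have a chance to finish loading.
    await page.goto('https://domain/', { waitUntil: 'networkidle2' });
    await page.screenshot({ path: 'screenshot.png' });
    await browser.close();
})();
```

Since this is a real Chrome, it passes many of the browser-fingerprint checks that flag PhantomJS.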
Some similar answered/unanswered questions from the past:
Selenium and PhantomJS : webpage thinks Javascript is disabled
PhantomJS get no real content running on AWS EC2 CentOS 6
file_get_contents while bypassing javascript detection
Python POST Request Not Returning HTML, Requesting JavaScript Be Enabled
I will update this post with any further details I find, but my experience says: go with what works instead of wasting time on sites that don't work under PhantomJS.
Update 1
I have tried importing the browser cookies into PhantomJS and it still won't work, which means there are some hard checks.

I experienced this issue with PhantomJS, and the following service args resolved it:
--ignore-ssl-errors=true
--ssl-protocol=any
--web-security=false
--proxy-type=None
I can't help you with CasperJS and SlimerJS, and I don't know exactly why this worked.
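Put together, the invocation with those flags looks like this (the script name is illustrative):

```shell
phantomjs --ignore-ssl-errors=true --ssl-protocol=any \
          --web-security=false --proxy-type=None screenshot.js
```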

Related

How can I debug a websocket connection?

When running within a TestCafe test, upon loading an app that tries to connect to a WebSocket, I receive an error in the console: "Connection closed before receiving a handshake response". This prevents most of the app from working.
How can I get additional information about the final request that TestCafe makes after URL rewriting? I'd like to see exactly what URL and headers it sends when it tries to connect.
Simple example:
import { ClientFunction, Selector } from "testcafe";

fixture`Getting Started`.page("https://torus.qa.argos.education/session/new");

test("Example error", async (t) => {
    await t.debug();
});
I've tried Chrome in both non-SSL and self-signed-certificate mode, and also tried disabling web security. Firefox gives the same error.
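One way to at least see the URL of each outgoing connection attempt is to wrap the page's WebSocket constructor before the app boots. This is a sketch: `logWebSockets` is our own helper name, it can be pasted into the browser console or injected via a ClientFunction, and it shows only the URL, not the headers the proxy adds:

```javascript
// Sketch: log every URL the page tries to open a WebSocket to.
function logWebSockets(global) {
    const Native = global.WebSocket;
    global.WebSocket = function (url, protocols) {
        console.log('WebSocket connecting to:', url);
        // Delegate to the real constructor, preserving the optional
        // protocols argument.
        return protocols === undefined
            ? new Native(url)
            : new Native(url, protocols);
    };
    global.WebSocket.prototype = Native.prototype;
}
// In the browser: logWebSockets(window);
```

Running this before the app connects reveals whether the proxy has rewritten the WebSocket URL.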
We released a new TestCafe version (v2.3.0), which includes an experimental proxyless mode. This mode uses native browser automation, and a few issues are already fixed in it; this issue should be fixed in proxyless mode as well.
Unfortunately, I was not able to test your website, since the URL you shared is no longer available. Could you please check whether your sample works correctly in v2.3.0 with experimental proxyless mode enabled?
This option is available in all interfaces:
// Command line
testcafe chrome tests --experimental-proxyless

// Programmatic
const testcafe = await createTestCafe({ experimentalProxyless: true });

// Configuration file
{
    "experimentalProxyless": true
}
Please keep in mind that this mode is still experimental and is implemented only in Google Chrome. It will not work correctly if you run tests in a non-Chrome browser or in a combination of other browsers.

How can I browse the web randomly in a Chrome instance with a logged-in Chrome extension?

I want to test a Chrome extension, and to test it I need it to browse the web randomly, visiting random pages for a long period, to see if it generates any errors. You need to be logged into the extension, which is why I am not using Selenium: I cannot find a way to log into the extension with it.
Is there a way to make Selenium act on an existing or pre-configured Chrome instance? Any other options?
You can use web-ext, with Firefox, Google Chrome, or Chromium. You can script your browser like this:
import webExt from 'web-ext';

webExt.cmd.run({
    // These are command options derived from their CLI counterparts.
    // In this example, --source-dir is specified as sourceDir.
    firefox: '/path/to/Firefox-executable',
    sourceDir: '/path/to/your/extension/source/',
}, {
    // These are non-CLI options for each function. You need to specify
    // this one so that your Node.js application can continue running
    // after web-ext is finished.
    shouldExitProgram: false,
})
.then((extensionRunner) => {
    // The command has finished. Each command resolves its promise
    // with a different value.
    console.log(extensionRunner);
    // You can do a few things like:
    // extensionRunner.reloadAllExtensions();
    // extensionRunner.exit();
});
There should also be a --start-url option. You can make this a random URL with some programming; I'm not able to test it right now, but you should be able to make it work.
Or just run it from the command line:
web-ext run --start-url www.mozilla.com
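Randomizing the start page could be as simple as picking from a list before each run. A sketch, where the URL list is illustrative and the `startUrl` option name mirrors the `--start-url` flag (check your web-ext version's documentation for the exact programmatic option):

```javascript
// Pick a random page for this run (list is illustrative).
const pages = [
    'https://www.mozilla.org/',
    'https://www.wikipedia.org/',
    'https://example.com/',
];
const startUrl = pages[Math.floor(Math.random() * pages.length)];
console.log('starting at', startUrl);

// Then hand it to web-ext along with the other options, e.g.:
// webExt.cmd.run(
//     { sourceDir: '/path/to/extension/', startUrl },
//     { shouldExitProgram: false }
// );
```

Re-running the script (or looping it) visits a different random page each time.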

WebdriverIO with Chrome browser does not display correctly

I am currently trying to test an app using WebdriverIO and Chrome, but the browser displays nothing except the images and some borders (see picture). I get the same behaviour with other websites, so it is not a problem with the app under test. Everything also displays fine in other browsers (e.g. Firefox).
[screenshot: how it displays in the Chrome browser]
Update: after searching a little more, I think this has something to do with passing basic-auth credentials in the URL, because the page I am testing is password protected. I have this in the WebdriverIO config file:
capabilities: [{
    browserName: 'chrome',
    chromeOptions: {
        args: ['--disable-blink-features=BlockCredentialedSubresources']
    }
}]
I am also using selenium-standalone as a service (version 0.0.10) and WebdriverIO version 4.13.1.
In my code I open the page using:
browser.url("https://user:password@mywebsite.com/kontakt/");
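Two things worth checking in that URL: the separator before the host must be `@`, not `#` (a `#` makes everything after it a fragment), and any special characters in the credentials need percent-encoding or the URL may be misparsed. A sketch with illustrative values:

```javascript
// Build a credentialed URL with the user and password
// percent-encoded (values are illustrative).
const user = 'user';
const password = 'p@ss:word';
const url = 'https://' + encodeURIComponent(user) + ':' +
            encodeURIComponent(password) + '@mywebsite.com/kontakt/';
console.log(url);
```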

Catch the closing event of the chrome browser using chrome driver

I am automating some web pages from an application written in VB.NET, using the Chrome driver.
While the application is running, there is a chance that Chrome closes before the automation process completes (the user may close it, or Chrome may crash).
My requirement is that if Chrome is closed by the user while automation is running, the application should detect that a close event was raised and show a Yes/No message with custom text to confirm the user's action.
I searched for this for some time on the internet and found nothing. Please suggest a way to catch the closing event of the Chrome browser in VB.NET using the Chrome driver.
I'm not sure I quite understand the question, but can you use the window.onbeforeunload approach?
See, e.g., W3Schools:
$(document).ready(function () {
    window.onbeforeunload = function () {
        // your code here
    };
});

No Alert Found using Chrome Driver

Using ChromeDriver 2.14, Selenium Server 2.47.1, and Chrome 45, I am attempting to handle a basic authentication prompt. I have tried the following code to resolve this:
var wait = new OpenQA.Selenium.Support.UI.WebDriverWait(Driver.Value, new TimeSpan(0, 0, 60))
    .Until(OpenQA.Selenium.Support.UI.ExpectedConditions.AlertIsPresent());
Driver.Value.SwitchTo().Alert().Dismiss();
And this
Driver.Value.SwitchTo().Alert().SetAuthenticationCredentials("test", "test");
and this
while (true)
{
    try
    {
        Driver.Value.SwitchTo().Alert().SetAuthenticationCredentials("test", "test");
        break; // this is brute force, I know
    }
    catch
    {
        Thread.Sleep(100);
    }
}
No luck; they all throw a "no alert found" exception. We would switch to Firefox, but it is an internal application and we only support IE or Chrome.
I didn't find a way to handle the basic authentication prompt with Selenium's alert API. Instead, I found that if you run Fiddler in the background, you can have it auto-authenticate for you when prompted for a specific site. Then the popup never shows up and your Selenium scripts run just fine:
Auto HTTP authenticate traffic for a specific site with Fiddler