I am having an issue getting PhantomJS to click the login button on a website.
I can see in my second screenshot that it is trying to select the login button, but I cannot get it to wait and take the screenshot on the next page.
Here is my JS file:
var page = require('webpage').create();
page.viewportSize = {width: 1920, height: 1080};
page.open('http://clubs.bluesombrero.com/default.aspx?portalid=1809', function (status) {
    console.log("Status: " + status);
    if (status === "success") {
        var url = page.url;
        console.log('URL: ' + url);
        console.log("TC0001: Pass");
        page.render('TC0001.png');
        var a = page.evaluate(function() {
            return document.querySelector('#dnn_dnnLOGIN_cmdLogin');
        });
        page.sendEvent('click', a.offsetLeft, a.offsetTop);
        page.render('TC0002.png');
    } else {
        console.log("TC0001: Failed, Page did not load.");
    }
    phantom.exit();
});
I have tried a few ways to get it to wait to take the screenshot after the page has loaded, but I have not had any luck.
page.sendEvent() is a synchronous function that returns as soon as the event has been dispatched. The next call (page.render()) is therefore executed before the request triggered by the click has been answered.
1. setTimeout
JavaScript provides two functions to wait a static amount of time: setTimeout and setInterval:
page.sendEvent('click', a.offsetLeft, a.offsetTop);
setTimeout(function(){
    page.render('TC0002.png');
    phantom.exit();
}, 5000);
(don't forget to remove the other phantom.exit() since you don't want to exit too early)
Of course, the problem is now that the page might still not be ready after 5 seconds, or it might have loaded extremely fast and the script just sits there doing nothing for the rest of the delay.
2. waitFor
A better approach would be to use the waitFor() function that is provided in the examples folder of PhantomJS. You can wait for a specific condition of the page, such as the existence of a specific element:
page.sendEvent('click', a.offsetLeft, a.offsetTop);
waitFor(function _testFx(){
    return page.evaluate(function(){
        return !!document.querySelector("#someID");
    });
}, function _done(){
    page.render('TC0002.png');
    phantom.exit();
}, 10000);
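For reference, the helper from the examples folder looks roughly like this (a slightly simplified sketch rather than the canonical version, which also accepts strings for the callbacks):
function waitFor(testFx, onReady, timeOutMillis) {
    var maxtimeOutMillis = timeOutMillis ? timeOutMillis : 3000, // default to 3s
        start = new Date().getTime(),
        condition = false,
        interval = setInterval(function() {
            if ((new Date().getTime() - start < maxtimeOutMillis) && !condition) {
                // keep polling the test function until it returns true or we time out
                condition = testFx();
            } else {
                if (!condition) {
                    console.log("'waitFor()' timeout");
                    phantom.exit(1);
                } else {
                    // condition fulfilled: run the callback and stop polling
                    onReady();
                    clearInterval(interval);
                }
            }
        }, 250); // poll every 250ms
}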
3. page.onLoadFinished
Another approach would be to listen to the page.onLoadFinished event, which is called when the next page has loaded; note that you should register it before you click:
page.onLoadFinished = function(){
    page.render('TC0002.png');
    phantom.exit();
};
page.sendEvent('click', a.offsetLeft, a.offsetTop);
4. page.onPageCreated
Whenever a new window/tab would be opened in a desktop browser, the page.onPageCreated event is triggered in PhantomJS. It provides a reference to the newly created page, because in this case the previous page is not overwritten.
page.onPageCreated = function(newPage){
    newPage.render('TC0002.png');
    newPage.close();
    phantom.exit();
};
page.sendEvent('click', a.offsetLeft, a.offsetTop);
In all the other cases, the page instance is overwritten by the new page.
5. "Full" page load
That might still not be sufficient, because PhantomJS doesn't specify what it means for a page to be loaded, and the page's JavaScript may still make further requests to build up its content. This Q&A has some good suggestions to wait for a "full" page load: phantomjs not waiting for "full" page load
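One common workaround, sketched here under the assumption that a short network-idle period is good enough, is to count outstanding requests with onResourceRequested/onResourceReceived and only render once the network has gone quiet:
var pendingRequests = 0;

page.onResourceRequested = function() { pendingRequests++; };
page.onResourceReceived = function(response) {
    // each request reports a 'start' and an 'end' stage; only count the 'end'
    if (response.stage === 'end') { pendingRequests--; }
};

page.sendEvent('click', a.offsetLeft, a.offsetTop);

// treat the page as "fully" loaded once the network has been quiet for about a second
// (no overall timeout and no handling of failed requests, to keep the sketch short)
var quietPolls = 0;
var idleCheck = setInterval(function() {
    quietPolls = (pendingRequests === 0) ? quietPolls + 1 : 0;
    if (quietPolls >= 4) {                 // 4 polls * 250ms
        clearInterval(idleCheck);
        page.render('TC0002.png');
        phantom.exit();
    }
}, 250);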
Related
I am having a problem getting screenshots of a website that loads its content through JS. I want to get screenshots of that site, but I get black screenshots. The code works fine for other websites, except this one, which loads all content through JS. (Website: https://signup.investorplace.com/?cid=MKT390371&eid=MKT390711&encryptedsnaid=&snaid=&step=start&assetId=AST96863)
My code is here:
var webpage = require('webpage');
var page = webpage.create();
var system = require('system');
var url = 'http://' + system.args[1];
page.settings.resourceTimeout = 15000; // 15 seconds
page.open(url, function (status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
    } else {
        window.setTimeout(function () {
            page.render('preview.jpg', {format: 'jpeg', quality: '80'});
            phantom.exit();
        }, 3000);
    }
});
Notice how you build the url variable, forcing the http protocol:
var url='http://'+system.args[1];
But your target page is served via https, so the URL is bound to be incorrect.
When given the correct URL, PhantomJS produces a valid preview.
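A minimal adjustment might look like the sketch below; it assumes the first argument may or may not already include a protocol, and defaults to https when it doesn't:
var system = require('system');

// use the argument as-is when it already carries a protocol, otherwise assume https
var url = system.args[1];
if (url.indexOf('http://') !== 0 && url.indexOf('https://') !== 0) {
    url = 'https://' + url;
}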
Why is Google reCaptcha2 (gReCaptcha) not showing after a page refresh, but showing if the page is reopened via the link?
See this video for explanation: http://take.ms/I2a9Z
Page url: https://orlov.io/signup
Page first open: captcha exists.
Navigate by link: captcha exists.
Open new browser tab: captcha exists.
Refreshing the page via the refresh icon, Ctrl+R, or Ctrl+F5: captcha does NOT exist.
I added body unload event to prevent browser cache, it did not help.
Browsers for testing:
Firefox: 39.0
Chrome: 44.0.2403.125 m
Opera: 30.0
In all browsers I get the same result. So does this mean there's an error on my side?
I think it has to do with the browser and the speed of your network. You are loading the reCAPTCHA API with a callback, but you load it before you define that callback. Depending on your network speed or browser quirks, the API script might execute before the rest of your page's script has loaded.
Line 330:
<script src="//www.google.com/recaptcha/api.js?onload=renderReCaptchaCallback&render=explicit&hl=en-US" async defer></script>
Line 351:
<script type="text/javascript">
    if (typeof (renderReCaptchaCallback) === "undefined") {
        var reCaptchaWidgets = {};
        var renderReCaptchaCallback = function() {
            jQuery.each(reCaptchaWidgets, function(widgetId, widgetOptions) {
                grecaptcha.render(document.getElementById(widgetId), widgetOptions);
            });
        };
    }
</script>
So I would move the definition of renderReCaptchaCallback to the top of the page, so it is defined well before the API script tries to invoke it.
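A sketch of the suggested ordering, using the same two snippets from above with the callback defined before the API script tag:
<script type="text/javascript">
    var reCaptchaWidgets = {};
    var renderReCaptchaCallback = function() {
        jQuery.each(reCaptchaWidgets, function(widgetId, widgetOptions) {
            grecaptcha.render(document.getElementById(widgetId), widgetOptions);
        });
    };
</script>
<!-- loaded only after the callback exists -->
<script src="//www.google.com/recaptcha/api.js?onload=renderReCaptchaCallback&render=explicit&hl=en-US" async defer></script>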
I am new to Protractor. I get errors while automating the URL using Protractor, even though I can access the URL manually without any issues. Please find the code below and kindly clarify my concern.
Screenshot of cmd while executing the code
exports.config = {
    specs: ['try.js'],
    //seleniumArgs: ['-browserTimeout=60']
    capabilities: {
        'browserName': 'chrome',
    },
    baseUrl: '',
    allScriptsTimeout: 3000,
    //getPageTimeout: 5000,
    framework: 'jasmine2',
    jasmineNodeOpts: {
        defaultTimeoutInterval: 56000,
        isVerbose: true,
    }
}
spec: try.js
===========
describe('first try', function(){
    var EW = protractor.ExpectedConditions;
    beforeEach(function(done){
        ignoreSynchronization = true;
        browser.get('');
    });
    it('open PO', function(){
        // clicking login button
        var login = element(by.linkText('Login'));
        browser.wait(EW.presenceOf(login), 10000);
        login.click();
        // clicking open PO dashboard icon/link
        var po = element(by.linkText('Open PO'));
        browser.wait(EW.presenceOf(po), 20000);
        po.click();
        // entering value 100 in the filter field
        var e = element.all(by.repeater('colFilter in col.filters')).get(00).element(by.tagName('input'));
        browser.wait(EW.presenceOf(e), 10000);
        e.sendKeys(100);
        // selecting the filtered values and printing them to the console
        element.all(by.repeater('col in colContainer.renderedColumns track by col.uid').column('Entity')).getText().then(console.log);
    });
});
Make sure you have ng-app defined on all of your pages; Protractor requires it to run. If the page has redirects or just takes some time to load, try something like this:
browser.get(websiteUrl);
browser.wait(function () {
    return browser.executeScript('return !!window.angular');
}, 10000, 'Error: Angular was not found on the page within ten seconds');
This will wait up to ten seconds for angular to load up, and fail if it is not there.
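Applied to your spec, a sketch might look like the following; the URL is a placeholder, and ignoreSynchronization is set on the browser object rather than as a bare global:
beforeEach(function(){
    browser.ignoreSynchronization = true;        // set on the browser object
    browser.get('http://your-app-url/');         // placeholder URL
    browser.wait(function () {
        return browser.executeScript('return !!window.angular');
    }, 10000, 'Error: Angular was not found on the page within ten seconds');
});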
I'm trying to automate an application that uses form security in order to upload a file and then scrape data from the returned HTML.
I started out using the solution from this question. I can define my steps and get through the entire workflow as long as the last step is rendering the page.
Here are the two steps that are the meat of my script:
function() {
    page.open("https://remotesite.com/do/something", function(status) {
        if ('success' === status) {
            page.uploadFile('input[name=file]', 'x.csv');
            page.evaluate(function() {
                // assignButton is used to associate modules with an account
                document.getElementById("assignButton").click();
            });
        }
    });
},
function() {
    page.render('upload-results.png');
    page.evaluate(function() {
        var results = document.getElementById("moduleProcessingReport");
        console.log("results: " + results);
    });
},
When I run the script, I see that the rendered output is correct. However, the evaluate part isn't working. I can confirm that my DOM selection is correct by running it in the JavaScript console while on the remote site.
I have seen other questions, but they revolve around using setTimeout. Unfortunately, the step strategy from the original approach already has a timeout.
UPDATE
I tried a slightly different approach, using this post and got similar results. I believe that document uses an older PhantomJS API, so I used the 'onLoadFinished' event to drive between steps.
I recommend you use CasperJS, or, if you stay with PhantomJS, you could use page.injectJs() to load jQuery and then your own script to do the form input/navigation.
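A minimal sketch of the PhantomJS route; the jQuery file path is a hypothetical local copy, and #assignButton is taken from the question's own markup:
// assumes a local copy of jquery.min.js sits next to the script
if (page.injectJs('jquery.min.js')) {
    page.evaluate(function() {
        // jQuery now runs inside the page context
        jQuery('#assignButton').click();
    });
} else {
    console.log('could not inject jQuery');
}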
Is there a script to make the browser refresh when the page is resized? More specifically, one that emulates the browser refresh button or F5? I've found two, but they don't quite do what I'm looking for:
<script type="text/javascript">
    var currheight = document.documentElement.clientHeight;
    window.onresize = function(){
        if(currheight != document.documentElement.clientHeight) {
            location.replace(location.href);
        }
    }
</script>
and
<body onResize="window.location=window.location;">
The problem with these two is that they appear to completely reset the page, whereas the browser's refresh function leaves some user-made changes intact (like being at a specific hash, for instance), which is what I need.
So is there a way to refresh the window on resize with a script, similar to clicking the browser refresh button? I don't understand why there is even a difference, but there is.
Thanks.
Yes, you probably want to take a look at some JavaScript events. There is an OnResize event.
Here's a link to get you started with events:
http://www.devarticles.com/c/a/JavaScript/OnReset-OnResize-and-Other-JavaScript-Events/
As far as reloading the page, you can do that too:
http://www.mediacollege.com/internet/javascript/page/reload.html
As far as keeping the user values, you could persist them in a session.
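Putting those pieces together, here is a rough sketch; location.reload() behaves like the browser's refresh button, and client-side sessionStorage is just one way to interpret "persist them in a session" (the 'draft' key and element id are illustrative):
window.onresize = function() {
    // persist anything that must survive the refresh
    sessionStorage.setItem('draft', document.getElementById('draft').value);
    window.location.reload();   // acts like the browser's refresh button
};

window.onload = function() {
    // restore the persisted value after the reload
    var draft = sessionStorage.getItem('draft');
    if (draft !== null) {
        document.getElementById('draft').value = draft;
    }
};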
Here is the perfect solution:
I have included a timeout of 1 second, i.e. the browser will refresh 1 second after the window resize.
$(window).resize(function() {
    setTimeout(function(){
        window.location.href = window.location.href;
    }, 1000);
});
Without timeout
$(window).resize(function() {
    window.location.href = window.location.href;
});
NOTE: You may also use window.location.reload() instead of window.location.href = window.location.href.
window.location.reload() reloads the current page with the POST data, while window.location.href = window.location.href does not include the POST data.
-- hope it helps
Try this:
<![if !IE]> <body onresize="document.location=window.location;"> <![endif]>
<!--[if IE]> <body onresize="window.location.reload();"> <![endif]-->