My Problem
I am using Chrome's Puppeteer to automate some personal tasks. Most of these tasks involve logging into a webpage with my credentials and fetching some data.
The process can take up to 5 seconds. This means that my development cycle is pretty long, and I can get throttled pretty quickly.
My Question
Is there a way to serialize a Puppeteer webpage after logging in, and run my scripts against a local copy?
Yes, at any time after the page has been opened in Puppeteer you can do
const html = await page.content();
and then save that data to a file and do anything you like with it afterwards.
It is also possible to open the saved file later in Puppeteer:
await page.goto('file:///C:/Users/User/data.html');
Better still, use another Node library like cheerio or jsdom, which are supposed to be less expensive to run than Puppeteer.
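For illustration, here is a minimal sketch of that save-and-reuse workflow (the login steps, file path, and selector are placeholders): snapshot the DOM once with Puppeteer, then develop against the local copy with cheerio.

const fs = require('fs');
const puppeteer = require('puppeteer');
const cheerio = require('cheerio');

(async () => {
  // One-off run: log in, let the page settle, and save the rendered DOM.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login');
  // ... perform the login and navigation steps here ...
  fs.writeFileSync('snapshot.html', await page.content());
  await browser.close();

  // Later development runs: parse the saved snapshot without any browser.
  const $ = cheerio.load(fs.readFileSync('snapshot.html', 'utf8'));
  console.log($('h1').text());
})();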
Related
I'd like to benchmark my web application. Specifically I'd like to measure the load time of a particular DOM element.
I can use WebDriver's wait-for-visible to measure how long an element took to load and save the result somewhere. However, I'd also like to measure concurrency and other factors.
Is there a standard way to do this?
While I love WebdriverIO, I would recommend using a tool other than it for benchmarking. WebdriverIO uses HTTP requests to send commands to Selenium, and because of that isn't the most performant thing. Your stats could be way off simply because the request from WebdriverIO to Selenium takes longer than usual.
I'd suggest using a tool a little "closer to the metal", possibly CasperJS.
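If you do go that route, a rough sketch of timing a single element with CasperJS might look like this (the URL, selector, and 30 s timeout are placeholders; it measures wall-clock time until the selector exists, not a full performance profile):

var casper = require('casper').create();
var start;

casper.start('http://localhost:3000/dashboard', function () {
    start = Date.now(); // page request has been issued; start the clock
});

casper.waitForSelector('#slow-widget', function () {
    this.echo('#slow-widget appeared after ' + (Date.now() - start) + ' ms');
}, function onTimeout() {
    this.echo('#slow-widget did not appear within the timeout');
}, 30000);

casper.run();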
I am creating automated tests for an e-commerce website. The site uses lazy loading (or something similar), and I am testing it on a UAT server. Because of the server's limited specification it loads pages slowly; it takes 60 seconds or more to load all the resources on a page. So when I run my Selenium automation, it always waits more than 60 seconds before continuing to the next step (because it waits for the page to be fully loaded). Please, can someone give me tips on how to continue to the next test step after waiting only 10 seconds for the page to load? It should not throw an exception, just continue with the test step.
Not possible.
If you find an element and try to execute some action on it while the page is still loading, you will get a stale element error; on top of that, the loading issue will give you a lot of failed tests, and debugging will take a lot more time.
Automation is meant to execute fast and produce reliable results.
It seems that this environment is not built for automation; you should request more resources.
As an alternative maybe you can use a headless driver or see if you can put the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page, if the page has not been received entirely and the server is still sending data, then the request is not done; logically, you need a complete request in order to continue.
You should address this to your Project Manager/QA Lead and ask for advice/option on how to handle this.
Please note that these costs should be included/added in the automation price. You need to address this in a simple way:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> unable to run automation since it is not reliable and each test has a high rate of failure => alternative: X day(s) of manual testing for each build
If this were a coding issue, like a delayed AJAX request, then you would have some solutions and the devs could help; but if it is an infrastructure/resources issue, then it does not depend on you and you cannot solve it.
You could try any type of wait, implicit or explicit (an explicit wait will throw an exception on timeout), but this is not a solution for poor resources.
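For completeness, a minimal sketch of that explicit-wait idea (selenium-webdriver 4.x for Node assumed; the URL, selector, and 10-second values are placeholders). It bounds how long each step blocks, but as noted above it is not a fix for a slow server:

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  // Give up on the full page load after 10 s; driver.get() will then throw.
  await driver.manage().setTimeouts({ pageLoad: 10000 });

  try {
    await driver.get('http://uat.example.com/catalog');
  } catch (e) {
    // Expected on a slow server: the load timed out, but rendering continues.
  }

  // Wait only for the element the next step actually needs; this throws a
  // TimeoutError if it is still missing after 10 s.
  const item = await driver.wait(until.elementLocated(By.css('.product-tile')), 10000);
  await item.click();

  await driver.quit();
})();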
Is there a way to get a realtime view of what PhantomJS (or similar) is rendering?
I would like to develop my automation script while interacting with (or at least seeing a screencap of) the page it's targeted to.
No, there is no such thing. SlimerJS has the same API as PhantomJS, but runs the Gecko engine. You can watch directly what is going on, or run it headlessly with xvfb-run.
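For example (script name is a placeholder), running slimerjs myscript.js opens a visible browser window you can watch, while xvfb-run slimerjs myscript.js runs the same script on a machine without a display.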
You will not be able to interact with it, though. If your tests are long and you don't want to run the suite again when you miss a problem in a test case, you may want to use a screen grabber to record a video of the interaction.
The obvious way to debug PhantomJS scripts is to render many screenshots using page.render() and to log objects to the console with
console.log(JSON.stringify(yourObj, undefined, 4));
with nice formatting.
The solution we use is automatic screenshotting in case of exceptions: PhantomJS renders the current page into a file that you can examine later.
That covers the test execution phase.
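A minimal sketch of that approach with plain PhantomJS (the URL and file name are placeholders): render the current page to disk whenever the page throws.

var page = require('webpage').create();

page.onError = function (msg, trace) {
    console.log('Page error: ' + msg);
    page.render('failure-' + Date.now() + '.png'); // examine this file later
};

page.open('http://localhost:3000/', function (status) {
    console.log('Load status: ' + status);
    phantom.exit();
});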
When you are writing the tests, just keep an additional window open (a normal browser) with the application you are trying to test, and design the test against it.
When the design is done, execute the test with PhantomJS.
My suggestion is to use logging alongside.
http://casperjs.org/
CasperJS is an open source navigation scripting & testing utility written in Javascript for the PhantomJS WebKit headless browser and SlimerJS (Gecko). It eases the process of defining a full navigation scenario and provides useful high-level functions, methods & syntactic sugar for doing common tasks such as the following (a short sketch appears after the list):
defining & ordering browsing navigation steps
filling & submitting forms
clicking & following links
capturing screenshots of a page (or part of it)
testing remote DOM
logging events
downloading resources, including binary ones
writing functional test suites, saving results as JUnit XML
scraping Web contents
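A short sketch of that step-based flow (the URL, form selector, and field names are placeholders):

var casper = require('casper').create();

casper.start('http://example.com/login');            // ordered navigation steps

casper.then(function () {
    // fill & submit a form (the third argument true submits it)
    this.fill('form#login', { username: 'demo', password: 'secret' }, true);
});

casper.then(function () {
    this.capture('after-login.png');                 // screenshot of the page
});

casper.run();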
The solution to this problem is using the remote debugger:
--remote-debugger-port=9000
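For example (script name is a placeholder), launch the script as phantomjs --remote-debugger-port=9000 myscript.js, then open http://localhost:9000 in a WebKit-based browser to inspect and step through the running script; the same flag should also work when launching through the casperjs executable, which forwards it to PhantomJS.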
Using SlimerJS for testing scripts with a visible browser is not advisable, since it is based on Gecko, which means a script might work on SlimerJS and not on PhantomJS, or vice versa.
Take a look at this guide for more info:
https://drupalize.me/blog/201410/using-remote-debugger-casperjs-and-phantomjs
I have written a PhantomJS script to scrape Hoover.
Following is my flow:
1: Get data from the database using a Node.js API.
2: I fetch 10 rows at a time and pass these rows one at a time to the website to scrape it. (The problem is here: I somehow want to store the scraped results in an array or something, then pass that data back to the Node API to update the database in Azure.)
Right now I am able to get data from Azure using the Node.js API and am also able to scrape using PhantomJS. My only problem is how to store the results in temporary storage or an array, which can then be passed to the Node.js API for updating the database in Azure.
(I'm using CasperJS - it adds a layer on PhantomJS, but I think it might also work in PhantomJS)
You can have CasperJS do an AJAX call to your backend with the data you want to store.
Make CasperJS include a content script in each page it visits:
var casper = require('casper').create({ clientScripts: ['content.js'] });
Then, in content.js:
function sendToServer(theData){
    var xhr2 = new XMLHttpRequest();
    xhr2.open('POST', your_server_url, true);
    xhr2.send(theData);
}
Now you can call sendToServer with casper.evaluate from your script.
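A small sketch of that call (the URL, selector, and payload shape are placeholders), assuming content.js above has been injected via clientScripts:

casper.start('http://example.com/company-page', function () {
    this.evaluate(function () {
        var name = document.querySelector('.company-name').textContent;
        sendToServer(JSON.stringify({ name: name })); // defined in content.js
    });
});

casper.run();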
And remember to include this in your receiving app (or see this module):
res.writeHead(200, {
    'Access-Control-Allow-Origin': '*'
});
otherwise your AJAX call will fail. It is possible that you would have to add an OPTIONS route that returns CORS headers as well. Another solution is to disable cross-origin checks in PhantomJS with its --web-security command-line switch.
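For reference, a minimal sketch of a receiving endpoint using Node's built-in http module (the framework is not named in the answer; the port and logging are placeholders) that returns the CORS headers for both the OPTIONS preflight and the POST itself:

const http = require('http');

const cors = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type'
};

http.createServer((req, res) => {
  if (req.method === 'OPTIONS') {          // preflight request, if the engine sends one
    res.writeHead(204, cors);
    return res.end();
  }

  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    console.log('Received from CasperJS:', body);
    res.writeHead(200, cors);
    res.end('ok');
  });
}).listen(3000);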
I'm using Zombie.js for testing with Cucumber-js, and I can't seem to get my client side scripts to run.
Visiting the page:
this.browser.visit("http://localhost/boic", function(e, browser, status, errors){
    console.log('status', status);
    console.log('error', errors);
    console.log('console', browser.text("H1"));
});
Returns a status of 200, no errors, and displays the H1 text correctly. However, if I include a script to change the H1 code in the page:
<script>
$('H1').html('hello world');
</script>
The H1 text remains unchanged, and no global variables are accessible via browser.window...
Thanks!
Did you load jQuery in your page before the script is called?
There is also the runScripts browser option, but that defaults to true.
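A minimal sketch of that option (the exact API differs a little between Zombie versions; the URL is taken from the question):

const Browser = require('zombie');

const browser = new Browser();
browser.runScripts = true; // the default; page <script> tags will execute

browser.visit('http://localhost/boic', function (err) {
  if (err) throw err;
  console.log(browser.text('H1')); // "hello world" once jQuery and the inline script have run
});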
But I'm going to recommend running an external PhantomJS process and going through the WebDriver interface, simply because I spent 6 months trying to get Zombie to do what I wanted and PhantomJS made it all easy. http://phantomjs.org/release-1.8.html https://github.com/LearnBoost/soda
I agree with Jon Biz. Zombie is difficult to work with. Many sites use JS libraries that might contain minor errors, and these cause Zombie to crash (I think Node fails) when the browser encounters them - if you have the runScripts option set. This makes it very hard to use for any application/site that requires external JS - i.e. most of them.
I also recommend switching to Phantomjs.