When I take a screenshot of a webpage with PhantomJS 1.9.8, I have a test case where the output is always a zero-size file. I tried several debugging options with page.onError; I see some errors related to Facebook plugins and scripts, but nothing very helpful...
So when PhantomJS fails to render a page, is there a way to know what is going on beyond the status of the render() function?
URL: http://www.santenatureinnovation.com/verrues-un-nombre-incroyable-de-solutions/
The page is so big that it uses between 600 and 700 MB of RAM to render the image. The dimensions of the resulting image are 960 x 141524 (sic!). Make sure you have enough RAM and wait a little; it takes several seconds for the image to be rendered. The good thing is that PhantomJS's JavaScript is single threaded, so you don't have to add anything to wait for the rendering to finish; everything else freezes while it runs.
I tried it successfully with PhantomJS 1.9.7 and 1.9.8 (on Windows) without special care to viewportSize or the user agent string.
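For debugging beyond the render() return value, PhantomJS exposes a few page callbacks that surface what happens during the load. A minimal sketch (the output filename is a placeholder; onResourceError is available since PhantomJS 1.9):

var page = require('webpage').create();

// Log JavaScript errors thrown by the page.
page.onError = function (msg, trace) {
    console.log('JS error: ' + msg);
};
// Log every resource that fails to load (scripts, images, CSS, ...).
page.onResourceError = function (resourceError) {
    console.log('Resource failed: ' + resourceError.url + ' (' + resourceError.errorString + ')');
};
// Forward the page's own console output.
page.onConsoleMessage = function (msg) {
    console.log('Console: ' + msg);
};

page.open('http://www.santenatureinnovation.com/verrues-un-nombre-incroyable-de-solutions/', function (status) {
    console.log('Load status: ' + status);
    if (status === 'success') {
        // Rendering a very tall page can take several seconds and a lot of RAM.
        page.render('output.png');
    }
    phantom.exit();
});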
We are running TinyMCE version 5.4.1 with various options including:
paste_data_images: true
powerpaste_allow_local_image: true
When we drag & drop (or paste) in smaller images (400px x 400px), everything seems to work fine. The Base64 encoding is saved to the database and the image is rendered in all browsers: Chrome, Firefox and Safari.
However, when we paste in a larger image (1920px x 1081px) the image is only saved and rendered correctly in Chrome and Firefox. In Safari the Base64 encoding is saved with all lowercase characters. Therefore it doesn't render when attempting to view it. Has anyone else experienced this?
I have searched here as well as on the TinyMCE website but don't see anything mentioning this behavior. We will eventually attempt to move away from this Base64 implementation as it's no longer recommended but it's what we have for the time being so I'm just trying to address this issue.
When the page loads, its elements can load in parallel. But when the browser sees a Base64 image, it blocks the page from loading until that image is rendered. Thus, inserting large images into the page as Base64 is certainly not good practice; it may slow down page loads and worsen the UX.
To fix this problem, and maybe several other issues, using the automatic_uploads option is highly recommended. It uploads pasted images to the server instead of converting them to Base64. Here is an example of a PHP upload handler that uploads images and returns their URLs to TinyMCE.
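For illustration, a minimal init sketch; the /upload.php endpoint is hypothetical, and the handler is expected to return JSON like { "location": "<image URL>" }:

tinymce.init({
    selector: 'textarea',
    // Keep accepting pasted/dropped images...
    paste_data_images: true,
    // ...but upload them automatically instead of leaving them as Base64.
    automatic_uploads: true,
    // Hypothetical server-side upload handler that returns the image URL.
    images_upload_url: '/upload.php'
});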
Concerning the issue with Safari, a minimal reproducible example would be very useful.
I should also mention that PowerPaste is a premium feature that will not work with open-source TinyMCE. If you are using the paid version of TinyMCE, you can create a support ticket.
I am testing the browser for mobile responsiveness. I changed the browser window size to the iPhone 5 size, 320 x 568, using this command:
driver.Manage().Window.Size = new Size(320, 568);
When I run the test, the browser opens fine at the specified size without any issue, but it fails to find a hyperlink text that is displayed on the page. I get an Element not visible exception when I can actually see the link text on the screen. Could anyone help me solve this issue or suggest ideas I could try?
Any help would be highly appreciated.
Thanks.
Perhaps it's due to a timing issue: the code executes before the link appears. Try an explicit wait, as in the following code, and adapt it to your language.
Code from the Ruby Selenium bindings:
wait = Selenium::WebDriver::Wait.new(timeout: 10) # seconds
wait.until { driver.find_element(id: "foo").displayed? }
driver.find_element(id: "foo").click
Try to scroll to the element. You could use JavaScript to do that.
In Python this can be done via:
driver.execute_script("arguments[0].scrollIntoView();", elem)
Some elements of the DOM change when you test for mobile responsiveness, so Selenium is unable to locate the element you are specifically trying to target. You should debug to find the methods where the code fails to perform the action, then find the locators for those elements in the mobile-responsive view and trigger only those methods when testing for mobile, as in the sketch below.
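For example, with the JavaScript Selenium bindings; this is a sketch where the URL and the #mobile-menu-link locator are hypothetical stand-ins for your mobile-view element:

const { Builder, By, until } = require('selenium-webdriver');

(async function run() {
    const driver = await new Builder().forBrowser('chrome').build();
    try {
        // iPhone 5 viewport from the question.
        await driver.manage().window().setRect({ width: 320, height: 568 });
        await driver.get('https://example.com');
        // Wait until the mobile-specific element is located AND visible,
        // then click it. '#mobile-menu-link' is a hypothetical locator.
        const link = await driver.wait(until.elementLocated(By.css('#mobile-menu-link')), 10000);
        await driver.wait(until.elementIsVisible(link), 10000);
        await link.click();
    } finally {
        await driver.quit();
    }
})();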
I want to take a screenshot of a web page with watir. It should capture the final design of the page that a user would see.
I have the following problem:
The fonts are loaded somewhat after the page load.
Thus, waiting for elements with visible? / exists? is not sufficient, as all the HTML is already present on the page before the fonts are loaded. In those cases I only see the system standard fonts.
Does anyone know how to wait for fonts loading (except for using sleep X) with Watir?
You can get the font size of an element using:
b.div(:id => 'foo').style 'font-size'
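Another option, assuming the browser supports the CSS Font Loading API, is to poll document.fonts from Watir's execute_script until the fonts have finished loading. The script itself is plain JavaScript:

// JavaScript to run via browser.execute_script from Watir.
// Returns true once all requested fonts have finished loading.
// document.fonts (the CSS Font Loading API) is missing in older
// browsers, in which case this falls back to true.
return document.fonts ? document.fonts.status === 'loaded' : true;

From Ruby you could run it in a wait_until loop, e.g. browser.wait_until { browser.execute_script(script) }.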
I am new to Sencha and I am evaluating it.
Every time I try to open a browser and visit the API page at docs.sencha.com/whatever ... it takes forever to load up.
I mean, what the hell is it doing on the back end? Is it taking that long to load all the necessary Ext JS app files? Or is it loading the whole API library while I am only trying to see one page?
So far, I have gone through a couple of examples, and I like Sencha a lot. However, I have a concern about loading speed in production, because the speed at which the API docs load scares me.
If you are experienced with Sencha, could you tell me what is going on in the back? Please don't say "the API is 20 MB and takes time to load...", because I only want to see one page per visit; I believe it is wrong to load the whole API just to initialize a page.
UPDATE ------------------------------------
I face this loading screen for 20-30 seconds every time I open the browser. IE only. Chrome and FF are fast??
UPDATE 2 ----------------------------------
I did some profiling in IE. Between http://projects.sencha.com/auth/session?... and /architect/2guides/intro/README.js?..., IE went to sleep for over 20 seconds doing nothing (as you can see from the highlighted blank gap between the two rows in the picture), then suddenly came back and finished loading the rest of the page!
I copied those links one by one and loaded them in a new IE window. They all individually came up in milliseconds, so there is no speed issue with the requests themselves. It is just that IE went to sleep for 20+ seconds (the CPU monitor shows no activity). Strange!
Since you're starting out, you should definitely build your application using Sencha Cmd. This allows you to build a version of the Ext JS file that includes only the components you actually use, as sketched below.
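Roughly, the idea looks like this; a minimal sketch with illustrative class names, declaring only what the app needs so Sencha Cmd can compile a build from just those requires:

// Declare only the classes the app actually uses; Sencha Cmd resolves
// these requires into a minimal build instead of shipping ext-all.js.
// The class names below are illustrative.
Ext.application({
    name: 'MyApp',
    requires: [
        'Ext.grid.Panel',
        'Ext.data.Store'
    ],
    launch: function () {
        // Application startup code goes here.
    }
});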
But as a side note: I use the full Sencha API and it takes me less than 2 seconds to load the whole thing. I use the production versions ext-all.js and ext-all.css and gzip everything. After zipping, I get a file size of less than 500 KB, which is next to nothing.
EDIT:
I checked the API docs page. The total download size is less than 1 MB, and even that is because there are a lot of icons that aren't combined into sprites. The browser therefore spends a lot of time requesting the icons one by one; that's why the page is slow.
As for IE, Sencha can't do much about it. The browser itself is slow; any web page you load will suffer from the same problem, not just Sencha's. The page speed will improve if you do some optimizations. The size of the API isn't the problem.
While running the rasterize.js example provided by PhantomJS, I find I have to wait 20 seconds or more until a web page image is produced.
Is there any way to speed this up without consuming a lot of resources? I am basically looking to rapidly produce a series of sequential images captured from web pages loaded with PhantomJS. It would be really great if I could even output PhantomJS somehow to a video stream.
For now I am looking for something that just takes a screenshot of a web page within the 1-2 second range with PhantomJS. If there is already a project or library that accomplishes this, that would be great too.
If your image URLs are hardcoded in the HTML response, you can do the following:
Get the HTML body.
Parse it and extract your images.
Then render them in something like PhantomJS or anything else WebKit-based.
You can take a look at this sample: https://github.com/eugenehp/node-crawler/blob/master/test/simple.js
Like this:
var Crawler = require("../lib/crawler").Crawler;
var c = new Crawler({
    "maxConnections": 10,
    // "timeout": 60,
    "debug": true,
    // Called for each fetched page; $ is a jQuery-like handle on the DOM.
    callback: function(error, result, $) {
        console.log("Got page");
        $("img").each(function(i, img) {
            console.log(img.src);
        });
    }
});
c.queue(["http://jamendo.com/","http://tedxparis.com"]);