Java Selenium WebDriver capturing wrong region in the screenshot - selenium

I am using Selenium WebDriver (Java) to automate our web application.
I need to capture and compare each icon of the application in all browsers.
To do that, I first open the application in Firefox, capture each icon image by its XPath, and save the images to a particular path.
Later I compare the saved images against the ones captured when the application is opened in another browser.
I used the code below to capture the images, but the element image is not being captured; some unknown region of the screen is being saved instead.
Please help me get the correct image of the element.
File screenshot = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
BufferedImage fullImg = ImageIO.read(screenshot);
Point point = x.getLocation();
//Get width and height of the element
int eleWidth = x.getSize().getWidth();
int eleHeight = x.getSize().getHeight();
Rectangle rect = new Rectangle(point.getX(),point.getY(),eleWidth, eleHeight);
//Crop the entire page screenshot to get only element screenshot
BufferedImage eleScreenshot= fullImg.getSubimage(point.getX(), point.getY(), rect.width, rect.height);
ImageIO.write(eleScreenshot, "png", screenshot);
//Copy the element screenshot to disk
FileUtils.copyFile(screenshot, new File("E:\\ICONS\\Icon1.jpg"));

driver.switchTo().defaultContent();
driver.switchTo().frame(driver.findElement(By.xpath("//*[@id='CWinBtn']")));
WebElement ele = driver.findElement(By.xpath("//*[@id='CCDLinkedformToolbar_cmdPrint']"));
try{
File screenshot = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
BufferedImage fullImg = ImageIO.read(screenshot);
Point point = ele.getLocation();
int eleWidth = ele.getSize().getWidth();
int eleHeight = ele.getSize().getHeight();
BufferedImage eleScreenshot= fullImg.getSubimage(point.getX()+30, 95, eleWidth, eleHeight);
ImageIO.write(eleScreenshot, "png", screenshot);
FileUtils.copyFile(screenshot, new File("E:\\ICONS\\Icon1.png"));
}
catch(Exception e){
e.printStackTrace();
}
The changes I have made to my previous code are: I added an offset to the X coordinate and passed a static value for the Y coordinate, as per my application's resolution.
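For the comparison step (not shown above), a minimal sketch is a pixel-by-pixel comparison with ImageIO; this assumes both icons were saved as PNGs, and the folder for the second browser (E:\\ICONS_CHROME here) is just an illustrative name:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class IconComparator {
    // Returns true only if both images have the same dimensions and identical pixels.
    public static boolean areIdentical(File expected, File actual) throws IOException {
        BufferedImage a = ImageIO.read(expected);
        BufferedImage b = ImageIO.read(actual);
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}

Usage would be something like areIdentical(new File("E:\\ICONS\\Icon1.png"), new File("E:\\ICONS_CHROME\\Icon1.png")). Note that an exact pixel match can fail across browsers because of anti-aliasing, so a tolerance-based comparison may be more practical.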

Related

Selenium Marionette getLocation nullpointer

I'm encountering:
Exception in thread "main" java.lang.NullPointerException at org.openqa.selenium.remote.RemoteWebElement.getLocation(RemoteWebElement.java:338)
while trying to get a BufferedImage of the captcha at https://signup.live.com/:
public BufferedImage getCaptchaBufferedImage() throws IOException, InterruptedException {
System.out.println("Looking for captcha image");
this.wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("hipTemplateContainer")));
System.out.println("Found image");
WebElement element = this.driver.findElement(By.id("hipTemplateContainer"));
System.out.println(element.getAttribute("outerHTML"));
List<WebElement> childs = element.findElements(By.xpath(".//*"));
WebElement firstChild = childs.get(0);
System.out.println(firstChild.getAttribute("outerHTML"));
List<WebElement> childs2 = firstChild.findElements(By.xpath(".//*"));
WebElement imageChild = childs2.get(0);
System.out.println(imageChild.getAttribute("outerHTML"));
((JavascriptExecutor) this.driver).executeScript("arguments[0].scrollIntoView(true);", imageChild);
String id = imageChild.getAttribute("id");
this.wait.until(ExpectedConditions.visibilityOfElementLocated(By.id(id)));
Point point = firstChild.getLocation();
byte[] img_bytes = ((TakesScreenshot) this.driver).getScreenshotAs(OutputType.BYTES);
BufferedImage imageScreen = ImageIO.read(new ByteArrayInputStream(img_bytes));
System.out.println("Downloaded image");
double d = Double.parseDouble(firstChild.getCssValue("height").split("px")[0]);
int height = (int) d;
double e = Double.parseDouble(firstChild.getCssValue("width").split("px")[0]);
int width = (int) e;
BufferedImage captcha = imageScreen.getSubimage(point.getX(), point.getY(), width, height);
JFrame frame = new JFrame();
frame.getContentPane().setLayout(new FlowLayout());
frame.getContentPane().add(new JLabel(new ImageIcon(captcha)));
frame.pack();
frame.setVisible(true);
return captcha;
}
I've looked all over the net and can't figure this one out. Possible bug in Selenium 3.0? This code works if I skip getting the offset of the image and just hardcode the values in getSubimage().
I tried for more than 2 hours and guess what? I found the issue.
Issue:
It is because getScreenshotAs captures only the visible part of the page (after scrolling to the captcha), not the complete page, and that causes all of these problems. The Y coordinate returned by Point (1041) is relative to the complete web page, but in the screenshot image the captcha sits at a different Y coordinate (300), relative to the visible part only. This results in the following exception:
java.awt.image.RasterFormatException: (y + height) is outside of Raster
The X coordinate is the same in both the complete web page and the visible part.
So hardcoding the Y coordinate to 300 solved the issue temporarily (a viewport-relative offset could be computed instead; see the sketch after the code and screenshots below). But the real question is why the screenshot is not taken of the complete page instead of just the visible page; maybe it is a bug in the latest geckodriver (Firefox driver). Tried with Firefox 49, Selenium 3, geckodriver v0.1.11 and Java 1.8.
Following is the code. Please try it and let me know:
driver.get("https://signup.live.com/");
driver.manage().window().maximize();
System.out.println("Looking for captcha image");
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("hipTemplateContainer")));
System.out.println("Found image");
WebElement element = driver.findElement(By.id("hipTemplateContainer"));
System.out.println(element.getAttribute("outerHTML"));
List<WebElement> childs = element.findElements(By.xpath(".//*"));
WebElement firstChild = childs.get(0);
System.out.println(firstChild.getAttribute("outerHTML"));
List<WebElement> childs2 = firstChild.findElements(By.xpath(".//*"));
WebElement imageChild = childs2.get(0);
System.out.println(imageChild.getAttribute("outerHTML"));
((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", imageChild);
String id = imageChild.getAttribute("id");
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id(id)));
Point point = imageChild.getLocation();
int width = imageChild.getSize().getWidth();
int height = imageChild.getSize().getHeight();
System.out.println("height: " + height + "\t weight : " + width);
System.out.println("X co-ordinate: " + point.getX());
System.out.println("Y co-ordinate: " + point.getY());
File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(screenshot, new File("G:\\naveen\\screenshot.png"));
BufferedImage imageScreen = ImageIO.read(screenshot);
System.out.println("Downloaded image");
BufferedImage captcha = imageScreen.getSubimage(245, 300, width, height);
ImageIO.write(captcha, "png", screenshot);
FileUtils.copyFile(screenshot, new File("G:\\naveen\\screenshot1.png"));
JFrame frame = new JFrame();
frame.getContentPane().setLayout(new FlowLayout());
frame.getContentPane().add(new JLabel(new ImageIcon(captcha)));
frame.pack();
frame.setVisible(true);
Following are the screenshots saved:
Full-page screenshot - only the visible web page is saved.
Sub image - the cropped captcha image.
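Instead of hardcoding the Y coordinate to 300, one possible workaround (a sketch only, not verified against this exact page) is to ask the browser for the element's position relative to the current viewport with getBoundingClientRect, which matches the coordinate space of a viewport-only screenshot. It assumes driver and imageChild from the code above; the output file name is a placeholder:

// Scroll the element into view, then read viewport-relative coordinates.
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].scrollIntoView(true);", imageChild);
Number relX = (Number) js.executeScript(
        "return arguments[0].getBoundingClientRect().left;", imageChild);
Number relY = (Number) js.executeScript(
        "return arguments[0].getBoundingClientRect().top;", imageChild);
// The viewport screenshot and getBoundingClientRect share the same origin,
// so no hardcoded offsets are needed.
File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
BufferedImage viewport = ImageIO.read(shot);
BufferedImage captchaImg = viewport.getSubimage(
        relX.intValue(), relY.intValue(),
        imageChild.getSize().getWidth(), imageChild.getSize().getHeight());
ImageIO.write(captchaImg, "png", new File("G:\\naveen\\captcha_cropped.png"));

Note that this still assumes 100% browser zoom and a 1:1 device pixel ratio; otherwise the screenshot is in device pixels while getBoundingClientRect reports CSS pixels, and the two would have to be scaled.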

Capture screenshot of all the elements present on a page

I'm trying to capture a screenshot of every element present on a webpage and store it on my disk, for which I have written the code below.
The only issue is that this piece of code works only for the first iteration, after which something unexpected happens.
List<WebElement> eleId = driver.findElements(By.xpath("//*[@id]")); //fetch all the elements with ID attribute
System.out.println(eleId.size());
for (int i = 0;i < eleId.size();i++) {
// Get entire page screenshot
File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
BufferedImage fullImg = ImageIO.read(screenshot);
// Get the location of element on the page
Point point = eleId.get(i).getLocation();
// Get width and height of the element
int eleWidth = eleId.get(i).getSize().getWidth();
int eleHeight = eleId.get(i).getSize().getHeight();
// Crop the entire page screenshot to get only element screenshot
BufferedImage eleScreenshot = fullImg.getSubimage(point.getX(), point.getY(), eleWidth, eleHeight);
ImageIO.write(eleScreenshot, "png", screenshot);
// Creating variable name for the image to be stored on disk
String fileName = eleId.get(i).getAttribute("id");
String imageLocation = "D:\\" + fileName + ".png";
// System.out.println(imageLocation);
// Copy the element screenshot to disk
File screenshotLocation = new File(imageLocation);
FileUtils.copyFile(screenshot, screenshotLocation);
System.out.println("Screenshot has been stored.");
}
Hi Sandeep, try the code below. It is working for me. I just added one "if" condition to check the image height and width.
@Test(enabled=true)
public void getIndividualElementScreenShot() throws IOException{
WebDriver driver = new FirefoxDriver();
driver.get("http://www.google.com/");
driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
driver.manage().window().maximize();
List<WebElement> eles = driver.findElements(By.xpath("//*[@id]"));
System.out.println(eles.size());
for(WebElement ele : eles){
//Get Entire page screen shot
File screenShot = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
BufferedImage fullImage = ImageIO.read(screenShot);
//Get the location on the page
Point point = ele.getLocation();
//Get width and height of an element
int eleWidth = ele.getSize().getWidth();
int eleHeight = ele.getSize().getHeight();
//Cropping the entire page screen shot to have only element screen shot
if(eleWidth != 0 && eleHeight != 0){
BufferedImage eleScreenShot = fullImage.getSubimage(point.getX(), point.getY(), eleWidth, eleHeight);
ImageIO.write(eleScreenShot, "png", screenShot);
//Creating variable name for the image to be stored on disk
String fileName = ele.getAttribute("id");
String imageLocation = "F:\\ElementImage\\"+fileName+".png";
System.out.println(imageLocation);
//Copy the element screenshot to disk
File screenShotLocation = new File(imageLocation);
org.apache.commons.io.FileUtils.copyFile(screenShot, screenShotLocation);
System.out.println("Screen shot has beed stored");
}
}
driver.close();
driver.quit();
}
Your code is fine, but you are overwriting the same file in each iteration. Did you get it? Change the file name in each iteration by assigning a new name to the fileName variable. Use the code below:
List<WebElement> eleId = driver.findElements(By.xpath("//*[@id]")); //fetch all the elements with ID attribute
System.out.println(eleId.size());
for (int i = 0;i < eleId.size();i++) {
// Get entire page screenshot
File screenshot = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
BufferedImage fullImg = ImageIO.read(screenshot);
// Get the location of element on the page
Point point = eleId.get(i).getLocation();
// Get width and height of the element
int eleWidth = eleId.get(i).getSize().getWidth();
int eleHeight = eleId.get(i).getSize().getHeight();
// Crop the entire page screenshot to get only element screenshot
BufferedImage eleScreenshot = fullImg.getSubimage(point.getX(), point.getY(), eleWidth, eleHeight);
ImageIO.write(eleScreenshot, "png", screenshot);
// Creating variable name for the image to be stored on disk
String fileName = eleId.get(i).getAttribute("id");
String imageLocation = "D:/" + fileName + i + ".png";
// System.out.println(imageLocation);
// Copy the element screenshot to disk
File screenshotLocation = new File(imageLocation);
FileUtils.copyFile(screenshot, screenshotLocation);
System.out.println("Screenshot has been stored.");
}
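Something else that commonly breaks this kind of loop is an element whose location or size falls partly outside the captured image (e.g. below the visible viewport), which throws java.awt.image.RasterFormatException and stops the run. A small defensive sketch (a hypothetical helper, not part of either answer) that clamps the crop region to the screenshot bounds and skips unusable elements:

import java.awt.image.BufferedImage;
import org.openqa.selenium.WebElement;

// Returns the cropped element image, or null if the element cannot be cropped
// from this screenshot (zero size or entirely outside the captured area).
static BufferedImage cropElement(BufferedImage fullImg, WebElement ele) {
    int x = Math.max(0, ele.getLocation().getX());
    int y = Math.max(0, ele.getLocation().getY());
    int w = Math.min(ele.getSize().getWidth(), fullImg.getWidth() - x);
    int h = Math.min(ele.getSize().getHeight(), fullImg.getHeight() - y);
    if (w <= 0 || h <= 0) {
        return null; // skip this element instead of throwing RasterFormatException
    }
    return fullImg.getSubimage(x, y, w, h);
}

Calling cropElement(fullImg, eleId.get(i)) in place of the direct getSubimage call, and skipping the iteration when it returns null, keeps the loop running over every element.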

Selenium- screenshot of visible part of page

Is there a way to get Selenium WebDriver to take a screenshot of only the visible part of the page for PhantomJS? I've browsed the source and there is no API AFAICT. So is there a trick to do that somehow?
EDIT: Chrome already snaps only the visible part, so I removed it from the question.
According to the JavaDoc API for TakesScreenshot, a WebDriver extending TakesScreenshot will make a best effort to return the following, in order of preference:
Entire page
Current window
Visible portion of the current frame
The screenshot of the entire display containing the browser
As PhantomJS is a headless browser, it probably doesn't have menus/tabs and other similar browser chrome, so all you can control is the Dimension of the browser window.
// Portrait iPhone 6 browser dimensions
Dimension dim = new Dimension(375, 627);
driver.manage().window().setSize(dim);
Taking a screenshot will most likely capture the entire page. If you want to restrict your resulting file to the dimensions you requested, you could always crop it to your required dimensions (not ideal, but PhantomJS is not a real browser).
private static void capture(String url, WebDriver driver, Dimension dim, String filename) throws IOException{
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
driver.manage().window().setSize(dim);
driver.get(url);
File scrFile = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
int w = dim.getWidth();
int h = dim.getHeight();
Image orig = ImageIO.read(scrFile);
BufferedImage bi = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
bi.getGraphics().drawImage(orig, 0, 0, w, h, 0, 0, w, h, null);
ImageIO.write(bi, "png", new File(filename));
}
You can use the Robot class for this, as below:
Robot rb=new Robot();
rb.keyPress(KeyEvent.VK_ALT);
rb.keyPress(KeyEvent.VK_PRINTSCREEN);
rb.keyRelease(KeyEvent.VK_PRINTSCREEN);
rb.keyRelease(KeyEvent.VK_ALT);
Once you have copied the screenshot to the clipboard, you can save it to a file.
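Reading the captured image back from the clipboard is not shown above; a minimal sketch using the standard AWT clipboard API (the file path is just an example) could look like this:

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.datatransfer.DataFlavor;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ClipboardScreenshot {
    public static void saveClipboardImage(String path) throws Exception {
        // Fetch the image that Alt+PrintScreen placed on the system clipboard.
        Image img = (Image) Toolkit.getDefaultToolkit().getSystemClipboard()
                .getData(DataFlavor.imageFlavor);
        // Copy it into a BufferedImage so ImageIO can encode it.
        BufferedImage buffered = new BufferedImage(
                img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
        buffered.getGraphics().drawImage(img, 0, 0, null);
        ImageIO.write(buffered, "png", new File(path));
    }
}

For example, saveClipboardImage("c:\\tmp\\visible.png") after the Robot key presses. Keep in mind that Alt+PrintScreen captures only the active window, not the whole desktop.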
WebDriver driver = new FirefoxDriver();
driver.get("http://www.google.com/");
File scrFile = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(scrFile, new File("c:\\tmp\\screenshot.png"));

Add links to PDF programmatically

I have about 180 PDF files that are generated from a geodatabase. I would like to programmatically add links as hot spots (no text) at the top, bottom, left and right as needed to navigate to the adjoining page files. I would also like to add links over a 3x3 grid in the lower left corner of the page for additional navigation. The grid is already in the existing PDF, just with no links. In total there will be up to 14 links added to each page.
I am open to suggestions as to how to go about this. I am using Acrobat Pro XI, and I am familiar with various programming languages (Python, VB.NET, C#...), just with no experience working directly with PDF files.
This is a very late answer. I was actually searching for a free alternative to the paid libraries above. I found the following links, which may be helpful to others.
Apache PDFBox is a vast Java library for creating PDFs programmatically.
TomRoush/PdfBox-Android is its Android implementation; you can find a sample project with this implementation.
I have added the code for creating clickable links in a PDF using the above Android library and sample project.
public void createPdf(View v) {
PDDocument document = new PDDocument();
PDPage page = new PDPage();
document.addPage(page);
// Create a new font object selecting one of the PDF base fonts
PDFont font = PDType1Font.HELVETICA;
// Or a custom font
//try {
// PDType0Font font = PDType0Font.load(document, assetManager.open("MyFontFile.TTF"));
//} catch(IOException e) {
// e.printStackTrace();
//}
PDPageContentStream contentStream;
try {
// Define a content stream for adding to the PDF
contentStream = new PDPageContentStream(document, page);
String preText = "Icons made by ";
String linkText = "My_Site";
float upperRightX = page.getMediaBox().getUpperRightX();
float upperRightY = page.getMediaBox().getUpperRightY();
// Write linkText in blue text
contentStream.beginText();
contentStream.setNonStrokingColor(15, 38, 192);
contentStream.setFont(font, 18);
contentStream.moveTextPositionByAmount( 0, upperRightY-20);
contentStream.drawString(preText + linkText);
contentStream.endText();
// create a link annotation
PDAnnotationLink txtLink = new PDAnnotationLink();
// set up the markup area
float offset = (font.getStringWidth(preText) / 1000) * 18;
float textWidth = (font.getStringWidth(linkText) / 1000) * 18;
PDRectangle position = new PDRectangle();
position.setLowerLeftX(offset);
position.setLowerLeftY(upperRightY - 24f);
position.setUpperRightX(offset + textWidth);
position.setUpperRightY(upperRightY -4);
txtLink.setRectangle(position);
// add an action
PDActionURI action = new PDActionURI();
action.setURI("https://www.**********.com/");
txtLink.setAction(action);
// and that's all ;-)
page.getAnnotations().add(txtLink);
// load 'Social media' icons from 'vector' resources.
float padding = 5, startX = 5, startY = upperRightY-100, width = 25, height=25;
loadVectorIconWithLink(document, page, contentStream, R.drawable.ic_facebook,
"https://www.facebook.com/My_Name/", startX, startY, width, height);
startX += (width + padding);
loadVectorIconWithLink(document, page, contentStream, R.drawable.ic_instagram,
"https://www.instagram.com/My_Name", startX, startY, width, height);
// Make sure that the content stream is closed:
contentStream.close();
// Save the final pdf document to a file
String path = root.getAbsolutePath() + "/Download/Created.pdf";
document.save(path);
document.close();
tv.setText("Successfully wrote PDF to " + path);
} catch (IOException e) {
e.printStackTrace();
}
}
private void loadVectorIconWithLink( PDDocument theDocument,
PDPage thePage,
PDPageContentStream theContentStream,
@DrawableRes int theDrawableId,
String theUriString,
float x, float y, float width, float height
) throws IOException
{
Bitmap alphaImage = getBitmapFromDrawable(this, theDrawableId);
PDImageXObject alphaXimage = LosslessFactory.createFromImage(theDocument, alphaImage);
theContentStream.drawImage(alphaXimage, x, y, width, height );
// create a link annotation
PDAnnotationLink iconLink = new PDAnnotationLink();
PDRectangle position = new PDRectangle( x, y, width, height );
iconLink.setRectangle(position);
// add an action
PDActionURI action1 = new PDActionURI();
action1.setURI(theUriString);
iconLink.setAction(action1);
// and that's all ;-)
thePage.getAnnotations().add(iconLink);
}
public static Bitmap getBitmapFromDrawable(Context context, @DrawableRes int drawableId) {
Drawable drawable = AppCompatResources.getDrawable(context, drawableId);
if (drawable instanceof BitmapDrawable) {
return ((BitmapDrawable) drawable).getBitmap();
} else if (drawable instanceof VectorDrawableCompat || drawable instanceof VectorDrawable) {
Bitmap bitmap = Bitmap.createBitmap(drawable.getIntrinsicWidth(), drawable.getIntrinsicHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
drawable.setBounds(0, 0, canvas.getWidth(), canvas.getHeight());
drawable.draw(canvas);
return bitmap;
} else {
throw new IllegalArgumentException("unsupported drawable type");
}
}
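For the original question (invisible hot-spot links over regions of an existing page, rather than drawn text or icons), a rough sketch with desktop Apache PDFBox 2.x follows; the file names, rectangle coordinates, and target page are placeholders, and linking to a separate PDF file would need a remote go-to or launch action instead of the in-document destination shown here:

import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.interactive.action.PDActionGoTo;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationLink;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDBorderStyleDictionary;
import org.apache.pdfbox.pdmodel.interactive.documentnavigation.destination.PDPageFitWidthDestination;

public class HotSpotLinks {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = PDDocument.load(new File("map_page.pdf"))) { // placeholder input
            PDPage page = doc.getPage(0);
            PDPage targetPage = doc.getPage(1); // page the hot spot should jump to

            // An invisible hot spot is just a link annotation with a zero-width border
            // placed over an area that is already drawn on the page.
            PDAnnotationLink link = new PDAnnotationLink();
            PDBorderStyleDictionary border = new PDBorderStyleDictionary();
            border.setWidth(0);
            link.setBorderStyle(border);

            // Rectangle in PDF user space (origin at the bottom-left corner); placeholder values.
            PDRectangle area = new PDRectangle();
            area.setLowerLeftX(50);
            area.setLowerLeftY(700);
            area.setUpperRightX(550);
            area.setUpperRightY(740);
            link.setRectangle(area);

            // Jump to the target page when the hot spot is clicked.
            PDPageFitWidthDestination dest = new PDPageFitWidthDestination();
            dest.setPage(targetPage);
            PDActionGoTo action = new PDActionGoTo();
            action.setDestination(dest);
            link.setAction(action);

            page.getAnnotations().add(link);
            doc.save("map_page_linked.pdf"); // placeholder output
        }
    }
}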
There are at least three types of links you might want to add: links to pages within the same document, links to pages in another PDF document, and links to URLs on the web.
The Docotic.Pdf library can add links of any of these types (please note that I am one of the developers of this library). Here are two relevant examples:
Create link to page
Create hyperlink
There are no examples published online for how to create links to pages in another PDF document, but you can always contact support if you need such an example.
After continuing to search and not finding any other promising open source solutions, I went with the Debenu Quick PDF Library. The specific functions I used are noted below:
AddLinkToFile
AddLinkToPage
Other annotations and hotspot links
The time that the two library functions are going to save me weekly is worth the cost alone. I am sure I will find other uses for the other 900+ PDF functions.

How can I programmatically create a screen shot of a given Web site?

I want to be able to create a screen shot of a given Web site, but the Web site may be larger than can be viewed on the screen. Is there a way I can do this?
The goal is to do this with .NET in C# in a WinForms application.
There are a few tools.
The thing is, you need to render the page in some program and take a snapshot of it.
I don't know about .NET, but here are some tools to look at:
KHTML2PNG
imagegrabwindow() (Windows PHP Only)
Create screenshots of a web page using Python and QtWebKit
Website Thumbnails Service
Taking automated webpage screenshots with embedded Mozilla
I just found out about the website browsershots.org which generates screenshots for a whole bunch of different browsers. To a certain degree you can even specify the resolution.
I wrote a program in VB.NET that did what you specified, except for the screen size issue.
I embedded a web control (look at the very bottom of all controls) onto my form and tweaked its settings (hide scroll bars). I used a timer to wait on dynamic content, and then I used CopyFromScreen to get the image.
My program had dynamic dimensions (settable via command line). I found that if I made my program larger than the screen, the image would just return black pixels for the off-screen area. I did not research further since my job was complete at that time.
Hope that gives you a good start. Sorry for any wrong wordings; I log onto Windows to develop only once every couple of months.
Doing it as a screenshot is likely to get ugly. It's easy enough to capture the entire content of the page with wget, but an image means capturing the rendering.
Here are some tools that purport to do it.
You can render it in a WebBrowser control and then take a snapshot. If the page size is bigger than the screen size, you have to scroll the control, take one or more snapshots, and then merge all the pictures. :)
This is the code for creating a screenshot programmatically:
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;
int screenWidth = Screen.GetBounds(new Point(0, 0)).Width;
int screenHeight = Screen.GetBounds(new Point(0, 0)).Height;
Bitmap bmpScreenShot = new Bitmap(screenWidth, screenHeight);
Graphics gfx = Graphics.FromImage((Image)bmpScreenShot);
gfx.CopyFromScreen(0, 0, 0, 0, new Size(screenWidth, screenHeight));
bmpScreenShot.Save("test.jpg", ImageFormat.Jpeg);
Java screenshots of a website:
Combine the individual screens together for a final screenshot of the entire web page.
public static void main(String[] args) throws FileNotFoundException, IOException {
System.setProperty("webdriver.chrome.driver", "D:\\chromedriver.exe");
ChromeDriver browser = new ChromeDriver();
WebDriver driver = browser;
driver.get("https://news.google.co.in/");
driver.manage().timeouts().implicitlyWait(500, TimeUnit.SECONDS);
JavascriptExecutor jse = (JavascriptExecutor) driver;
Long clientHeight = (Long) jse.executeScript("return document.documentElement.clientHeight");
Long scrollHeight = (Long) jse.executeScript("return document.documentElement.scrollHeight");
int screens = 0, xAxis = 0, yAxis = clientHeight.intValue();
String screenNames = "D:\\Screenshots\\Yash";
for (screens = 0; ; screens++) {
if (scrollHeight.intValue() - xAxis < clientHeight) {
File crop = new File(screenNames + screens+".jpg");
FileUtils.copyFile(browser.getScreenshotAs(OutputType.FILE), crop);
BufferedImage image = ImageIO.read(new FileInputStream(crop));
int y_Axis = scrollHeight.intValue() - xAxis;
BufferedImage croppedImage = image.getSubimage(0, image.getHeight()-y_Axis, image.getWidth(), y_Axis);
ImageIO.write(croppedImage, "jpg", crop);
break;
}
FileUtils.copyFile(browser.getScreenshotAs(OutputType.FILE), new File(screenNames + screens+".jpg"));
jse.executeScript("window.scrollBy("+ xAxis +", "+yAxis+")");
jse.executeScript("var elems = window.document.getElementsByTagName('*');"
+ " for(i = 0; i < elems.length; i++) { "
+ " var elemStyle = window.getComputedStyle(elems[i], null);"
+ " if(elemStyle.getPropertyValue('position') == 'fixed' && elems[i].innerHTML.length != 0 ){"
+ " elems[i].parentNode.removeChild(elems[i]); "
+ "}}"); // Sticky Content Removes
xAxis += yAxis;
}
driver.quit();
}