How to fill in data to a PDF with iText 5?

<%@ page import="java.io.*, com.itextpdf.text.*, com.itextpdf.text.pdf.*"%>
<%
String src = "/usr/local/tomcat8/webapps/test/src.pdf";
String dest = "/usr/local/tomcat8/webapps/test/result.pdf";
PdfReader reader = new PdfReader(src);
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(dest));
AcroFields form = stamper.getAcroFields();
form.setField("UserName", "PETER");
form.setField("Company Name", "ABC Company");
form.setField("Company ID", "ID001");
stamper.close();
reader.close();
%>
I developed a small JSP page to test writing data to a PDF. Running the JSP page generates a new PDF, but when I open result.pdf, the filled data shows only while I click on the named field; the value disappears when I click on another field. What is wrong with my code?
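The symptom described (values visible only while a field has focus) usually means the viewer has no appearance streams for the filled values. A sketch of the usual iText 5 fix, reusing the paths and field names from the question, is to ask AcroFields to generate appearances, or to flatten the form; this is a standalone-program version of the JSP above:

```java
import java.io.FileOutputStream;
import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

public class FillForm {
    public static void main(String[] args) throws Exception {
        PdfReader reader = new PdfReader("/usr/local/tomcat8/webapps/test/src.pdf");
        PdfStamper stamper = new PdfStamper(reader,
                new FileOutputStream("/usr/local/tomcat8/webapps/test/result.pdf"));
        AcroFields form = stamper.getAcroFields();
        // Ask iText to build appearance streams, so viewers render the
        // values even when the field does not have focus.
        form.setGenerateAppearances(true);
        form.setField("UserName", "PETER");
        form.setField("Company Name", "ABC Company");
        form.setField("Company ID", "ID001");
        // Alternatively, flatten the form so the values become page content:
        // stamper.setFormFlattening(true);
        stamper.close();
        reader.close();
    }
}
```

Note that if the form is flattened instead, the values become plain page content and the fields can no longer be edited.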

Related

How can I read the contents of a div that is not present in the page source, with or without scraping, in Java?

I have a use case where I want to read the version of a published extension on the Edge add-ons store.
The link of any published extension looks like this: https://microsoftedge.microsoft.com/addons/detail/incognito-adblocker/efpgcmfgkpmogadebodiegjleafcmdcb
The problem I am facing is that the span where the version is located (its ID is "versionLabel") has an ancestor div with the ID "root". If I inspect the live page I can see all the child divs of this "root" div, but in the page source (Ctrl + U) the div always shows up empty:
<div id="root" style="min-height: 100vh"></div>
I am using Jsoup to parse this page, but because the "root" div is empty I cannot read the "versionLabel" details. Is there any way to do this?
Here are the approaches I have already tried; none of them worked.
1.
String URL = "https://microsoftedge.microsoft.com/addons/detail/incognito-adblocker/efpgcmfgkpmogadebodiegjleafcmdcb";
Document doc = Jsoup.connect(URL).get();
Element version = doc.getElementById("versionLabel");
Document demo = Jsoup.parse(URL);
Element newHere = demo.getElementById("versionLabel");
2.
WebDriver driver = new ChromeDriver();
driver.get(URL);
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
WebElement e = driver.findElement(By.xpath("//*[text()='Get started free']"));
System.out.println(e);
3.
String webpage = "https://microsoftedge.microsoft.com/addons/detail/incognito-adblocker/efpgcmfgkpmogadebodiegjleafcmdcb";
URL url = new URL(webpage);
BufferedReader readr =
        new BufferedReader(new InputStreamReader(url.openStream()));
// the file into which the page is downloaded
BufferedWriter writer =
        new BufferedWriter(new FileWriter("Download.html"));
// read each line from the stream until the end
String line;
while ((line = readr.readLine()) != null) {
    writer.write(line);
}
readr.close();
writer.close();
In each of these approaches, because the "root" div itself is empty, I am not able to read the "versionLabel" span.
Can someone suggest a way to do this?
This will get the version from the 'versionLabel':
driver.find_element(By.XPATH, "(//span[@id='versionLabel'])[2]").text
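The one-liner above is Python; since the question's code is Java, an equivalent sketch with Selenium's Java bindings (Selenium 4 style, with an explicit wait for the JavaScript-rendered content) might look like this:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class EdgeStoreVersion {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://microsoftedge.microsoft.com/addons/detail/"
                    + "incognito-adblocker/efpgcmfgkpmogadebodiegjleafcmdcb");
            // Wait until the JavaScript app has populated the "root" div;
            // this is why Jsoup and a raw URL download both saw it empty.
            new WebDriverWait(driver, Duration.ofSeconds(10)).until(
                    ExpectedConditions.presenceOfElementLocated(By.id("versionLabel")));
            String version = driver.findElement(
                    By.xpath("(//span[@id='versionLabel'])[2]")).getText();
            System.out.println(version);
        } finally {
            driver.quit();
        }
    }
}
```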

How can I click on a specific link?

I'm making a console application similar to a search engine. The app asks the user for input, googles that input, and gathers the URLs from the first page of Google results. Then it asks the user which link they want to open, for example: "Which link should I open?", and the user types "First", "Second", "Third", etc. I have managed to get to the point where I gather all the URLs, but I don't know how to code the part where the user chooses which URL to open.
Here's the code:
Console.WriteLine("Search for:"); //User Input
string command = Console.ReadLine();
IWebDriver driver = new FirefoxDriver();
driver.Navigate().GoToUrl("http://www.google.com"); //Opens Firefox, Goes to Google.
driver.Manage().Window.Maximize();
IWebElement searchInput = driver.FindElement(By.Id("lst-ib"));
searchInput.SendKeys(command); //Types User Input in "Search"
searchInput.SendKeys(Keys.Enter); //Hits "Enter"
//Gets Urls
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(1));
By linkLocator = By.CssSelector("cite._Rm");
wait.Until(ExpectedConditions.ElementToBeClickable(linkLocator));
IReadOnlyCollection<IWebElement> links = driver.FindElements(linkLocator);
foreach (IWebElement link in links)
{
    Console.WriteLine(link.Text);
}
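The missing piece is mapping the typed word ("First", "Second", ...) to an index into the collected links. A minimal sketch of that mapping, written in Java here (the class and method names are mine; the C# translation is mechanical):

```java
import java.util.List;
import java.util.Locale;

public class LinkChooser {
    // Ordinal words the user may type, in index order.
    private static final List<String> ORDINALS =
            List.of("first", "second", "third", "fourth", "fifth");

    // Returns the zero-based index for a word like "First", or -1 if unknown.
    public static int toIndex(String word) {
        return ORDINALS.indexOf(word.trim().toLowerCase(Locale.ROOT));
    }

    // Picks the URL matching the user's answer, or null for unrecognized input.
    public static String pickLink(List<String> urls, String answer) {
        int i = toIndex(answer);
        return (i >= 0 && i < urls.size()) ? urls.get(i) : null;
    }

    public static void main(String[] args) {
        List<String> urls = List.of("https://a.example", "https://b.example");
        System.out.println(pickLink(urls, "Second")); // https://b.example
    }
}
```

In the original program, urls would hold the link.Text values collected in the foreach loop, and the chosen URL would then be passed to driver.Navigate().GoToUrl(...).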

HtmlAgilityPack returning random characters

I have the following code that uses HtmlAgilityPack to pull back the HTML for a number of websites. All seems to be working well apart from asos.com: when I run a URL through, it returns random characters (‹\b\0\0\0\0\0\0UÍ „ï&¾CãÁ¢ø›\bãhìÁ3-«Ziý}z‘š/»ómf³Ü`]In#iÉÑbr[œ¡Ä¬v7Ðœ¶7N[GáôSv;Ü°?[†.ã*3Ž¢G×ù6OƒäwPŒõH\rÙ¸\vzìmèÎ;M›4q_K¨Ð)
HtmlAgilityPack.HtmlDocument doc = new HtmlDocument();
doc.OptionReadEncoding = false;
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://www.asos.com/ASOS/ASOS-Sweatshirt-With-Contrast-Ribs/Prod/pgeproduct.aspx?iid=2765751&cid=14368&sh=0&pge=0&pgesize=20&sort=-1&clr=Red");
request.Timeout = 10000;
request.ReadWriteTimeout = 32000;
request.UserAgent = "TEST";
request.Method = "GET";
request.Accept = "text/html";
request.AllowAutoRedirect = false;
request.CookieContainer = new CookieContainer();
StreamReader reader = new StreamReader(request.GetResponse().GetResponseStream(), Encoding.Default); //put your encoding
doc.Load(reader);
string html = doc.DocumentNode.OuterHtml;
I have run the URL through Fiddler, but I can't see anything to suggest there should be a problem. Any ideas where I'm going wrong?
See header image from fiddler here: http://i.stack.imgur.com/2LRFY.png
This has nothing to do with Html Agility Pack; it's because you have set AllowAutoRedirect to false. Remove that line and it will work. The site apparently does a redirect, and you need to follow it if you want the final HTML text.
Note that Html Agility Pack has a utility HtmlWeb class that can download a file directly as an HtmlDocument:
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load(@"http://www.asos.com/ASOS/ASOS-Sweatshirt-With-Contrast-Ribs/Prod/pgeproduct.aspx?iid=2765751&cid=14368&sh=0&pge=0&pgesize=20&sort=-1&clr=Red");

I am not able to create a defect and associate it with a user story using the Rally REST API Java code below

I am not able to create a defect and associate it with a user story. I am using the Rally REST API Java code below.
JsonObject newDefect = new JsonObject();
newDefect.addProperty("Name", "defect added to check");
newDefect.addProperty("Description", "description added to check");
newDefect.addProperty("Requirement", "/userstory/11018012245");
newDefect.addProperty("SubmittedBy", "user/10832575945");
newDefect.addProperty("Workspace", "/workspace/10832575967");
newDefect.addProperty("Project", "project/10832575978");
CreateRequest createRequest = new CreateRequest("defect", newDefect);
CreateResponse createResponse = restApi.create(createRequest);
createResponse.getObject().get("FormattedID").toString();
Your code looks fine; it would appear that you've just got some missing "/"s on a couple of your refs:
newDefect.addProperty("SubmittedBy", "user/10832575945");
-->
newDefect.addProperty("SubmittedBy", "/user/10832575945");
newDefect.addProperty("Project", "project/10832575978");
-->
newDefect.addProperty("Project", "/project/10832575978");
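Since every Rally ref must start with "/", a small guard can normalize refs before they are added. This helper is my own sketch, not part of the Rally SDK:

```java
public class RefUtil {
    // Prepends "/" to a Rally ref like "user/10832575945" if it is missing,
    // and leaves already well-formed refs untouched.
    public static String ensureLeadingSlash(String ref) {
        return ref.startsWith("/") ? ref : "/" + ref;
    }

    public static void main(String[] args) {
        System.out.println(ensureLeadingSlash("user/10832575945"));      // /user/10832575945
        System.out.println(ensureLeadingSlash("/workspace/10832575967")); // /workspace/10832575967
    }
}
```

Wrapping each ref in ensureLeadingSlash(...) before calling newDefect.addProperty(...) makes the typo from the question impossible.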

How can I check for text on a site in VB.NET?

I am trying to create a simple program that logs in to a site using a WebBrowser control, and it does that fine, but I want it to check whether it actually logged in (i.e. whether correct details were entered) and return the result to the program. I figured I could search for text on the page after submitting the login to see whether it succeeded or failed. How would I search for text on the page? My current code is this:
Status.Text = "Validating details..."
WebBrowser1.Navigate("http://www.site.com/login")
wait(6000)
WebBrowser1.Document.GetElementById("username").SetAttribute("value", TextBox1.Text)
WebBrowser1.Document.GetElementById("password").SetAttribute("value", TextBox2.Text)
WebBrowser1.Document.GetElementById("login").InvokeMember("click")
You can use the All HtmlElementCollection of the Document property to get the inner text or inner HTML of each HTML element in the loaded document.
Here's a little test that I wrote:
public void Test()
{
    var browser = new WebBrowser();
    var handle = new AutoResetEvent(false);
    browser.DocumentCompleted += (sender, args) => {
        foreach (HtmlElement element in browser.Document.All)
            Console.WriteLine(element.InnerHtml);
        handle.Set();
    };
    browser.Navigate("http://www.google.com");
    handle.WaitOne();
}
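For the original question of detecting a successful login, the same idea reduces to searching the page text for marker strings. A language-agnostic sketch follows (shown in Java; the marker strings are hypothetical and must be replaced with text that actually appears on the target site's post-login and error pages):

```java
public class LoginChecker {
    // Returns true if the page text suggests a successful login.
    // The marker strings below are placeholders, not taken from any real site.
    public static boolean looksLoggedIn(String pageText) {
        String t = pageText.toLowerCase();
        // Failure markers win: an error page may still mention other words.
        if (t.contains("invalid username") || t.contains("login failed")) {
            return false;
        }
        return t.contains("logout") || t.contains("welcome");
    }

    public static void main(String[] args) {
        System.out.println(looksLoggedIn("Welcome back, user! Logout")); // true
        System.out.println(looksLoggedIn("Login failed: invalid username")); // false
    }
}
```

In the WebBrowser control, the text to scan would come from WebBrowser1.Document.Body.InnerText after the login attempt completes.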