How to sort a list - Selenium

I need to check if a list is in alphabetical order. The System.out.println output makes it look like the list is in alphabetical order, but the assertion reports that it is NOT. Any help is greatly appreciated!
java.lang.AssertionError: Advertiser is not in alphabetical order. expected [true] but found [false]
My Script:
try {
    WebDriverWait wait = new WebDriverWait(driver, 30);
    List<String> displayed = new ArrayList<String>();
    List<String> sorted = new ArrayList<String>();
    List<WebElement> verifyAdvertiser = wait
            .until(ExpectedConditions.visibilityOfAllElementsLocatedBy(By
                    .xpath(".//*[@id='basic']/div[4]/table/tbody[2]/tr")));
    System.out.println("NUMBER OF ROWS IN THIS TABLE = "
            + verifyAdvertiser.size());
    for (WebElement element : verifyAdvertiser) {
        System.out.println("" + element.getText());
        displayed.add(element.getText());
        sorted.add(element.getText());
    }
    Collections.sort(sorted);
    log.info(sorted);
    if (!displayed.equals(sorted)) {
        final String failedMsg = "Advertiser is not in alphabetical order.";
        log.error(failedMsg, null);
        boolean passed = false;
        Assert.assertTrue(passed, failedMsg);
    }
} catch (Exception e) {
    final String failedMsg = "Failed trying to check agency names in alphabetical order.";
    log.error(failedMsg, e);
    boolean passed = false;
    Assert.assertTrue(passed, failedMsg);
}
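One likely cause, though this is only a guess since the actual row text is not shown: Collections.sort is case-sensitive and compares the full, untrimmed row text, so a list that looks alphabetical on screen can still differ from the sorted copy by letter case or surrounding whitespace. A minimal sketch of a trimmed, case-insensitive comparison, reusing the displayed list from the script above:
// Sketch: compare the on-screen order against a trimmed, case-insensitive sort.
// Assumes `displayed` already holds the row text in on-screen order.
List<String> actualOrder = new ArrayList<String>();
for (String text : displayed) {
    actualOrder.add(text.trim());
}
List<String> expectedOrder = new ArrayList<String>(actualOrder);
Collections.sort(expectedOrder, String.CASE_INSENSITIVE_ORDER);
Assert.assertEquals(actualOrder, expectedOrder, "Advertiser is not in alphabetical order.");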

Related

How do I click on an element when sometimes element 1 and sometimes element 2 appears, at different intervals

I have done this
List<WebElement> element = driver.findElements(By.xpath(""));
for (int i = 0; i < element.size(); i++)
{
    driver.findElement(By.xpath("")).isDisplayed();
    driver.findElement(By.xpath("")).click();
}
I am new to Java and Selenium, so I thought of doing this. Is this logic correct or am I wrong? If wrong (most probably), can you please rectify it and explain alongside? That would be very helpful.
I get an "element not interactable" error on this.
In case you are looking for a method that waits for one of two elements to appear and then clicks the one that appeared, you can use this method:
public String waitForOneOfTwoElements(String xpath1, String xpath2) {
    wait = new WebDriverWait(driver, 30);
    try {
        wait.until(ExpectedConditions.or(
                ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath1)),
                ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath2))));
        if (waitForElementToBeVisible(xpath1)) {
            return xpath1;
        } else if (waitForElementToBeVisible(xpath2)) {
            return xpath2;
        }
    } catch (Throwable t) {
        return null;
    }
    return null;
}
Where waitForElementToBeVisible is:
public boolean waitForElementToBeVisible(String xpath) {
    wait = new WebDriverWait(driver, 30);
    try {
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath(xpath)));
        return true;
    } catch (Throwable t) {
        return false;
    }
}
Now, when you get the appeared element you can simply click it with:
String appearedElementXpath = waitForOneOfTwoElements(xpath1,xpath2);
driver.findElement(By.xpath(appearedElementXpath)).click();
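One design note on this approach: after ExpectedConditions.or() succeeds, the follow-up waitForElementToBeVisible call can spend up to another 30 seconds on the locator that did not appear. A hedged alternative, sketched here with a hypothetical helper name, is to check which locator actually matched without starting a second wait:
// Sketch (hypothetical helper): after the combined wait succeeds, see which
// locator is actually present without another 30-second wait.
// Assumes `driver` is the same WebDriver used above.
public String whichAppeared(String xpath1, String xpath2) {
    if (!driver.findElements(By.xpath(xpath1)).isEmpty()) {
        return xpath1;
    }
    if (!driver.findElements(By.xpath(xpath2)).isEmpty()) {
        return xpath2;
    }
    return null;
}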

Selenium not inputting the whole text into a text box

I am using Selenium to clear out the existing Shipping Address fields of a Salesforce Account object and to assign new values. I am coding in C# and running on Visual Studio 2019.
I am getting cases where the textboxes are not getting fully populated.
My code is below.
private string shippingStreet = "56789 Liberty Street"; // 80 character limit
private string shippingCity = "Toronto"; // 40 character limit
private string shippingState = "ON"; // 80 character limit
private string shippingZip = "87654"; // 20 character limit
private string shippingCountry = "Canada"; // 80 character limit
IWebElement shStreet = driver.FindElement(By.XPath("//textarea[@placeholder='Shipping Street']"));
shStreet.Clear();
shStreet.SendKeys(shippingStreet);
IWebElement shCity = driver.FindElement(By.XPath("//input[@placeholder='Shipping City']"));
shCity.Clear();
shCity.SendKeys(shippingCity);
IWebElement shState = driver.FindElement(By.XPath("//input[@placeholder='Shipping State/Province']"));
shState.Clear();
shState.SendKeys(shippingState);
IWebElement shZip = driver.FindElement(By.XPath("//input[@placeholder='Shipping Zip/Postal Code']"));
shZip.Clear();
shZip.SendKeys(shippingZip);
IWebElement shCountry = driver.FindElement(By.XPath("//input[@placeholder='Shipping Country']"));
shCountry.Clear();
shCountry.SendKeys(shippingCountry);
Please see the screenshot.
I fixed this issue by adding an extra space after the city, state, zip code, and country, but I was wondering if there is a better solution.
You can try this method:
Just call it with the XPath: WaitForElementDisplayed_byXPathTime("//myPath");
WaitForElementDisplayed_byXPathTime
public static void WaitForElementDisplayed_byXPathTime(string value)
{
    var wait = new WebDriverWait(Driver, new TimeSpan(0, 0, 30));
    wait.Until(webDriver => webDriver.FindElement(By.XPath(value)).Displayed);
}
The other thing I have done in these cases is create a typing method that sends individual characters, like you would on mobile. This just slows things down a bit.
public static void TypeCharsIndividually(IWebElement element, string expectedValue)
{
    // use your code for element displayed and element enabled
    element.Click();
    element.Clear();
    foreach (char c in expectedValue)
    {
        element.SendKeys(c.ToString());
        Thread.Sleep(100);
    }
}
JavaScript click
public static void ClickJava(IWebElement element)
{
    IJavaScriptExecutor executor = (IJavaScriptExecutor)driver;
    executor.ExecuteScript("arguments[0].click();", element);
}
WaitForElement
public static bool WaitForElementDisplayed_byXPath(string path)
{
    var result = true;
    try { _wait.Until(webDriver => driver.FindElement(By.XPath(path)).Displayed); }
    catch (StaleElementReferenceException) { WaitForElementDisplayed_byXPath(path); }
    catch (NoSuchElementException) { WaitForElementDisplayed_byXPath(path); }
    catch (WebDriverTimeoutException) { result = false; }
    return result;
}
This is a Salesforce issue. I see the problem even when I am manually updating a shipping address field and I tab to another field.

AsyncTask doInBackground() does not execute correctly on run, but works on debugger

@Override
protected ArrayList<HashMap<String, String>> doInBackground(Void... params) {
    ArrayList<HashMap<String, String>> PLIST = new ArrayList<>();
    HttpHandler sh = new HttpHandler();
    String jsonStr = sh.makeServiceCall(jsonUrl);
    ArrayList<String> URLList = new ArrayList<>();
    if (jsonStr != null) {
        placesList.clear();
        try {
            JSONObject jsonObj = new JSONObject(jsonStr);
            // Getting JSON Array node
            JSONArray placesJsonArray = jsonObj.getJSONArray("results");
            String pToken = "";
            // looping through All Places
            for (int i = 0; i < placesJsonArray.length(); i++) {
                JSONObject placesJSONObject = placesJsonArray.getJSONObject(i);
                String id = placesJSONObject.getString("id");
                String name = placesJSONObject.getString("name");
                HashMap<String, String> places = new HashMap<>();
                // adding each child node to HashMap key => value
                places.put("id", id);
                places.put("name", name);
                PLIST.add(places);
            }
            //TODO: fix this...
            if (SEARCH_RADIUS == 1500) {
                Log.e(TAG, "did it get to 1500?");
                try {
                    for (int k = 0; k < 2; k++) {
                        // ERROR HERE: the error is "no value for next_page_token" on the next line
                        pToken = jsonObj.getString("next_page_token"); // if I place a breakpoint here, the debugger runs correctly and returns more than 20 results if there is a next_page_token
                        String newjsonUrl = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="
                                + midpointLocation.getLatitude() + "," + midpointLocation.getLongitude()
                                + "&radius=" + SEARCH_RADIUS + "&key=AIzaSyCiK0Gnape_SW-53Fnva09IjEGvn55pQ8I&pagetoken=" + pToken;
                        URLList.add(newjsonUrl);
                        jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newjsonUrl)); // moved
                        Log.e(TAG, "page does this try catch");
                    }
                }
                catch (Exception e) {
                    Log.e(TAG, "page token not found: " + e.toString());
                }
                for (String url : URLList) {
                    Log.e(TAG, "url is : " + url);
                }
I made an ArrayList of URLs after many attempts to debug this code; I planned on unpacking the ArrayList after all the URLs with next_page_tokens were added, and then parsing each of them later. When running the debugger with a breakpoint on pToken = jsonObj.getString("next_page_token") I get the first URL from the logger, and then the second URL, correctly. When I run as is, I get the first URL and then the following error: JSONException: No value for next_page_token.
Things I've tried
Invalidating Caches and restarting
Clean Build
Run on different SDK versions
Made sure that the if statement (SEARCH_RADIUS == 1500) is being hit
Any help would be much appreciated, thanks!
The function is called from a listener like this:
new GetPlaces(new AsyncResponse() {
    @Override
    public void processFinish(ArrayList<HashMap<String, String>> output) {
        Log.e(TAG, "outputasync:");
        placesList = output;
    }
}).execute();
My onPostExecute method:
@Override
protected void onPostExecute(ArrayList<HashMap<String, String>> result) {
    delegate.processFinish(result);
    // Dismiss the progress dialog
    if (pDialog.isShowing())
        pDialog.dismiss();
}
It turns out that the Google Places API takes a short time to validate the next_page_token after it is generated. As such, I used a wait to pause before building the new URL from the next_page_token. This fixed my problem. Thanks for the help.
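A minimal sketch of that pause, assuming it goes right where the token is read inside the paging loop (the exact placement and the 2-second value are assumptions; baseUrl below is a hypothetical stand-in for the nearby-search URL built in the original code):
// Sketch: give a freshly issued next_page_token time to become valid before using it.
pToken = jsonObj.getString("next_page_token");
try {
    Thread.sleep(2000); // assumed delay; tune as needed
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
}
String newJsonUrl = baseUrl + "&pagetoken=" + pToken; // baseUrl is hypothetical here
jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newJsonUrl));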

Get dropdown values using actions class

After moving to a value in the dropdown, I want to get it using the Actions class. Below is the code I have written; I am trying to print the dropdown values. The HTML tag for the dropdown is input (I already have code for the select case). Please help me.
public static void caseSearch()
{
    try
    {
        Actions a = new Actions(driver);
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        logger.info("clicking on cases tab :: ");
        driver.findElement(By.xpath(loader.getProperty(Constants.CaseTab))).click();
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        WebElement ele = driver.findElement(By.xpath(loader.getProperty(Constants.CaseSearch)));
        ele.click();
        for (int i = 0; i <= 20; i++)
        {
            // i want to print the first value of dropdown in console
            a.sendKeys(Keys.DOWN, Keys.DOWN, Keys.DOWN).build().perform();
            String value = ele.getText();
            System.out.println("value is = " + value);
            a.sendKeys(Keys.DOWN, Keys.DOWN).build().perform();
            Thread.sleep(3000);
            a.sendKeys(Keys.ENTER).build().perform();
        }
    }
    catch (Exception e)
    {
        logger.info("case search method is not executed :: " + e);
    }
}
I think you forgot to get the value from the dropdown box.
You need to use the Select Class to get it.
Actions key = new Actions(browser);
WebElement dropdownElement = this.browser.findElement(By.xpath("YOUR DROPDOWN ELEMENT"));
Select dropOptions = new Select(dropdownElement);
int elements = dropOptions.getOptions().size();
for (int i = 0; i < elements; i++) {
    dropdownElement.click();
    if (i > 0) {
        Thread.sleep(1000L);
        key.sendKeys(Keys.DOWN).build().perform();
        Thread.sleep(1000L);
        key.sendKeys(Keys.ENTER).build().perform();
    }
    else {
        Thread.sleep(1000L);
        ActionPerformers.sendActionKey(browser, Keys.ENTER);
    }
    System.out.println("ELEMENT " + i + " -> " + dropOptions.getFirstSelectedOption().getText()); // Here you will get the selected value
}
This is only an example; I hope it suits you.
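Note that the Select class only works on a real <select> element, and the question says this dropdown is driven by an <input>. A hedged sketch for that case (the //ul/li locator is an assumption, since the actual popup markup isn't shown; ele and driver come from the question's code):
// Sketch: open the input-driven dropdown, then read the visible option texts.
// Adjust the list-item locator to the real popup markup.
ele.click();
List<WebElement> options = driver.findElements(By.xpath("//ul/li"));
for (WebElement option : options) {
    System.out.println("value is = " + option.getText());
}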

Selenium WebDriver generates StaleElementReferenceException on getText() on table elements

The current environment:
Selenium Server version 2.37.0
RemoteWebDriver running on Firefox
no Ajax / asynchronously loaded content
My tests are attempting to validate the content of each cell of an HTML table. Before accessing any table element, an explicit wait verifies that the <tbody> element exists:
ExpectedCondition<WebElement> recruitTableIsPresent = ExpectedConditions.presenceOfElementLocated(By.id("newRecruitFieldAgentWidget:newRecruitDataTable_data"));
new WebDriverWait(driver, 5).until(recruitTableIsPresent);
Once the table is verified to exist, data is pulled out by row and column
private Stats[] parseStats() {
    String xpath = "//tbody[@id='regionalFieldAgentWidget:regionalDataTable_data']/tr[%d]/td[%d]";
    Stats[] stats = new Stats[3];
    for (int i = 0; i < stats.length; i++) {
        String inProgressOrders = cellContent(xpath, i, 1);
        String maxCapacity = cellContent(xpath, i, 2);
        String allocationRatio = cellContent(xpath, i, 3);
        stats[i] = new Stats(inProgressOrders, maxCapacity, allocationRatio);
    }
    return stats;
}

private String cellContent(String xpathTemplate, int row, int cell) {
    String xpath = String.format(xpathTemplate, row + 1, cell + 1);
    new WebDriverWait(driver, 10).until(ExpectedConditions.presenceOfElementLocated(By.xpath(xpath)));
    WebElement elementByXPath = driver.findElementByXPath(xpath);
    return elementByXPath.getText();
}
I don't see any race conditions, since the table content is populated with the page, and not in an asynchronous call. Additionally, I have seen other answers that suggest invoking findElement() via the driver instance will refresh the cache. Lastly, the explicit wait before accessing the element should ensure that the <TD> tag is present.
What could be causing the getText() method to throw the following exception:
org.openqa.selenium.StaleElementReferenceException: Element not found in the cache - perhaps the page has changed since it was looked up
It's worthwhile to note that the failure is intermittent. Some executions fail while others pass through the same code. The table cells causing the failure are also not consistent.
There is a solution to this using Html-Agility-Pack.
This will work only if you want to read the data from that page.
It goes like this:
// Convert the page source into an HtmlAgilityPack document node.
HtmlNode _getHtmlNode(IWebDriver driver) {
    var htmlDocument = new HtmlDocument();
    htmlDocument.LoadHtml(driver.PageSource);
    return htmlDocument.DocumentNode;
}

private Stats[] parseStats() {
    string xpath = "//tbody[@id='regionalFieldAgentWidget:regionalDataTable_data']/tr[{0}]/td[{1}]";
    Stats[] stats = new Stats[3];
    for (int i = 0; i < stats.Length; i++) {
        string inProgressOrders = cellContent(xpath, i, 1);
        string maxCapacity = cellContent(xpath, i, 2);
        string allocationRatio = cellContent(xpath, i, 3);
        stats[i] = new Stats(inProgressOrders, maxCapacity, allocationRatio);
    }
    return stats;
}

private string cellContent(string xpathTemplate, int row, int cell) {
    string xpath = string.Format(xpathTemplate, row + 1, cell + 1);
    new WebDriverWait(driver, TimeSpan.FromSeconds(10)).Until(ExpectedConditions.ElementExists(By.XPath(xpath)));
    var documentNode = _getHtmlNode(driver);
    var elementByXPath = documentNode.SelectSingleNode(xpath);
    return elementByXPath.InnerText;
}
Now you can read any data.
Some tips for using HtmlNode:
1. Similar to driver.FindElement: documentNode.SelectSingleNode
2. Similar to driver.FindElements: documentNode.SelectNodes
3. Similar to IWebElement.Text: node.InnerText
For more, search for the HtmlNode documentation.
It turns out there was a race condition after all. Since jQuery is available via PrimeFaces, there is a very handy solution mentioned in a few other posts. I implemented the following method to wait for any asynchronous requests to finish before parsing page elements:
public static void waitForPageLoad(JavascriptExecutor jsContext) {
    while (getActiveConnections(jsContext) > 0) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            throw new RuntimeException(ex);
        }
    }
}

private static long getActiveConnections(JavascriptExecutor jsContext) {
    return (Long) jsContext.executeScript("return (window.jQuery || { active : 0 }).active");
}
Each built-in driver implementation implements the JavascriptExecutor interface, so the calling code is very straightforward:
WebDriver driver = new FirefoxDriver();
waitForPageLoad((JavascriptExecutor) driver);
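If stale references still show up occasionally even after waiting for pending requests, another common mitigation (a sketch, not part of the original fix) is to retry the lookup when the exception is thrown, re-finding the element by its locator each time:
// Sketch (hypothetical helper): re-find and re-read a cell a few times if it
// goes stale between lookup and getText(). Assumes `driver` is the active WebDriver.
private String cellTextWithRetry(String xpath) {
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            return driver.findElement(By.xpath(xpath)).getText();
        } catch (StaleElementReferenceException e) {
            // The DOM was re-rendered between find and read; look the element up again.
        }
    }
    throw new StaleElementReferenceException("Element at " + xpath + " kept going stale");
}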