NullPointerException in the foreach statement - arraylist

I am trying to run the program below, but I keep getting a NullPointerException that points at the first for statement, "for (ArrayList<String> arrlist : arlt)". Does anybody know what the problem is? Thank you very much.
public static ArrayList<String> LinesToBeShifted(ArrayList<ArrayList<String>> arlt) {
    ArrayList<String> shift = new ArrayList<String>();
    for (ArrayList<String> arrlist : arlt) {
        for (int i = 0; i < arrlist.size(); ++i) {
            shift.add(getString(arrlist));
            arrlist = wordShift(arrlist);
        }
    }
    return shift;
}

If you get a NullPointerException on that line, the most likely cause is that arlt itself is null: the enhanced for statement calls iterator() on it, which throws immediately. Check what the caller passes in.
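A minimal sketch of a defensive variant (keeping the original getString and wordShift helpers, which are assumed to be defined elsewhere), if you would rather skip null input than fail:

public static ArrayList<String> linesToBeShiftedSafe(ArrayList<ArrayList<String>> arlt) {
    ArrayList<String> shift = new ArrayList<String>();
    if (arlt == null) {
        return shift; // nothing to shift; avoids the NPE in the for statement
    }
    for (ArrayList<String> arrlist : arlt) {
        if (arrlist == null) {
            continue; // also guard against null inner lists
        }
        for (int i = 0; i < arrlist.size(); ++i) {
            shift.add(getString(arrlist));
            arrlist = wordShift(arrlist);
        }
    }
    return shift;
}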

Write Status of test case with Hash Map and Selenium

I am using a HashMap to read the excel data and use it in methods to perform if...else validations.
I am using a class file for initializing the HashMap that reads the data. It goes as shown below:
public class SampleDataset {
    public static HashMap<String, ArrayList<String>> main() throws IOException {
        final String DatasetSheet = "src/test/resources/SampleDataSet.xlsx";
        final String DatasetTab = "TestCase";
        Object[][] ab = DataLoader.ReadMyExcelData(DatasetSheet, DatasetTab);
        int rowcount = DataLoader.myrowCount(DatasetSheet, DatasetTab);
        int colcount = DataLoader.mycolCount(DatasetSheet, DatasetTab);
        HashMap<String, ArrayList<String>> map = new HashMap<String, ArrayList<String>>();
        // start at i = 2 to skip the column-name rows
        for (int i = 2; i < rowcount; i++) {
            ArrayList<String> mycolvalueslist = new ArrayList<String>();
            for (int j = 0; j < colcount; j++) {
                mycolvalueslist.add(ab[i][j].toString());
            }
            map.put(ab[i][0].toString(), mycolvalueslist);
        }
        return map;
    }
}
I am using this map in my testcase file, which is shown below:
@Test // Testcase
public void testThis() throws Exception {
    try {
        launchMainApplication();
        TestMain MainPage = new TestMain(tool, test, user, application);
        HashMap<String, ArrayList<String>> win = SampleDataset.main();
        SortedSet<String> keys = new TreeSet<>(win.keySet());
        for (String i : keys) {
            System.out.println("########### Test = " + win.get(i).get(0) + " ###########");
            MainPage.step01(win.get(i).get(1));
            MainPage.step02(win.get(i).get(2));
        }
        test.setResult("pass");
    } catch (AlreadyRunException e) {
    } catch (Exception e) {
        verificationErrors.append(e.getMessage());
        throw e;
    }
}
@Override
@After
public void tearDown() throws Exception {
    super.tearDown();
}
I want to write the status as PASS or FAIL for every test case initiated through the above for loop, back into the same excel file, by creating a new Status column for each test case row.
My excel sheet is as shown below. [screenshot of the sheet not reproduced here]
Create a global List. After every test case, add the status result to the list.
After all test cases are finished, iterate through the list and update your excel file, like this:
public static void main(String[] args) throws EncryptedDocumentException, IOException {
    // Step 1: load your excel file as a Workbook
    String excelFilePath = "D:\\Desktop\\testExcel.xlsx";
    Workbook workbook = WorkbookFactory.create(new FileInputStream(excelFilePath));
    // Step 2: modify your Workbook as you prefer
    Iterator<Sheet> sheetIterator = workbook.sheetIterator(); // Getting an iterator for all the sheets
    while (sheetIterator.hasNext()) {
        Iterator<Row> rowIterator = sheetIterator.next().rowIterator(); // Getting an iterator for all the rows (of the current sheet)
        while (rowIterator.hasNext()) {
            Row row = rowIterator.next();
            // Put here your internal logic to understand if the row needs some changes!
            // getLastCellNum() is one past the last used cell, so create the new Status cell there
            int cellsn = row.getLastCellNum();
            row.createCell(cellsn).setCellValue("String that you get from List = list.get(rownumber)");
        }
    }
    // Step 3: write the modified Workbook back to the file
    try (FileOutputStream out = new FileOutputStream(excelFilePath)) {
        workbook.write(out);
    }
    workbook.close();
}
You may need Apache POI.
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>5.0.0</version>
</dependency>
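For the collecting side the answer describes, a minimal sketch (the statuses list is hypothetical; wire it into testThis however fits your framework):

// Hypothetical global list, one entry per data row driven by the for loop
public static List<String> statuses = new ArrayList<>();

// Inside the for loop of testThis(): record PASS/FAIL per row instead of aborting
for (String i : keys) {
    try {
        MainPage.step01(win.get(i).get(1));
        MainPage.step02(win.get(i).get(2));
        statuses.add("PASS");
    } catch (Exception e) {
        statuses.add("FAIL");
    }
}

Catching per iteration keeps later rows running after one row fails, so the list lines up one status per excel row when you write it back.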

Unable to delete documents in documentum application using DFC

I have written the following code with the approach given in the EMC DFC 7.2 Development Guide. With this code I'm able to delete only 50 documents, even though there are more records. Before deletion I'm taking a dump of the object id. I'm not sure if there is any limit with IDfDeleteOperation. As this deletes only 50 documents, I tried using a DQL delete command; even there it was limited to 50 documents. I tried using the destroy() and destroyAllVersions() methods that the document has; even this didn't work for me. I have written everything in the main method.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfLoginInfo;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;
import java.io.BufferedWriter;
import java.io.FileWriter;

public class DeleteDoCAll {
    public static void main(String[] args) throws DfException {
        System.out.println("Started...");
        IDfClientX clientX = new DfClientX();
        IDfClient dfClient = clientX.getLocalClient();
        IDfSessionManager sessionManager = dfClient.newSessionManager();
        IDfLoginInfo loginInfo = clientX.getLoginInfo();
        loginInfo.setUser("username");
        loginInfo.setPassword("password");
        sessionManager.setIdentity("repo", loginInfo);
        IDfSession dfSession = sessionManager.getSession("repo");
        System.out.println(dfSession);
        IDfDeleteOperation delo = clientX.getDeleteOperation();
        IDfCancelCheckoutOperation cco = clientX.getCancelCheckoutOperation();
        try {
            String dql = "select r_object_id from my_report where folder('/Home', descend)";
            IDfQuery idfquery = new DfQuery();
            IDfCollection collection1 = null;
            try {
                idfquery.setDQL(dql);
                collection1 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
                int i = 1;
                while (collection1 != null && collection1.next()) {
                    String r_object_id = collection1.getString("r_object_id");
                    StringBuilder attributes = new StringBuilder();
                    IDfDocument iDfDocument = (IDfDocument) dfSession.getObject(new DfId(r_object_id));
                    attributes.append(iDfDocument.dump());
                    BufferedWriter writer = new BufferedWriter(new FileWriter("path to file", true));
                    writer.write(attributes.toString());
                    writer.close();
                    cco.setKeepLocalFile(true);
                    IDfCancelCheckoutNode cnode;
                    if (iDfDocument.isCheckedOut()) {
                        if (iDfDocument.isVirtualDocument()) {
                            IDfVirtualDocument vdoc = iDfDocument.asVirtualDocument("CURRENT", false);
                            cnode = (IDfCancelCheckoutNode) cco.add(iDfDocument);
                        } else {
                            cnode = (IDfCancelCheckoutNode) cco.add(iDfDocument);
                        }
                        if (cnode == null) {
                            System.out.println("Node is null");
                        }
                        if (!cco.execute()) {
                            System.out.println("Cancel check out operation failed");
                        } else {
                            System.out.println("Cancelled check out for " + r_object_id);
                        }
                    }
                    delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
                    IDfDeleteNode node = (IDfDeleteNode) delo.add(iDfDocument);
                    if (node == null) {
                        System.out.println("Node is null");
                        System.out.println(i);
                        i += 1;
                    }
                    if (delo.execute()) {
                        System.out.println("Delete operation done");
                        System.out.println(i);
                        i += 1;
                    } else {
                        System.out.println("Delete operation failed");
                        System.out.println(i);
                        i += 1;
                    }
                }
            } finally {
                if (collection1 != null) {
                    collection1.close();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            sessionManager.release(dfSession);
        }
    }
}
I don't know where I'm making a mistake; every time I try, the program stops at the 50th iteration. Can you please help me delete all documents in the proper way? Thanks a lot!
First select all document IDs into a List<IDfId>, for example, and close the collection. Don't do other expensive operations inside the opened collection, because you are unnecessarily blocking it.
This is why it did only 50 documents: you had one main open collection, and each execution of the delete operation opened another collection, which probably hit some limit. So, as I said, it is better to consume the collection first and then work with that data:
IDfQuery query = new DfQuery();
IDfCollection collection = null;
List<IDfId> ids = new ArrayList<>();
try {
    query.setDQL("SELECT r_object_id FROM my_report WHERE FOLDER('/Home', DESCEND)");
    collection = query.execute(session, IDfQuery.DF_READ_QUERY);
    while (collection.next()) {
        ids.add(collection.getId("r_object_id"));
    }
} finally {
    if (collection != null) {
        collection.close();
    }
}
After that you can iterate through the list and do all the actions you need on each document. But don't execute the delete operation in each iteration - that is inefficient. Instead, add all documents into one operation and execute it once at the end:
IDfDeleteOperation deleteOperation = clientX.getDeleteOperation();
deleteOperation.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    ...
    deleteOperation.add(document);
}
deleteOperation.execute();
The same goes for the IDfCancelCheckoutOperation.
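A sketch of that same batching for cancel-checkout, mirroring the delete example above (only checked-out documents need a node):

IDfCancelCheckoutOperation cancelOperation = clientX.getCancelCheckoutOperation();
cancelOperation.setKeepLocalFile(true);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    if (document.isCheckedOut()) {
        cancelOperation.add(document); // one node per checked-out document
    }
}
cancelOperation.execute(); // single execution for all cancellations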
Another thing - when you are using a FileWriter, call close() in a finally block, or use try-with-resources like this:
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(document.dump());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}
Using a StringBuilder is a good idea, but create it only once at the beginning, append all the attributes in each iteration, and write the content of the StringBuilder to the file once at the end - writing during each iteration is slow.
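Put together, a minimal sketch of that pattern (reusing the ids list from above):

StringBuilder attributes = new StringBuilder();
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    attributes.append(document.dump()); // collect everything in memory first
}
// single write at the end instead of one write per document
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(attributes.toString());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}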
You could just do this from inside your code:
DELETE my_report OBJECTS WHERE FOLDER('/Home', DESCEND)
No need to fetch information you are throwing away again ;-)
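As a sketch, that statement can be run through the same session objects used above with an IDfQuery (DF_QUERY rather than DF_READ_QUERY, since it modifies the repository):

IDfQuery deleteQuery = new DfQuery();
deleteQuery.setDQL("DELETE my_report OBJECTS WHERE FOLDER('/Home', DESCEND)");
IDfCollection result = deleteQuery.execute(dfSession, IDfQuery.DF_QUERY);
if (result != null) {
    result.close(); // always release the collection
}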
You're probably hitting the result set limit of the DFC client.
Try adding these lines to dfc.properties and rerun your code to see if you can delete more than 50 rows; afterwards, adjust the values to your needs.
dfc.search.max_results = 100
dfc.search.max_results_per_source = 100

Getting error in reading data from excel in selenium webdriver (java)

Here is my code
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class MyClass {
    public void readExcel(String filePath, String fileName, String sheetName) throws IOException {
        System.setProperty("webdriver.chrome.driver", "C:\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        // To maximize the browser screen
        driver.manage().window().maximize();
        // Test 5: Excel read
        File file = new File(filePath + "\\" + fileName);
        FileInputStream inputStream = new FileInputStream(file);
        String fileExtensionName = fileName.substring(fileName.indexOf("."));
        Workbook guru99Workbook = null;
        if (fileExtensionName.equals(".xlsx")) {
            guru99Workbook = new XSSFWorkbook(inputStream);
        } else if (fileExtensionName.equals(".xls")) {
            guru99Workbook = new HSSFWorkbook(inputStream);
        }
        Sheet guru99Sheet = guru99Workbook.getSheet(sheetName);
        // Find the number of rows in the excel file
        int rowCount = guru99Sheet.getLastRowNum() - guru99Sheet.getFirstRowNum();
        for (int i = 0; i < rowCount + 1; i++) {
            Row row = guru99Sheet.getRow(i);
            // Loop to print the cell values in a row
            for (int j = 0; j < row.getLastCellNum(); j++) {
                // Print excel data to the console
                System.out.print(row.getCell(j).getStringCellValue() + "|| ");
            }
        }
    }

    // The main function calls readExcel to read data from the excel file
    public static void main(String... strings) throws IOException {
        // Create an object of the MyClass class
        MyClass objExcelFile = new MyClass();
        // Prepare the path of the excel file
        String filePath = System.getProperty("user.dir") + "\\src\\newpackage";
        // Call the read file method of the class to read data
        objExcelFile.readExcel(filePath, "Keywords.xlsx", "ExcelGuru99Demo");
    }
}
Here is the error:
Exception in thread "main" java.lang.NoSuchFieldError: RAW_XML_FILE_HEADER
    at org.apache.poi.poifs.filesystem.FileMagic.<clinit>(FileMagic.java:42)
    at org.apache.poi.openxml4j.opc.internal.ZipHelper.openZipStream(ZipHelper.java:208)
    at org.apache.poi.openxml4j.opc.ZipPackage.<init>(ZipPackage.java:98)
    at org.apache.poi.openxml4j.opc.OPCPackage.open(OPCPackage.java:324)
    at org.apache.poi.util.PackageHelper.open(PackageHelper.java:37)
    at org.apache.poi.xssf.usermodel.XSSFWorkbook.<init>(XSSFWorkbook.java:295)
    at newpackage.MyClass.readExcel(MyClass.java:139)
    at newpackage.MyClass.main(MyClass.java:184)
PS: I am new to Selenium, so I am learning this feature from:
https://www.guru99.com/all-about-excel-in-selenium-poi-jxl.html
Please help me, TIA.
Hi, I googled it and found the solution to my error:
I had to include one more jar,
xmlbeans-2.3.0.jar
No such error or suggestion was given while creating/building the code; I wonder why not.
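If you manage dependencies with Maven instead of raw jars, the equivalent would be something like this (version matching the jar above):

<dependency>
    <groupId>org.apache.xmlbeans</groupId>
    <artifactId>xmlbeans</artifactId>
    <version>2.3.0</version>
</dependency>

Note that poi-ooxml normally pulls xmlbeans in transitively, so this is mainly an issue with manual jar setups where one jar is missing from the classpath.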

AsyncTask doInBackground() does not execute correctly on run, but works on debugger

@Override
protected ArrayList<HashMap<String, String>> doInBackground(Void... params) {
    ArrayList<HashMap<String, String>> PLIST = new ArrayList<>();
    HttpHandler sh = new HttpHandler();
    String jsonStr = sh.makeServiceCall(jsonUrl);
    ArrayList<String> URLList = new ArrayList<>();
    if (jsonStr != null) {
        placesList.clear();
        try {
            JSONObject jsonObj = new JSONObject(jsonStr);
            // Getting JSON Array node
            JSONArray placesJsonArray = jsonObj.getJSONArray("results");
            String pToken = "";
            // looping through all places
            for (int i = 0; i < placesJsonArray.length(); i++) {
                JSONObject placesJSONObject = placesJsonArray.getJSONObject(i);
                String id = placesJSONObject.getString("id");
                String name = placesJSONObject.getString("name");
                HashMap<String, String> places = new HashMap<>();
                // adding each child node to HashMap key => value
                places.put("id", id);
                places.put("name", name);
                PLIST.add(places);
            }
            //TODO: fix this...
            if (SEARCH_RADIUS == 1500) {
                Log.e(TAG, "did it get to 1500?");
                try {
                    for (int k = 0; k < 2; k++) {
                        // ERROR HERE: "no value for next_page_token" is thrown on the next line.
                        // If I place a breakpoint here, the debugger runs correctly and returns
                        // more than 20 results whenever there is a next_page_token.
                        pToken = jsonObj.getString("next_page_token");
                        String newjsonUrl = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="
                                + midpointLocation.getLatitude() + "," + midpointLocation.getLongitude()
                                + "&radius=" + SEARCH_RADIUS + "&key=AIzaSyCiK0Gnape_SW-53Fnva09IjEGvn55pQ8I&pagetoken=" + pToken;
                        URLList.add(newjsonUrl);
                        jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newjsonUrl)); //moved
                        Log.e(TAG, "page does this try catch");
                    }
                } catch (Exception e) {
                    Log.e(TAG, "page token not found: " + e.toString());
                }
                for (String url : URLList) {
                    Log.e(TAG, "url is : " + url);
                }
I made an ArrayList of URLs after many attempts to debug this code; I planned on unpacking the ArrayList after all the urls with next_page_tokens were added, and then parsing through each of them later. When running the debugger with the breakpoint on pToken = jsonObj.getString("next_page_token"), I correctly get the first url from the logger and then the second url. When I run as-is, I get the first url and then the following error: JSONException: No value for next_page_token.
Things I've tried:
- Invalidating caches and restarting
- Clean build
- Running on different SDK versions
- Making sure the if statement is hit (SEARCH_RADIUS == 1500)
Any help would be much appreciated, thanks!
The function is called in a listener, like this:
new GetPlaces(new AsyncResponse() {
    @Override
    public void processFinish(ArrayList<HashMap<String, String>> output) {
        Log.e(TAG, "outputasync:");
        placesList = output;
    }
}).execute();
My onPostExecute method:
@Override
protected void onPostExecute(ArrayList<HashMap<String, String>> result) {
    delegate.processFinish(result);
    // Dismiss the progress dialog
    if (pDialog.isShowing())
        pDialog.dismiss();
}
It turns out that the Google Places API takes a few milliseconds to validate the next_page_token after it is generated. As such, I used a wait to pause before building the new url from the next_page_token. This fixed my problem. Thanks for the help.
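A minimal sketch of that pause inside the paging loop, using Thread.sleep as one way to wait (the two-second delay and the YOUR_API_KEY placeholder are assumptions; tune both to your case):

for (int k = 0; k < 2; k++) {
    if (!jsonObj.has("next_page_token")) {
        break; // no more pages to fetch
    }
    pToken = jsonObj.getString("next_page_token");
    try {
        // Give the Places API a moment to activate the freshly issued token
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
    String newjsonUrl = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="
            + midpointLocation.getLatitude() + "," + midpointLocation.getLongitude()
            + "&radius=" + SEARCH_RADIUS + "&pagetoken=" + pToken + "&key=YOUR_API_KEY";
    jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newjsonUrl));
}

Checking has("next_page_token") first also avoids the JSONException on the last page, where the token is simply absent.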

Selenium WebDriver generates StaleElementReferenceExeption on getText() on table elements

The current environment:
Selenium Server version 2.37.0
RemoteWebDriver running on Firefox
no Ajax / asynchronously loaded content
My tests are attempting to validate the content of each cell of an HTML table. Before accessing any table element an explicit wait verifies that the <tbody> element exists
ExpectedCondition<WebElement> recruitTableIsPresent = ExpectedConditions.presenceOfElementLocated(By.id("newRecruitFieldAgentWidget:newRecruitDataTable_data"));
new WebDriverWait(driver, 5).until(recruitTableIsPresent);
Once the table is verified to exist, data is pulled out by row and column
private Stats[] parseStats() {
    String xpath = "//tbody[@id='regionalFieldAgentWidget:regionalDataTable_data']/tr[%d]/td[%d]";
    Stats[] stats = new Stats[3];
    for (int i = 0; i < stats.length; i++) {
        String inProgressOrders = cellContent(xpath, i, 1);
        String maxCapacity = cellContent(xpath, i, 2);
        String allocationRatio = cellContent(xpath, i, 3);
        stats[i] = new Stats(inProgressOrders, maxCapacity, allocationRatio);
    }
    return stats;
}

private String cellContent(String xpathTemplate, int row, int cell) {
    String xpath = String.format(xpathTemplate, row + 1, cell + 1);
    new WebDriverWait(driver, 10).until(ExpectedConditions.presenceOfElementLocated(By.xpath(xpath)));
    WebElement elementByXPath = driver.findElementByXPath(xpath);
    return elementByXPath.getText();
}
I don't see any race conditions, since the table content is populated with the page and not in an asynchronous call. Additionally, I have seen other answers suggesting that invoking findElement() via the driver instance will refresh the cache. Lastly, the explicit wait before accessing the element should ensure that the <td> tag is present.
What could be causing the getText() method to return the following exception?
org.openqa.selenium.StaleElementReferenceException: Element not found in the cache - perhaps the page has changed since it was looked up
It's worthwhile to note that the failure is intermittent: some executions fail while others pass through the same code. The table cells causing the failure are also not consistent.
There is a solution to this using the Html Agility Pack.
This will only work if you want to read data from the page.
It goes like this:
// Convert the page content into a document node.
HtmlNode _getHtmlNode(IWebDriver driver) {
    var htmlDocument = new HtmlDocument();
    htmlDocument.LoadHtml(driver.PageSource);
    return htmlDocument.DocumentNode;
}
private Stats[] parseStats() {
    String xpath = "//tbody[@id='regionalFieldAgentWidget:regionalDataTable_data']/tr[%d]/td[%d]";
    Stats[] stats = new Stats[3];
    for (int i = 0; i < stats.Length; i++) {
        String inProgressOrders = cellContent(xpath, i, 1);
        String maxCapacity = cellContent(xpath, i, 2);
        String allocationRatio = cellContent(xpath, i, 3);
        stats[i] = new Stats(inProgressOrders, maxCapacity, allocationRatio);
    }
    return stats;
}

private String cellContent(String xpathTemplate, int row, int cell) {
    String xpath = String.format(xpathTemplate, row + 1, cell + 1);
    new WebDriverWait(driver, 10).until(ExpectedConditions.presenceOfElementLocated(By.xpath(xpath)));
    var documentNode = _getHtmlNode(driver);
    var elementByXPath = documentNode.SelectSingleNode(xpath);
    return elementByXPath.InnerText;
}
Now read any data you need.
Some tips for using HtmlNode:
1. Similar to driver.FindElement: document.SelectSingleNode
2. Similar to driver.FindElements: document.SelectNodes
3. Similar to driver.Text: document.InnerText
For more, search for the HtmlNode documentation.
Turns out there was a race condition after all. Since jQuery is available via PrimeFaces, there is a very handy solution mentioned in a few other posts. I implemented the following method to wait for any asynchronous requests to return before parsing page elements:
public static void waitForPageLoad(JavascriptExecutor jsContext) {
    while (getActiveConnections(jsContext) > 0) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ex) {
            throw new RuntimeException(ex);
        }
    }
}

private static long getActiveConnections(JavascriptExecutor jsContext) {
    return (Long) jsContext.executeScript("return (window.jQuery || { active : 0 }).active");
}
Each built-in driver implementation implements the JavascriptExecutor interface, so the calling code is very straightforward:
WebDriver driver = new FirefoxDriver();
waitForPageLoad((JavascriptExecutor) driver);