How to append data to a .xls file through Selenium WebDriver

I want to append data to TestData.xls.
TestData.xls already contains the data for my scripts. I read this data in my program and process it, and a Pass/Fail result is decided based on that processing.
I want to write these results back to the next column in TestData.xls.
I have tried the following:
for (int i = 0; i < rows - 1; i++) {
    WritableWorkbook wkb = Workbook.createWorkbook(new File("E://Testing//testing Tools//Selenium//TestData//TestData.xls"));
    WritableSheet sh = wkb.createSheet("Agent", 0);
    WritableFont f = new WritableFont(WritableFont.ARIAL);
    WritableCellFormat cf = new WritableCellFormat(f);
    sh.addCell(new Label(0, i + 1, "Pass", cf));
    wkb.write();
    wkb.close();
}
But it deletes all the data previously present in TestData.xls.
Then I tried writing to TestData1.xls instead, but the data is not written at the correct position and it is only partial.
Now I want to append data to the next available column of TestData.xls without erasing the existing data.
Please tell me how to append data through Selenium WebDriver.

To start with: this has nothing to do with Selenium WebDriver itself. You are simply trying to insert a value at a particular location in an .xls file.

Try the code below; it may be useful to you.
public static void writeReport(String readFileName, String readSheetName, String result, int row, int col)
        throws BiffException, IOException, RowsExceededException, WriteException {
    File fileWrite = new File(readFileName);
    // Open the existing workbook first...
    Workbook in = Workbook.getWorkbook(fileWrite);
    // ...then create a writable copy of it; this keeps all the data already in the file
    WritableWorkbook writable_workbook = Workbook.createWorkbook(fileWrite, in);
    WritableSheet writable_sheet = writable_workbook.getSheet(readSheetName);
    // Write the result into the requested cell (column and row are zero-based)
    writable_sheet.addCell(new Label(col, row, result));
    writable_workbook.write();
    writable_workbook.close();
}
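As a rough sketch of how you might call this from your result loop (the column index 5 and the testPassed() helper are just placeholders for whatever your own test data and checks look like):

// Hypothetical usage: write "Pass"/"Fail" into column 5 for each data row.
String file = "E://Testing//testing Tools//Selenium//TestData//TestData.xls";
for (int i = 0; i < rows - 1; i++) {
    String result = testPassed(i) ? "Pass" : "Fail";   // testPassed() stands in for your own check
    writeReport(file, "Agent", result, i + 1, 5);      // row i + 1 skips the header row
}

Note that getWorkbook/createWorkbook rewrite the whole file, so if you have many rows it is cheaper to open the writable copy once, add all the cells, and only then call write() and close().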

Related

IntelliJ IDEA LiveTemplate auto increment between usages

I am trying to make my life easier with Live Templates in IntelliJ.
I need to increment some parameter by 1 every time I use the snippet.
So I tried to write a small Groovy script, and I am close, but my Groovy skills are holding me back: the number is not incremented by 1, but by 57 for some reason... (UTF-8?)
Here is the script:
File file = new File("out.txt");
int code = Integer.parseInt(file.getText('UTF-8'));
code = code + 1;
try {
    if (_1) {
        code = Integer.parseInt(_1);
    }
} catch (Exception e) {}
file.text = code.toString();
return code
So whenever a parameter is passed to this script (as _1), the initial value is set; otherwise the stored value is simply incremented.
The script needs to be passed to the live template variable like this:
groovyScript("File file = new File(\"out.txt\");int code = Integer.parseInt(file.getText(\'UTF-8\'));code=code+1;String propName = \'_1\';if(this.hasProperty(propName) && this.\"$propName\"){code = Integer.parseInt(_1);};file.text =code.toString();return code", "<optional initial value>")

Google Docs API: "Invalid requests[0].updateTextStyle: Index 4 must be less than the end index of the referenced segment, 2"

I've created a document using the Google Docs API, but when I try to modify its options or add text, it gives me this error:
http://prntscr.com/naf0nm
The thing is, if I open the document and press Enter many times (to create many lines), then the update works. Can anyone help me? What do I need to do to avoid this error?
String text1 = "hola, llegó papa";
List<Request> requests = new ArrayList<>();
requests.add(new Request().setInsertText(new InsertTextRequest()
        .setText(text1)
        .setLocation(new Location().setIndex(25))));
BatchUpdateDocumentRequest body = new BatchUpdateDocumentRequest().setRequests(requests);
BatchUpdateDocumentResponse response = service.documents()
        .batchUpdate(idDoc, body).execute();
Here is the method that creates the doc:
private static void createDoc(Docs service) throws IOException {
    Document doc = new Document()
            .setTitle("TEXTO CAMBIADO");
    doc = service.documents().create(doc)
            .execute();
    System.out.println("Created document with title: " + doc.getTitle());
    idDoc = doc.getDocumentId();
}
It is very late for an answer, but it may help others.
Maybe this answer can help you; someone has answered it here.
Also, you have to write backwards, because the last text inserted at a fixed index ends up at the start of the doc.
I just found another way to write text at the end of the doc. You don't need to set the location; just do it this way:
public void insertText(Docs docsService) throws IOException {
    List<Request> requests = new ArrayList<>();
    requests.add(new Request().setInsertText(new InsertTextRequest()
            .setText("Name : {{NAME}}\n")
            .setEndOfSegmentLocation(new EndOfSegmentLocation())));
    requests.add(new Request().setInsertText(new InsertTextRequest()
            .setText("\nDOB: {{DOB}}\n")
            .setEndOfSegmentLocation(new EndOfSegmentLocation())));
    requests.add(new Request().setInsertText(new InsertTextRequest()
            .setText("\nMobile: {{MOBILE}}\n")
            .setEndOfSegmentLocation(new EndOfSegmentLocation())));
    BatchUpdateDocumentRequest body = new BatchUpdateDocumentRequest().setRequests(requests);
    BatchUpdateDocumentResponse response = docsService.documents()
            .batchUpdate(Constants.DOCUMENT_ID, body).execute();
}
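For what it's worth, the original error happens because a freshly created document is essentially empty, so index 25 lies past the end of the body segment. If you do want an explicit location in a new document, a minimal sketch (reusing the service and idDoc variables from the question) would insert at index 1, the first valid position in an empty body:

List<Request> requests = new ArrayList<>();
requests.add(new Request().setInsertText(new InsertTextRequest()
        .setText("hola, llegó papa")
        .setLocation(new Location().setIndex(1))));   // index 1 is valid even when the body is empty
BatchUpdateDocumentRequest body = new BatchUpdateDocumentRequest().setRequests(requests);
service.documents().batchUpdate(idDoc, body).execute();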

Read data from an Excel file using JSPDynPage

Please help me with the issue below.
I have to read data from an Excel file.
I have created a JSPDynPage component and followed this guide:
http://help.sap.com/saphelp_sm40/helpdata/en/63/9c0e41a346ef6fe10000000a1550b0/frameset.htm
Below is my code. I am trying to read the Excel file using the Apache POI 3.2 API.
try {
    FileUpload fu = (FileUpload) this.getComponentByName("myfileupload");
    // this is the temporary file
    if (fu != null) {
        // Output the size and UI to the console.
        System.out.println(fu.getSize());
        System.out.println(fu.getUI());
        // Get the file parameters and write them to the console
        IFileParam fileParam = fu.getFile();
        System.out.println(fileParam);
        // Get the temporary file and its name
        File f = fileParam.getFile();
        String fileName = fileParam.getFileName();
        // Get the selected file name and write it to the console
        ivSelectedFileName = fu.getFile().getSelectedFileName();
        File fp = new File(ivSelectedFileName);
        myLoc.errorT("#fp#" + fp);
        try {
            FileInputStream file = new FileInputStream(fp);   // --> error at this line
            HSSFWorkbook workbook = new HSSFWorkbook(file);
            myLoc.errorT("#workbook#" + workbook);
            // Get the first sheet from the workbook
            HSSFSheet sheet = workbook.getSheetAt(0);
            myLoc.errorT("#sheet#" + sheet);
        } catch (Exception ioe) {
            myLoc.errorT("#getLocalizedMessage# " + ioe.getLocalizedMessage());
        }
Error:
#getLocalizedMessage# C:\Documents and Settings\10608871\Desktop\test.xls (The system cannot find the path specified)
at the line FileInputStream file = new FileInputStream(fp);
I have created the PAR file and am deploying it on the server.
Thanks in advance,
Aliya Khan.
I resolved the problem; I was passing the wrong parameter (fp) instead of f:
FileInputStream file = new FileInputStream(f);
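Once the workbook opens from the temporary file, reading the cells is straightforward. Here is a minimal sketch using the POI HSSF usermodel (the class and method names are only for illustration):

import java.io.File;
import java.io.FileInputStream;
import java.util.Iterator;
import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class ExcelDump {
    // Print every cell of the first sheet of the uploaded temporary file.
    public static void dumpFirstSheet(File uploadedTempFile) throws Exception {
        FileInputStream in = new FileInputStream(uploadedTempFile);
        HSSFWorkbook workbook = new HSSFWorkbook(in);
        HSSFSheet sheet = workbook.getSheetAt(0);
        for (Iterator<?> rows = sheet.rowIterator(); rows.hasNext();) {
            HSSFRow row = (HSSFRow) rows.next();
            for (Iterator<?> cells = row.cellIterator(); cells.hasNext();) {
                HSSFCell cell = (HSSFCell) cells.next();
                System.out.print(cell + "\t");   // HSSFCell.toString() gives a readable value for most cell types
            }
            System.out.println();
        }
        in.close();
    }
}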

Process a CSV file starting at a predetermined line/row using LumenWorks parser

I am using LumenWorks' awesome CSV reader to process CSV files. Some files have over 1 million records.
What I want is to process the file in sections. E.g. I want to process 100,000 records first, validate the data, and then send those records over an Internet connection. Once they are sent, I reopen the file and continue from record 100,001, and so on until I finish processing the file. In my application I have already built the logic for keeping track of which record I am currently processing.
Does the LumenWorks parser support processing from a predetermined line in the CSV, or does it always have to start from the top? I see it has a buffer variable. Is there a way to use this buffer variable to achieve my goal?
my_csv = New CsvReader(New StreamReader(file_path), False, ",", buffer_variable)
It seems the LumenWorks CSV reader needs to start at the top - I needed to ignore the first n lines in a file and attempted to pass in a StreamReader that was already at the correct position/row, but got a "Key already exists" Dictionary error when I tried to get the FieldCount (there were no duplicates).
However, I have had some success by first reading the pre-trimmed file into a StringBuilder and then into a StringReader so the CSV reader can read it. Your mileage may vary with huge files, but it does help to trim a file:
using (StreamReader sr = new StreamReader(filePath))
{
    string line = sr.ReadLine();
    StringBuilder sbCsv = new StringBuilder();
    int lineNumber = 0;
    do
    {
        lineNumber++;
        // Ignore the start rows of the CSV file until we reach the header
        if (lineNumber >= Constants.HeaderStartingRow)
        {
            // Place into StringBuilder
            sbCsv.AppendLine(line);
        }
    }
    while ((line = sr.ReadLine()) != null);
    // Use a StringReader to read the trimmed CSV file into a CSV Reader
    using (StringReader str = new StringReader(sbCsv.ToString()))
    {
        using (CsvReader csv = new CsvReader(str, true))
        {
            int fieldCount = csv.FieldCount;
            string[] headers = csv.GetFieldHeaders();
            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < fieldCount; i++)
                {
                    // Do Work
                }
            }
        }
    }
}
You might be able to adapt this solution to read chunks of a file - e.g. as you read through the StreamReader, assign different "chunks" to a collection of StringBuilder objects, and prepend the header row to each if you want it.
Try using CachedCsvReader instead of CsvReader, together with its MoveTo(long recordNumber), MoveToStart, etc. methods.

Apache POI Java heap space with xlsm file

I am trying to execute the following code to convert an .xlsm file to CSV:
//Workbook wbk = new HSSFWorkbook(new FileInputStream(new File("myFile.xls")));
Workbook wbk = new XSSFWorkbook(new FileInputStream(new File("myFile.xlsm")));
for (int i = 0; i < wbk.getNumberOfNames(); i++) {
    if (wbk.getNameAt(i).getNameName().startsWith("START\\")) {
        // Get sheet name
        sheetName = wbk.getNameAt(i).getSheetName();
        // Get CSV file name
        csvFilename = generateFileName(wbk.getNameAt(i).getNameName(), currentDate);
        // Starting row index for this sheet
        startingRowIndex = getStartingRowIndex(wbk, i);
        // Max column index for this sheet
        maxColumnIndex = getMaxColumnIndex(wbk, wbk.getSheet(sheetName));
        // Convert sheet to CSV
        toCSV(csvFilename, startingRowIndex, maxColumnIndex, wbk, sheetName);
    }
}
The -Xmx argument is set to 1024 and I am using an .xlsm file.
The file is 15 MB.
I get the error "java.lang.OutOfMemoryError: Java heap space" on the first line.
With the same file in .xls format (50 MB), it works fine.
I can't change the -Xmx argument and I can't use any API other than POI.
I have read in other posts that the best approach for this kind of memory problem is to use the SAX API.
However, in my file, not all sheets and rows need to be extracted to CSV.
That is why I use wbk.getNumberOfNames() to get all the defined names (from the name manager) and work out which sheets to convert.
Do you know how I can access these properties using the SAX API?
Thanks.
Regards.
The following Apache POI code example uses a SAX parser to convert an XLSX file to CSV:
http://svn.apache.org/repos/asf/poi/trunk/src/examples/src/org/apache/poi/xssf/eventusermodel/XLSX2CSV.java
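Building on that example, one possible way to get at the defined names without loading the whole workbook into memory is to read xl/workbook.xml through XSSFReader and scan it for definedName elements with a SAX handler. A rough sketch under that assumption (the file name and class name are placeholders):

import java.io.InputStream;
import javax.xml.parsers.SAXParserFactory;
import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.openxml4j.opc.PackageAccess;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class DefinedNameLister {
    public static void main(String[] args) throws Exception {
        OPCPackage pkg = OPCPackage.open("myFile.xlsm", PackageAccess.READ);
        XSSFReader reader = new XSSFReader(pkg);
        // xl/workbook.xml is tiny, so parsing it with SAX is cheap even for huge workbooks.
        InputStream workbookXml = reader.getWorkbookData();
        SAXParserFactory.newInstance().newSAXParser().parse(workbookXml, new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                // Looks like: <definedName name="START\..." localSheetId="...">Sheet1!$A$1</definedName>
                if ("definedName".equals(qName) || "definedName".equals(localName)) {
                    System.out.println("Defined name: " + attrs.getValue("name"));
                }
            }
        });
        workbookXml.close();
        pkg.close();
    }
}

The sheets themselves can then be streamed one at a time via XSSFReader.getSheetsData(), as the XLSX2CSV example does, converting only the ones your names point at.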