This is a function that appends a pivot table to a Google Sheet, creating a pivot table as shown:
public void appendPivotTableTemplate(String spreadsheetId, int sheetid, String title) throws Exception {
    List<Request> sheetsRequests = new ArrayList<>();
    List<RowData> rdata = new ArrayList<>();
    List<CellData> cdata = new ArrayList<>();
    List<PivotGroup> pgroup_row = new ArrayList<>();
    List<PivotGroup> pgroup_col = new ArrayList<>();
    Map<String, PivotFilterCriteria> colcriteria = new HashMap<>();
    List<String> colval = Arrays.asList("Completed in Time", "Completed after Time", "Not completed");
    List<PivotValue> pvalue = new ArrayList<>();
    colcriteria.put("5", new PivotFilterCriteria().setVisibleValues(colval)); // 5 is the column offset for Status ((26,5) in grid offset)
    pgroup_row.add(new PivotGroup()
            .setSourceColumnOffset(2) // for week
            .setShowTotals(false)
            .setSortOrder("DESCENDING"));
    pgroup_col.add(new PivotGroup()
            .setSourceColumnOffset(5) // for status
            .setShowTotals(false)
            .setSortOrder("ASCENDING"));
    pvalue.add(new PivotValue()
            .setSourceColumnOffset(5)
            .setSummarizeFunction("COUNTA")
            .setName("Count of Task")); // https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/pivot-tables#PivotValueSummarizeFunction
    cdata.add(new CellData().setPivotTable(new PivotTable()
            .setSource(new GridRange()
                    .setSheetId(sheetid)
                    .setStartRowIndex(24)
                    .setStartColumnIndex(0)
                    .setEndColumnIndex(6))
            .setRows(pgroup_row)
            .setColumns(pgroup_col)
            .setValues(pvalue)
            .setCriteria(colcriteria)));
The pivot table is created in the same sheet as the source data. The issue is that whenever I click any cell in the pivot table, a popup says it is unable to load the file and asks me to refresh the page.
Is this an issue with the Google Sheets API, or am I missing something here?
I found the solution to this: set setEndRowIndex on the source range.
.setSheetId(sheetid)
.setStartRowIndex(24)
.setEndRowIndex(1000) // or your row limit
.setStartColumnIndex(0)
.setEndColumnIndex(6))
I want to use Apache Velocity Template Engine to generate SQL query based on the input.
Any sample snippet to get started would be helpful.
JSONObject keysObject = new JSONObject();
keysObject.put("HistoryId", "1");
keysObject.put("TenantName", "Tesla");

Iterator<?> keys = keysObject.keys();
List<Map<String, Object>> list = new ArrayList<>();
Map<String, Object> map = new HashMap<>();
while (keys.hasNext()) {
    String key = (String) keys.next();
    map.put(key, keysObject.get(key));
}
list.add(map);
int keyObjectSize = keysObject.length();
The JSONObject can have more keys, but in this example I am using two.
I want to use the keys HistoryId and TenantName as the column names, and the number of keys to generate the value parameters (?1, ?2), producing the SQL query below:
INSERT INTO "Alert" (historyid, tenantname) VALUES (?1, ?2)
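Until a Velocity-based answer turns up, here is a minimal plain-Java sketch of the same idea (the class and method names are mine, and I substitute a LinkedHashMap for JSONObject so the snippet is self-contained and key order is stable):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class InsertBuilder {

    // Builds: INSERT INTO "Alert" (historyid, tenantname) VALUES (?1, ?2)
    static String buildInsert(String table, Map<String, Object> keys) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner params = new StringJoiner(", ");
        int i = 1;
        for (String key : keys.keySet()) {
            cols.add(key.toLowerCase()); // keys become the column names
            params.add("?" + i++);       // key count drives the ?1, ?2 placeholders
        }
        return "INSERT INTO \"" + table + "\" (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> keys = new LinkedHashMap<>();
        keys.put("HistoryId", "1");
        keys.put("TenantName", "Tesla");
        System.out.println(buildInsert("Alert", keys));
        // → INSERT INTO "Alert" (historyid, tenantname) VALUES (?1, ?2)
    }
}
```

With Velocity you would instead put the column and parameter lists into a VelocityContext and render a template string, but the loop over the keys stays the same.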
I am getting "missing row" with an "invalid" reason in the response while inserting data into BigQuery:
{"errors":[{"message":"Missing row.","reason":"invalid"}],"index":0}, {"errors":[{"message":"Missing row.","reason":"invalid"}]
Below is the code I am executing:
// The lines below call the DFP API to get all ad units
AdUnitPage page = inventoryService.getAdUnitsByStatement(statementBuilder.toStatement());
List<TableDataInsertAllRequest.Rows> dfpadunits = new ArrayList<>();
if (page.getResults() != null) {
    totalResultSetSize = page.getTotalResultSetSize();
    int i = page.getStartIndex();
    for (AdUnit adUnit : page.getResults()) {
        Rows dfpadunit = new TableDataInsertAllRequest.Rows();
        dfpadunit.setInsertId(adUnit.getId());
        dfpadunit.set("id", adUnit.getId());
        dfpadunit.set("name", adUnit.getName());
        dfpadunits.add(dfpadunit);
    }
}
TableDataInsertAllRequest content = new TableDataInsertAllRequest();
content.setRows(dfpadunits);
content.setSkipInvalidRows(true);
content.setIgnoreUnknownValues(true);
System.out.println(dfpadunits.get(0));
Bigquery.Tabledata.InsertAll request = bigqueryService.tabledata().insertAll(projectId, datasetId, tableId, content);
TableDataInsertAllResponse response = request.execute();
System.out.println(response.getInsertErrors());
I added loggers to check that my data is populated correctly, but when I try to insert the records into BigQuery using insertAll, I get "missing row" in the response with an "invalid" reason.
Thanks,
Kapil
You need to use a TableRow object. This works (I tested):
TableDataInsertAllRequest.Rows dfpadunit = new TableDataInsertAllRequest.Rows();
TableRow row = new TableRow();
row.set("id",adUnit.getId());
row.set("name",adUnit.getName());
dfpadunit.setInsertId(adUnit.getId());
dfpadunit.setJson(row);
dfpadunits.add(dfpadunit);
Below is my code to convert Excel to PDF, but I don't understand how to generate multiple PDFs from multiple Excel files.
String files;
File folder = new File(dirpath);
File[] listOfFiles = folder.listFiles();
for (int i = 0; i < listOfFiles.length; i++) {
    if (listOfFiles[i].isFile()) {
        files = listOfFiles[i].getName();
        if (files.endsWith(".xls") || files.endsWith(".xlsx")) {
            // taking the input files one by one
            System.out.println(files);
            String inputR = files;
            FileInputStream input_document = new FileInputStream(new File("D:\\ExcelToPdfProject\\" + inputR));
            // Read the workbook
            Workbook workbook = null;
            if (inputR.endsWith(".xlsx")) {
                workbook = new XSSFWorkbook(input_document);
                System.out.println("1");
            } else if (inputR.endsWith(".xls")) {
                workbook = new HSSFWorkbook(input_document);
                System.out.println("GO TO HELL ######");
            } else {
                System.out.println("GO TO HELL");
            }
            Sheet my_worksheet = workbook.getSheetAt(2);
            // Iterate over the rows
            Iterator<Row> rowIterator = my_worksheet.iterator();
            // We create the output PDF document objects at this point
            Document iText_xls_2_pdf = new Document();
            PdfWriter writer = PdfWriter.getInstance(iText_xls_2_pdf, new FileOutputStream("D:\\Output.pdf"));
            iText_xls_2_pdf.open();
            // We have two columns in the Excel sheet, so we create a PDF table with two columns
            // Note: there are ways to make this dynamic in nature, if you want to
            Row row = rowIterator.next();
            row.setHeight((short) 2);
            int count = row.getPhysicalNumberOfCells();
            PdfPTable my_table = new PdfPTable(count);
            float[] columnWidths = new float[count];
            my_table.setWidthPercentage(100f);
            // We will use the object below to dynamically add new data to the table
            PdfPCell table_cell;
I want something that can help me create a folder full of PDFs.
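One way to get one PDF per workbook is to derive the output path from the input file name instead of hardcoding D:\Output.pdf. A minimal sketch of just that naming step (the class name and the directory are my assumptions):

```java
public class PdfNaming {

    // "report.xlsx" -> "report.pdf", "old.xls" -> "old.pdf"
    static String pdfNameFor(String excelName) {
        return excelName.replaceAll("\\.xlsx?$", ".pdf");
    }

    public static void main(String[] args) {
        // Inside the existing loop you would then open a per-file stream, e.g.:
        // new FileOutputStream("D:\\ExcelToPdfProject\\" + pdfNameFor(inputR))
        System.out.println(pdfNameFor("report.xlsx")); // report.pdf
    }
}
```

Creating the Document and PdfWriter inside the loop with this per-file name, and closing the document at the end of each iteration, should then produce one PDF per Excel file rather than overwriting a single Output.pdf.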
I was experimenting with Lucene on the Cystic Fibrosis collection. I made four separate indexes: one with only the title, one with the abstract, another with the subject, and the last with all fields.
Now I find that the search time for the title-only index is significantly larger than for the other three. This seems counter-intuitive, as its index size is small compared to the other indexes. What could be the probable reason for this?
Here is the code I used for the benchmark:
public class PrecisionRecall {
    public static void main(String[] args) throws Throwable {
        File topicsFile = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/LIA/lia2e/src/lia/benchmark/topics.txt");
        File qrelsFile = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/LIA/lia2e/src/lia/benchmark/qrels.txt");
        Directory dir = FSDirectory.open(new File("C:/Users/Raden/Documents/myindex"));
        Searcher searcher = new IndexSearcher(dir, true);
        String docNameField = "filename";
        PrintWriter logger = new PrintWriter(System.out, true);
        TrecTopicsReader qReader = new TrecTopicsReader();       //#1
        QualityQuery qqs[] = qReader.readQueries(                //#1
                new BufferedReader(new FileReader(topicsFile))); //#1
        Judge judge = new TrecJudge(new BufferedReader(          //#2
                new FileReader(qrelsFile)));                     //#2
        judge.validateData(qqs, logger);                         //#3
        QualityQueryParser qqParser = new SimpleQQParser("title", "contents"); //#4
        QualityBenchmark qrun = new QualityBenchmark(qqs, qqParser, searcher, docNameField);
        SubmissionReport submitLog = null;
        QualityStats stats[] = qrun.execute(judge,               //#5
                submitLog, logger);
        QualityStats avg = QualityStats.average(stats);          //#6
        avg.log("SUMMARY", 2, logger, "  ");
        dir.close();
    }
}
The response time of a query does not depend on the index size. It depends on the number of hits and the number of terms in the query.
This is because you don't have to read all the index data; you only need to read the posting lists for the query terms.
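To make that intuition concrete, here is a toy inverted index (not Lucene; all names are mine) showing that answering a query only touches the posting lists of the query terms, however much else the index stores:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TinyInvertedIndex {
    // term -> list of document ids containing it (a "posting list")
    private final Map<String, List<Integer>> postings = new HashMap<>();

    void add(int docId, String text) {
        for (String term : text.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(term, t -> new ArrayList<>()).add(docId);
        }
    }

    // Cost is proportional to the length of this term's posting list,
    // not to the total size of the index.
    List<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(), Collections.emptyList());
    }

    public static void main(String[] args) {
        TinyInvertedIndex idx = new TinyInvertedIndex();
        idx.add(1, "cystic fibrosis treatment");
        idx.add(2, "fibrosis study");
        System.out.println(idx.search("fibrosis")); // [1, 2]
    }
}
```

A small index whose terms each match many documents (long posting lists, many hits to score) can therefore be slower to search than a much larger index whose query terms are selective.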
CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");
SolrInputDocument doc1 = new SolrInputDocument();
doc1.addField("id", "id1");
doc1.addField("name", "doc1");
doc1.addField("price", new Float(10));
SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField("id", "id1");
doc2.addField("name", "doc2");
server.add(doc1);
server.add(doc2);
server.commit();

SolrQuery query = new SolrQuery();
query.setQuery("id:id1");
query.addSortField("price", SolrQuery.ORDER.desc);
QueryResponse rsp = server.query(query);
Iterator<SolrDocument> iter = rsp.getResults().iterator();
while (iter.hasNext()) {
    SolrDocument doc = iter.next();
    Collection<String> fieldNames = doc.getFieldNames();
    Iterator<String> fieldIter = fieldNames.iterator();
    StringBuffer content = new StringBuffer("");
    while (fieldIter.hasNext()) {
        String field = fieldIter.next();
        content.append(field).append(":").append(doc.get(field)).append(" ");
    }
    System.out.println(content);
}
The problem is that I want the result "id:id1 name:doc2 price:10.0", but the output is "id:id1 name:doc2".
How can I modify my program to get "id:id1 name:doc2 price:10.0"?
As you are adding the documents with the same id, you are basically adding the same document twice.
Solr will update/overwrite the document; an update is basically a delete followed by an add.
As the second document you added with the same id does not have the price field, it won't be added, and you won't find it in the index.
You need to include all the fields, changed and unchanged, when you add the document back.
doc2.addField("price", new Float(10)); // add the price back to doc2 before adding it