I'm generating a Telerik report for
"Number of students by level of education, field and sex"
Here is the SQL query that I'm using to create this report:
SELECT
[tbl_hec_ISCED].[ISCED_ID],
[tbl_hec_ISCED].[ISCED_Level],
[tbl_hec_Programme].[ISCED_ID] AS 'tbl_hec_ProgrammeISCED_ID',
[tbl_hec_Programme].[Programme_ID],
[tbl_hec_Programme].[Specialisation_ID_Number],
[tbl_hec_specialisation].[Rank_ID_Number],
[tbl_hec_specialisation].[Rank_Title],
[tbl_HEI_student].[Programme_ID] AS 'tbl_HEI_studentProgramme_ID',
[tbl_HEI_student].[Gender]
FROM ((([tbl_HEI_student]
FULL OUTER JOIN [tbl_hec_Programme]
ON [tbl_HEI_student].[Programme_ID] = [tbl_hec_Programme].[Programme_ID])
FULL OUTER JOIN [tbl_hec_specialisation]
ON [tbl_hec_Programme].[Specialisation_ID_Number] = [tbl_hec_specialisation].[Rank_ID_Number])
FULL OUTER JOIN [tbl_hec_ISCED]
ON [tbl_hec_Programme].[ISCED_ID] = [tbl_hec_ISCED].[ISCED_ID])
WHERE [tbl_HEI_student].[Gender] IN ('Male', 'Female')
  AND [tbl_hec_ISCED].[ISCED_Level] IN ('5', '6', '7', '8')
I'm getting a report full of nulls, since some values are not in the database. I attached a picture of the current output.
I want the report to be generated even when there is no data in the database, with zero values for the null rows. I attached a picture of the expected report output as well.
How can I overcome this challenge?
My suggestion is to use document merging: a template with all the report layout already set up, plus value placeholders (fields) that are replaced by calculated values.
I'm using the Aspose.Words library in my projects, but I'm sure there are others out there, maybe even Telerik Reporting itself. This is very simple functionality, so any reporting tool that can do complex things should be able to do this as well.
Here's some example code for Aspose; other libraries will have different implementations.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Aspose.Words;

void GenerateDocument(string templateFilePath, Dictionary<string, object> fieldNamesAndValues)
{
    // Make sure the template file exists before trying to open it
    if (!File.Exists(templateFilePath))
    {
        throw new FileNotFoundException("Template not found", templateFilePath);
    }

    // Load the template document
    Document output = new Document(templateFilePath);

    // Merge the provided values into the matching fields of the template
    output.MailMerge.Execute(fieldNamesAndValues.Keys.ToArray(), fieldNamesAndValues.Values.ToArray());

    // Save the document into a stream as PDF
    MemoryStream stream = new MemoryStream();
    output.Save(stream, SaveFormat.Pdf);

    // You can then do whatever you want with the stream:
    // save it to disk or push it to the browser for download
}
Using your expected result as an example, let's assume these are the names of your placeholders (fields) for the first row:
MALE_EDUCATION_ISCED5, MALE_EDUCATION_ISCED6, MALE_EDUCATION_ISCED7
You can then generate your report like so:
Dictionary<string, object> fieldsAndValues = new Dictionary<string, object>();
fieldsAndValues.Add("MALE_EDUCATION_ISCED5", calculatedValue1);
fieldsAndValues.Add("MALE_EDUCATION_ISCED6", calculatedValue2);
fieldsAndValues.Add("MALE_EDUCATION_ISCED7", calculatedValue3);
// and so on for other fields
GenerateDocument("~/Templates/Report.docx", fieldsAndValues);
More info on how to add fields in Microsoft Word:
https://support.office.com/en-us/article/7e9ea3b4-83ec-4203-9e66-4efc027f2cf3
More info on Aspose MailMerge:
http://www.aspose.com/docs/display/wordsnet/How+to++Execute+Simple+Mail+Merge
Related
I'm working on a personal project and am very new (learning as I go) to JSON, NiFi, SQL, etc., so forgive any confusing language here or a potentially really obvious solution. I can clarify as needed.
I need to take the JSON output from a website's API call and insert it into a table in my MariaDB local server that I've set up. The issue is that the JSON data is nested, and two of the key pieces of data that I need to insert are used as variable key objects rather than values, so I don't know how to extract them and put them in the database table. Essentially, I think I need to identify different pieces of the JSON expression and insert them as values, but I'm clueless about how to do so.
I've played around with the EvaluateJsonPath, SplitJson, and FlattenJson processors in particular, but I can't make it work. All I can ever do is get the result of the whole expression, rather than each piece of it.
{"5381":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":74.0,"tm_def_snp":63.0,"temperature":58.0,"st_snp":8.0,"punts":4.0,"punt_yds":178.0,"punt_lng":55.0,"punt_in_20":1.0,"punt_avg":44.5,"humidity":47.0,"gp":1.0,"gms_active":1.0},
"1023":{"wind_speed":4.0,"tm_st_snp":26.0,"tm_off_snp":82.0,"tm_def_snp":56.0,"temperature":74.0,"off_snp":82.0,"humidity":66.0,"gs":1.0,"gp":1.0,"gms_active":1.0},
"5300":{"wind_speed":17.0,"tm_st_snp":27.0,"tm_off_snp":80.0,"tm_def_snp":64.0,"temperature":64.0,"st_snp":21.0,"pts_std":9.0,"pts_ppr":9.0,"pts_half_ppr":9.0,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl":4.0,"idp_sack":1.0,"idp_qb_hit":2.0,"humidity":100.0,"gp":1.0,"gms_active":1.0,"def_snp":23.0},
"608":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":53.0,"tm_def_snp":79.0,"temperature":88.0,"st_snp":4.0,"pts_std":5.5,"pts_ppr":5.5,"pts_half_ppr":5.5,"idp_tkl_solo":4.0,"idp_tkl_loss":1.0,"idp_tkl_ast":1.0,"idp_tkl":5.0,"humidity":78.0,"gs":1.0,"gp":1.0,"gms_active":1.0,"def_snp":56.0},
"3396":{"wind_speed":6.0,"tm_st_snp":20.0,"tm_off_snp":60.0,"tm_def_snp":70.0,"temperature":63.0,"st_snp":19.0,"off_snp":13.0,"humidity":100.0,"gp":1.0,"gms_active":1.0}}
This is a snapshot of an output with a couple thousand lines. Each of the numeric keys that you see above (5381, 1023, 5300, etc) are player IDs for the following stats. I have a table set up with three columns: Player ID, Stat ID, and Stat Value. For example, I need that first snippet to be inserted into my table as such:
Player ID   Stat ID      Stat Value
5381        wind_speed   4.0
5381        tm_st_snp    26.0
5381        tm_off_snp   74.0
And so on, for each piece of data. But I don't know how to have NiFi select the right pieces of data to insert in the right columns.
I believe that it's possible to use a Jolt transform to turn your JSON into this format:
[
{"playerId":"5381", "statId":"wind_speed", "statValue": 0.123},
{"playerId":"5381", "statId":"tm_st_snp", "statValue": 0.456},
...
]
and then use PutDatabaseRecord with a JSON record reader.
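For reference, here is a rough, untested sketch of such a shift spec (the playerId/statId/statValue names are just the target field names from the example above):
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "*": {
          "$1": "[#2].playerId",
          "$": "[#2].statId",
          "@": "[#2].statValue"
        }
      }
    }
  }
]
Each inner key/value pair becomes one array element: "$1" captures the player-ID key one level up, "$" captures the stat name, and "@" captures the stat value.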
Another approach is to use the ExecuteGroovyScript processor:
Add a new property to it named SQL.mydb and link it to your DBCP controller service.
Then use the following script as the Script Body parameter:
import groovy.json.JsonSlurper
import groovy.json.JsonBuilder

def ff = session.get()
if (!ff) return

// read flow file content and parse it
def body = ff.read().withReader("UTF-8") { reader ->
    new JsonSlurper().parse(reader)
}

def results = []
// use the defined sql connection to create a batch
SQL.mydb.withTransaction {
    def cmd = 'insert into mytable(playerId, statId, statValue) values(?,?,?)'
    results = SQL.mydb.withBatch(100, cmd) { statement ->
        // run through all keys/subkeys in the flow file body
        body.each { pid, keys ->
            keys.each { k, v ->
                statement.addBatch(pid, k, v)
            }
        }
    }
}

// write results as the new flow file content
ff.write("UTF-8") { writer ->
    new JsonBuilder(results).writeTo(writer)
}
// transfer to success
REL_SUCCESS << ff
I found a website where I can look up vehicle inspections in Denmark. I need to extract some information from the page and loop through a series of license plates. Let's take this car as an example: http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87640
In the table on the left, you can see some basic information about the vehicle. On the right, you can see a list of the inspections for this specific car. I need a script which can check whether the car has any inspections and then grab the link to each of the inspection reports. Let's take the first inspection from the example: I would like to extract the onclick text from each of the inspections.
The first inspection link would be:
location.href="/Sider/synsrapport.aspx?Inspection=18014439&Vin=VF7X1REVF72378327"
or if you could extract the inspection ID and Vin variable from the URL immediately:
Inspection ID: 18014439
Vin: VF7X1REVF72378327
Here is an example of a car which doesn't have any inspections yet, if you want to see what that looks like: http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87400
Current solution plan:
1. Download the HTML source code as a string in VB.NET.
2. Search the string and extract the specific parts.
3. Store them in a StringBuilder and upload this to my SQL server.
Is this the most efficient way, or do you know of any libraries that can be used to extract specific elements from a website in VB.NET? Thanks!
You could use the Java libraries HtmlUnit or Jsoup to scrape the page.
Here's an example using HtmlUnit:
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;

import org.apache.commons.logging.LogFactory;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlTable;
import com.gargoylesoftware.htmlunit.html.HtmlTableRow;

// Silence HtmlUnit's very verbose logging
LogFactory.getFactory().setAttribute("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.NoOpLog");
java.util.logging.Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("org.apache.commons.httpclient").setLevel(Level.OFF);

WebClient client = new WebClient(BrowserVersion.CHROME);
client.getOptions().setJavaScriptEnabled(true);
client.getOptions().setThrowExceptionOnScriptError(false);
client.getOptions().setThrowExceptionOnFailingStatusCode(false);

HtmlPage page = client.getPage("http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87640");
HtmlTable inspectionsTable = (HtmlTable) page.getElementById("tblInspections");

Map<String, String> inspections = new HashMap<String, String>();
for (HtmlTableRow row : inspectionsTable.getRows()) {
    // onclick looks like: location.href="/Sider/synsrapport.aspx?Inspection=18014439&Vin=VF7X1REVF72378327"
    String[] splitRow = row.getAttribute("onclick").split("=");
    if (splitRow.length >= 4) {
        String id = splitRow[2].split("&")[0];
        String vin = splitRow[3].replace("\"", "");
        inspections.put(id, vin);
        System.out.println(id + " " + vin);
    }
}
I am well into developing a billing program in VB2013 that needs to be able to export each customer's bill to a PDF file, which can then be attached to an email to the customer being billed. I have used Crystal Reports for many, many years, but I have not found any way to programmatically make CR export to PDF. I have made ActiveReports 2 do so, but I am trying to get back down to just one report generator. I have had compatibility issues with ActiveReports 2 by Data Dynamics when running on some Windows Vista and later machines, so I was hoping to move everything to CR.
Well, you can definitely generate Crystal Reports and convert them to PDF on the fly in .NET.
Here's some sample code to help with that (it saves the PDF to the database, but you can remove that part if you don't need it):
public static int Crystal_PDFToDatabase(string reportName, object par1, object par2, object par3, string user, string DocName, string DocDesc, string DocType, int ClaimID, string exportFormatType)
{
    try
    {
        CrystalReportSource CrystalReportSource1 = Crystal_SetDataSource(reportName, par1, par2, par3);

        // SET EXPORT FORMAT
        CrystalDecisions.Shared.ExportFormatType typ = CrystalDecisions.Shared.ExportFormatType.PortableDocFormat;
        Stream str = CrystalReportSource1.ReportDocument.ExportToStream(typ);

        int DocID = 0;
        if (str != null)
        {
            var memoryStream = new MemoryStream();
            str.CopyTo(memoryStream);
            DocID = db_Docs.SaveNewDocument(DocName, DocDesc, DocType, user, memoryStream.ToArray(), ClaimID, null, null);
        }

        // Clear out the cache (to prevent other Crystal reports from reusing old generated documents),
        // then return; in the original version this cleanup was unreachable after the early return
        CrystalReportSource1.ReportDocument.Close();
        CrystalReportSource1.Dispose();
        return DocID;
    }
    catch
    {
        return 0;
    }
}
A separate issue is how to produce a separate PDF for each page of the report. I simply would not design it that way. I'd instead have the Crystal report generate one page, convert it to a PDF memory stream on the fly, and email it out, then iterate on to the next customer. That is so much easier than having to figure out how (if it's possible at all) to split one huge Crystal report / PDF into many slices.
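Here's a rough sketch of that per-customer loop. GetCustomerIds and GetCustomerEmail are hypothetical lookups, and the report is assumed to take a CustomerID parameter; the Crystal calls are the standard ReportDocument API:
using System.IO;
using System.Net.Mail;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

// One small report + PDF + email per customer, instead of one huge report
foreach (int customerId in GetCustomerIds()) // hypothetical lookup
{
    using (ReportDocument report = new ReportDocument())
    {
        report.Load(@"C:\Reports\CustomerBill.rpt");        // assumed report path
        report.SetParameterValue("CustomerID", customerId); // assumed report parameter

        using (Stream pdf = report.ExportToStream(ExportFormatType.PortableDocFormat))
        using (MailMessage mail = new MailMessage("billing@example.com", GetCustomerEmail(customerId))) // hypothetical lookup
        {
            mail.Subject = "Your bill";
            mail.Attachments.Add(new Attachment(pdf, "bill.pdf", "application/pdf"));
            new SmtpClient().Send(mail); // assumes SMTP settings in app.config
        }
    }
}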
We have an Endeca index configured across multiple fields of email content - subject and body. But we only want searches to be performed on the subject lines. Endeca is returning matches within the bodies too. How do you limit the search to the subject?
You can search a specific field or fields by specifying it (or them) with the Ntk parameter.
Alternatively, if you wish to search a specific group of fields frequently, you can set up a search interface (also specified with the Ntk parameter) that includes that group of fields.
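For example, as URL parameters (assuming your subject property is enabled for record search under the key "subject"; the search term is only an illustration):
N=0&Ntk=subject&Ntt=project+update&Ntx=mode+matchall
Ntt carries the search terms and Ntx the match mode; the three parameters are evaluated in parallel with Ntk.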
This is how you can do it using the presentation API:
final ENEQuery query = new ENEQuery();
final DimValIdList dimValIdList = new DimValIdList("0");
query.setNavDescriptors(dimValIdList);

final ERecSearchList searches = new ERecSearchList();
final StringBuilder builder = new StringBuilder();
for (final String productId : productIds) {
    builder.append(productId);
    builder.append(" ");
}

// Search only the product.id field, matching any of the space-separated terms
final ERecSearch eRecSearch = new ERecSearch("product.id", builder.toString().trim(), "mode matchany");
searches.add(eRecSearch);
query.setNavERecSearches(searches);
Please see this post for a complete example.
Use Search Interfaces in Developer Studio.
Refer - http://docs.oracle.com/cd/E28912_01/DeveloperStudio.612/pdf/DevStudioHelp.pdf#page=209
I have an Excel 2007 file (OpenXML format) with a connection to an XML file. This connection generates an Excel table and pivot charts.
I am trying to find a way, with the OpenXML SDK v2, to do the same as the "Refresh All" button in Excel, so that I could automatically update my file as soon as a new XML file is provided.
Thank you.
Well, there is quite a good workaround for this.
Using OpenXML you can turn on the "refresh data when opening the file" option on the pivot table (right-click the pivot table -> PivotTable Options -> Data tab).
This results in the pivot table refreshing automatically when the user first opens the spreadsheet.
The code:
using (var document = SpreadsheetDocument.Open(newFilePath, true))
{
    var uriPartDictionary = BuildUriPartDictionary(document);

    PivotTableCacheDefinitionPart pivotTableCacheDefinitionPart1 = (PivotTableCacheDefinitionPart)uriPartDictionary["/xl/pivotCache/pivotCacheDefinition1.xml"];
    PivotCacheDefinition pivotCacheDefinition1 = pivotTableCacheDefinitionPart1.PivotCacheDefinition;
    pivotCacheDefinition1.RefreshOnLoad = true;
}
You need to determine the "path" to your pivotCacheDefinition; use the OpenXML SDK 2.0 Productivity Tool to look it up.
BuildUriPartDictionary is a standard method generated by the OpenXML SDK 2.0 Productivity Tool:
protected Dictionary<String, OpenXmlPart> BuildUriPartDictionary(SpreadsheetDocument document)
{
    var uriPartDictionary = new Dictionary<String, OpenXmlPart>();
    var queue = new Queue<OpenXmlPartContainer>();
    queue.Enqueue(document);
    while (queue.Count > 0)
    {
        foreach (var part in queue.Dequeue().Parts.Where(part => !uriPartDictionary.Keys.Contains(part.OpenXmlPart.Uri.ToString())))
        {
            uriPartDictionary.Add(part.OpenXmlPart.Uri.ToString(), part.OpenXmlPart);
            queue.Enqueue(part.OpenXmlPart);
        }
    }
    return uriPartDictionary;
}
Another solution is to convert your spreadsheet to a macro-enabled one and embed a VBA script in it that refreshes all pivot tables.
This can happen on a button click or, again, when the user opens the spreadsheet.
Here you can find VBA code for refreshing pivot tables:
http://www.ozgrid.com/VBA/pivot-table-refresh.htm
You can't do this with Open XML. Open XML allows you to work with the data stored in the file and change the data and formulas and definitions and such. It doesn't actually do any calculations.
Excel automation technically would work, but it's absolutely not recommended for a server environment and is best avoided on the desktop if at all possible.
I think the only way you can do this is with a method along these lines (a sketch follows the list):
1. Save the Open XML workbook back to an .xlsx file.
2. Load the workbook using the Excel object model.
3. Call either ThisWorkbook.PivotCaches(yourIndex).Refresh() or ThisWorkbook.RefreshAll() (I'm pretty sure RefreshAll would work too).
4. Use the object model to save the workbook and close it.
5. Reopen it for use with the XML namespaces.
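A minimal C# sketch of steps 2-4 using Excel interop (desktop only, since automating Excel on a server is not recommended, as noted above; the file path is hypothetical):
using Excel = Microsoft.Office.Interop.Excel;

// Open the saved workbook, refresh everything, then save and quit
Excel.Application app = new Excel.Application();
try
{
    Excel.Workbook wb = app.Workbooks.Open(@"C:\Reports\book.xlsx"); // hypothetical path
    wb.RefreshAll(); // refreshes all pivot caches and external data connections
    wb.Save();
    wb.Close();
}
finally
{
    app.Quit();
}
Note that if any data connections refresh in the background, you may need to turn off background refresh on those connections so the save happens after the data has actually updated.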
The solution provided by Bartosz Strutyński will only work if the workbook contains pivot tables and they all share the same cache. If the workbook does not contain pivot tables, the code will throw a NullReferenceException. If the workbook contains pivot tables that use different caches (which is the case when their data sources are different), only the one group of pivot tables that shares that cache will be refreshed. Below is code based on Bartosz Strutyński's, free of the aforementioned limitations and not relying on knowing the "path" of the PivotCacheDefinition object. The code also inlines BuildUriPartDictionary, which avoids enumerating uriPartDictionary in case it's not used elsewhere, and uses explicit types to make it easier to search the documentation for the classes involved.
Dictionary<String, OpenXmlPart> uriPartDictionary = new Dictionary<String, OpenXmlPart>();
Queue<OpenXmlPartContainer> queue = new Queue<OpenXmlPartContainer>();
queue.Enqueue(document);
while (queue.Count > 0)
{
    foreach (IdPartPair part in queue.Dequeue().Parts.Where(part => !uriPartDictionary.Keys.Contains(part.OpenXmlPart.Uri.ToString())))
    {
        uriPartDictionary.Add(part.OpenXmlPart.Uri.ToString(), part.OpenXmlPart);
        queue.Enqueue(part.OpenXmlPart);

        // Flag every pivot cache we encounter for refresh on load
        PivotTableCacheDefinitionPart pivotTableCacheDefinitionPart;
        if ((pivotTableCacheDefinitionPart = part.OpenXmlPart as PivotTableCacheDefinitionPart) != null)
        {
            pivotTableCacheDefinitionPart.PivotCacheDefinition.RefreshOnLoad = true;
        }
    }
}