Can Crystal Reports for VS2013 be programmed to export each page of the report to a separate PDF file?

I am well into developing a billing program in VB2013 that needs to be able to export each customer bill to a PDF file that can then be attached to an email to the customer being billed. I have used CR for many, many years, but I have not found any way to programmatically make CR export to PDF. I have made ActiveReports 2 do so, but I am trying to get back down to just one report generator. I have had compatibility issues with ActiveReports 2 by Data Dynamics when running on some Windows Vista and later machines, so I was hoping to move everything to CR.

Well, you can definitely generate Crystal Reports and convert them to PDF on the fly in .NET.
Here's some sample code to help with that (it saves the PDF to the database, but you can remove that part if you don't need it):
public static int Crystal_PDFToDatabase(string reportName, object par1, object par2, object par3, string user, string DocName, string DocDesc, string DocType, int ClaimID, string exportFormatType)
{
    try
    {
        CrystalReportSource CrystalReportSource1 = Crystal_SetDataSource(reportName, par1, par2, par3);

        // Set the export format to PDF
        CrystalDecisions.Shared.ExportFormatType typ = CrystalDecisions.Shared.ExportFormatType.PortableDocFormat;
        Stream str = CrystalReportSource1.ReportDocument.ExportToStream(typ);

        int DocID = 0;
        if (str != null)
        {
            var memoryStream = new MemoryStream();
            str.CopyTo(memoryStream);
            DocID = db_Docs.SaveNewDocument(DocName, DocDesc, DocType, user, memoryStream.ToArray(), ClaimID, null, null);
        }

        // Close the report and dispose of the source so other Crystal reports
        // don't reuse old generated documents
        CrystalReportSource1.ReportDocument.Close();
        CrystalReportSource1.Dispose();

        return DocID;
    }
    catch
    {
        return 0;
    }
}
Now, a separate issue is how to produce a separate PDF for each page in the report. I simply would not design it that way. Instead, I'd have the Crystal report generate one customer's page, convert it to a PDF MemoryStream on the fly, and email it out, then iterate on to the next customer. That is so much easier than having to figure out how (if it's even possible) to split one huge Crystal Report / PDF into many slices. A rough sketch of that loop is below.
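A minimal sketch of that per-customer loop, assuming a hypothetical CustomerBill.rpt report with a CustomerID parameter, GetCustomerIds()/GetCustomerEmail() helpers, and an SMTP host of your own:

using System.IO;
using System.Net.Mail;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

void EmailBills()
{
    foreach (int customerId in GetCustomerIds())
    {
        using (ReportDocument report = new ReportDocument())
        {
            report.Load(@"C:\Reports\CustomerBill.rpt");          // placeholder report path
            report.SetParameterValue("CustomerID", customerId);   // limit the report to one customer

            // Export straight to a PDF stream (no temporary file needed)
            using (Stream pdfStream = report.ExportToStream(ExportFormatType.PortableDocFormat))
            using (MailMessage mail = new MailMessage("billing@example.com", GetCustomerEmail(customerId)))
            {
                mail.Subject = "Your bill";
                mail.Attachments.Add(new Attachment(pdfStream, "Bill.pdf", "application/pdf"));
                new SmtpClient("smtp.example.com").Send(mail);    // placeholder SMTP host
            }

            report.Close();   // release the report before moving on to the next customer
        }
    }
}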

Related

Select specific elements from a website in VB.net (WebScraping)

I found a website where I can look up vehicle inspections in Denmark. I need to extract some information from the page and loop through a series of license plates. Let's take this car as an example: http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87640
In the table on the left, you can see some basic information about the vehicle. On the right, you can see a list of the inspections for this specific car. I need a script which can check whether the car has any inspections and then grab the link to each of the inspection reports. Let's take the first inspection from the example: I would like to extract the onclick text from each of the inspections.
The first inspection link would be:
location.href="/Sider/synsrapport.aspx?Inspection=18014439&Vin=VF7X1REVF72378327"
Or, if possible, extract the Inspection ID and Vin variables from the URL directly:
Inspection ID: 18014439
Vin: VF7X1REVF72378327
Here is an example of a car which doesn't have any inspections yet, if you want to see what that looks like: http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87400
Current Solution plan:
Download the HTML source code as a String in VB.net
Search the string and extract the specific parts.
Store it in a StringBuilder and upload this to my SQL server
Is this the most efficient way, or do you know of any libraries made specifically for extracting elements from a website in VB.net? Thanks!
You could use the Java libraries HtmlUnit or Jsoup to scrape the page.
Here's an example using HtmlUnit:
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import org.apache.commons.logging.LogFactory;
import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlTable;
import com.gargoylesoftware.htmlunit.html.HtmlTableRow;

// Silence HtmlUnit's very verbose logging
LogFactory.getFactory().setAttribute("org.apache.commons.logging.Log", "org.apache.commons.logging.impl.NoOpLog");
java.util.logging.Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("org.apache.commons.httpclient").setLevel(Level.OFF);

WebClient client = new WebClient(BrowserVersion.CHROME);
client.getOptions().setJavaScriptEnabled(true);
client.getOptions().setThrowExceptionOnScriptError(false);
client.getOptions().setThrowExceptionOnFailingStatusCode(false);

HtmlPage page = client.getPage("http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87640");
HtmlTable inspectionsTable = (HtmlTable) page.getElementById("tblInspections");

// Each row's onclick looks like:
//   location.href="/Sider/synsrapport.aspx?Inspection=18014439&Vin=VF7X1REVF72378327"
// so splitting on '=' puts the inspection ID in part 2 and the VIN in part 3.
Map<String, String> inspections = new HashMap<String, String>();
for (HtmlTableRow row : inspectionsTable.getRows()) {
    String[] splitRow = row.getAttribute("onclick").split("=");
    if (splitRow.length >= 4) {
        String id = splitRow[2].split("&")[0];
        String vin = splitRow[3].replace("\"", "");
        inspections.put(id, vin);
        System.out.println(id + " " + vin);
    }
}
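If you would rather stay inside .NET, roughly the same extraction can be sketched in C# (straightforward to translate into VB.net) with the HtmlAgilityPack library; note this is only an assumption that it will work here, since HtmlAgilityPack does not execute JavaScript and the tblInspections table must be present in the raw HTML:

using System;
using HtmlAgilityPack;

class InspectionScraper
{
    static void Main()
    {
        // Load the results page directly (no JavaScript engine involved)
        var web = new HtmlWeb();
        var doc = web.Load("http://selvbetjening.trafikstyrelsen.dk/Sider/resultater.aspx?Reg=as87640");

        // Cars with no inspections yet may not have this table at all
        var table = doc.GetElementbyId("tblInspections");
        if (table == null)
        {
            Console.WriteLine("No inspections found.");
            return;
        }

        var rows = table.SelectNodes(".//tr");
        if (rows == null)
            return;

        foreach (var row in rows)
        {
            // onclick looks like:
            //   location.href="/Sider/synsrapport.aspx?Inspection=18014439&Vin=VF7X1REVF72378327"
            string[] parts = row.GetAttributeValue("onclick", "").Split('=');
            if (parts.Length >= 4)
            {
                string id = parts[2].Split('&')[0];
                string vin = parts[3].Trim('"');
                Console.WriteLine(id + " " + vin);
            }
        }
    }
}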

Generate proper report when data is not in database [Telerik]

I'm generating a Telerik report for
"Number of students by level of education, field and sex".
Here is the SQL query that I'm using to create this report:
SELECT
[tbl_hec_ISCED].[ISCED_ID],
[tbl_hec_ISCED].[ISCED_Level],
[tbl_hec_Programme].[ISCED_ID] AS 'tbl_hec_ProgrammeISCED_ID',
[tbl_hec_Programme].[Programme_ID],
[tbl_hec_Programme].[Specialisation_ID_Number],
[tbl_hec_specialisation].[Rank_ID_Number],
[tbl_hec_specialisation].[Rank_Title],
[tbl_HEI_student].[Programme_ID] AS 'tbl_HEI_studentProgramme_ID',
[tbl_HEI_student].[Gender]
FROM ((([tbl_HEI_student]
FULL OUTER JOIN [tbl_hec_Programme]
ON [tbl_HEI_student].[Programme_ID] = [tbl_hec_Programme].[Programme_ID])
FULL OUTER JOIN [tbl_hec_specialisation]
ON [tbl_hec_Programme].[Specialisation_ID_Number] = [tbl_hec_specialisation].[Rank_ID_Number])
FULL OUTER JOIN [tbl_hec_ISCED]
ON [tbl_hec_Programme].[ISCED_ID] = [tbl_hec_ISCED].[ISCED_ID])
WHERE ([tbl_HEI_student].[Gender] = 'Male' OR [tbl_HEI_student].[Gender] = 'Female')
  AND ([tbl_hec_ISCED].[ISCED_Level] = '5' OR [tbl_hec_ISCED].[ISCED_Level] = '6' OR [tbl_hec_ISCED].[ISCED_Level] = '7' OR [tbl_hec_ISCED].[ISCED_Level] = '8')
I'm getting a null report since some values are not in the database. I attached a picture of it:
HERE is that view
I want to generate the report even when there is no data in the database, like below, which means zero values for the null rows:
HERE is the expected report output
How can I overcome this challenge?
My suggestion is to use document merging functionality with a template that already has the report layout set up, as well as value placeholders (fields) that will be replaced by calculated values.
I'm using the Aspose.Words library in my projects, but I'm sure there are others out there, maybe even Telerik Reporting. This is very simple functionality, so any reporting tool that can do complex stuff should be able to do this as well.
Here's some example code for Aspose. Other libraries will have a different implementation.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Aspose.Words;

void GenerateDocument(string templateFilePath, Dictionary<string, object> fieldNamesAndValues)
{
    // Obtain the template file; bail out if it cannot be found
    if (!File.Exists(templateFilePath))
        return;

    // This is our document object, built from the template
    Document output = new Document(templateFilePath);

    // Merge the provided values into the matching fields of the template
    output.MailMerge.Execute(fieldNamesAndValues.Keys.ToArray(), fieldNamesAndValues.Values.ToArray());

    // Save the document into a stream as PDF
    MemoryStream stream = new MemoryStream();
    output.Save(stream, SaveFormat.Pdf);

    // You can then do whatever you want with the stream:
    // save it or push it to the browser for download
}
Using your expected result as an example, let's assume these are the names of your placeholders (fields) for the first row:
MALE_EDUCATION_ISCED5, MALE_EDUCATION_ISCED6, MALE_EDUCATION_ISCED7
You can then generate your report like so:
Dictionary<string, object> fieldsAndValues = new Dictionary<string, object>();
fieldsAndValues.Add("MALE_EDUCATION_ISCED5", calculatedValue1);
fieldsAndValues.Add("MALE_EDUCATION_ISCED6", calculatedValue2);
fieldsAndValues.Add("MALE_EDUCATION_ISCED7", calculatedValue3);
// and so on for other fields
GenerateDocument("~/Templates/Report.docx", fieldsAndValues);
More info on how to add fields in Microsoft Word:
https://support.office.com/en-us/article/7e9ea3b4-83ec-4203-9e66-4efc027f2cf3
More info on Aspose MailMerge:
http://www.aspose.com/docs/display/wordsnet/How+to++Execute+Simple+Mail+Merge

Create a Crystal Report with dynamic tables at runtime

Hello, I'm planning an application that is basically a reporting front-end for a database (proprietary Pervasive SQL) using an ODBC DSN-less connection string.
I am able to create the dataset in Visual Studio and link up the report(s) to the app. However, in real-world usage the location of the database will be different per user. That is not the main problem; I overcame that with the connection string, which allows you to set the location of the database path.
Here's the kicker...
The names of the tables that I want to read from are prefixed with a unique filename (actually the same name I mentioned above). I need Crystal to re-map the table names at runtime (just the table name prefixes, really). The fields and names of the fields will not change.
Any ideas on where I should look for writing this block? I am using VS2010 & C#, if that helps. I think there should be some sort of class files that come with Crystal that can do some runtime reflection to get/set the table names?
Any thoughts would be welcome and appreciated.
Rob
Edit: I found a documentation link that has API docs and other support. I will be studying them. They are all .chm files (Windows help files), so there are no online docs to search.
Add a Crystal ReportViewer to a form (it is named cRep in the code below)
and add a public function ShowReport():
public void ShowReport(ReportDocument objReport)
{
    Cursor.Current = Cursors.WaitCursor;
    objReport.SetDatabaseLogon("", "dbpassword");
    cRep.ReportSource = objReport;
    this.Show();
    Cursor.Current = Cursors.Default;
}
And call it from anywhere in your application:
ds = GetDataInDataSet();   // fill the DataSet with your data
rptPSummary objRpt = new rptPSummary();
objRpt.SummaryInfo.ReportComments = "Have a nice day";
objRpt.SummaryInfo.ReportTitle = "Purchase Summary Report from " + sDate.ToString("dd/MM/yyyy") + " to " + eDate.ToString("dd/MM/yyyy");
objRpt.SetDataSource(ds);
frmReportView frmRpt = new frmReportView();
frmRpt.Text = objRpt.SummaryInfo.ReportTitle;
frmRpt.MdiParent = this;
frmRpt.ShowReport(objRpt);
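As for remapping the table name prefixes at runtime, the ReportDocument exposes its tables through Database.Tables, and each table's Location can be rewritten before the report is shown. A rough sketch, where the prefix value and the ConnectionInfo contents are assumptions you would fill in from your own Pervasive/ODBC setup:

using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

// Re-point every table in the report at a prefixed physical table name.
// "prefix" and "connectionInfo" are placeholders for your own values.
void RemapTables(ReportDocument report, string prefix, ConnectionInfo connectionInfo)
{
    foreach (Table table in report.Database.Tables)
    {
        // Apply the (possibly user-specific) connection first...
        TableLogOnInfo logOnInfo = table.LogOnInfo;
        logOnInfo.ConnectionInfo = connectionInfo;
        table.ApplyLogOnInfo(logOnInfo);

        // ...then rewrite the location, e.g. "Invoices" becomes "ACME_Invoices"
        table.Location = prefix + table.Name;
    }
}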

How best to find hard-coded English language strings in SQL Server stored procedures?

We're working on making our application 100% localizable, and we're mostly there. However, we occasionally find an English string still hard-coded in stored procedures. (We're on SQL Server 2005 by the way.) We have thousands of stored procedures, so going through them by hand is impractical. I'm trying to think of the most accurate means of automating a search.
Now, I know there's no means to search for "English" strings - but searching for strings bounded by single quotes and perhaps 20+ characters long should flush MOST of them out. Good enough for our purposes now. But I'm anticipating a lot of false-positives in the comments of the stored procedures, too.
So how would you approach this? Would SMO let me tease apart the SQL in a stored procedure from the comments in it? Do I have to use OBJECT_DEFINITION() and start hacking out some terrifying regular expressions?
Much appreciated in advance, folks.
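The quick-and-dirty route hinted at in the question (pull each procedure's text with OBJECT_DEFINITION() and scan it with a regular expression) might look something like the sketch below; the 20-character threshold and connection string are assumptions, and comments will still show up as false positives:

using System;
using System.Data.SqlClient;
using System.Text.RegularExpressions;

class StringLiteralScan
{
    static void Main()
    {
        // Placeholder connection string: point it at your own server and database
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand("SELECT name, OBJECT_DEFINITION(object_id) FROM sys.procedures", conn);

            // Flag any single-quoted literal of 20+ characters
            var literal = new Regex(@"'[^']{20,}'");

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string definition = reader.IsDBNull(1) ? "" : reader.GetString(1);
                    foreach (Match m in literal.Matches(definition))
                        Console.WriteLine(reader.GetString(0) + ": " + m.Value);
                }
            }
        }
    }
}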
Another thought: Microsoft provides, with Visual Studio, an assembly that can parse SQL. I've used it, and it's fairly simple to use. You might be able to use it to parse the text of your stored procedures; it can return a list of the various tokens in your statements, including the type of the token. So, it should be able to help you differentiate between what is a string of text you might be interested in vs. what is part of a comment and can be ignored. There are more details here: http://blogs.msdn.com/b/gertd/archive/2008/08/21/getting-to-the-crown-jewels.aspx.
Basically, from .NET, you'd open a connection to your database and query syscomments for your stored procedures' text. You'd loop through each procedure and parse it using that parser. Then you'd use the Sql100ScriptGenerator to get the tokens out of the parsed text, loop through the tokens, and look for tokens whose types are either ASCII or Unicode string literals. For each such string, check whether its length is 20+, and if it is, flag the string and the proc as needing further review.
I played around with it a bit, and here is a very raw example to illustrate the basic principle:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Data.SqlClient;
using Microsoft.Data.Schema;
using Microsoft.Data.Schema.ScriptDom;
using Microsoft.Data.Schema.ScriptDom.Sql;

namespace FindHardCodedStrings
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SqlConnection conn = new SqlConnection())
            {
                SqlConnectionStringBuilder bldr = new SqlConnectionStringBuilder();
                bldr.DataSource = "localhost\\sqlexpress";
                bldr.InitialCatalog = "msdb";
                bldr.IntegratedSecurity = true;
                conn.ConnectionString = bldr.ConnectionString;
                SqlCommand cmd = conn.CreateCommand();
                cmd.CommandType = System.Data.CommandType.Text;
                cmd.CommandText = "select [text] from syscomments";
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                DataSet ds = new DataSet();
                da.Fill(ds);
                TSql100Parser parser = new TSql100Parser(false);
                Sql100ScriptGenerator gen = new Sql100ScriptGenerator();
                gen.Options.SqlVersion = SqlVersion.Sql100;
                foreach (DataRow proc in ds.Tables[0].Rows)
                {
                    string txt = proc[0].ToString();
                    using (System.IO.TextReader sr = new System.IO.StringReader(txt))
                    {
                        IList<ParseError> errs;
                        IScriptFragment frag = parser.Parse(sr, out errs);
                        if (null == frag)
                            continue;
                        IList<TSqlParserToken> tokens = gen.GenerateTokens((TSqlFragment)frag);
                        foreach (TSqlParserToken token in tokens)
                        {
                            if (token.TokenType == TSqlTokenType.UnicodeStringLiteral || token.TokenType == TSqlTokenType.AsciiStringLiteral)
                            {
                                if (token.Text.Length >= 20)
                                    Console.WriteLine("String found: " + token.Text);
                            }
                        }
                    }
                }
            }
        }
    }
}
We resolved a problem stemming from this by creating SQL Server instances with different language/region settings, running our SPs, and noting which ones broke.
This may not be the most elegant solution, but we were able to do it quickly because of the limited number of different locale settings we support.

Excel "Refresh All" with OpenXML

I have an Excel 2007 file (OpenXML format) with a connection to an XML file. This connection generates an Excel table and pivot charts.
I am trying to find a way, with the OpenXML SDK v2, to do the same as the "Refresh All" button in Excel, so that I can automatically update my file as soon as a new XML file is provided.
Thank you.
Well, there is quite a good workaround for this.
Using OpenXML you can turn on the "refresh data when opening the file" option on the pivot table (right-click on the pivot table -> PivotTable Options -> Data tab).
This results in the pivot table being refreshed automatically when the user first opens the spreadsheet.
The code:
using (var document = SpreadsheetDocument.Open(newFilePath, true))
{
    var uriPartDictionary = BuildUriPartDictionary(document);
    PivotTableCacheDefinitionPart pivotTableCacheDefinitionPart1 = (PivotTableCacheDefinitionPart)uriPartDictionary["/xl/pivotCache/pivotCacheDefinition1.xml"];
    PivotCacheDefinition pivotCacheDefinition1 = pivotTableCacheDefinitionPart1.PivotCacheDefinition;
    pivotCacheDefinition1.RefreshOnLoad = true;
}
You need to determine the "path" to your pivotCacheDefinition; use the OpenXML SDK 2.0 Productivity Tool to look it up.
BuildUriPartDictionary is a standard method generated by the OpenXML SDK 2.0 Productivity Tool:
protected Dictionary<String, OpenXmlPart> BuildUriPartDictionary(SpreadsheetDocument document)
{
    var uriPartDictionary = new Dictionary<String, OpenXmlPart>();
    var queue = new Queue<OpenXmlPartContainer>();
    queue.Enqueue(document);
    while (queue.Count > 0)
    {
        foreach (var part in queue.Dequeue().Parts.Where(part => !uriPartDictionary.Keys.Contains(part.OpenXmlPart.Uri.ToString())))
        {
            uriPartDictionary.Add(part.OpenXmlPart.Uri.ToString(), part.OpenXmlPart);
            queue.Enqueue(part.OpenXmlPart);
        }
    }
    return uriPartDictionary;
}
Another solution is to convert your spreadsheet to a macro-enabled workbook and embed a VBA script that refreshes all pivot tables.
This can happen on a button click or, again, when the user opens the spreadsheet.
Here you can find VBA code to refresh pivot tables:
http://www.ozgrid.com/VBA/pivot-table-refresh.htm
You can't do this with Open XML. Open XML allows you to work with the data stored in the file and change the data and formulas and definitions and such. It doesn't actually do any calculations.
Excel automation technically would work, but it's absolutely not recommended for a server environment and is best avoided on the desktop if at all possible.
I think the only way you can do this is by following this type of method (a rough sketch follows the steps):
Save the Open XML workbook back to an .xlsx file.
Load the workbook using the Excel object model.
Call either
ThisWorkbook.PivotCaches(yourIndex).Refresh();
or
ThisWorkbook.RefreshAll();
(I'm pretty sure RefreshAll would work as well.)
Use the object model to save the workbook and close it.
Reopen it for use with XML namespaces.
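A minimal sketch of that round trip using the Excel interop object model; the file path is a placeholder, Excel must be installed on the machine, and (as noted above) this is not something to run on a server:

using Excel = Microsoft.Office.Interop.Excel;

// Open the saved .xlsx in Excel, refresh every connection and pivot cache,
// then save and close. The path passed in is a placeholder.
void RefreshWorkbook(string path)
{
    var excel = new Excel.Application();
    try
    {
        Excel.Workbook workbook = excel.Workbooks.Open(path);
        workbook.RefreshAll();   // same effect as the "Refresh All" button
        workbook.Save();
        workbook.Close(false);   // close without prompting to save again
    }
    finally
    {
        excel.Quit();            // always shut the Excel process down
    }
}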
The solution provided by Bartosz Strutyński will only work if the workbook contains pivot tables and they all share the same cache. If the workbook does not contain pivot tables, the code will throw a NullReferenceException. If the workbook contains pivot tables that use different caches (which is the case when their data sources are different), only one group of pivot tables that use the same cache will be refreshed. Below is code based on Bartosz Strutyński's code that is free of the aforementioned limitations and does not rely on knowing the "path" of the PivotCacheDefinition object. The code also inlines BuildUriPartDictionary, which avoids enumerating uriPartDictionary when it is not needed elsewhere, and uses explicit types to make it easier to look up documentation for the classes involved.
// "document" is the SpreadsheetDocument opened for writing, as in the first snippet
Dictionary<String, OpenXmlPart> uriPartDictionary = new Dictionary<String, OpenXmlPart>();
Queue<OpenXmlPartContainer> queue = new Queue<OpenXmlPartContainer>();
queue.Enqueue(document);
while (queue.Count > 0)
{
    foreach (IdPartPair part in queue.Dequeue().Parts.Where(part => !uriPartDictionary.Keys.Contains(part.OpenXmlPart.Uri.ToString())))
    {
        uriPartDictionary.Add(part.OpenXmlPart.Uri.ToString(), part.OpenXmlPart);
        queue.Enqueue(part.OpenXmlPart);

        // Turn on "refresh on load" for every pivot cache we encounter
        PivotTableCacheDefinitionPart pivotTableCacheDefinitionPart;
        if ((pivotTableCacheDefinitionPart = part.OpenXmlPart as PivotTableCacheDefinitionPart) != null)
        {
            pivotTableCacheDefinitionPart.PivotCacheDefinition.RefreshOnLoad = true;
        }
    }
}