EPPlus cannot open RDLC rendered to EXCELOPENXML - rdlc

Does anyone have any idea, or even a solution, for opening an xlsx file with EPPlus when the file was created by exporting an RDLC report to Excel?
At the moment, when I try to do this I get a null reference exception when I try to access the worksheets.
// filename is the xlsx file created by exporting the rdlc report to Excel
FileInfo newFile = new FileInfo(filename);
ExcelPackage pck = new ExcelPackage(newFile);
// I get error here
pck.Workbook.Worksheets.Add("New Sheet");

If your intention is to add onto the file, you are using the wrong constructor. The one you are using now is for creating a new package off of a template. The one you want is this (without the extra Boolean), which will open the file directly:
ExcelPackage pck = new ExcelPackage(newFile);
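For reference, a minimal sketch of the difference the answer describes (the path and the boolean overload usage here are illustrative assumptions, not taken from the question):
// Placeholder path, for illustration only
FileInfo existingFile = new FileInfo(@"c:\temp\export.xlsx");
// Overload with the extra Boolean: per the answer above, this creates a new
// package off of the file as a template rather than opening it for modification
using (ExcelPackage pckFromTemplate = new ExcelPackage(existingFile, true))
{
    // ...
}
// Single-argument overload: opens the file directly, so its worksheets
// can be read and new ones added
using (ExcelPackage pckOpened = new ExcelPackage(existingFile))
{
    pckOpened.Workbook.Worksheets.Add("New Sheet");
    pckOpened.Save();
}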
Response to Comments
@A.Ellwood I just tried it with the file you posted and it works fine for me. Here is the code I used:
[TestMethod]
public void RDLC_Test()
{
var datatable = new DataTable("tblData");
datatable.Columns.AddRange(new[]
{
new DataColumn("Col1", typeof (int)), new DataColumn("Col2", typeof (int)),
new DataColumn("Col3", typeof (object))
});
for (var i = 0; i < 10; i++)
{
var row = datatable.NewRow();
row[0] = i;
row[1] = i*10;
row[2] = Path.GetRandomFileName();
datatable.Rows.Add(row);
}
var fi = new FileInfo(@"c:\temp\Report1.xlsx");
using (var pck = new ExcelPackage(fi))
{
var workbook = pck.Workbook;
var worksheet = workbook.Worksheets.Add("Sheet1");
worksheet.Cells.LoadFromDataTable(datatable, true);
pck.Save();
}
}

Related

Index Out of Bound exception while rendering RDLC Reports in ASP.NET Core

My ASP.NET Core MVC project has several reports. To render the reports as PDF, I'm using the AspNetCore.Reporting library.
This library works fine for a single report, but due to some cache issue it throws an exception while generating another report. The solution I found on the internet was to run report generation as a new process, but I don't know how to implement that.
I found the suggestion to use Tmds.ExecFunction to run report generation as a separate process, but I don't know how to pass parameters to the function.
Here is my code:
string ReportName = "invoiceDine";
DataTable dt = new DataTable();
dt = GetInvoiceItems(invoiceFromDb.Id);
Dictionary<string, string> param = new Dictionary<string, string>();
param.Add("bParam", $"{invoiceFromDb.Id}");
param.Add("gParam", $"{salesOrderFromDb.Guests}");
param.Add("tParam", $"{invoiceFromDb.Table.Table}");
param.Add("dParam", $"{invoiceFromDb.Time}");
param.Add("totalP", $"{invoiceFromDb.SubTotal}");
param.Add("t1", $"{tax1}");
param.Add("t2", $"{tax2}");
param.Add("tA1", $"{tax1Amount}");
param.Add("tA2", $"{tax2Amount}");
param.Add("AT1", $"{totalAmountWithTax1}");
param.Add("AT2", $"{totalAmountWithTax2}");
param.Add("terminalParam", $"{terminalFromDb.Name}");
param.Add("addressParam", $"{t.Address}");
param.Add("serviceParam", "Service Charges of applicable on table of " + $"{personForServiceCharges}" + " & Above");
var result = reportService.GenerateReport(ReportName, param, "dsInvoiceDine", dt);
return File(result,"application/Pdf");
This is my version of the function:
public byte[] GenerateReport(string ReportName, Dictionary<string, string> Parameters, string DataSetName, DataTable DataSource)
{
string guID = Guid.NewGuid().ToString().Replace("-", "");
string fileDirPath = Assembly.GetExecutingAssembly().Location.Replace("POS_Website.dll", string.Empty);
string ReportfullPath = Path.Join(fileDirPath, "\\Reports");
string JsonfullPath = Path.Join(fileDirPath,"\\JsonFiles");
string rdlcFilePath = string.Format("{0}\\{1}.rdlc", ReportfullPath, ReportName);
string generatedFilePath = string.Format("{0}\\{1}.pdf", JsonfullPath, guID);
string jsonDataFilePath = string.Format("{0}\\{1}.json", JsonfullPath, guID);
File.WriteAllText(jsonDataFilePath, JsonConvert.SerializeObject(DataSource));
FunctionExecutor.Run((string[] args) =>
{
// 0 : Data file path - jsonDataFilePath
// 1 : Filename - generatedFilePath
// 2 : RDLCPath - rdlcFilePath
ReportResult result;
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
Encoding.GetEncoding("windows-1252");
LocalReport report = new LocalReport(args[2]);
DataTable dt = JsonConvert.DeserializeObject<DataTable>(File.ReadAllText(args[0]));
report.AddDataSource(args[3], dt);
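// NOTE: 'Parameters' below is captured from the enclosing method instead of being passed
// through args; ExecFunction cannot marshal captured state, which is the likely source of
// the "Field marshaling is not supported by ExecFunction" error reported further down.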
result = report.Execute(RenderType.Pdf, 1,Parameters);
using (var fs = new FileStream(args[1], FileMode.Create, FileAccess.Write))
{
fs.Write(result.MainStream);
}
}, new string[] {jsonDataFilePath, generatedFilePath, rdlcFilePath, DataSetName });
var memory = new MemoryStream();
using (var stream = new FileStream(Path.Combine("", generatedFilePath), FileMode.Open))
{
stream.CopyTo(memory);
}
File.Delete(generatedFilePath);
File.Delete(jsonDataFilePath);
memory.Position = 0;
return memory.ToArray();
}
But it throws the exception "Field marshaling is not supported by ExecFunction" on this line:
var result = reportService.GenerateReport(ReportName, param, "dsInvoiceDine", dt);
There is no need to run report generation as a separate process. Just don't pass the extension as a constant 1 in:
var result = localReport.Execute(RenderType.Pdf, 1, param);
The Solution is:
int ext = (int)(DateTime.Now.Ticks >> 10);
var result = localReport.Execute(RenderType.Pdf, ext, param);
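Put together, a minimal sketch of the simplified GenerateReport without the child process (it reuses the AspNetCore.Reporting calls and the POS_Website.dll folder layout from the question, which may need adjusting for your project):
public byte[] GenerateReport(string reportName, Dictionary<string, string> parameters, string dataSetName, DataTable dataSource)
{
    // Same path handling as the original method (assumes the .rdlc files live under a Reports folder next to POS_Website.dll)
    string fileDirPath = Assembly.GetExecutingAssembly().Location.Replace("POS_Website.dll", string.Empty);
    string rdlcFilePath = Path.Combine(fileDirPath, "Reports", $"{reportName}.rdlc");

    // Required for the PDF renderer, as in the original code
    Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

    LocalReport report = new LocalReport(rdlcFilePath);
    report.AddDataSource(dataSetName, dataSource);

    // A changing value instead of the constant 1 sidesteps the cache issue
    int ext = (int)(DateTime.Now.Ticks >> 10);
    ReportResult result = report.Execute(RenderType.Pdf, ext, parameters);

    // MainStream is the rendered PDF as a byte array
    return result.MainStream;
}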

Fill XFA without breaking usage rights

I have an XFA form that I can successfully fill in by extracting the XML, modifying it, and writing it back. This works great if you have the full Adobe Acrobat, but it fails with Adobe Reader. I have seen various questions on the same thing with answers, but they were some time ago, so perhaps updating an XFA so that it remains readable by Adobe Reader is no longer doable?
I use the code below, and I've used the StampingProperties append mode as in the iText example, but it still fails. I'm using iText 7.1.15.
//open file and write to temp one
PdfDocument pdf = new(new PdfReader(FileToProcess), new PdfWriter(NewPDF), new StampingProperties().UseAppendMode());
PdfAcroForm form = PdfAcroForm.GetAcroForm(pdf, true);
XfaForm xfa = form.GetXfaForm();
XElement node = xfa.GetDatasetsNode();
IEnumerable<XNode> list = node.Nodes();
foreach (XNode item in list)
{
if (item is XElement element && "data".Equals(element.Name.LocalName))
{
node = element;
break;
}
}
XmlWriterSettings settings = new() { Indent = true };
using XmlWriter writer = XmlWriter.Create(XMLOutput, settings);
{
node.WriteTo(writer);
writer.Flush();
writer.Close();
}
//We now need to strip an extra xfa line when updating
if(update)
{
string TempXML= CSTrackerHelper.MakePath($"{AppContext.BaseDirectory}Temp", $"{Guid.NewGuid()}.XML");
StreamReader fsin = new(XMLOutput);
StreamWriter fsout = new(TempXML);
string linedata = string.Empty;
int cnt = 0;
while (!fsin.EndOfStream)
{
if (cnt != 3 && linedata != string.Empty)
{
fsout.WriteLine(linedata);
}
linedata = fsin.ReadLine();
cnt++;
}
fsout.Close();
fsin.Close();
XMLOutput = TempXML;
}
xlogger.Info("Populating pdf fields");
//Now loop through our field data and update the XML
XmlDocument xmldoc = new();
xmldoc.Load(XMLOutput);
XmlNamespaceManager xmlnsManager = new(xmldoc.NameTable);
xmlnsManager.AddNamespace("xfa", @"http://www.xfa.org/schema/xfa-data/1.0/");
string[] FieldValues;
string[] MultiNodes;
foreach (KeyValuePair<string, DocumentFieldData> v in DocumentData.FieldData)
{
if (!string.IsNullOrEmpty(v.Value.Field))
{
FieldValues = v.Value.Field.Contains(";") ? v.Value.Field.Split(';') : (new string[] { v.Value.Field });
foreach (string FValue in FieldValues)
{
XmlNodeList aNodes;
if (FValue.Contains("{"))
{
aNodes = xmldoc.SelectNodes(FValue.Substring(0, FValue.LastIndexOf("{")), xmlnsManager);
if (aNodes.Count > 1)
{
//We have a multinode
MultiNodes = FValue.Split('{');
int NodeIndex = int.Parse(MultiNodes[1].Replace("}", ""));
aNodes[NodeIndex].InnerText = v.Value.Data;
}
}
else
{
aNodes = xmldoc.SelectNodes(FValue, xmlnsManager);
if (aNodes.Count >= 1)
{
aNodes[0].InnerText = v.Value.Data;
}
}
}
}
}
xmldoc.Save(XMLOutput);
//Now we've updated the XML apply it to the pdf
xfa.FillXfaForm(new FileStream(XMLOutput, FileMode.Open, FileAccess.Read));
xfa.Write(pdf);
pdf.Close();
FYI, I've also tried setting a field directly, with the same results.
PdfReader preader = new PdfReader(source);
PdfDocument pdfDoc=new PdfDocument(preader, new PdfWriter(dest), new StampingProperties().UseAppendMode());
PdfAcroForm pdfForm = PdfAcroForm.GetAcroForm(pdfDoc, true);
XfaForm xform = pdfForm.GetXfaForm();
xform.SetXfaFieldValue("VRM[0].CoverPage[0].Wrap2[0].Table[0].CSID[0]", "Test");
xform.Write(pdfForm);
pdfDoc.Close();
If anyone has any ideas it would be appreciated.
Cheers
I ran into a very similar issue. I was attempting to auto-fill an XFA that was password protected while not breaking the certificate or usage rights (it allowed filling). iText 7 seems to have made this not possible for legal/practical reasons; however, it is still very much possible with iText 5. I wrote the following working code using iTextSharp (the C# version of iText 5):
using iTextSharp.text;
using iTextSharp.text.pdf;
string pathToRead = "/Users/home/Desktop/c#pdfParser/encrypted_empty.pdf";
string pathToSave = "/Users/home/Desktop/c#pdfParser/xfa_encrypted_filled.pdf";
string data = "/Users/home/Desktop/c#pdfParser/sample_data.xml";
FillByItextSharp5(pathToRead, pathToSave, data);
static void FillByItextSharp5(string pathToRead, string pathToSave, string data)
{
using (FileStream pdf = new FileStream(pathToRead, FileMode.Open))
using (FileStream xml = new FileStream(data, FileMode.Open))
using (FileStream filledPdf = new FileStream(pathToSave, FileMode.Create))
{
PdfReader.unethicalreading = true;
PdfReader pdfReader = new PdfReader(pdf);
PdfStamper stamper = new PdfStamper(pdfReader, filledPdf, '\0', true);
stamper.AcroFields.Xfa.FillXfaForm(xml, true);
stamper.Close();
pdfReader.Close();
}
}
You have to use this line:
PdfStamper stamper = new PdfStamper(pdfReader, filledPdf, '\0', true);
The final true argument enables append mode, which is what keeps the existing usage rights intact.

NPOI export excel directly to frontend without saving it on server

Is it possible to write an Excel file (with NPOI) directly to the browser without saving it first on the server?
I tried the following in my controller, without success:
public async Task<IActionResult> ExportExcel(){
var fs = new MemoryStream();
IWorkbook workbook;
workbook = new XSSFWorkbook();
ISheet excelSheet = workbook.CreateSheet("Test");
IRow row = excelSheet.CreateRow(0);
row.CreateCell(0).SetCellValue("ID");
row.CreateCell(1).SetCellValue("Name");
row = excelSheet.CreateRow(1);
row.CreateCell(0).SetCellValue(1);
row.CreateCell(1).SetCellValue("User 1");
workbook.Write(fs);
byte[] bytes = new byte[fs.Length];
fs.Read(bytes, 0, (int)fs.Length);
return File(bytes, "application/vnd.ms-excel", sampleType.Abbreviation+".xlsx");
}
When executing above method I always get following error:
ObjectDisposedException: Cannot access a closed Stream.
...
System.IO.MemoryStream.get_Length()
byte[] bytes = new byte[fs.Length];
...
Or is there another good NuGet package for handling (reading and writing) Excel files without storing them on the server?
PS: I am using .NET Core 2.1 and the NuGet package: https://www.nuget.org/packages/NPOI/
Write directly to the Response.Body.
Because the Excel file is sent as an attachment, we also need to set the Content-Disposition response header.
To make life easy, we can create an extension method first:
public static class IWorkBookExtensions {
public static void WriteExcelToResponse(this IWorkbook book, HttpContext httpContext, string templateName)
{
var response = httpContext.Response;
response.ContentType = "application/vnd.ms-excel";
if (!string.IsNullOrEmpty(templateName))
{
var contentDisposition = new Microsoft.Net.Http.Headers.ContentDispositionHeaderValue("attachment");
contentDisposition.SetHttpFileName(templateName);
response.Headers[HeaderNames.ContentDisposition] = contentDisposition.ToString();
}
book.Write(response.Body);
}
}
and now we can export the Excel file directly:
public async Task ExportExcel(){
IWorkbook workbook;
workbook = new XSSFWorkbook();
ISheet excelSheet = workbook.CreateSheet("Test");
IRow row = excelSheet.CreateRow(0);
row.CreateCell(0).SetCellValue("ID");
row.CreateCell(1).SetCellValue("Name");
row = excelSheet.CreateRow(1);
row.CreateCell(0).SetCellValue(1);
row.CreateCell(1).SetCellValue("User 1");
workbook.WriteExcelToResponse(HttpContext,"test.xlsx");
}
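If you would rather keep returning a File result instead of writing to Response.Body, a minimal alternative sketch (assuming your NPOI version's Write still closes the stream it is given, as the ObjectDisposedException suggests) is to grab the bytes with MemoryStream.ToArray(), which works even after the stream has been closed; the action name here is arbitrary:
public IActionResult ExportExcelAsBytes()
{
    IWorkbook workbook = new XSSFWorkbook();
    ISheet excelSheet = workbook.CreateSheet("Test");
    IRow row = excelSheet.CreateRow(0);
    row.CreateCell(0).SetCellValue("ID");
    row.CreateCell(1).SetCellValue("Name");

    var ms = new MemoryStream();
    workbook.Write(ms);            // NPOI closes the stream here
    byte[] bytes = ms.ToArray();   // ToArray still works on a closed MemoryStream

    return File(bytes, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", "test.xlsx");
}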

Downloaded file from folder includes %2F char on .NET Core

This must be a dumb question.
I have a C# REST web API that has an endpoint to generate and download an Excel file. I use the NPOI library.
It works fine if the file is generated in the root of my site, but when I move the file to a folder, the download file name includes the folder and the %2F character. I suppose this is a URL encoding problem, but I don't know how to fix it. This is my code:
[AllowAnonymous]
[HttpGet("ExportExcel")]
public async Task<IActionResult> ExportExcel(int activos = 1, int idarticulo = -1, string filtro = "", string ordenar = "", int ordenarsentido = 1)
{
var memory = new MemoryStream();
var newFile = @"Export/Articulos.xlsx";
//IArticulosService articulosService = new ArticulosService(_validation.sConnectionString, _validation.sEmpresa);
var mycon = _validation.GetConnectionStringFromClientID("ClientId1");
IArticulosService articulosService = new ArticulosService(mycon, "Ciatema");
try
{
// Generate the Excel file with the listing
articulosService.ExportarExcel(newFile, activos, filtro, ordenar, ordenarsentido);
using (var stream = new FileStream(newFile, FileMode.Open))
{
await stream.CopyToAsync(memory);
}
memory.Position = 0;
return File(memory, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", newFile);
}
catch (QueryFormatException ex)
{
return BadRequest(ex.Message);
}
catch (Exception ex)
{
System.Diagnostics.Debug.WriteLine(ex);
return StatusCode(StatusCodes.Status500InternalServerError);
}
}
The download filename is Export%2FArticulos.xlsx when I need it to be only Articulos.xlsx.
Thanks!
You should pass only the file name, not the complete path, as the download name in your return File call; that third argument ends up in the Content-Disposition header, where the slash is percent-encoded as %2F:
return File(memory, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", Path.GetFileName(newFile));

iTextSharp - Using PDFAction.GotoLocalPage in Merged PDF

I have written some code that merges multiple PDFs into a single PDF that I then display from the MemoryStream. This works great. What I need to do is add a table of contents at the end of the file with links to the start of each of the individual PDFs. I planned on doing this using the GotoLocalPage action, which has an option for page numbers, but it doesn't seem to work. If I change the action in the code below to one of the preset ones like PdfAction.FIRSTPAGE, it works fine. Does this not work because I am using the PdfCopy object for the writer parameter of GotoLocalPage?
Document mergedDoc = new Document();
MemoryStream ms = new MemoryStream();
PdfCopy copy = new PdfCopy(mergedDoc, ms);
mergedDoc.Open();
MemoryStream tocMS = new MemoryStream();
Document tocDoc = null;
PdfWriter tocWriter = null;
for (int i = 0; i < filesToMerge.Length; i++)
{
string filename = filesToMerge[i];
PdfReader reader = new PdfReader(filename);
copy.AddDocument(reader);
// Initialise TOC document based off first file
if (i == 0)
{
tocDoc = new Document(reader.GetPageSizeWithRotation(1));
tocWriter = PdfWriter.GetInstance(tocDoc, tocMS);
tocDoc.Open();
}
// Create link for TOC, added random number of 3 for now
Chunk link = new Chunk(filename);
PdfAction action = PdfAction.GotoLocalPage(3, new PdfDestination(PdfDestination.FIT), copy);
link.SetAction(action);
tocDoc.Add(new Paragraph(link));
}
// Add TOC to end of merged PDF
tocDoc.Close();
PdfReader tocReader = new PdfReader(tocMS.ToArray());
copy.AddDocument(tocReader);
copy.Close();
displayPDF(ms.ToArray());
I guess an alternative would be to link to a named element (instead of a page number), but I can't see how to add an 'invisible' element to the start of each file before adding it to the merged document.
I would just go with two passes. In your first pass, do the merge as you are, but also record each filename and the page number it should link to. In your second pass, use a PdfStamper, which gives you access to a ColumnText in which you can use general abstractions like Paragraph. Below is a sample that shows this off:
Since I don't have your documents, the code below creates 10 documents with a random number of pages each, just for testing purposes. (You obviously don't need to do this part.) It also creates a simple dictionary with a fake file name as the key and the raw bytes of the PDF as the value. You have a real file collection to work with, but you should be able to adapt that part.
//Create a bunch of files, nothing special here
//files will be a dictionary of names and the raw PDF bytes
Dictionary<string, byte[]> Files = new Dictionary<string, byte[]>();
var r = new Random();
for (var i = 1; i <= 10; i++) {
using (var ms = new MemoryStream()) {
using (var doc = new Document()) {
using (var writer = PdfWriter.GetInstance(doc, ms)) {
doc.Open();
//Create a random number of pages
for (var j = 1; j <= r.Next(1, 5); j++) {
doc.NewPage();
doc.Add(new Paragraph(String.Format("Hello from document {0} page {1}", i, j)));
}
doc.Close();
}
}
Files.Add("File " + i.ToString(), ms.ToArray());
}
}
This next block merges the PDFs. This is mostly the same as your code, except that instead of writing a TOC here I'm just keeping track of what I want to write later. Where I'm using file.Value you'd use your full file path, and where I'm using file.Key you'd use your file's name instead.
//Dictionary of file names (for display purposes) and their page numbers
var pages = new Dictionary<string, int>();
//PDFs start at page 1
var lastPageNumber = 1;
//Will hold the final merged PDF bytes
byte[] mergedBytes;
//Most everything else below is standard
using (var ms = new MemoryStream()) {
using (var document = new Document()) {
using (var writer = new PdfCopy(document, ms)) {
document.Open();
foreach (var file in Files) {
//Add the current page at the previous page number
pages.Add(file.Key, lastPageNumber);
using (var reader = new PdfReader(file.Value)) {
writer.AddDocument(reader);
//Increment our current page index
lastPageNumber += reader.NumberOfPages;
}
}
}
}
mergedBytes = ms.ToArray();
}
This last block actually writes the TOC. By using a PdfStamper we can create a ColumnText, which allows us to use Paragraphs:
//Will hold the final PDF
byte[] finalBytes;
using (var ms = new MemoryStream()) {
using (var reader = new PdfReader(mergedBytes)) {
using (var stamper = new PdfStamper(reader, ms)) {
//The page number to insert our TOC into
var tocPageNum = reader.NumberOfPages + 1;
//Arbitrarily pick one page to use as the size of the PDF
//Additional logic could be added or this could just be set to something like PageSize.LETTER
var tocPageSize = reader.GetPageSize(1);
//Arbitrary margin for the page
var tocMargin = 20;
//Create our new page
stamper.InsertPage(tocPageNum, tocPageSize);
//Create a ColumnText object so that we can use abstractions like Paragraph
var ct = new ColumnText(stamper.GetOverContent(tocPageNum));
//Set the working area
ct.SetSimpleColumn(tocPageSize.GetLeft(tocMargin), tocPageSize.GetBottom(tocMargin), tocPageSize.GetRight(tocMargin), tocPageSize.GetTop(tocMargin));
//Loop through each page
foreach (var page in pages) {
var link = new Chunk(page.Key);
var action = PdfAction.GotoLocalPage(page.Value, new PdfDestination(PdfDestination.FIT), stamper.Writer);
link.SetAction(action);
ct.AddElement(new Paragraph(link));
}
ct.Go();
}
}
finalBytes = ms.ToArray();
}