How to extract embedded files and attachments from a PDF using PdfSharp?

Is it possible to extract embedded files and attachments from a PDF using PdfSharp? If yes, how can we achieve it?
Thanks in advance

It seems PDFSharp does not support attachments directly, but you may try to implement attachment extraction yourself: you need to look for the /FS (file specification) entries inside the PDF document, described in PDF Reference 1.7 in Section 3.10.3 and in note 94 in Appendix H.
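Document-level attachments are usually registered in the catalog's /Names -> /EmbeddedFiles name tree, where each entry is a file specification dictionary whose /EF entry points to the embedded file stream. Below is a rough, untested sketch using PdfSharp's low-level object model; the exact dictionary walk and the helper/variable names are my own assumptions, it only handles a flat name tree, and it ignores attachments added via /FileAttachment annotations:

using System;
using System.IO;
using PdfSharp.Pdf;
using PdfSharp.Pdf.Advanced;
using PdfSharp.Pdf.IO;

static void ExtractAttachments(string pdfPath, string outDir)
{
    PdfDocument doc = PdfReader.Open(pdfPath, PdfDocumentOpenMode.Import);

    // Catalog -> /Names -> /EmbeddedFiles -> /Names (assumes a flat name tree)
    PdfDictionary names = doc.Internals.Catalog.Elements.GetDictionary("/Names");
    PdfDictionary embeddedFiles = names?.Elements.GetDictionary("/EmbeddedFiles");
    PdfArray pairs = embeddedFiles?.Elements.GetArray("/Names");
    if (pairs == null)
        return; // no document-level attachments found

    // The /Names array alternates: name string, file specification dictionary, ...
    for (int i = 0; i + 1 < pairs.Elements.Count; i += 2)
    {
        PdfItem item = pairs.Elements[i + 1];
        PdfDictionary fileSpec = item is PdfReference r ? r.Value as PdfDictionary : item as PdfDictionary;

        PdfDictionary ef = fileSpec?.Elements.GetDictionary("/EF");
        PdfDictionary embeddedStream = ef?.Elements.GetDictionary("/F");
        if (embeddedStream?.Stream == null)
            continue;

        string fileName = fileSpec.Elements.GetString("/F");
        byte[] data = embeddedStream.Stream.UnfilteredValue; // decoded stream bytes
        File.WriteAllBytes(Path.Combine(outDir, Path.GetFileName(fileName)), data);
        Console.WriteLine("Extracted {0} ({1} bytes)", fileName, data.Length);
    }
}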

You may try Aspose.PDF; it works well for extracting embedded files and attachments from a PDF.
Try the following:
Document pdfDocument = new Document(@"C:\tmp\AddAttachment_out.pdf");
// Get embedded files collection
EmbeddedFileCollection embeddedFiles = pdfDocument.EmbeddedFiles;
Console.WriteLine("Total files : {0}", embeddedFiles.Count);
int count = 1;
// Loop through the collection to get all the attachments
foreach (FileSpecification fileSpecification in embeddedFiles)
{
    Console.WriteLine("Name: {0}", fileSpecification.Name);
    Console.WriteLine("Description: {0}", fileSpecification.Description);
    Console.WriteLine("Mime Type: {0}", fileSpecification.MIMEType);
    // Check if the parameter object contains the parameters
    /*if (fileSpecification.Params != null)
    {
        Console.WriteLine("CheckSum: {0}", fileSpecification.Params.CheckSum);
        Console.WriteLine("Creation Date: {0}", fileSpecification.Params.CreationDate);
        Console.WriteLine("Modification Date: {0}", fileSpecification.Params.ModDate);
        Console.WriteLine("Size: {0}", fileSpecification.Params.Size);
    }*/
    // Get the attachment and write it to a file or stream
    byte[] fileContent = new byte[fileSpecification.Contents.Length];
    fileSpecification.Contents.Read(fileContent, 0, fileContent.Length);
    FileStream fileStream = new FileStream("C:\\Tmp\\" + count + "_out" + ".pdf", FileMode.Create);
    fileStream.Write(fileContent, 0, fileContent.Length);
    fileStream.Close();
    count += 1;
}

Related

Detecting file size with MultipartFormDataStreamProvider before file is saved?

We are using the MultipartFormDataStreamProvider to save files uploaded by clients. I have a hard requirement that the file size must be greater than 1 KB. The easiest thing to do would of course be to save the file to disk and then look at it; unfortunately I can't do it like that. After I save the file to disk I don't have the ability to access it, so I need to look at the file before it is saved to disk. I've been looking at the properties of the stream provider to try to figure out what the size of the file is, but unfortunately I've been unsuccessful.
The test file I'm using is 1025 bytes.
MultipartFormDataStreamProvider.BufferSize is 4096
Headers.ContentDisposition.Size is null
ContentLength is null
Is there a way to determine file size before it's saved to the file system?
Thanks to Guanxi I was able to formulate a solution. I used his code in the link as the basis and just added a little more async/await goodness :). I wanted to add the solution just in case it helps anyone else:
private async Task SaveMultipartStreamToDisk(Guid guid, string fullPath)
{
    var user = HttpContext.Current.User.Identity.Name;
    var multipartMemoryStreamProvider = await Request.Content.ReadAsMultipartAsync();
    foreach (var content in multipartMemoryStreamProvider.Contents)
    {
        using (content)
        {
            if (content.Headers.ContentDisposition.FileName != null)
            {
                var existingFileName = content.Headers.ContentDisposition.FileName.Replace("\"", string.Empty);
                Log.Information("Original File name was {OriginalFileName}: {guid} {user}", existingFileName, guid, user);
                using (var st = await content.ReadAsStreamAsync())
                {
                    var ext = Path.GetExtension(existingFileName.Replace("\"", string.Empty));
                    List<string> validExtensions = new List<string>() { ".pdf", ".jpg", ".jpeg", ".png" };
                    // 1024 = 1KB
                    if (st.Length > 1024 && validExtensions.Contains(ext, StringComparer.OrdinalIgnoreCase))
                    {
                        var newFileName = guid + ext;
                        using (var fs = new FileStream(Path.Combine(fullPath, newFileName), FileMode.Create))
                        {
                            await st.CopyToAsync(fs);
                            Log.Information("Completed writing {file}: {guid} {user}", Path.Combine(fullPath, newFileName), guid, HttpContext.Current.User.Identity.Name);
                        }
                    }
                    else
                    {
                        if (st.Length < 1025)
                        {
                            Log.Warning("File of length {FileLength} bytes was attempted to be uploaded: {guid} {user}", st.Length, guid, user);
                        }
                        else
                        {
                            Log.Warning("A file of type {FileType} was attempted to be uploaded: {guid} {user}", ext, guid, user);
                        }
                        var responseMessage = new HttpResponseMessage(HttpStatusCode.BadRequest)
                        {
                            Content = st.Length < 1025
                                ? new StringContent($"file of length {st.Length} does not meet our minimum file size requirements")
                                : new StringContent($"a file extension of {ext} is not an acceptable type")
                        };
                        throw new HttpResponseException(responseMessage);
                    }
                }
            }
        }
    }
}
You can also read the request contents without using MultipartFormDataStreamProvider. In that case all of the request contents (including files) will be in memory. I have given an example of how to do that at this link.
In this case you can read the header for the file size, or read the stream and check the file size. Only if it satisfies your criteria do you write it to the desired location.
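For illustration, here is a rough sketch along those lines (not the exact code behind that link; targetFolder is a placeholder): it buffers the parts in memory with MultipartMemoryStreamProvider and rejects anything of 1 KB or less before anything is written to disk.

// Sketch only: ReadAsMultipartAsync() without a provider buffers everything in memory,
// so keep an eye on large uploads.
var provider = await Request.Content.ReadAsMultipartAsync();
foreach (var part in provider.Contents)
{
    if (part.Headers.ContentDisposition?.FileName == null)
        continue; // not a file part

    byte[] bytes = await part.ReadAsByteArrayAsync();
    if (bytes.Length <= 1024)
        throw new HttpResponseException(HttpStatusCode.BadRequest); // fails the "> 1 KB" requirement

    // Size is acceptable, so now it is safe to persist it.
    var fileName = part.Headers.ContentDisposition.FileName.Trim('"');
    File.WriteAllBytes(Path.Combine(targetFolder, Path.GetFileName(fileName)), bytes);
}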

iText/iTextSharp 5.5.0 has an error with PDF burst

The code below works fine on iTextSharp 5.2.1:
var file = new FileInfo(args[0]);
string name = file.Name.Substring(0, file.Name.LastIndexOf("."));
// we create a reader for a certain document
var reader = new PdfReader(args[0]);
// we retrieve the total number of pages
int n = reader.NumberOfPages;
Console.WriteLine("There are " + n + " pages in the original file.");
Document document;
string filename;
PdfCopy copy;
int pagenumber;
for (int i = 0; i < n; i++)
{
    pagenumber = i + 1;
    filename = pagenumber.ToString();
    filename = "_" + filename + ".pdf";
    // step 1: creation of a document-object
    document = new Document(reader.GetPageSizeWithRotation(pagenumber));
    // step 2: we create a writer that listens to the document
    copy = new PdfCopy(document, new FileStream(name + filename, FileMode.Create));
    // step 3: we open the document
    document.Open();
    // step 4: we add a page
    copy.AddPage(copy.GetImportedPage(reader, i));
    // step 5: we close the document
    document.Close();
}
But it throws the error below on iTextSharp 5.5.0:
The page 1 was requested but the document has only 0 pages.
It seems that the last line actually tampers with the reader instance. Could someone help me figure it out? For now I work around this by recreating a PdfReader instance for each page, but that is slow for large PDF files.
I finally nailed it. It's a change in a recent version of iText that caused this issue; specifically, the issue lies in the WriteAllPages method of the PdfReaderInstance.cs file. In it, the line
file.Close();
is the culprit. The reason is that the Document.Close method calls the PdfCopy.Close method, which in turn calls the PdfReaderInstance.WriteAllPages method, and then all hell breaks loose.
On the surface it seems like good practice to close the file, but really, it's none of your business, PdfReaderInstance.
Hope this info helps others.
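If you end up keeping the one-reader-per-page workaround on 5.5.0, here is a rough sketch of a cheaper variant (untested, reusing the question's name variable): read the file into a byte array once and build each throwaway PdfReader from memory instead of re-reading the file from disk.

byte[] pdfBytes = File.ReadAllBytes(args[0]);      // read the source PDF once
int n = new PdfReader(pdfBytes).NumberOfPages;     // throwaway reader just for the page count
for (int pagenumber = 1; pagenumber <= n; pagenumber++)
{
    // a fresh reader per page, so it does not matter that closing the document closes it
    PdfReader pageReader = new PdfReader(pdfBytes);
    Document document = new Document(pageReader.GetPageSizeWithRotation(pagenumber));
    PdfCopy copy = new PdfCopy(document, new FileStream(name + "_" + pagenumber + ".pdf", FileMode.Create));
    document.Open();
    copy.AddPage(copy.GetImportedPage(pageReader, pagenumber));
    document.Close();
    pageReader.Close();
}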
iTextSharp page numbers are one-based, not zero. These two lines are accounting for this correctly:
pagenumber = i + 1;
document = new Document(reader.GetPageSizeWithRotation(pagenumber));
However this line is still going back to the zero-based index of your loop:
copy.AddPage(copy.GetImportedPage(reader, i));
You could just change that to:
copy.AddPage(copy.GetImportedPage(reader, pagenumber));
Or you could just make your entire loop one-based and not have to think about adding one every once in a while:
for (int pagenumber = 1; pagenumber <= n; pagenumber++)
{
    filename = pagenumber.ToString();
    while (filename.Length < digits) filename = "0" + filename;
    filename = "_" + filename + ".pdf";
    // step 1: creation of a document-object
    document = new Document(reader.GetPageSizeWithRotation(pagenumber));
    // step 2: we create a writer that listens to the document
    copy = new PdfCopy(document, new FileStream(name + filename, FileMode.Create));
    // step 3: we open the document
    document.Open();
    copy.AddPage(copy.GetImportedPage(reader, pagenumber));
    // step 5: we close the document
    document.Close();
}

Split a "tagged" PDF document into multiple documents, keeping the tagging

In a project I have to split a PDF document into two documents: one containing all blank pages, and one containing all pages with content.
For this job, I use a PdfReader to read the source file and two PdfCopy objects (one for the blank-pages document, one for the pages-with-content document) to write the files to.
I use GetImportedPage to read a PdfImportedPage, which is then added to one of the PdfCopy writers.
Now, the problem is the following: the source file uses the "tagged PDF" format. To preserve this (which is absolutely required), I use the SetTagged() method on both PdfCopy writers, and use the extra third parameter in GetImportedPage(...) to keep the tagged format. However, when calling AddPage(...) on the PdfCopy writer, I get an invalid cast exception:
"Unable to cast object of type 'iTextSharp.text.pdf.PdfDictionary' to type 'iTextSharp.text.pdf.PRIndirectReference'."
Does anyone have any ideas on how to solve this? Any hints?
Also: the project currently references version 5.1.0.0 of the iText libraries. In 5.4.4.0 the third parameter to GetImportedPage does not seem to be there anymore.
Below, you can find a code extract:
iTextSharp.text.Document targetPdf = new iTextSharp.text.Document();
iTextSharp.text.Document blankPdf = new iTextSharp.text.Document();
iTextSharp.text.pdf.PdfReader sourcePdfReader = new iTextSharp.text.pdf.PdfReader(inputFile);
iTextSharp.text.pdf.PdfCopy targetPdfWriter = new iTextSharp.text.pdf.PdfSmartCopy(targetPdf, new FileStream(outputFile, FileMode.Create));
iTextSharp.text.pdf.PdfCopy blankPdfWriter = new iTextSharp.text.pdf.PdfSmartCopy(blankPdf, new FileStream(blanksFile, FileMode.Append));
targetPdfWriter.SetTagged();
blankPdfWriter.SetTagged();
try
{
    iTextSharp.text.pdf.PdfImportedPage page = null;
    int n = sourcePdfReader.NumberOfPages;
    targetPdf.Open();
    blankPdf.Open();
    blankPdf.Add(new iTextSharp.text.Phrase("This document contains the blank pages removed from " + inputFile));
    blankPdf.NewPage();
    for (int i = 1; i <= n; i++)
    {
        byte[] pageBytes = sourcePdfReader.GetPageContent(i);
        string pageText = "";
        iTextSharp.text.pdf.PRTokeniser token = new iTextSharp.text.pdf.PRTokeniser(new iTextSharp.text.pdf.RandomAccessFileOrArray(pageBytes));
        while (token.NextToken())
        {
            if (token.TokenType == iTextSharp.text.pdf.PRTokeniser.TokType.STRING)
            {
                pageText += token.StringValue;
            }
        }
        if (pageText.Length >= 15)
        {
            page = targetPdfWriter.GetImportedPage(sourcePdfReader, i, true);
            targetPdfWriter.AddPage(page);
        }
        else
        {
            page = blankPdfWriter.GetImportedPage(sourcePdfReader, i, true);
            blankPdfWriter.AddPage(page);
            blankPageCount++;
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine("Exception at LOC1: " + ex.Message);
}
The error occurs in the call to targetPdfWriter.AddPage(page); near the end of the code sample.
Thank you very much for your help.
Koen.

Split PDF into separate files based on text

I have a large single PDF document which consists of multiple records. Each record usually takes one page, however some use 2 pages. A record starts with a defined text, always the same.
My goal is to split this PDF into separate PDFs, and the split should always happen before the "header text" is found.
Note: I am looking for a tool or library using java or python. Must be free and available on Win 7.
Any ideas? AFAIK ImageMagick won't work for this. Can iText do this? I have never used it and it's pretty complex, so I would need some hints.
EDIT:
The marked answer led me to the solution. For completeness, here is my exact implementation:
public void splitByRegex(String filePath, String regex,
        String destinationDirectory, boolean removeBlankPages) throws IOException,
        DocumentException {
    logger.entry(filePath, regex, destinationDirectory);
    destinationDirectory = destinationDirectory == null ? "" : destinationDirectory;
    PdfReader reader = null;
    Document document = null;
    PdfCopy copy = null;
    Pattern pattern = Pattern.compile(regex);
    try {
        reader = new PdfReader(filePath);
        final String RESULT = destinationDirectory + "/record%d.pdf";
        // loop over all the pages in the original PDF
        int n = reader.getNumberOfPages();
        for (int i = 1; i <= n; i++) {
            final String text = PdfTextExtractor.getTextFromPage(reader, i);
            if (pattern.matcher(text).find()) {
                if (document != null && document.isOpen()) {
                    logger.debug("Match found. Closing previous Document..");
                    document.close();
                }
                String fileName = String.format(RESULT, i);
                logger.debug("Match found. Creating new Document " + fileName + "...");
                document = new Document();
                copy = new PdfCopy(document,
                        new FileOutputStream(fileName));
                document.open();
                logger.debug("Adding page to Document...");
                copy.addPage(copy.getImportedPage(reader, i));
            } else if (document != null && document.isOpen()) {
                logger.debug("Found Open Document. Adding additional page to Document...");
                if (!removeBlankPages || !isBlankPage(reader, i)) {
                    copy.addPage(copy.getImportedPage(reader, i));
                }
            }
        }
        logger.exit();
    } finally {
        if (document != null && document.isOpen()) {
            document.close();
        }
        if (reader != null) {
            reader.close();
        }
    }
}

private boolean isBlankPage(PdfReader reader, int pageNumber)
        throws IOException {
    // see http://itext-general.2136553.n4.nabble.com/Detecting-blank-pages-td2144877.html
    PdfDictionary pageDict = reader.getPageN(pageNumber);
    // We need to examine the resource dictionary for /Font or
    // /XObject keys. If either are present, they're almost
    // certainly actually used on the page -> not blank.
    PdfDictionary resDict = (PdfDictionary) pageDict.get(PdfName.RESOURCES);
    if (resDict != null) {
        return resDict.get(PdfName.FONT) == null
                && resDict.get(PdfName.XOBJECT) == null;
    } else {
        return true;
    }
}
You can create a tool for your requirements using iText.
Whenever you are looking for code samples concerning (current versions of) the iText library, you should consult iText in Action — 2nd Edition; the code samples from it are online and searchable by keyword from here.
In your case the relevant samples are Burst.java and ExtractPageContentSorted2.java.
Burst.java shows how to split one PDF into multiple smaller PDFs. The central code:
PdfReader reader = new PdfReader("allrecords.pdf");
final String RESULT = "record%d.pdf";
// We'll create as many new PDFs as there are pages
Document document;
PdfCopy copy;
// loop over all the pages in the original PDF
int n = reader.getNumberOfPages();
for (int i = 0; i < n; ) {
    // step 1
    document = new Document();
    // step 2
    copy = new PdfCopy(document,
            new FileOutputStream(String.format(RESULT, ++i)));
    // step 3
    document.open();
    // step 4
    copy.addPage(copy.getImportedPage(reader, i));
    // step 5
    document.close();
}
reader.close();
This sample splits a PDF into single-page PDFs. In your case you need to split by different criteria. But that only means that in the loop you sometimes have to add more than one imported page (and thus decouple the loop index from the page numbers to import).
To recognize on which pages a new dataset starts, be inspired by ExtractPageContentSorted2.java. This sample shows how to parse the text content of a page to a string. The central code:
PdfReader reader = new PdfReader("allrecords.pdf");
for (int i = 1; i <= reader.getNumberOfPages(); i++) {
    System.out.println("\nPage " + i);
    System.out.println(PdfTextExtractor.getTextFromPage(reader, i));
}
reader.close();
Simply search for the record start text: If the text from page contains it, a new record starts there.
Apache PDFBox has a PDFSplit utility that you can run from the command-line.
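If I remember the standalone app correctly, an invocation looks roughly like this (the jar name/version and the -split option are from memory, so double-check them against the PDFBox documentation for your version):
java -jar pdfbox-app-2.0.x.jar PDFSplit -split 1 allrecords.pdf
Note that PDFSplit splits by page count, not by matched text, so for the split-before-a-header-text requirement you would still combine it with one of the text-extraction approaches shown above.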
If you like Python, there's a nice library: PyPDF2. The library is pure Python 2, with a BSD-like license.
Sample code:
from PyPDF2 import PdfFileWriter, PdfFileReader
input1 = PdfFileReader(open("C:\\Users\\Jarek\\Documents\\x.pdf", "rb"))
# analyze pdf data
print input1.getDocumentInfo()
print input1.getNumPages()
text = input1.getPage(0).extractText()
print text.encode("windows-1250", errors='backslashreplace')
# create output document
output = PdfFileWriter()
output.addPage(input1.getPage(0))
fout = open("c:\\temp\\1\\y.pdf", "wb")
output.write(fout)
fout.close()
For non-coders, PDF Content Split is probably the easiest way without reinventing the wheel, and it has an easy-to-use interface: http://www.traction-software.co.uk/pdfcontentsplitsa/index.html
Hope that helps.

Split PDF size is larger when using iTextSharp

Dear Team,
In my application, I want to split a PDF using iTextSharp. If I upload a PDF containing 10 pages with a file size of 10 MB and split it, the combined file size of the resulting PDFs comes to more than 20 MB. Is it possible to reduce the file size of each PDF?
Please help me to solve the issue.
Thanks in advance
This may have to do with the resources in the file. If the original document uses an embedded font on each page, for example, then there will be only one instance of the font in the original file. When you split it, each file will be required to have that font as well. The total overhead will be n pages × sizeof(each font); for example, a 10-page file with a single 1 MB embedded font carries that font once, but the 10 split files each embed their own copy, roughly 10 MB of font data in total. Elements that will cause this kind of bloat include fonts, images, color profiles, document templates (aka forms), XMP, etc.
And while it doesn't help you in your immediate problem, if you use the PDF tools in Atalasoft dotImage, your task becomes a one-liner:
PdfDocument.Separate(userpassword, ownerpassword, origPath, destFolder, "Separated Page{0}.pdf", true);
which will take the PDF at origPath and create new files in the destination folder, each named with the pattern. The bool at the end indicates whether to overwrite an existing file.
Disclaimer: I work for Atalasoft and wrote the PDF library (also used to work at Adobe on Acrobat versions 1, 2, 3, and 4).
Hi guys, I modified the above code to split a PDF file into multiple PDF files.
iTextSharp.text.pdf.PdfReader reader = null;
int currentPage = 1;
int pageCount = 0;
//string filepath_New = filepath + "\\PDFDestination\\";
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
//byte[] arrayofPassword = encoding.GetBytes(ExistingFilePassword);
reader = new iTextSharp.text.pdf.PdfReader(filepath);
reader.RemoveUnusedObjects();
pageCount = reader.NumberOfPages;
string ext = System.IO.Path.GetExtension(filepath);
for (int i = 1; i <= pageCount; i++)
{
    iTextSharp.text.pdf.PdfReader reader1 = new iTextSharp.text.pdf.PdfReader(filepath);
    string outfile = filepath.Replace((System.IO.Path.GetFileName(filepath)), (System.IO.Path.GetFileName(filepath).Replace(".pdf", "") + "_" + i.ToString()) + ext);
    reader1.RemoveUnusedObjects();
    iTextSharp.text.Document doc = new iTextSharp.text.Document(reader.GetPageSizeWithRotation(currentPage));
    iTextSharp.text.pdf.PdfCopy pdfCpy = new iTextSharp.text.pdf.PdfCopy(doc, new System.IO.FileStream(outfile, System.IO.FileMode.Create));
    doc.Open();
    for (int j = 1; j <= 1; j++)
    {
        iTextSharp.text.pdf.PdfImportedPage page = pdfCpy.GetImportedPage(reader1, currentPage);
        pdfCpy.SetFullCompression();
        pdfCpy.AddPage(page);
        currentPage += 1;
    }
    doc.Close();
    pdfCpy.Close();
    reader1.Close();
}
reader.Close();
Have you tried setting the compression on the writer?
Document doc = new Document();
using (MemoryStream ms = new MemoryStream())
{
    PdfWriter writer = PdfWriter.GetInstance(doc, ms);
    writer.SetFullCompression();
}