I have a webservice application that returns a base64 encoded PDF file, created with Aspose. This webservice is now installed on a different (Windows) server for testing purposes. However, when I call the webservice on the new server, the base64 is different from the original base64 on the first server.
I would like to understand why the base64 on the two servers is different. I converted the base64 to PDF and checked the PDF file, and it looks the same (apart from the size: the PDF was originally 18 kB, but it is 14 kB on the new server). Later we will need to install this webservice on multiple servers, and we hope the base64 will be the same on all of them, so that the base64 can be used to check whether a response is correct.
As far as I know there shouldn't be any information about the server within the base64, so this shouldn't be the source of the difference. Besides this, the font that is used is available on both servers. I already checked the metadata and didn't see anything relevant there.
Could anyone help me and explain why these base64 strings are different and where the difference comes from?
Update:
I just uploaded the 2 PDF files, so it is easier to help me analyse the differences. These are the 2 PDF files:
Original server:
http://www.filedropper.com/pdforiginalserver
New server:
http://www.filedropper.com/pdfnewserver
I hope this makes it easier to help me with this problem.
The PDFs both embed a subset of the font Calibri, but on those two servers apparently different versions of that font were available to the PDF producer to create a subset from:
On the original server Calibri version 6.18 (copyrighted 2016) was used.
On the new server Calibri version 5.9.0 (copyrighted 2014) is used.
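If you want to confirm this kind of byte-level difference yourself before digging into the PDF internals, here is a minimal sketch, assuming you have saved each webservice response to a text file (the file names are just placeholders): it decodes both base64 strings and compares size and hash.

import base64
import hashlib

def decode_and_fingerprint(b64_path, pdf_path):
    # Decode the base64 response returned by the webservice and write the PDF out
    with open(b64_path, "r") as f:
        pdf_bytes = base64.b64decode(f.read())
    with open(pdf_path, "wb") as f:
        f.write(pdf_bytes)
    # The size and SHA-256 hash show immediately whether the two servers returned identical bytes
    return len(pdf_bytes), hashlib.sha256(pdf_bytes).hexdigest()

print(decode_and_fingerprint("response_original_server.b64", "original.pdf"))
print(decode_and_fingerprint("response_new_server.b64", "new.pdf"))

In this case the hashes will differ because the embedded Calibri subsets come from different font versions, even though the pages look the same.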
This is my first time on Stack Overflow. Thanks to all for providing valuable information and helping one another.
I am currently working on Apache Solr 7. There is a POC I need to complete on a tight schedule, so I am posting this question here. I have set up Solr on my Windows machine. I have created a core and uploaded a PDF document using /update/extract from the Admin UI. After uploading, I can see the metadata of the file if I query from the Admin UI using the query button. I was wondering if I can get the actual content of the PDF as well. I can see there is one tlog file generated under /data/tlog/tlog000... with raw PDF data, but not the actual file.
So the questions are:
1. Can I get the PDF content?
2. Does Solr store the actual file somewhere?
a. If it does, where does it store it?
b. If it does not, is there a way to store the file?
Regards,
Munish Arora
Solr will not store the actual file anywhere.
Depending on your config it can store the binary content though.
Using the extract request handler, Apache Solr relies on Apache Tika [1] to extract the content from the document [2].
So you can search and return the content of the PDF, and a lot of other metadata if you like (see the sketch after the links below).
[1] https://tika.apache.org/
[2] https://lucene.apache.org/solr/guide/6_6/uploading-data-with-solr-cell-using-apache-tika.html
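As a rough sketch of the round trip (the core name, document id and file path here are assumptions, and which fields end up stored depends entirely on your schema and handler configuration), indexing a PDF through /update/extract and querying it back could look like this in Python:

import requests

SOLR = "http://localhost:8983/solr/mycore"  # assumed core name

# Send the PDF to the extract handler; literal.id sets the document id
with open("sample.pdf", "rb") as f:
    requests.post(
        f"{SOLR}/update/extract",
        params={"literal.id": "doc1", "commit": "true"},
        files={"file": ("sample.pdf", f, "application/pdf")},
    )

# Query the document back; the extracted text only comes back if your schema stores it
r = requests.get(f"{SOLR}/select", params={"q": "id:doc1"})
print(r.json()["response"]["docs"])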
Problem: Need to convert local HTML (with local images, etc.) to PDF from an AIX box running UniVerse 11.2.5 with System Builder.
Current solution: FTP the HTML file over to a Windows server, which converts in batches and sends the e-mail to the destination.
Proposed solution: Do everything on the AIX box, from converting the HTML to PDF through to sending the e-mail.
Current problem: Unable to find a way to convert local HTML to PDF on the AIX box. I have tried many different approaches, including installing Python 3, but to no avail.
The only really difficult part of the process is getting the HTML to render into a format that will properly lay your HTML out into pages suitable for printing. There is a fair amount of magic that goes on between an HTTP GET and clicking print in a browser window that needs to be accounted for.
I was trying to accomplish something similar many moons ago on AIX but kind of ran into a skill level/time wall, because I would have essentially had to create a headless browser to render the HTML. It looks like there are now some utilities that you might be able to leverage. I found this recently updated article on Super User that actually got me somewhat excited, especially since I don't use AIX anymore, so precompiled binaries and well understood, easily attainable dependencies are something I can actually have in my life.
https://superuser.com/questions/280552/how-can-i-render-a-website-as-an-image-from-the-shell
Good Luck.
There seem to be several questions rolled into this one item.
Converting HTML to PDF: while that is just a data manipulation that you could do in BASIC, writing such code would be a large task. The option you use, sending it to another system, is valid, but it puts more points of failure into the system. I would think you could find code to do it on the AIX box.
Rocket plans to get MV Python working on AIX; this will make converting HTML to PDF much easier, since there are a lot of open-source modules.
As for my suggestion of using sockets, that would be if you intend to send the HTML to a service that will take it and return the PDF document.
i.e. Is there a web service for converting HTML to PDF?
Once you have the PDF document, you can either store it in a UniVerse type-19 file, or base64-encode it and store it in a UniVerse hashed file.
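To illustrate the web-service route, here is a small sketch in Python (the conversion service URL and its API are purely hypothetical, and the file names are placeholders); the same exchange could be done from BASIC over sockets once you have a service to call:

import base64
import requests

# Hypothetical HTML-to-PDF conversion service; substitute whatever service you end up using
CONVERT_URL = "https://example.com/html-to-pdf"

with open("report.html", "rb") as f:
    resp = requests.post(CONVERT_URL, files={"file": f})
resp.raise_for_status()

# Keep the raw PDF, or base64-encode it for storage in a UniVerse hashed file
with open("report.pdf", "wb") as f:
    f.write(resp.content)
encoded = base64.b64encode(resp.content).decode("ascii")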
Hope this helps,
Mike
I'm trying to overhaul a PDF report generation application built in CF8. It has an interface that generates a 50-page legal report as a PDF and sends it out about 100 times a day. However, it's very cumbersome and bogs down an already overworked server. Is there a good PDF compression script that I can run with ColdFusion, or a way to integrate with Adobe Acrobat to have it compress the PDF before the server sends it via email? The system is already set up using the available ColdFusion resources to try to help with this process, but it's still not sufficient.
Update: I had the opportunity to dig further into this issue. These documents are compiled via four CF forms where someone manually types in the legal data as it comes into the system. Some of the form fields are lengthy (accepting in excess of 10,000 characters). Once everything is completed, a cfdocument tag converts it into a PDF.
CFDocument generates bloated PDFs. We tested Ghostscript and used the following parameters to compress a 22.3 MB PDF to 4 MB. (If set to "screen", the file size shrank further, to 2.5 MB.)
http://www.ghostscript.com/
To use this, you'll have to perform the optimization as an extra step after generating your PDF, and use cftry/cfcatch in case there are any issues or timeouts.
<!--- Path to the PDF produced by cfdocument; Ghostscript writes the optimized copy with an "-opt" suffix --->
<cfset ThePDFFile = "C:\test\OriginalPDF.pdf">
<cfexecute name="c:\Program Files\gs\gs9.14\bin\gswin64.exe"
    arguments="-sDEVICE=pdfwrite -dPDFSETTINGS=/prepress -dNOPAUSE -dQUIET -dBATCH -sOutputFile=#replace(ThePDFFile,'.pdf','-opt.pdf')# #ThePDFFile#"
    timeout="160">
</cfexecute>
Another solution would be to generate your PDFs using wkhtmltopdf. The resultant files are much smaller. The quality is consistent on ColdFusion 8-11. You can embed TrueType fonts without having to register them, and it doesn't have any of the HTML/CSS quirks that are present with CFDocument.
http://wkhtmltopdf.org/
Here's a link to an article and some sample ColdFusion code that you can use to compare the results of WKHTMLTOPDF to CFDocument.
http://gamesover2600.tumblr.com/post/108490381084/wkhtmltopdf-demo-to-compare-w-adobe-coldfusion
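If you want to try wkhtmltopdf outside ColdFusion first, the command line itself is trivial; this is a minimal sketch (the paths are placeholders, and it assumes the wkhtmltopdf binary is installed):

import subprocess

# Render an HTML file to PDF; wkhtmltopdf must be on the PATH or referenced by full path
subprocess.run(["wkhtmltopdf", "C:/test/report.html", "C:/test/report.pdf"], check=True)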
I was actually able to resolve this by utilizing the NeeviaPDF compression application, tying it into ColdFusion with the following code:
<!--- Name of the PDF to compress; set this to the file that was just uploaded --->
<cfset pdf_file_name = 'qryGetFileJustUploaded' />
<cfexecute name="C:\Program Files (x86)\neeviaPDF.com\PDFcompress\cmdLine\CLcompr.exe"
arguments="C:\inetpub\wwwroot\testingFolder\PDFCompression2\pdf\#pdf_file_name# C:\inetpub\wwwroot\testingFolder\PDFCompression2\pdf\#pdf_file_name# -co -ci jpg -cq 10 -gi jpg -gq 10 -mi jbig2 -mq 1"
outputfile="C:\inetpub\wwwroot\testingFolder\PDFCompression2\output.txt"
timeout="250">
</cfexecute>
You can pass in whatever value you want to the #pdf_file_name# variable. If you want the compressed output written under a different name, just pick a name and place it where #pdf_file_name# is referenced the second time in the arguments (the second C:\ path).
I am new to PowerBuilder. I got an assignment to create a PDF file using PowerBuilder. How can I do that?
Our organization used to use Ghostscript, but has instead moved to Amyuni.
As suggested by Alberto Megia, download PDFCreator, but don't use SaveAs.
After you install PDFCreator it will install a printer; use that printer to save the datawindow with the Print function.
After calling the Print function, you will see a "Save as" dialog.
If you use the SaveAs function, the PDF will not have the format that the datawindow shows.
What version of PowerBuilder are you using? The most recent versions have PDF capability built in (using Ghostscript).
Install Ghostscript.
Get PDFCreator for free there and install it.
Then you can save as PDF any datawindow or datastore with the statement:
dw_1.SaveAs("path_where_to_save_with_name_of_file.pdf", PDF!, TRUE)
The third parameter controls whether to overwrite the file if one already exists with that name. I hope it works for you.
Regards,
Alberto
We just use Ghostscript. I wrote Ghostscript setup instructions earlier. We also print Word documents we've filled in to PDF from our app, by printing them to the 'Sybase DataWindow PS' printer and then running Ghostscript to make the PDF.
Good question - there really isn't an easy way other than finding a third-party tool. I've tried the prior method mentioned and it does work, but not without headaches, and you are left with deployment headaches: deploying Ghostscript and having to make sure PostScript drivers are on the client.
I ended up trying many PDF converters, both free and paid. The one that worked most seamlessly was one that installed as a "printer", much like Adobe does when installed on the PC, but you need to dynamically verify the existence of the printer via RegistryGet, and if it doesn't exist, ask the user to install it or install it dynamically via code and registry entries (not fun).
After several headaches, mostly related to deployment issues, I ended up going with a server solution, but it requires having a server where you can run a process (a distiller) that grabs PostScript files and distills them to PDF. I used a response window with a progress bar; the PB app printed a PostScript file to a server location, from which the distiller grabs it and converts it. My PB app polls the server until it finds the PDF or the user cancels, whichever comes first. With a good distiller the process is fast (< 5 seconds), which was acceptable to our users.
Upon existence of the PDF, we'd attach it to an email and send it via Oracle (MAPI). This solution limits the client requirements to a PostScript driver, which in most corporate environments is there, but you need to check for it via the registry. Maybe there is a better solution out there, since I last did this around 2008.
FYI - I usually don't make vendor recommendations, but I will in this case because one stood out in ease of use and quality: PDFCreator, which installs as a Windows printer. It looks to be open source right now, but I recall that we would have had to pay to use it in a corporate environment.
Good Luck.
Use the tutorial How to use PowerBuilder to create PDF file?.
I have a PDF generated by a 3rd-party system. I have modified it using a PDF editor or other software.
Is it possible to detect if PDF file was modified, without original file?
I will add some more details.
There is no encryption and no signature features.
Document is created by IT system. User receives document and modifies it.
Is it possible to track that change somehow?
I thought that all these applications leave some data in the PDF header or somewhere encoded inside the file, and that it would be possible to check it. However, the properties shown by Windows Explorer show nothing... so I was interested in whether there is something smarter than viewing the properties/header in Explorer.
The problem with this is that just opening the PDF on a Mac in Preview and hitting Command-S to save the file will replace both the creation and modification dates with the current date/time. So even the creation date will be wrong. Even novice users can unknowingly do this, so if you're trying to track someone who may be purposefully modifying the document, it may lead to a false positive.
What you're asking is just too easy to spoof and fool unfortunately.
You could always check the md5sum of the pdf file. I'm not sure what environment you are using but that should help get you started.
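For example, a minimal checksum sketch (the file name is a placeholder); note that this only detects a change if you recorded the checksum of the original file when it was issued:

import hashlib

def md5_of(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare this against the checksum recorded when the PDF was generated
print(md5_of("received.pdf"))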
It's going to be rough without the original file unless there were security features like encryption or digital signatures applied to it, which it doesn't sound like there was. Do you have access to any information at all about the original file? A file size, creation date, any of the metadata, etc.?
If the tool used to modify the PDF is working according to the PDF spec then in the Info dictionary it should update ModDate but leave CreationDate alone. You may also see some non-zero generation numbers on the objects although it is just as possible that all the objects have been regenerated and will therefore be generation 0. The trial version of CosEdit will allow you to look at these 2 items.
If however the tool has been used to intentionally modify the PDF without leaving a trace then they would be spoofing those bits of data so they won't help you.
Are the users modifying the PDF using Acrobat? If so then what Danio mentioned above should work. Strictly speaking, modifying the PDF should change its ModDate or xmp:ModifyDate without changing its CreationDate. However not all tools adhere to this; quite a few simply leave all metadata untouched, so this method of checking isn't 100% reliable unless you know what PDF editor your users employ.
If the editor your users use does change ModDate or xmp:ModifyDate, then you should be able to see it in two places. One is when you open the document in Acrobat and hit Ctrl-D to view Document Properties. The Creation field and Modified field should have different timestamps. There may also be APIs that can be used to programmatically retrieve this metadata. The other way you can visualize it is to simply open the PDF in Notepad and search for the properties. Most of the document won't be human readable but these timestamps should be. If they do get changed appropriately, you can always parse for them in your application. Good luck!
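One way to check those fields programmatically, as a sketch assuming the pypdf library is available (whether the fields are present at all depends on the tool that produced the PDF):

from pypdf import PdfReader

reader = PdfReader("document.pdf")
info = reader.metadata  # the Info dictionary, if the file has one

# A spec-compliant editor updates ModDate but leaves CreationDate alone
print("CreationDate:", info.get("/CreationDate") if info else None)
print("ModDate:", info.get("/ModDate") if info else None)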
If you're using Ubuntu Linux 18.04 with Document Viewer, you can:
click on File options (the three-dot vertical ellipsis)
click on Properties...
look for the Created / Modified fields in the Properties pop-up
Beware: A sufficiently knowledgeable user can manipulate the PDF contents without changing the Created and Modified time stamps in the PDF metadata and the file system.
You can use some tools to read the PDF file properties. I use pdfinfo; it reports many properties of the file, which you can then check.
pdfinfo 58dcc41d01293.pdf
Author: worker
Creator: Microsoft® Word 2016
Producer: Microsoft® Word 2016
CreationDate: Sat Aug 24 16:02:29 2019
ModDate: Sat Aug 24 16:02:29 2019
Tagged: yes
UserProperties: no
Suspects: no
Form: none
JavaScript: no
Pages: 55
Encrypted: no
Page size: 841.92 x 595.32 pts (A4)
Page rot: 0
File size: 3346838 bytes
Optimized: no
PDF version: 1.7
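A small sketch of automating that check by calling pdfinfo and comparing the two date fields (it assumes pdfinfo is on the PATH; as noted above, matching dates don't prove the file is untouched):

import subprocess

out = subprocess.run(["pdfinfo", "58dcc41d01293.pdf"],
                     capture_output=True, text=True, check=True).stdout

# Turn the "Key: value" lines into a dict and compare the two date fields
props = dict(line.split(":", 1) for line in out.splitlines() if ":" in line)
created = props.get("CreationDate", "").strip()
modified = props.get("ModDate", "").strip()
print("Possibly modified" if created != modified else "Creation and modification dates match")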