I tried to find proper services for generating PDF files in Liferay, but I have only found the class PDFProcessorUtil. How do I use it to generate a PDF file, and how do I save the generated file afterwards? I think I should use
DLAppLocalServiceUtil.addFileEntry to save the file into Liferay's storage.
Liferay's PDF conversion works by converting documents in the document library and offering them for download; this is implemented through OpenOffice. Install OpenOffice or LibreOffice, run it in server mode and configure Liferay to use it, then you can choose to download documents as PDF. The HTML format has a few limitations, since it can pull in arbitrary external resources, so I'm not sure what your result will be.
If you're generating the HTML output yourself, you might want to consider other (Liferay-independent) means of generating PDFs, as you might not need to upload your files to the Document Library at all (e.g. if you're generating reports on the fly and just want the generator's output to be PDF, without storing it). If this is what you need, you can use any PDF converter library you want; Liferay does not limit your choice.
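If you do end up storing the generated bytes in the Document Library, here is a minimal sketch of the save step, assuming Liferay 6.x-era service APIs; the IDs, file name, and the pdfBytes variable are illustrative:

```java
import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.exception.SystemException;
import com.liferay.portal.service.ServiceContext;
import com.liferay.portlet.documentlibrary.model.DLFolderConstants;
import com.liferay.portlet.documentlibrary.service.DLAppLocalServiceUtil;

public class PdfStorageHelper {

    // Stores already-generated PDF bytes in the site's Document Library.
    public static void storePdf(long userId, long groupId, byte[] pdfBytes,
            ServiceContext serviceContext)
            throws PortalException, SystemException {

        DLAppLocalServiceUtil.addFileEntry(
            userId,
            groupId,                                    // repositoryId: default site repository
            DLFolderConstants.DEFAULT_PARENT_FOLDER_ID, // root folder
            "report.pdf",                               // source file name
            "application/pdf",                          // MIME type
            "report.pdf",                               // title
            "Generated report",                         // description
            null,                                       // change log
            pdfBytes,
            serviceContext);
    }
}
```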
You can also generate the PDFs from the serveResource phase of a portlet.
You put a button or a link somewhere, and when you click on it, the PDF is downloaded.
In this simple example, the PDF is generated from a Freemarker template that produces HTML, which is then converted to PDF:
https://github.com/roclas/pdfUtil
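A rough sketch of that pattern with the plain JSR-286 portlet API (the PDF generation itself is stubbed out, since any HTML-to-PDF library can sit behind it):

```java
import java.io.IOException;
import java.io.OutputStream;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.ResourceRequest;
import javax.portlet.ResourceResponse;

public class ReportPortlet extends GenericPortlet {

    @Override
    public void serveResource(ResourceRequest request, ResourceResponse response)
            throws PortletException, IOException {

        byte[] pdfBytes = renderPdf(); // template -> HTML -> PDF pipeline goes here

        response.setContentType("application/pdf");
        response.addProperty("Content-Disposition", "attachment; filename=report.pdf");

        OutputStream out = response.getPortletOutputStream();
        out.write(pdfBytes);
        out.flush();
    }

    private byte[] renderPdf() {
        // Placeholder: render a Freemarker template to HTML, then convert
        // the HTML to PDF with the library of your choice.
        return new byte[0];
    }
}
```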
Related
I have a Notes app that was designed for the browser, not the client. It allowed files to be uploaded into the documents, so nearly all the documents have files. The files are stored in the NSF as $FILE items and displayed in the documents as links.
I am using Adobe Acrobat Pro to create PDFs from the documents and need to include the file attachments within the PDFs; however, the PDFs only include links to the files, not the attachments themselves. Can I write an agent to run against the documents to get those files and embed them within the documents? When I view those documents through the client, I see all of the HTML etc., and at the bottom of the document the file attachments appear. When I view the same documents in the browser, the file attachments do not appear. If I could make sure they are there, then the PDF generator in Acrobat Pro would include them in the PDFs as openable attachments.
I am really stuck here, with no other way to 'archive' this notes database with all the data intact.
Thanks in advance for any insights!!
Ginni
There is a commercial product from Swing Software that does this. I hear that it's quite good, but I've never used it. Let me explain why...
The way I usually end up doing this is just quick-and-dirty. I write an agent to export the files, using the document UNID as part of the filename. The same agent exports all the data fields from the document into a CSV file, and I add a column with the filename of the extracted attachment. In your case, I would add two columns: one for the extracted attachment(s), and one for the generated PDF.

The CSV serves as an index for the exported data. It can be imported into something more friendly, or just left as-is and brought up in Excel, depending on the customer's usage requirements and available systems.

I've recommended Swing Software's product and offered to explore other ideas for developing code (e.g., using wkhtmltopdf for Domino web apps to capture a WYSIWYG rendering based on an HTML crawl) for PDF rendering of Notes documents for a couple of clients, but none of them have justified the cost that would be involved in buying licenses and/or writing the code. Quick and dirty always seems to win, even when retention and eDiscovery considerations are taken into account.
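If you go the quick-and-dirty route, here is a sketch of such an export agent using the Notes Java API; the export path, the Subject field, and the CSV columns are illustrative, and error handling plus object recycling are trimmed for brevity:

```java
import java.io.PrintWriter;
import java.util.Vector;
import lotus.domino.AgentBase;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.EmbeddedObject;
import lotus.domino.Session;

public class ExportAttachments extends AgentBase {

    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getAgentContext().getCurrentDatabase();
            DocumentCollection docs = db.getAllDocuments();

            PrintWriter csv = new PrintWriter("C:\\export\\index.csv");
            csv.println("UNID,Subject,AttachmentFile");

            Document doc = docs.getFirstDocument();
            while (doc != null) {
                String unid = doc.getUniversalID();
                // @AttachmentNames lists the $FILE attachments on the document
                Vector<?> names = session.evaluate("@AttachmentNames", doc);
                for (Object nameObj : names) {
                    String name = nameObj.toString();
                    if (name.length() == 0) continue;
                    EmbeddedObject att = doc.getAttachment(name);
                    if (att != null) {
                        String path = "C:\\export\\" + unid + "_" + name;
                        att.extractFile(path);
                        csv.println(unid + ",\"" + doc.getItemValueString("Subject")
                                + "\"," + path);
                    }
                }
                doc = docs.getNextDocument(doc);
            }
            csv.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```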
I'm currently designing a full-text search system where users run text queries against MS Office and PDF documents, and the result is a list of the documents that best match the query. The user will then be able to select any document returned and view it in MS Word, Excel, or a PDF viewer.
Can I use Elasticsearch or Solr to import the raw binary documents (i.e. .docx, .xlsx, and .pdf files) into their data store, and then export a document to the user's device on demand for viewing?
Previously, I used MongoDB 2.6.6 to import the raw files into GridFS and the extracted text into a separate collection (the collection contained a text index) and that worked fine. However, MongoDB full text searching is quite basic and therefore I'm now looking at either Solr or ElasticSearch to perform more complex text searching.
Nick
Both Solr and Elasticsearch will index the content of the document. Solr has that built-in, Elasticsearch needs a plugin. Easy either way and both use Tika under the covers.
Neither of them will store the document itself. You can try making them do it, but they are not designed for it and you will suffer.
Additionally, neither Solr nor Elasticsearch is currently recommended as primary storage. They can do it, but data durability is not as mission-critical for them as it is for, say, a filesystem implementation.
So, I would recommend having the files somewhere else and using Solr/Elasticsearch for searching only. That's where they shine.
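A minimal sketch of that division of labour, assuming Apache Tika for extraction and a plain HTTP call to Elasticsearch; the index name is illustrative, and the URL uses the current single-type scheme (2.x-era clusters used /index/type instead):

```java
import java.io.File;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.apache.tika.Tika;

public class IndexDocument {

    public static void main(String[] args) throws Exception {
        File file = new File("/data/docs/report.docx"); // the file itself stays on disk
        String text = new Tika().parseToString(file);   // only the extracted text is indexed

        // Index the text plus a pointer back to the real file; the escaping
        // here is naive, use a proper JSON library in real code.
        String json = "{\"path\":\"" + file.getAbsolutePath()
                + "\",\"content\":\"" + text.replace("\"", "\\\"").replace("\n", " ")
                + "\"}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:9200/docs/_doc"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```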
I would try the Elasticsearch attachment plugin (a short indexing sketch follows the format list below). Details can be found here:
https://www.elastic.co/guide/en/elasticsearch/plugins/2.2/mapper-attachments.html
https://github.com/elasticsearch/elasticsearch-mapper-attachments
It's built on top of Apache Tika:
http://tika.apache.org/1.7/formats.html
Attachment Type
The attachment type allows indexing different "attachment" type fields
(encoded as base64), for example Microsoft Office formats, open
document formats, ePub, HTML, and so on (the full list can be found here).
The attachment type is provided as a plugin extension. The plugin is a
simple zip file that can be downloaded and placed under the
$ES_HOME/plugins location. It will be automatically detected and the
attachment type will be added.
Supported Document Formats
HyperText Markup Language
XML and derived formats
Microsoft Office document formats
OpenDocument Format
iWorks document formats
Portable Document Format
Electronic Publication Format
Rich Text Format
Compression and packaging formats
Text formats
Feed and Syndication formats
Help formats
Audio formats
Image formats
Video formats
Java class files and archives
Source code
Mail formats
CAD formats
Font formats
Scientific formats
Executable programs and libraries
Crypto formats
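To index a file through the plugin, you send its bytes base64-encoded in the field that was mapped with type "attachment". A sketch, assuming a 2.x-era /index/type/id URL and an already-created mapping (all names are illustrative):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class IndexAttachment {

    public static void main(String[] args) throws Exception {
        byte[] bytes = Files.readAllBytes(Paths.get("/data/docs/report.pdf"));
        String encoded = Base64.getEncoder().encodeToString(bytes);

        // "file" must be mapped with type "attachment" in the index mapping
        String json = "{\"file\":\"" + encoded + "\"}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:9200/docs/doc/1"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(json))
            .build();

        System.out.println(HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```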
A bit late to the party but this may help someone :)
I had a similar problem and some research led me to fscrawler. Description:
This crawler helps to index binary documents such as PDF, Open Office, MS Office.
Main features:
Local file system (or a mounted drive) crawling: index new files, update existing ones and remove old ones.
Remote file system crawling over SSH.
REST interface to let you "upload" your binary documents to Elasticsearch.
Regarding Solr:
If the docs only need to be returned on metadata searches, Solr features a BinaryField field type, to which you can send binary data base64-encoded. Keep in mind that people generally recommend against doing this, as it may bloat your index (RAM requirements/performance); if possible, a set-up where you store the files externally (and only the path to the file in Solr) might be a better choice.
If you want Solr to automatically index the text inside the PDF/DOC files, that's possible with the ExtractingRequestHandler: https://wiki.apache.org/solr/ExtractingRequestHandler
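A sketch of posting a PDF straight to that handler over HTTP; the core name and literal.id value are illustrative, while /update/extract is the handler's documented path:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SolrExtractPost {

    public static void main(String[] args) throws Exception {
        byte[] pdf = Files.readAllBytes(Paths.get("/data/docs/report.pdf"));

        // literal.id sets the document id; commit=true makes it searchable immediately
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8983/solr/docs/update/extract"
                    + "?literal.id=report1&commit=true"))
            .header("Content-Type", "application/pdf")
            .POST(HttpRequest.BodyPublishers.ofByteArray(pdf))
            .build();

        System.out.println(HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```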
Elasticsearch does store documents (.pdf and .doc files, for instance, as base64) in the _source field. It can be used as a NoSQL datastore (much like MongoDB).
I'm looking for a way to convert a PDF document into multiple .ics files that staff can use to add their fortnightly roster to their smartphone calendars, or to the Outlook calendar on their desktops. The information required to create the files would be pulled from the PDF by searching each column for selected initials and then referencing data from the same row. Is there a particular order the data needs to appear in within the .ics file to allow it to import into a smartphone calendar?
You can search for PDF APIs for more details on handling a PDF programmatically.
Here are some online converters that could help; they convert a PDF into Word:
http://www.pdftoword.com/success.aspx
http://www.pdfescape.com/account/?expired
However, reconstructing structured data from a PDF is not trivial, because a program has to deduce the semantics from the layout. So most programs can only recover scattered data from a PDF.
I've done this with Perl, using the Windows Adobe PDF viewer to highlight all the text in the PDF and cut and paste it into a text file. As the previous answer said, you have to write Perl (or any other text-processing language) to pick out the format of the particular PDF you have. Then you can print it with Perl to CSV, to iCalendar, or to whatever format you want. I've shared my code on github.com. I'm not sure if you know Git, but send me a private message if you want me to send the Perl code outside of Git.
The PDF's I've converted are here:
http://recplexonline.com/sports/hockey/old-geezers-hockey-35
The Git hub of my PERL code and the input files I used are here:
https://github.com/jdeltoft/PdfParse
It's pretty ugly Perl, sorry for that. But it works. I'll try to clean it up soon.
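On the ordering question itself: an .ics file only needs the VCALENDAR/VEVENT structure from RFC 5545, and the properties inside a VEVENT may appear in any order. A minimal sketch that writes a single shift as an event (the dates, UID, and summary are illustrative):

```java
import java.io.PrintWriter;

public class RosterToIcs {

    public static void main(String[] args) throws Exception {
        try (PrintWriter out = new PrintWriter("shift.ics")) {
            // Property order inside VEVENT does not matter, but the
            // BEGIN/END nesting and CRLF line endings are required.
            out.print("BEGIN:VCALENDAR\r\n");
            out.print("VERSION:2.0\r\n");
            out.print("PRODID:-//roster-export//EN\r\n");
            out.print("BEGIN:VEVENT\r\n");
            out.print("UID:shift-2015-03-02-jd@example.com\r\n");
            out.print("DTSTAMP:20150301T000000Z\r\n");
            out.print("DTSTART:20150302T090000\r\n");
            out.print("DTEND:20150302T170000\r\n");
            out.print("SUMMARY:Rostered shift (JD)\r\n");
            out.print("END:VEVENT\r\n");
            out.print("END:VCALENDAR\r\n");
        }
    }
}
```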
I have a problem with PDF file encryption using PHP.
Case: let's say I have a local (web-based) system for uploading and downloading files, such as 4sh*red (dot) com, but it only allows PDF files. A user signs up and logs in to download the PDF files using his or her own personal computer. After a user downloads a PDF file from my system, the file can be viewed only on the computer where it was downloaded. If another user copies the downloaded PDF file to another computer, the file can't be viewed on that computer.
Note: I don't mean protecting the PDF files with a password, because nowadays there is a lot of software that can remove PDF password protection. I mean that the file can't be viewed at all if copied to another computer.
Can we do that in PHP? If yes, do you know an approach that would solve this case?
I would really appreciate any responses or answers.
Thank you.
The PDF format is an open format from Adobe. This means there are a lot of programs out there that can read it, and quite a few that can modify it.
If you write your own reader program and add some machine-binding data to the PDF, then maybe you can do this, but standard PDF viewers won't enforce it.
Another question is: why don't you just make the document viewable in the web browser instead? Of course there's still going to be a way for savvy users to get at it, but most novices wouldn't know how, and you can easily close the simplest holes (like right-click / save).
What may be interesting is what a lot of companies are doing with videos nowadays: you can dynamically add some hidden or visible info to a PDF that identifies who you sent it to. That way, if the PDF shows up somewhere else, you know who spread it. Again, PDF is an open format, so anyone can always erase whatever you write in the main contents, so you'd have to add a hidden image to the content or something.
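As an illustration of that per-recipient marking idea, here is a sketch using the iText 5 stamping API; any PDF library with a similar overlay feature would do (the question asked about PHP, where libraries with comparable stamping exist), and the file names are illustrative:

```java
import java.io.FileOutputStream;
import com.itextpdf.text.Element;
import com.itextpdf.text.Phrase;
import com.itextpdf.text.pdf.ColumnText;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

public class RecipientStamp {

    // Writes a copy of the PDF with a per-recipient marker on every page.
    public static void stamp(String src, String dest, String recipient)
            throws Exception {
        PdfReader reader = new PdfReader(src);
        PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(dest));

        for (int page = 1; page <= reader.getNumberOfPages(); page++) {
            PdfContentByte over = stamper.getOverContent(page);
            ColumnText.showTextAligned(over, Element.ALIGN_CENTER,
                new Phrase("Licensed to " + recipient), 300, 20, 0);
        }

        stamper.close();
        reader.close();
    }
}
```

As the answer notes, a determined user can strip such a marker; it identifies leaks rather than preventing them.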
I'm looking to write an automated script that:
1. Opens a browser instance with a specific URL
2. Prints the page as PDF output to a pre-defined location and document name
3. Simulates a click event on the web page that goes to the next report
4. Repeats steps 2 and 3 a fixed number of times
I'm not sure how to start. I thought of using JavaScript, but it won't be able to automate the printing process.
I have no control over the server, so I cannot use a query to get the collection of reports.
The reason for the script is that there are many such reports and the server can be very slow at times, so it would be better to have them locally.
UPDATE: I forgot to mention that logging in to the server is required.
I think scripting an off-the-shelf browser is very much the Hard Way to solve your problem. If you can at all predict the URLs for the individual reports, use a command-line tool such as wget or curl to download them, and then look at this community wiki for rendering the downloaded HTML as PDF.
Or do you even need to go to PDF? If all you're interested in is having the reports available locally, why not keep them as HTML and view them in a browser (with a file: URL) rather than a PDF viewer?
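If the login boils down to a session cookie, the download loop is small. A sketch in Java, where the URL pattern, cookie value, and report count are all assumptions about the unnamed server (the saved HTML can then be converted to PDF, or just kept as suggested above):

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReportFetcher {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String sessionCookie = "JSESSIONID=abc123"; // captured after logging in once

        for (int i = 1; i <= 20; i++) { // fixed number of reports
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/reports?page=" + i))
                .header("Cookie", sessionCookie)
                .build();

            HttpResponse<InputStream> response =
                client.send(request, HttpResponse.BodyHandlers.ofInputStream());
            Files.copy(response.body(), Paths.get("report-" + i + ".html"));
        }
    }
}
```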