ExportAsFixedFormat slow when saving files in VBA - vba

I have a problem regarding the process of saving a bunch of PDFs (exported from Word documents).
The runtime of my program behaves a bit strangely, and that's why I am asking.
I want to save the files on a shared network drive.
In my program I create a folder on that drive where I put all the PDFs.
If I do this operation for the first time, it is really slow.
But if I do the same operation for the same folder a second time, it is somehow really fast (after I deleted the "old" PDFs, or the old PDFs were overwritten).
I am a bit frustrated and cannot explain why that is.
Could somebody please help?
I would be very happy for an answer.
Greetings
Jonas
I am using this simple code:
doc.ExportAsFixedFormat wholefile, ExportFormat:=wdExportFormatPDF

There are multiple reasons why the ExportAsFixedFormat method can be slow, but I would start by dealing with local files only: export to a local path first, then copy the finished PDF to the network drive. After that, I'd experiment with the arguments that control document quality and the export format, and it also makes sense to try non-default values for the parameters your code sample leaves at their defaults.
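For example, a minimal sketch of the local-first approach (the paths are placeholders, and doc is the Document variable from your code):

Dim localPath As String, networkPath As String
localPath = Environ$("TEMP") & "\report.pdf"          ' fast local disk
networkPath = "\\server\share\reports\report.pdf"     ' slow network target

' Export locally first; on-screen optimization gives a smaller, faster export
doc.ExportAsFixedFormat OutputFileName:=localPath, _
    ExportFormat:=wdExportFormatPDF, _
    OptimizeFor:=wdExportOptimizeForOnScreen

' Then copy the finished PDF to the network folder and clean up
FileCopy localPath, networkPath
Kill localPath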

Related

Replace words/phrases in existing PDF or docx with other words

I am trying to make a dynamic PDF generator as a .NET Core API. I want to take an existing PDF or .docx file and edit it so the current name (John Doe) is replaced with a placeholder like #NAME_PLACEHOLDER.
I then want to transform #NAME_PLACEHOLDER -> John Doe (or whatever is in the KeyValuePair or Dictionary<string, string>).
I am running this on a Docker environment, so I can easily execute commands and I am willing to do that as well.
So far I have tried a few things:
1) pdf2htmlEX
Executes as pdf2htmlEX file.pdf
Does the job pretty well
Can be converted back to PDF using Google Chrome headless or similar
Problem: only the characters already used in the PDF are available for replacement text, because the embedded fonts are subsetted. So if the original only uses A, B, C as characters, a substituted D falls back to Times New Roman (or the default font)
2) LibreOffice ODT to PDF
This was pretty nice, because I could simply unzip the .odt file, open content.xml, search and replace, then save it as an .odt file again (a rough command sketch follows this list)
Could be converted into PDF rather easily using soffice --convert-to pdf
LibreOffice is quite nice
Problem 1: Microsoft Word -> Save as ODT tends to break the formatting, so we have to use LibreOffice to go and change it back again
Problem 2: We don't want to move away from Microsoft's Office suite
3) HTML to PDF using Chrome Headless
What you see is what you get
By far the best option, if we're all developers and have unlimited time
Problem 1: Only our developers can make changes, since our marketing department does not know HTML
Problem 2: Our existing PDFs would have to be rewritten in HTML
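For illustration, a rough sketch of that unzip/replace/re-zip cycle (the file name template.odt and the replacement value are examples; an .odt file is just a zip archive):
unzip template.odt content.xml
sed -i 's/#NAME_PLACEHOLDER/John Doe/g' content.xml
zip template.odt content.xml
soffice --headless --convert-to pdf template.odt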
As you can see, I have tried a bunch of things. None of them, except Chrome Headless, has lived up to my expectations. What I really like about #3 is what you see is what you get. I can make the whole thing in HTML, press CTRL+P and see what it looks like as a finished PDF, basically.
I am looking for a better solution, though. It can be paid. It can be free. All I need is to change out words/phrases with other words dynamically, which apparently seems like a tough thing to do.
Thanks for clearly specifying what you've already tried. It helps a lot in providing a succinct answer.
The conversion is always tricky - I'm sure you know Word has trouble displaying/editing some Word documents itself.
I have experience regarding point #2 "LibreOffice ODT to PDF" and can suggest a few things to test:
Don't use Microsoft Word to do the docx->odt conversion; as you know, it's not good at it. Use LibreOffice itself to do this step (see the example command after these points). The rest of your process remains the same.
For some documents, LibreOffice does doc->odt much better. So you can instead work with the DOC format and get a better result without any other changes.
You won't be able to remove the devs from the process, but you can certainly reduce their role and allow your business/marketing teams more direct input simply by:
giving the starting-point document to the devs to run through the conversion process. The devs can "clean up" the document to make it convert nicely.
making this version of the document the "official" starting point. The business or technical teams can load it, adjust it, and put it back into the process.
if possible, exposing a test platform to the business teams so they can download, adjust, upload, and render to PDF. This cycle means they will be able to achieve more, and if they're good, do impressive stuff without any dev input.
The above steps simply mean: don't expect perfect conversion of arbitrarily complex documents. Starting from an (even complex) working baseline is great.
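For reference, the conversion step can be scripted the same way as your existing PDF step (the file name is an example):
soffice --headless --convert-to odt document.docx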
Some of that might show you that your #2 is actually going to get the best overall results.
I hope that helps.

Extract text from an Illustrator file without opening it

Any idea if it would be possible to extract text from an Illustrator file without opening it?
I have an AppleScript currently extracting the text but it takes a long time when I'm working on hundreds of files. I was wondering if it would be possible to get the information without opening the AI file.
+1 for showing your own code first.
If you’re only getting plain text it should only take a fraction of a second per document (opening the file will take longer):
tell application "Adobe Illustrator"
    get contents of every text frame of document 1
end tell
(i.e. Never iterate over individual application objects, querying each one, when a single query will do everything for you. Apple events are relatively expensive for apps to resolve; sending lots of them unnecessarily really kills performance.)
Also be aware that AppleScript also has serious performance problems when iterating over large lists, but that’s a separate issue, the solution to which should already be covered elsewhere.

How to read an executable (.EXE) file in OpenVMS

When I am trying to open any .EXE file, I am getting information in encoded form. Any idea how to see the content of an .EXE file?
I need to know what Database tables are used in the particular .EXE.
Ah, now we are getting closer to the real question.
It is probably much more productive to ask the targeted databases about the SQL queries being executed during a run, or for a top ten shortly afterwards.
The table-names might not be hard-coded recognizably as such in the executable.
They might be obtained by a lookup, and some fun pre-fixing or other transformation might be in place.
Admittedly, they likely are clear text.
Easiest is probably to just transfer to a Unix server and use STRINGS on the image.
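For example (the image name is a placeholder; the grep is just one way to narrow the output):
strings image_to_test.exe | grep -i table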
I wanted to include the source here, but that failed, and I cannot find how to attach a file. Below you'll find a link to OpenVMS Macro program source for a STRINGS-like tool. Not sure how long the link will survive.
Just read it for instructions, save (strings.mar), compile ($ MACRO strings), link ($ LINK strings), and activate ($ MCR SYS$LOGIN:strings image_to_test.exe)
OpenVMS Macro String program text
Good luck!
Hein
Use analyze/image to view the contents of an executable image file.
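For example, at the DCL prompt (the file name is a placeholder):
$ analyze/image myexe.exe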
I'm guessing you are trying to look in the EXE because you do not have access to the source. I do something like this:
$ dump/record/byte/hex/out=a.a myexe.exe
Then look at a.a with any text editor (132 columns). The linker groups string literals together, and they are mostly near the beginning of the EXE, so you don't have to look too far into the file. Of course this only helps if the database references are string literals.
The string literal might be broken across a block (512 byte) boundary, so if you use search in your editor, try looking for substrings.
Aksh - you are chasing your tail on this one. It's a false dawn. Even if you could find the database tables (and you can't), you would still need the source of the .exe to do anything sensible with it or with the problem you are trying to solve. It's possible to write a program that just lists all the tables in a database without reading any of them. So you could spend an awful lot of effort and get nowhere. Hope this helps

Improve performance for large PPT Slide-Copy task

Here is the thing: I need a VBA script that generates PowerPoint presentations from others. The main difficulty is the large file sizes: the final PPTs are going to contain up to 1000 slides each. To make a long story short, I have to open the initial PPTs and re-sort their slides.
A huge factor will be the reopening task. I cannot open all those files at once, since the machine would run out of memory very quickly.
Is there a time-saving or memory-saving method for this task? Since it is mainly a reorganizational thing, there might be a way to accomplish what I need.
I would be thankful for any help.
Best regards.
In order to insert slides from one file into another, you either need to open the source file yourself or let PPT do it implicitly behind the scenes.
If you let it do the work (i.e., by using the Slides.InsertFromFile method), you can only insert contiguous ranges, and for each invocation of the method PPT will open/close the file. If you need to work with non-contiguous ranges, you'll save time by opening the file (windowless if you like) and managing the copy process yourself. That way you can do a bit of pre-sorting and open each source file only one time.
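A minimal sketch of the manage-it-yourself approach (file names and slide indexes are examples; the target is assumed to be the active presentation):

Dim src As Presentation, tgt As Presentation
Dim wanted As Variant, i As Long

Set tgt = ActivePresentation
' Open the source once, read-only and without a window
Set src = Presentations.Open("C:\decks\Source.pptx", _
    ReadOnly:=msoTrue, WithWindow:=msoFalse)

wanted = Array(3, 17, 42)                   ' non-contiguous slides, in final order
For i = LBound(wanted) To UBound(wanted)
    src.Slides(wanted(i)).Copy
    tgt.Slides.Paste tgt.Slides.Count + 1   ' append to the end of the target
Next i

src.Close                                   ' each source file is opened only once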
Also, current versions of PPT will open files considerably faster if they're saved as PPTX rather than PPT, I've noticed. The difference isn't especially apparent with small files but can become quite noticeable as files get larger.

Find duplicate PDFs

I'm looking for a utility that will help me find duplicate PDFs. The problem: I have thousands of PDF files. Some are duplicates. They are not easy to detect due to differing file names and small differences in file size. Is there a utility/algorithm/library that can help me find the duplicates or show me files that are very similar (or a degree of difference)?
Create an MD5 hash for each file and store it in a database. Identical files will then sort next to each other, or you can quickly search for a pre-existing key.
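On a Unix-like system this can be a one-liner; for example, with GNU coreutils (prints groups of byte-identical files):
md5sum *.pdf | sort | uniq -w32 --all-repeated=separate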
The problem is not yet solved in any way. What I do is use fdupes (http://premium.caribe.net/~adrian2/fdupes.html) to find exact duplicates.
But most of all, I use a workflow that minimizes duplicates. Every document that enters my system gets indexed with this Perl script I wrote: http://seegras.discordia.ch/Programs/fileindex which puts the name and an md5-sum of each file into ~/.fileindex.md5. Now I can change the metadata of the local PDF files or whatever (and run fileindex again), and whenever I accidentally download the same file again, I will still have the md5-sum of the original file and thus can detect that it's a duplicate.
There are also exif-meta and exif-rename on http://seegras.discordia.ch/Programs/ which help with setting PDF metadata and with renaming PDF files according to metadata; and if you're tagging all the files correctly, you will end up with duplicate filenames, indicating that they might be the same document within a different file.
If the files were created by different tools, they could look the same but compare as very different, because they are structured totally differently internally. I made some suggestions in a blog article at https://blog.idrsolutions.com/2010/09/comparing-2-pdf-files/
DiffPDF looks like something that might help you.
I remember there is a UNIX utility called pdftotext (see the package poppler-utils). You can try to extract the text from the files and make a textual diff.
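For example (file names are placeholders):
pdftotext first.pdf first.txt
pdftotext second.pdf second.txt
diff first.txt second.txt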