Improve performance for large PPT Slide-Copy task - vba

Here is the thing: I need a VBA script that generates PowerPoint presentations from other presentations. The main difficulty is the large file sizes: the final PPTs will contain up to 1000 slides each. To make a long story short, I have to open the initial PPTs and re-sort their slides.
A huge factor will be the reopening step. I cannot open all of those files at once, since the machine would run out of memory very quickly.
Is there a time-saving or memory-saving method to accomplish this? Since this task is mainly a reorganizational one, there might be a way to do what I need.
I would be thankful for any help.
Best regards.

In order to insert slides from one file into another, you either need to open the source file yourself or let PPT do it implicitly behind the scenes.
If you let it do the work (i.e., by using the Slides.InsertFromFile method), you can only insert contiguous ranges, and for each invocation of the method PPT will open and close the file. If you need to work with non-contiguous ranges, you'll save time by opening the file (windowless if you like) and managing the copy process yourself. That way you can do a bit of pre-sorting and open each source file only once.
Also, current versions of PPT will open files considerably faster if they're saved as PPTX rather than PPT, I've noticed. The difference isn't especially apparent with small files but can become quite noticeable as files get larger.
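As a minimal VBA sketch of that approach (the file paths and the slide-index list are placeholders for your own data; assumes both presentations live in the same PowerPoint instance):

```vba
' Sketch: open each source deck once (windowless) and copy an arbitrary,
' non-contiguous set of slides into the target, in any order.
Sub CopySlidesFromSource()
    Dim srcPres As Presentation
    Dim tgtPres As Presentation
    Dim slideIndexes As Variant
    Dim i As Long

    Set tgtPres = Presentations.Open("C:\Decks\Target.pptx")
    ' WithWindow:=msoFalse opens the file without showing a window
    Set srcPres = Presentations.Open("C:\Decks\Source.pptx", _
                                     ReadOnly:=msoTrue, WithWindow:=msoFalse)

    slideIndexes = Array(5, 2, 17, 9)   ' any order, non-contiguous

    For i = LBound(slideIndexes) To UBound(slideIndexes)
        srcPres.Slides(slideIndexes(i)).Copy
        tgtPres.Slides.Paste tgtPres.Slides.Count + 1   ' append at the end
    Next i

    srcPres.Close   ' source opened and closed exactly once
End Sub
```

With this pattern you can pre-sort all the slides you need from one source file and pull them in a single pass before closing it.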

Related

ExportAsFixedFormat for saving files in VBA

I have a problem with the process of saving a batch of PDFs (exported from Word documents).
The runtime of my program behaves a bit strangely, and that's why I'm asking.
I want to save the files on a network drive.
In my program I create a folder (on that drive) where I put all the PDFs.
Somehow, the first time I do this operation it is really slow.
But if I do the same operation a second time (for the same folder, after the "old" PDFs were deleted or overwritten), it is really fast.
I am a bit frustrated and cannot explain why that is.
Could somebody please help?
I would be very happy for an answer.
Greetings,
Jonas
Using this simple code
doc.ExportAsFixedFormat wholefile, ExportFormat:=wdExportFormatPDF
There are multiple reasons why the ExportAsFixedFormat method can be slow. I would start by dealing with local files only: export to a local path first, then copy the result to the network drive. Then I'd experiment with the arguments that specify the document quality and exported format. It also makes sense to try the parameters that your code sample leaves at their default values.
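As a sketch of the "local files first" idea (the temp path handling and the particular quality settings are illustrative choices, not a recommendation for your documents):

```vba
' Sketch: export to a local temp file first, then copy the finished PDF
' to the network share in one step.
Sub ExportPdfLocally(doc As Document, finalPath As String)
    Dim tmpPath As String
    tmpPath = Environ$("TEMP") & "\" & doc.Name & ".pdf"

    ' Spell out quality-related arguments instead of relying on defaults
    doc.ExportAsFixedFormat _
        OutputFileName:=tmpPath, _
        ExportFormat:=wdExportFormatPDF, _
        OptimizeFor:=wdExportOptimizeForOnScreen, _
        BitmapMissingFonts:=False, _
        DocStructureTags:=False, _
        CreateBookmarks:=wdExportCreateNoBookmarks

    ' Copy the finished file to the network drive, then clean up
    FileCopy tmpPath, finalPath
    Kill tmpPath
End Sub
```

This keeps the slow export step entirely on the local disk, so the only network cost is a single file copy per document.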

Creating files in synced location too slow

I have a multitude of macros breaking down information into multiple files, e.g. for each row, create a separate worksheet; or for each row, create a .docx and a .pdf document.
Now, to test these macros, I always need to move outside the folders synced to OneDrive/SharePoint, because whenever a new file is created in a synced location, Office takes its damned time doing synchronisation-related work, which considerably slows down the macro execution.
This is equally, or even more of, a problem in production, where the macro is run on a much larger sample and by other users, so I have to train them to move the file out of the shared location (dedicated to collaboration) to their own drive.
Is there a way to defer these actions until after the macro execution (besides disabling the OneDrive app)? This causes me issues during development, as I am used to the file being autosaved and to having my own version control. It is equally important during testing, when I change a lot of the code.
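One common workaround is to generate everything in a local scratch folder and only move the finished files into the synced folder at the very end, so OneDrive reacts once per finished file rather than during generation. A sketch (untested against your setup; the folder paths and the generation step are placeholders):

```vba
' Sketch: build all output files locally, then move them into the synced
' folder in one pass after the macro's real work is done.
Sub RunBatchOutsideSync()
    Dim scratch As String, synced As String
    Dim f As String, names As Collection, v As Variant

    scratch = Environ$("TEMP") & "\macro_out\"
    synced = "C:\Users\me\OneDrive\Reports\"
    If Dir(scratch, vbDirectory) = "" Then MkDir scratch

    ' ... generate every .docx/.pdf into scratch here ...

    ' Collect the names first, then move, so Dir() isn't disturbed mid-loop
    Set names = New Collection
    f = Dir(scratch & "*.*")
    Do While f <> ""
        names.Add f
        f = Dir()
    Loop
    For Each v In names
        FileCopy scratch & v, synced & v
        Kill scratch & v
    Next v
End Sub
```

This does not stop OneDrive from syncing, but it moves the sync cost out of the per-file generation loop.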

Extract text from illustrator file without opening file

Any idea if it would be possible to extract text from an Illustrator file without opening it?
I have an AppleScript currently extracting the text but it takes a long time when I'm working on hundreds of files. I was wondering if it would be possible to get the information without opening the AI file.
+1 for showing your own code first.
If you're only getting plain text, it should take a fraction of a second per document (opening the file will take longer):
tell application "Adobe Illustrator"
    get contents of every text frame of document 1
end tell
(i.e., never iterate over individual application objects, querying each one, when a single query will do everything for you. Apple events are relatively expensive for apps to resolve; sending lots of them unnecessarily really kills performance.)
Be aware that AppleScript also has serious performance problems when iterating over large lists, but that's a separate issue, and its solution should already be covered elsewhere.

Reducing PDF size - From 5MB to ~200KB

INTRO
I have this 2.7 MB PDF file.
It's a certificate with two fields that I have to fill: name and course.
After filling those fields I save it for later printing.
THE PROBLEM
After saving, the new file comes out at ~5 MB.
I have tried many saving options, but I only managed to reduce it to a final size of 4.7 MB (still larger than the original file).
For instance, I tried opening the original file (2.7 MB) and saving it right after opening (without making any change). The result is the same: a new ~5 MB file.
That means the information I fill in (name and course) isn't at fault.
SOLVING
At some point, while trying new methods of saving, I managed to get the file down to 180 KB.
Unfortunately, I'm not able to reproduce this result.
After several hours of trying to achieve it again without success, I came here to ask for help :(
Since you are in Acrobat, you might use "Save As Optimized…" (where you already are, in order to show the space usage) and remove as much as possible: mainly structure information, private data (meaning data that allows the originally creating application to edit the file again), etc.
You might also start from a minimum-sized blank file and copy/paste the form fields into it (although I don't think that would cause much reduction, since, AFAIK, fonts used in form fields are counted under the Fonts item).

FileSystemWatcher.Created: how does it work?

I am working on a project that copies files to a database every time something is added to a specific directory. The program works fine when I'm testing with a small set of data, but I was wondering if someone could explain how the FileSystemWatcher.Created event works.
My main concern is that when I use this on a larger scale, the program may slow down when it handles 100,000+ files.
If this is an issue, could anyone explain whether there is some sort of workaround, such as not watching the original folder (let's call it "C:\folder") directly, and instead polling a temp folder?
I have not tested the watcher with 100,000 files. However, in most cases you should not have so many files in a folder awaiting processing. I recommend a structure like
C:\folder
C:\folder\processing
C:\folder\archive
C:\folder\error
As soon as you begin working on a given file, move it into processing. If you successfully process it, move the file again to archive. If there is an error while processing a file, instead move it into error.
This will make it easier for you to keep the files organized and diagnose problems that occur in production.
With that file structure, you will not run into issues with large numbers of files in the folder you are watching, unless you receive files in incredibly large bursts compared to the speed with which they can be moved into the processing state.
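A minimal C# sketch of that layout (the root path and ProcessFile are placeholders; note that in practice Created can fire before a file is fully written, so production code usually retries the first move):

```csharp
// Sketch of the watch -> processing -> archive/error flow described above.
using System;
using System.IO;

class IntakeWatcher
{
    const string Root = @"C:\folder";

    static void Main()
    {
        Directory.CreateDirectory(Path.Combine(Root, "processing"));
        Directory.CreateDirectory(Path.Combine(Root, "archive"));
        Directory.CreateDirectory(Path.Combine(Root, "error"));

        var watcher = new FileSystemWatcher(Root);
        watcher.Created += OnCreated;
        watcher.EnableRaisingEvents = true;
        Console.ReadLine();   // keep the process alive
    }

    static void OnCreated(object sender, FileSystemEventArgs e)
    {
        string inProcess = Path.Combine(Root, "processing", e.Name);
        File.Move(e.FullPath, inProcess);   // claim the file immediately
        try
        {
            ProcessFile(inProcess);         // e.g., your database import
            File.Move(inProcess, Path.Combine(Root, "archive", e.Name));
        }
        catch (Exception)
        {
            File.Move(inProcess, Path.Combine(Root, "error", e.Name));
        }
    }

    static void ProcessFile(string path) { /* placeholder */ }
}
```

Because each file is moved out of the watched folder as soon as it is claimed, the watched directory stays small regardless of total throughput.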