Long-time reader, first-time poster. I'm trying to automate a process that takes many .PDF floorplan files and combines them into a single .PDF floorplan, which will be referenced by a website.
To cut down on the manual copy-and-paste from network shares to the web server that is the current practice, I've written a PowerShell command as follows:
# Paths contain spaces, so they are kept in quoted variables
$SourcePath = '\\network\share\location\CAD Miniatures'
$DestinationPath = 'C:\inetpub\wwwroot\floorplans'
$LogFile = 'C:\Floorplan Transfer Logs\TransferLog.txt'
# /MIR mirrors the tree (and implies /E), /ZB uses restartable/backup mode,
# /R:5 /W:10 retries 5 times at 10-second intervals, /LOG+: appends to the log file
Robocopy $SourcePath $DestinationPath *.pdf /E /MIR /ZB /DCOPY:DAT /R:5 /W:10 /LOG+:$LogFile
My plan is to have this script run every hour as a Scheduled Task to mirror our local files to the web server so that they remain up to date automatically.
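For reference, I expect to register the task with something like this (the task name and script path are just placeholders for wherever the script ends up living):
REM Placeholder task name and script path - adjust to the real locations
schtasks /Create /SC HOURLY /TN "FloorplanSync" /TR "powershell.exe -NoProfile -File C:\Scripts\Sync-Floorplans.ps1" /RU SYSTEM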
The curve ball is that the files being copied are individual files within directories. I would like to take all the .pdf files in a given folder and combine them into a single .pdf.
File structure is as such:
/floorplans
    /ABC
        ABC-01.pdf
        ABC-02.pdf
        ABC-03.pdf
    /XYZ
        XYZ-01.pdf
        XYZ-02.pdf
        XYZ-03.pdf
        XYZ-04.pdf
        XYZ-05.pdf
        XYZ-06.pdf
Within each directory (or in a subdirectory), I would like the combined output file to be simply abc.pdf and xyz.pdf as per the examples above.
The file naming always follows the same format, but the number of files varies from a single file to over a dozen.
I would like to run the Robocopy and PDFtk tasks in the same script if possible (the idea being to update all the files, then combine them). There would also be no need to merge folders in which no updates have been detected.
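For what it's worth, this is roughly the direction I had in mind for the merge step. It is only an untested sketch; it assumes pdftk.exe is on the PATH and skips a folder when its combined file is already newer than every source PDF:
# Untested sketch: mirror the share, then rebuild each folder's combined PDF with pdftk
# (assumes pdftk.exe is on the PATH; paths are the same ones as above)
$SourcePath      = '\\network\share\location\CAD Miniatures'
$DestinationPath = 'C:\inetpub\wwwroot\floorplans'
$LogFile         = 'C:\Floorplan Transfer Logs\TransferLog.txt'

Robocopy $SourcePath $DestinationPath *.pdf /E /MIR /ZB /DCOPY:DAT /R:5 /W:10 /LOG+:$LogFile

Get-ChildItem $DestinationPath -Directory | ForEach-Object {
    $folder   = $_
    # Combined output, e.g. ...\ABC\abc.pdf
    $combined = Join-Path $folder.FullName "$($folder.Name.ToLower()).pdf"
    # Numbered source pages, e.g. ABC-01.pdf, ABC-02.pdf, ...
    $parts    = Get-ChildItem $folder.FullName -Filter "$($folder.Name)-*.pdf" | Sort-Object Name
    if ($parts.Count -eq 0) { return }

    # Only re-merge when a source PDF is newer than the existing combined file
    $newest = ($parts | Measure-Object LastWriteTime -Maximum).Maximum
    if (-not (Test-Path $combined) -or $newest -gt (Get-Item $combined).LastWriteTime) {
        & pdftk.exe $parts.FullName cat output $combined
    }
}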
I have multiple files in a folder and essentially want all of them to be compared with one main file. How do I go about doing this, as it's limited to only two files open at once?
Thanks.
Beyond Compare is limited to 2-way comparison. If the main file and the other files are all located in the same folder, load the folder in the Folder Compare. Then select the main file and one of the other files. Right-click and select Open to launch the two files in the Text Compare. Repeat for each file that must be compared to the main file.
Beyond Compare also includes support for command-line scripting you can use to automate the comparison.
Example script to compare main to 3 files and output comparison results as HTML:
text-report layout:side-by-side options:ignore-unimportant,display-mismatches output-to:out1.html output-options:html-color c:\main.txt c:\1.txt
text-report layout:side-by-side options:ignore-unimportant,display-mismatches output-to:out2.html output-options:html-color c:\main.txt c:\2.txt
text-report layout:side-by-side options:ignore-unimportant,display-mismatches output-to:out3.html output-options:html-color c:\main.txt c:\3.txt
To run the script, use the command line:
bcompare.exe #c:\script.txt
The # character makes Beyond Compare run a file as a script instead of loading it for interactive comparison.
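If the list of files is long, you can also generate the script on the fly and then launch it. Here is a small PowerShell sketch of that idea; the folder, file names, and report names are placeholders, and bcompare.exe is assumed to be on the PATH:
# Sketch only: build a Beyond Compare script comparing main.txt against every other
# .txt in a folder, write it out, then run it. Paths below are illustrative.
$folder = 'C:\compare'
$main   = Join-Path $folder 'main.txt'
$lines  = Get-ChildItem $folder -Filter *.txt |
    Where-Object { $_.Name -ne 'main.txt' } |
    ForEach-Object {
        "text-report layout:side-by-side options:ignore-unimportant,display-mismatches " +
        "output-to:$folder\out-$($_.BaseName).html output-options:html-color $main $($_.FullName)"
    }
Set-Content -Path "$folder\script.txt" -Value $lines
& bcompare.exe "#$folder\script.txt"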
Beyond Compare Scripting Resources:
Help File > Scripts
Help File > Scripting Reference
Scripting Forum
When I give a print command, the print job file gets stored in the /var/spool/cups directory, but it is in PDF format. Is there a way to decode that PDF file so that I can see what data is in it and take action on that user accordingly?
The scheduler stores job files in a spool directory, typically /var/spool/cups. Two types of files will be found in the spool directory: control files starting with the letter "c" ("c00001", "c99999", "c100000", etc.) and data files starting with the letter "d" ("d00001-001", "d99999-001", "d100000-001", etc.). Control files are IPP messages based on the original IPP Print-Job or Create-Job messages, while data files are the original print files that were submitted for printing. There is one control file for every job known to the system and 0 or more data files for each job.
https://www.cups.org/doc/spec-design.html
You have to search for files like d000234 (data files, not c000234 print control files).
You can run file d000234 to find information about the file format.
E.g.:
[root@pc cups]# file d000234
d000234: PostScript document text conforming DSC level 3.0, Level 2
For this job, I printed a PDF with my default system print dialog. Somewhere along the way it was converted to PostScript. Open it with any application that has PostScript capabilities.
E.g.:
okular d000234
Data files are only available if you've enabled the "PreserveJobFiles" and "PreserveJobHistory" directives in cupsd.conf.
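For reference, the relevant lines would look something like this in /etc/cups/cupsd.conf (restart cupsd afterwards):
# Keep the job history and the job data files (the d* files) after printing
PreserveJobHistory Yes
PreserveJobFiles Yes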
I am executing a Pig script which reads files from a directory, performs some operations, and stores the result to an output directory. In the output directory I'm getting one or more "part" files, one _SUCCESS file, and one _logs directory. My questions are:
Is there any way to control the names of the files generated (upon execution of the STORE command) in the output directory? To be specific, I don't want the names to be "part-.......". I want Pig to generate files according to the file name pattern I specify.
Is there any way to suppress the _SUCCESS file and the _logs directory? Basically, I don't want the _SUCCESS file and _logs directory to be generated in the output directory.
Regards
Biswajit
See this post.
To remove _SUCCESS, use SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;. I'm not 100% sure how to remove _logs but you could try SET pig.streaming.log.persist false;.
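For example, the SET goes near the top of the script, before the STORE runs (the paths and relation name below are only placeholders):
-- suppress the _SUCCESS marker for this job
SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;

data = LOAD '/input/dir' USING PigStorage(',');
-- ... the rest of the script ...
STORE data INTO '/output/dir' USING PigStorage(',');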
I have 7 files with extensions like xyz.rar.001 - xyz.rar.007; clearly they are parts of a single file, and I have all 7 parts. I join them using a file joiner into a single file xyz.rar and try to unrar it with WinRAR, but it says the archive is corrupted, so it is clear that 1 or 2 parts are corrupted. Is there any way to find them? Please help, I don't want to re-download all of them. NOTE: WinRAR can detect a corrupt part if the parts were split using WinRAR (with extensions like part1.rar, part2.rar, etc.), but not if they are named .rar.001.
Parts .001 - .006 should have the same size. Check if there is a file with a different byte size.
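A quick way to compare them, e.g. from PowerShell in the folder that holds the parts (assuming they are named xyz.rar.001 through xyz.rar.007):
# List each part with its size so the odd one out stands out
Get-ChildItem xyz.rar.* | Sort-Object Name | Format-Table Name, Length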
Are there multiple files in the RAR or just the one? With multiple you could run a Test and see which is the first file to fail.
I think it's strange that a second tool (e.g. HJSplit) was used to split the RAR archive up. This makes me think that .002 could be a RAR archive too. Try opening xyz.rar.001 with WinRAR and test/extract it. It happens quite often that RAR archives have the extension .001 instead of .rar. An example:
Naming your archives in WinRAR like this can be accomplished by putting "xyz.rar.001" as Archive name on the General tab and checking "Old style volume names" on the Advanced tab.
If I then join the files with HJSplit, I get one .rar file (that is corrupt). When I Test it, it says "Next volume is required". In the diagnostic messages I can see "The required volume is absent" and "CRC failed in X. The file is corrupt"
If there is one file stored inside the RAR and the RAR is indeed just chopped up into 7 pieces, there is no way of telling without additional files such as .sfv or .par2. (unless the RAR does not use compression: you can parse the underlying file for errors and calculate the part where it goes wrong)
Is it possible to use xcopy to copy files from several directories into one directory using only one xcopy command?
Assuming that I have the directory tree
root\Source\Sub1\Sub2
I want to copy all .xml files from the directory root\Source, including subfolders, to root\Destination. I don't want to copy the folder structure, just the files.
As DandDI said, you don't need xcopy; a for statement helps a lot. However, you don't need to process the output of a dir command either; this command works better:
for /R c:\source %f in (*.xml) do copy "%f" x:\destination\
By the way, when you use it from a batch file, you need to add an extra % in front of the variable %f, so the command line within the batch file should be:
for /R c:\source %%f in (*.xml) do copy "%%f" x:\destination\
You should surround %f (or %%f) with double quotes, otherwise it will fail when copying file names with spaces.
You don't need xcopy for that.
You can get a listing of all the files you want and perform the copy that way.
For example, in the Windows XP command prompt:
for /f "delims==" %k in ('dir c:\source\*.xml /s /b') do copy "%k" x:\destination\
The /s goes into all subdirectories and the /b lists only the file name and path. Each file in turn is assigned to the %k variable, then the copy command copies the file to the destination. The only trick is making sure the destination is not part of the source.
The answer to this problem, which I think is really "how to gather all your files out of all the little subdirectories into one single directory", is to download a piece of software called XXCOPY. It is freely available via XXCOPY.COM, and there is a free non-commercial version. One of the frequently asked questions in the help section on XXCOPY.COM is effectively "How do I gather all my files into one directory?", and it tells you which switch to use. XXCOPY is a surefire way of doing this. It comes in a .zip archive, so you will need to unzip it first; if you don't have an unzipper, the free ZipGenius program is available through the ZipGenius.it website.
This might not be the exact answer, but if anyone would like to do this without coding: search for the file name (or extension) inside the specific folder, then copy the search results and paste them into your desired folder. I believe duplicate names are renamed with the source folder as a prefix followed by the repeated name.