I use the full version of Foxit PhantomPDF together with Microsoft Access.
In our program we have to save multiple PDF files, and some of them should be merged into a single file when saved.
Here is the code I use:
Dim phApp As PhantomPDF.Application
Dim n1 As String
Dim n2 As String
n1 = "c:\Temp\F3769-190136-GROUPE OCÉAN.pdf"
n2 = "c:\Temp\f3769-190136-GROUPE OCÉAN- facture.pdf"
Set phApp = CreateObject("PhantomPDF.Application")
Dim phCreator As PhantomPDF.Creator
Set phCreator = phApp.Creator
'Call phCreator.CombineFiles("c:\Temp\F3769-190136-GROUPE OCÉAN.pdf|c:\Temp\f3769-190136-GROUPE OCÉAN- facture.pdf", "c:\Temp\F3769-190136-GROUPE OCÉAN.pdf", COMBINE_ADD_CONTENTS)
Call phCreator.CombineFiles("""c:\Temp\" & n1 & "|'" & n2 & """" & ", " & """c:\Temp\F3769-190136-GROUPE OCÉAN.pdf"""" &", COMBINE_ADD_CONTENTS)
phApp.Exit
When I try it with the complete file names written out literally (the commented-out line above), the code works perfectly.
However, when I try to use variables, I get an "Argument not optional" error.
Can somebody help me?
Thanks
The string you build in the call line is incorrect.
You have already included c:\Temp in n1 and n2, and in your concatenation you add it again. I do not know if this is the root cause of your issue.
Furthermore, I do not know the exact syntax required by phCreator.CombineFiles().
But is it not possible to use simply:
Call phCreator.CombineFiles(n1 & "|" & n2, …
The "Argument not optional" error might mean that CombineFiles expects another argument; I would guess it could have something to do with Foxit's page settings for the combine function. Maybe try adding an integer argument at the end of the call?
But the fact that it works when the strings are written out in clear text suggests that the string manipulation is what is incorrect.
I'm not a VBA professional, but I am interested in the outcome, since I work with Foxit myself and also want to combine files from VBA. I'm currently not on version 9, so I seem to be out of luck, and only by upgrading will I know whether what I want to do is possible.
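For what it is worth, a minimal sketch of what the suggested call might look like, assuming CombineFiles takes the same three arguments as your commented-out line that works (a pipe-separated source list, a destination path, and the flag):

Dim phApp As PhantomPDF.Application
Dim phCreator As PhantomPDF.Creator
Dim n1 As String
Dim n2 As String

'n1 and n2 already hold full paths, so no extra quoting or "c:\Temp\" prefix is needed
n1 = "c:\Temp\F3769-190136-GROUPE OCÉAN.pdf"
n2 = "c:\Temp\f3769-190136-GROUPE OCÉAN- facture.pdf"

Set phApp = CreateObject("PhantomPDF.Application")
Set phCreator = phApp.Creator
Call phCreator.CombineFiles(n1 & "|" & n2, n1, COMBINE_ADD_CONTENTS)
phApp.Exit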
I tried it, but got the same error message. So I took advantage of the fact that the function works when the file names are in plain text. I copy the files to be merged into a temporary folder and rename them. The renamed files are then used in the merge function. It works perfectly, but Foxit adds a table-of-contents page, and I don't know yet how to remove it. Here is my solution:
Private Sub Command4_Click()
Dim addi As String 'file to be merged to main file
Dim princi As String 'main file
Dim phApp As PhantomPDF.Application
'A temporary folder, in this case c:\t2, should be present
'In this example c:\Temp is the working folder
addi = "c:\Temp\filetomerge.pdf" 'full path of file to be merged
princi = "c:\Temp\mainfile.pdf" 'full path of main file
'fadd.pdf and fmain.pdf are the temporary files used in Foxit's function
FileCopy addi, "c:\t2\fadd.pdf" 'temporary file to be merged in temporary folder
FileCopy princi, "c:\t2\fmain.pdf" 'temporary main file in temporary folder
'Merge action
Set phApp = CreateObject("PhantomPDF.Application")
Dim phCreator As PhantomPDF.Creator
Set phCreator = phApp.Creator
Call phCreator.CombineFiles("c:\t2\fmain.pdf|c:\t2\fadd.pdf", "c:\t2\fmain.pdf", COMBINE_ADD_CONTENTS)
phApp.Exit
'Save merged file in working folder under main file name
Kill princi
FileCopy "c:\t2\fmain.pdf", princi
'delete temporary files
Kill "c:\t2\fadd.pdf"
Kill "c:\t2\fmain.pdf"
End Sub
You can open PDFs in text editors to see the structure of how the PDF is written.
Using VBA I have opened a PDF as a text file and gone on to extract the text and save it as a string variable. I want to look through this text to find a specific element, a polyline (called sTreamTrain), and get the vertices of the polyline by using the InStr function.
When I add more vertices to the polyline, I can no longer extract the text string of the PDF. I get "Run-time error 62", and I do not understand what it means or what about the PDF has changed to now cause this error.
Attached (via the link) is a PDF that I can read (Document 15) and a PDF I cannot read (Document 16). I have checked in Excel to see that the vertices are present in both files. There is also a copy of my VBA script as a Notepad document, as well as my Excel file (the script is hard to find in the Excel file - it is the "Module 6" function called "CoordExtractor_TestBuild01()").
Link:
https://drive.google.com/open?id=1zOhwnFWZZfy9bTAxKiQFSl7qiQLlYIJV
Code snippet of the text extraction process below to reproduce the problem (given an applicable pdf is used):
Sub CoordExtractor_TestBuild01()
'Opening the PDF and getting the coordinates
Dim TextFile As Integer
Dim FilePath As String
Dim FileContent As String
'File Path of Text File
FilePath = "C:\Users\KAllan\Documents\WorkingInformation\sTreamTrain\Document16 - Original.pdf"
'Determine the next file number available for use by the FileOpen function
TextFile = FreeFile
'Open the text file in a Read State
Open FilePath For Input As TextFile
'Store file content inside a variable
Dim Temp As Long
Temp = LOF(TextFile)
FileContent = Input(LOF(TextFile), TextFile)
'Close text file
Close TextFile
End Sub
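Once FileContent is populated, the lookup I have in mind would look roughly like this (the marker name sTreamTrain comes from my drawing; exactly how the vertices are laid out around it in the PDF syntax is something I still need to work out):

Dim markerPos As Long
markerPos = InStr(1, FileContent, "sTreamTrain")
If markerPos > 0 Then
    'work outward from markerPos with Mid$/InStr to pull out the vertex numbers
    Debug.Print "Found sTreamTrain at position " & markerPos
Else
    Debug.Print "sTreamTrain not found"
End If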
I would like someone to let me know what run-time error 62 means in this context and to propose any workflows to get around it in future. I would also like to know whether there are certain characters you cannot store in strings - perhaps these appear when I increase the number of vertices past a certain point.
Also, I would prefer to keep the scripts quite simple and not use external libraries, because I want to share the script when it is done so others can use it, and it is simpler if it works without extra dependencies; however, any and all advice is welcome, since this is only the first half of this project.
Thank you very much.
Run-time error 62 is "Input past end of file". According to the MSDN documentation, this error is raised when the file contains
...blank spaces or extra returns at the end of the file or the syntax
is not correct.
Since your code sometimes works on documents with very similar names and content to documents where it doesn't, we can rule out syntax errors in this case.
You can clean up the file contents before processing it any further by replacing the code at the top of your macro with the one below. With this I can read and extract information from your Document16.pdf:
Sub CoordExtractor_TestBuild01()
'Purpose to link together the extracting real PDF information and outputting the results onto a spreadsheet
'########################################################################################
'Opening the PDF and getting the coordinates
Dim n As Long
Dim TextFile As Integer
Dim FilePath As String
Dim FileContent As String
'File Path of Text File
FilePath = "C:\TEST\Document16.pdf" ' change path
'Determine the next file number available for use by the FileOpen function
TextFile = FreeFile
'Open the text file in a Read State
Open FilePath For Input As TextFile
Dim strTextLine As String
Dim vItem As Variant
Line Input #TextFile, strTextLine
vItem = Split(strTextLine, Chr(10))
' clean file of garbage information line by line
For n = LBound(vItem) To UBound(vItem)
' insert appropriate conditions here - in this case if the string "<<" is present
If InStr(1, vItem(n), "<<") > 0 Then
If FileContent = vbNullString Then
FileContent = vItem(n)
Else
FileContent = FileContent & Chr(10) & vItem(n)
End If
End If
Next n
'Close text file
Close TextFile
' insert the rest of the code here
I'm trying to set up a macro that will import specific lines from a text file into an Excel spreadsheet. I am currently using the InStr function to locate a specific word and then count how many characters over I need to go to import the data into the cells.
The reason I am doing it this way is that the file is over 3500 lines long and is not delimited or comma-separated in any sense. Some of the data is duplicated as well, which causes problems with the above tactic.
What I need help with is how to import only about 20 specific lines into the spreadsheet (multiple sheets will be used, but this code can be reused), while using a technique like the one I mentioned earlier so I can control where it reads from.
Thanks!
This is what I use constantly. Just replace the file location with your text file and this little blip will read it line by line. Then you can throw logic against a single line and decide what to do with it. It's a good solution when you're dealing with data that regex statements can't handle accurately.
Sub LoadSettings()
    'Requires a reference to Microsoft Scripting Runtime
    Dim fso As Scripting.FileSystemObject
    Dim F As Scripting.File
    Dim TS As Scripting.TextStream
    Dim fileloc As String
    Dim RptText As String

    Set fso = New Scripting.FileSystemObject
    fileloc = "Whereyourtextfileislocated"
    Set F = fso.GetFile(fileloc)
    Set TS = F.OpenAsTextStream(ForReading, TristateUseDefault)

    'Read the file line by line until the end is reached
    Do Until TS.AtEndOfStream
        RptText = TS.ReadLine
        'Apply whatever test you need to the current line, for example:
        If InStr(RptText, "whateveryouwant") > 0 Then
            'do something with RptText
        End If
    Loop

    TS.Close
End Sub
I would like to write some data to a csv file. I am using this code:
Dim Filename As String, line As String
Dim A As Integer
Filename = "D:" & "\testfile.csv"
Open Filename For Output As #1
For A = 1 To 100
Print #1, "test, test, test"
Next A
Close #1
but the problem is that this code rewrites the csv file from the beginning, whereas I would like to add the data at the end of the csv file (for example, if I run this code three times, I would like to end up with 300 lines in the csv file).
What should I do?
In which case you need Open Filename For Append As #1.
You might also find that Write #1, behaves better than Print #1, if your line contains quotation characters.
One last thing: don't hardcode the #1, as something else may already be using that file handle. Instead, use
Dim n As Integer
n = FreeFile 'Let VBA find a free file handle
'use #n rather than #1 from here on.
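Putting those pieces together, a minimal sketch of the appending version might look like this (the path and the loop are taken from your question):

Dim Filename As String
Dim n As Integer
Dim A As Integer

Filename = "D:\testfile.csv"
n = FreeFile                      'let VBA pick a free file handle
Open Filename For Append As #n    'Append adds to the end instead of overwriting

For A = 1 To 100
    Print #n, "test, test, test"  'or Write #n, ... if the fields contain quotes
Next A

Close #n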
Here is your Error:
Open Filename For Output As #1
which should be:
Open Filename For Append As #1
This will append your new text to the end of the file.
I believe I have come up with a very efficient way to read very, very large files line-by-line. Please tell me if you know of a better/faster way or see room for improvement. I am trying to get better at coding, so any sort of advice you have would be nice. Hopefully this is something that other people might find useful, too.
In my tests it appears to be something like 8 times faster than using Line Input.
'This function reads a file into a string. '
'I found this in the book Programming Excel with VBA and .NET. '
Public Function QuickRead(FName As String) As String
Dim I As Integer
Dim res As String
Dim l As Long
I = FreeFile
l = FileLen(FName)
res = Space(l)
Open FName For Binary Access Read As #I
Get #I, , res
Close I
QuickRead = res
End Function
'This function works like the Line Input statement'
Public Sub QRLineInput( _
ByRef strFileData As String, _
ByRef lngFilePosition As Long, _
ByRef strOutputString As String, _
ByRef blnEOF As Boolean _
)
On Error GoTo LastLine
strOutputString = Mid$(strFileData, lngFilePosition, _
InStr(lngFilePosition, strFileData, vbNewLine) - lngFilePosition)
lngFilePosition = InStr(lngFilePosition, strFileData, vbNewLine) + 2
Exit Sub
LastLine:
blnEOF = True
End Sub
Sub Test()
Dim strFilePathName As String: strFilePathName = "C:\Fld\File.txt"
Dim strFile As String
Dim lngPos As Long
Dim blnEOF As Boolean
Dim strFileLine As String
strFile = QuickRead(strFilePathName) & vbNewLine
lngPos = 1
Do Until blnEOF
Call QRLineInput(strFile, lngPos, strFileLine, blnEOF)
Loop
End Sub
Thanks for the advice!
My two cents…
Not long ago I needed to read large files using VBA and noticed this question. I tested three approaches to reading data from a file, comparing their speed and reliability over a wide range of file sizes and line lengths. The approaches are:
Line Input VBA statement
Using the File System Object (FSO)
Using the Get VBA statement for the whole file and then parsing the string that was read, as described in posts here
Each test case consists of three steps:
Test case setup, which writes a text file containing a given number of lines of the same given length, filled with a known character pattern.
Integrity test. Read each file line and verify its length and contents.
File read speed test. Read each line of the file, repeated 10 times.
As you can see, Step #3 measures the raw file read speed (as asked in the question), while Step #2 verifies file read integrity and therefore simulates real conditions where string parsing is needed.
The following chart shows the test results for the file read speed test. The file size is 64 MB for all tests, and the tests differ in line length, which varies from 2 bytes (not counting CRLF) to 8 MB.
CONCLUSION:
All three methods are reliable for large files with both normal and abnormal line lengths (please compare to Graeme Howard's answer)
All three methods give almost the same file reading speed for normal line lengths
The "superfast way" (Method #3) works fine for extremely long lines, while the other two do not
All of this applies across different Office versions and different PCs, for both VBA and VB6
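For illustration, the test-file generator from Step #1 can be sketched roughly like this (simplified to a single repeated fill character; the procedure name and that pattern are placeholders, not my actual harness):

'Writes lineCount lines, each lineLength characters long, to filePath
Sub WriteTestFile(ByVal filePath As String, ByVal lineCount As Long, ByVal lineLength As Long)
    Dim n As Integer
    Dim i As Long
    n = FreeFile
    Open filePath For Output As #n
    For i = 1 To lineCount
        Print #n, String(lineLength, "A")   'Print # adds CRLF after each line
    Next i
    Close #n
End Sub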
You can use the Scripting.FileSystemObject to do this.
From the Reference:
The ReadLine method allows a script to read individual lines in a text file. To use this method, open the text file, and then set up a Do Loop that continues until the AtEndOfStream property is True. (This simply means that you have reached the end of the file.) Within the Do Loop, call the ReadLine method, store the contents of the first line in a variable, and then perform some action. When the script loops around, it will automatically drop down a line and read the second line of the file into the variable. This will continue until each line has been read (or until the script specifically exits the loop).
And a quick example:
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("C:\FSO\ServerList.txt", 1)
Do Until objFile.AtEndOfStream
strLine = objFile.ReadLine
MsgBox strLine
Loop
objFile.Close
Line Input works fine for small files. However, when file sizes reach around 90k, Line Input jumps all over the place and reads data in the wrong order from the source file.
I tested it with different file sizes:
49k = ok
60k = ok
78k = ok
85k = ok
93k = error
101k = error
127k = error
156k = error
Lesson learned - use Scripting.FileSystemObject
With that code you load the file in memory (as a big string) and then you read that string line by line.
By using Mid$() and InStr() you actually read the "file" twice but since it's in memory, there is no problem.
I don't know if VB's String type has a length limit (probably not), but if the text files are hundreds of megabytes in size you are likely to see a performance drop due to virtual memory usage.
I would think that in a large-file scenario using a stream would be far more efficient, because memory consumption would be very small.
But your algorithm could switch between using a stream and loading the entire file into memory based on the file size. I wouldn't be surprised if each is only better than the other under certain conditions.
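A rough sketch of that idea, assuming a helper for each approach and an arbitrary size threshold (both the threshold and the procedure name are illustrative, not measured values):

Const SIZE_THRESHOLD As Long = 50000000   'arbitrary 50 MB cut-off; tune by testing

Sub ProcessFile(ByVal fileName As String)
    If FileLen(fileName) > SIZE_THRESHOLD Then
        'Large file: read line by line through a stream to keep memory use small
        Dim fso As Object, ts As Object, lineText As String
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set ts = fso.OpenTextFile(fileName, 1)   '1 = ForReading
        Do Until ts.AtEndOfStream
            lineText = ts.ReadLine
            'process lineText here
        Loop
        ts.Close
    Else
        'Small file: load the whole thing at once and split it in memory
        Dim v As Variant, i As Long
        v = Split(QuickRead(fileName), vbCrLf)   'QuickRead as defined in the question
        For i = LBound(v) To UBound(v)
            'process v(i) here
        Next i
    End If
End Sub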
You can modify the above to read the full file in one go and then display each line, as shown below:
Option Explicit
Public Function QuickRead(FName As String) As Variant
Dim i As Integer
Dim res As String
Dim l As Long
Dim v As Variant
i = FreeFile
l = FileLen(FName)
res = Space(l)
Open FName For Binary Access Read As #i
Get #i, , res
Close i
'split the file with vbcrlf
QuickRead = Split(res, vbCrLf)
End Function
Sub Test()
' you can replace "C:\writename.txt" with any file name you desire
Dim strFilePathName As String: strFilePathName = "C:\writename.txt"
Dim strFileLine As String
Dim v As Variant
Dim i As Long
v = QuickRead(strFilePathName)
For i = 0 To UBound(v)
MsgBox v(i)
Next
End Sub
My take on it...obviously, you've got to do something with the data you read in. If it involves writing it to the sheet, that'll be deadly slow with a normal For Loop. I came up with the following based upon a rehash of some of the items there, plus some help from the Chip Pearson website.
Reading in the text file (assuming you don't know the length of the range it will create, so only startCell is given):
Public Sub ReadInPlainText(startCell As Range, Optional textfilename As Variant)
If IsMissing(textfilename) Then textfilename = Application.GetOpenFilename("All Files (*.*), *.*", , "Select Text File to Read")
If textfilename = "" Then Exit Sub
Dim filelength As Long
Dim filenumber As Integer
filenumber = FreeFile
filelength = filelen(textfilename)
Dim text As String
Dim textlines As Variant
Open textfilename For Binary Access Read As filenumber
text = Space(filelength)
Get #filenumber, , text
'split the file with vbcrlf
textlines = Split(text, vbCrLf)
'output to range
Dim outputRange As Range
Set outputRange = startCell
Set outputRange = outputRange.Resize(UBound(textlines), 1)
outputRange.Value = Application.Transpose(textlines)
Close filenumber
End Sub
Conversely, if you need to write a range out to a text file, this does it quickly in one Print statement (note: the file is opened in text mode here, not binary, unlike the read routine above).
Public Sub WriteRangeAsPlainText(ExportRange As Range, Optional textfilename As Variant)
If IsMissing(textfilename) Then textfilename = Application.GetSaveAsFilename(FileFilter:="Text Files (*.txt), *.txt")
If textfilename = "" Then Exit Sub
Dim filenumber As Integer
filenumber = FreeFile
Open textfilename For Output As filenumber
Dim textlines() As Variant, outputvar As Variant
textlines = Application.Transpose(ExportRange.Value)
outputvar = Join(textlines, vbCrLf)
Print #filenumber, outputvar
Close filenumber
End Sub
Be careful when using Application.Transpose with a huge number of values. If you transpose values into a column, Excel assumes they came from a row.
The maximum number of columns is smaller than the maximum number of rows, so only the first (max column limit) values are carried over, and anything after that comes back as "N/A".
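If you hit that limit, one common workaround (sketched here as an alternative, not part of the routine above) is to copy the 1-D array from Split into a two-dimensional, one-column array and assign that to the range directly, avoiding Application.Transpose altogether. This fragment would replace the Transpose assignment in ReadInPlainText, reusing its textlines and startCell:

Dim col() As Variant
Dim i As Long

ReDim col(1 To UBound(textlines) + 1, 1 To 1)   'two-dimensional, one column, 1-based
For i = LBound(textlines) To UBound(textlines)
    col(i + 1, 1) = textlines(i)
Next i

startCell.Resize(UBound(col, 1), 1).Value = col  'no Transpose needed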
I just wanted to share some of my results...
I have text files, which apparently came from a Linux system, so I only have a vbLF/Chr(10) at the end of each line and not vbCR/Chr(13).
Note 1:
This meant that the Line Input method would read in the entire file, instead of just one line at a time.
From my testing with small (152 KB) and large (2778 KB) files, both on and off the network, I found the following:
Open FileName For Input: Line Input was the slowest (See Note 1 above)
Open FileName For Binary Access Read: Input was the fastest for reading the whole file
FSO.OpenTextFile: ReadLine was fast, but a bit slower than Binary Input
Note 2:
If I just needed to check the file header (the first 1-2 lines) to verify that I had the proper file/format, then FSO.OpenTextFile was the fastest, followed very closely by Binary Input.
The drawback of Binary Input is that you have to know how many characters you want to read.
On normal files, Line Input would also be a good option, but I couldn't test it due to Note 1.
Note 3:
Obviously, the files on the network showed the largest difference in read speed. They also showed the greatest benefit from reading the file a second time (although there are certainly memory buffers that come into play here).
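For the LF-only files from Note 1, one way around the Line Input problem (a sketch adapted from the QuickRead idea earlier in this thread, with a placeholder path) is to read the whole file in binary mode and split on vbLf yourself:

Dim n As Integer, content As String, fileLines() As String, i As Long

n = FreeFile
Open "C:\path\to\unixfile.txt" For Binary Access Read As #n
content = Space$(LOF(n))
Get #n, , content
Close #n

fileLines = Split(content, vbLf)   'LF-only line endings
For i = LBound(fileLines) To UBound(fileLines)
    'process fileLines(i); strip any stray vbCr with Replace(fileLines(i), vbCr, "") if needed
Next i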