trouble with interrupting a Batch file started with VBA - vba

I have an Excel file with a long list of PC names on a server. I want to run the "systeminfo" command against each PC, extract the OS from the output, and write it into an Excel cell automatically. To do so I used the following code, in the VBA module and in the batch file respectively.
However, whenever the server can't reach a PC, the cmd window is stuck until I close it manually. Since the list is actually 148 names long, knowing a way to automatically close those windows after, say, 8 seconds would be really helpful.
I tried to look up a way to multi-thread VBA, only to find out that it is a single-threaded language. I then tried to start another batch file alongside the one I'm actually using, so as to forcefully kill it after a set amount of time, but it seems the second batch starts only after the first has terminated, which makes it useless.
VBA
Sub Test()
'
' Test Macro
' I'm not an expert in VBA, I just picked it up for this task, so a lot of code will be redundant. Bear with me.
'
'
    Dim i As Integer
    Dim a As Integer 'a is basically i - 1
    Dim oSh As Object, oEx As Object
    Dim strBuf As String
    a = 1
    'I needed rows 2 to 148 for the project
    Dim models(1 To 147) As String
    For i = 2 To 148
        models(a) = Cells(i, 3).Value
        a = a + 1
    Next i
    a = 1
    For i = 2 To 148
        'Not totally sure what the next few lines do exactly, but "metodo" is the batch file: this runs it for the current PC and captures its output
        Dim strShellCommand As String
        strShellCommand = "C:\Users\Administrator\Desktop\metodo.bat " & models(a)
        Set oSh = CreateObject("WScript.Shell")
        Set oEx = oSh.Exec(strShellCommand)
        strBuf = oEx.StdOut.ReadAll
        'Strip out of the string everything that isn't purely the OS name
        Dim FinalString As String
        FinalString = Right(strBuf, 26)
        FinalString = Left(FinalString, 25)
        'This is the line that prints the OS name into the Excel cell
        ActiveSheet.Cells(i, 10) = FinalString
        a = a + 1
    Next i
End Sub
Then there is the batch file:
set nome=%1
shift
systeminfo /s %nome% |findstr /c:"Microsoft Windows "

You can add a control loop after the Set oEx = oSh.Exec(strShellCommand) line, like this:
Set oEx = oSh.Exec(strShellCommand)
TimeOut = 8 'maximum wait in seconds
LoopCount = 0
Do 'Control loop
    Application.Wait Now + TimeSerial(0, 0, 1) 'WScript.Sleep is not available in VBA; this waits one second
    LoopCount = LoopCount + 1
Loop Until (oEx.Status <> 0) Or (LoopCount >= TimeOut)
If oEx.Status = 0 Then 'Timeout occurred
    oEx.Terminate
    ReturnValue = "[Process terminated after timeout!]" & vbCrLf
Else
    ReturnValue = "[Process completed]" & vbCrLf
End If
Each pass of the loop waits one second, so (LoopCount >= TimeOut) caps the total wait at TimeOut seconds, 8 in this example.
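For completeness, here is a rough sketch of how that control loop might slot into the original macro, together with a slightly more robust parse of the findstr output (everything after the first colon, instead of a fixed-width cut). It is untested, and the one-second pause uses Application.Wait, which is specific to Excel VBA:

Sub TestWithTimeout()
    Dim oSh As Object, oEx As Object
    Dim i As Long, LoopCount As Long
    Dim strBuf As String, osName As String
    Const TimeOutSeconds As Long = 8

    Set oSh = CreateObject("WScript.Shell")
    For i = 2 To 148
        Set oEx = oSh.Exec("C:\Users\Administrator\Desktop\metodo.bat " & Cells(i, 3).Value)

        'Poll until the batch finishes or the timeout expires
        LoopCount = 0
        Do While oEx.Status = 0 And LoopCount < TimeOutSeconds
            Application.Wait Now + TimeSerial(0, 0, 1) 'wait one second
            LoopCount = LoopCount + 1
        Loop

        If oEx.Status = 0 Then
            oEx.Terminate 'unreachable PC: give up on this one
            osName = ""
        Else
            strBuf = oEx.StdOut.ReadAll
            'The findstr line looks like "OS Name:   Microsoft Windows ...",
            'so keep everything after the first colon instead of a fixed-width cut
            If InStr(strBuf, ":") > 0 Then
                osName = Trim$(Mid$(strBuf, InStr(strBuf, ":") + 1))
            Else
                osName = "" 'no answer from this PC
            End If
        End If
        ActiveSheet.Cells(i, 10).Value = osName
    Next i
End Sub

Note that Terminate only kills the batch process that Exec started; if the underlying systeminfo call lingers on its own, it may need to be cleaned up separately.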
good luck

Related

Fastest Method to (read, remove, write) to a Text File

I coded a simple program that reads a text file line by line, and if the current line contains alphabetic characters (a-z, A-Z) it writes that line into another text file.
If the current line doesn't contain alphabetic characters, it won't write that line into the new text file.
I created this because I have members registering at my website and some of them use only numbers as a username. I want to filter those out and only keep the alphabetic names. (Please focus on this project; I know I could just do this in PHP.)
That works already, but it takes a while to read line by line and write to the other text file (write speed is about 150 KB per minute, and it's not my drive, I have a fast SSD).
So I wonder if there is a faster way. I could use ReadAllLines first, but on large files that just freezes my program, so I don't know whether that works either (I want to focus on large 1 GB+ files).
This is my code so far:
If System.IO.File.Exists(FILE_NAME) = True Then
    Dim objReader As New System.IO.StreamReader(FILE_NAME)
    Do While objReader.Peek() <> -1
        Dim myFile As New FileInfo(output)
        Dim sizeInBytes As Long = myFile.Length
        If sizeInBytes > splitvalue Then
            outcount += 1
            output = outputold + outcount.ToString + ".txt"
            File.Create(output).Dispose()
        End If
        count += 1
        TextLine = objReader.ReadLine() & vbNewLine
        Console.WriteLine(TextLine)
        If CheckForAlphaCharacters(TextLine) Then
            File.AppendAllText(output, TextLine)
        Else
            found += 1
            Label2.Text = "Removed: " + found.ToString
            TextBox1.Text = TextLine
        End If
        Label1.Text = "Checked: " + count.ToString
    Loop
    MessageBox.Show("Finish!")
End If
First of all, as hinted by @Sean Skelly, updating UI controls repeatedly is an expensive operation.
But your bigger problem is File.AppendAllText:
If CheckForAlphaCharacters(TextLine) Then
    File.AppendAllText(output, TextLine)
Else
    found += 1
    Label2.Text = "Removed: " + found.ToString
    TextBox1.Text = TextLine
End If
AppendAllText(String, String)
Opens a file, appends the specified string to the file, and then
closes the file. If the file does not exist, this method creates a
file, writes the specified string to the file, then closes the file.
Source
You are repeatedly opening and closing a file, causing overhead. AppendAllText is a convenience method since it performs several operations in one single call but you can now see why it's not performing well in a big loop.
The fix is easy. Open the file once when you start your loop and close it at the end. Make sure that you always close the file properly even when an exception occurs. For that, you can either invoke the Close in a Finally block, or use a context manager, that is keep your file write operations within a Using block.
And you could remove the print to console as well. Display management has a cost too. Or you could print status updates every 10K lines or so.
When you've done all that, you should notice improved performance.
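For illustration, a minimal sketch of that pattern (reusing the FILE_NAME, output, and CheckForAlphaCharacters names from the question; untested): the reader and writer are opened once, and the Using blocks guarantee they are closed even if an exception occurs.

Using reader As New System.IO.StreamReader(FILE_NAME)
    Using writer As New System.IO.StreamWriter(output, append:=False)
        Dim line As String = reader.ReadLine()
        Do While line IsNot Nothing
            If CheckForAlphaCharacters(line) Then
                writer.WriteLine(line) 'keep usernames that contain letters
            End If
            line = reader.ReadLine()
        Loop
    End Using
End Using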
My final code - it works a lot faster now (500 MB in 1 minute):
Using sw As StreamWriter = File.CreateText(output)
    For Each oneLine As String In File.ReadLines(FILE_NAME)
        Try
            If changeme = True Then
                changeme = False
                GoTo Again2
            End If
            If oneLine.Contains(":") Then
                Dim TestString = oneLine.Substring(0, oneLine.IndexOf(":")).Trim()
                Dim TestString2 = oneLine.Substring(oneLine.IndexOf(":")).Trim()
                If CheckForAlphaCharacters(TestString) = False And CheckForAlphaCharacters(TestString2) = False Then
                    sw.WriteLine(oneLine)
                Else
                    found += 1
                End If
            ElseIf oneLine.Contains(";") Or oneLine.Contains("|") Or oneLine.Contains(" ") Then
                Dim oneLineReplac As String = oneLine.Replace(" ", ":")
                Dim oneLineReplace As String = oneLineReplac.Replace("|", ":")
                Dim oneLineReplaced As String = oneLineReplace.Replace(";", ":")
                If oneLineReplaced.Contains(":") Then
                    Dim TestString3 = oneLineReplaced.Substring(0, oneLineReplaced.IndexOf(":")).Trim()
                    Dim TestString4 = oneLineReplaced.Substring(oneLineReplaced.IndexOf(":")).Trim()
                    If CheckForAlphaCharacters(TestString3) = False And CheckForAlphaCharacters(TestString4) = False Then
                        sw.WriteLine(oneLineReplaced)
                    Else
                        found += 1
                    End If
                Else
                    errors += 1
                    textstring = oneLine
                End If
            Else
                errors += 1
                textstring = oneLine
            End If
            count += 1
        Catch
            errors += 1
            textstring = oneLine
        End Try
    Next
End Using

Inconsistent page count of a PDF document

I'm trying to get the number of pages in the PDF document. Some of my PDFs are created in Word (saved as PDF), some of them are Xeroxed into the directory (not sure if this matters).
After hours of research I've come to find that this is easier said than done. The page count rarely comes back as the correct number of pages, even though most PDFs do in fact have /Count inside their binary content.
For example, I've used the following code; it is supposed to open the document in binary mode, look for /Count or /N, and get the number next to it, which should give me the page count.
Public Sub pagecount(sfilename As String)
    On Error GoTo a
    Dim nFileNum As Integer
    Dim s As String
    Dim c As Integer
    Dim pos, pos1 As Integer
    pos = 0
    pos1 = 0
    c = 0
    ' Get an available file number from the system
    nFileNum = FreeFile
    'OPEN the PDF file in Binary mode
    Open sfilename For Binary Lock Read Write As #nFileNum
    ' Get the data from the file
    Do Until EOF(nFileNum)
        Input #1, s
        c = c + 1
        If c <= 10 Then
            pos = InStr(s, "/N")
        End If
        pos1 = InStr(s, "/count")
        If pos > 0 Or pos1 > 0 Then
            Close #nFileNum
            s = Trim(Mid(s, pos, 10))
            s = Replace(s, "/N", "")
            s = Replace(s, "/count", "")
            s = Replace(s, " ", "")
            s = Replace(s, "/", "")
            For i = 65 To 125
                s = Replace(s, Chr(i), "")
            Next
            pages = Val(Trim(s))
            If pages < 0 Then
                pages = 1
            End If
            Close #nFileNum
            Exit Sub
        End If
        'imp: only 1000 lines searched
        If c >= 1000 Then
            GoTo a
        End If
    Loop
    Close #nFileNum
    Exit Sub
a:
    Close #nFileNum
    pages = 1
    Exit Sub
End Sub
However, most of the time, it defaults to pages = 1 (under a:).
I've also updated this to 10000 to be sure that it hits the /Count line, yet it still does not give me the correct count.
If c >= 10000 Then
    GoTo a
End If
I also came across this reddit
Is there another way to do this, something I can utilize in my app?
Any help is greatly appreciated.
Background:
This is for a legacy VB6 app where I'm attempting to let the user manipulate PDF files. I added a ListBox that displays all PDF documents in a particular directory. When the user double-clicks one of the files, I display it in a WebBrowser component inside my application.
EDIT: Image showing the binary-mode /Count line for 3 different documents:
I double checked the page count, and /Count displays the correct page count for each of the three documents.
Regular expressions have limits, but I prefer to use them for searching for strings and I think this would be a good place to use one. You may want to play with the pattern because I did this relatively quickly with only a little testing.
Add a reference to Microsoft VBScript Regular Expressions 5.5 to your project. Then you can try the sample code below.
Private Sub Command1_Click()
    Dim oRegEx As RegExp
    Dim fHndl As Integer
    Dim sContents As String
    Dim oMatches As MatchCollection

    On Error GoTo ErrCommand1_Click

    'Open and read in the file (sPdfFile is a placeholder for the path to your PDF)
    fHndl = FreeFile
    Open sPdfFile For Binary Access Read As fHndl
    sContents = String(LOF(fHndl), vbNull)
    Get #fHndl, 1, sContents
    Close #fHndl 'We have the file contents so close it
    fHndl = 0

    'Instantiate and configure the RegEx
    Set oRegEx = New RegExp
    oRegEx.Global = True
    oRegEx.Pattern = "(?:/Count )(\d+)"
    Set oMatches = oRegEx.Execute(sContents)

    'Look for a match
    If oMatches.Count > 0 Then
        If oMatches(0).SubMatches.Count > 0 Then
            MsgBox CStr(oMatches(0).SubMatches(0)) & " Pages"
        End If
    End If
    Exit Sub

ErrCommand1_Click:
    Debug.Print "Error: " & CStr(Err.Number) & ", " & Err.Description
    If Not oRegEx Is Nothing Then Set oRegEx = Nothing
    If Not oMatches Is Nothing Then Set oMatches = Nothing
End Sub
An explanation of the RegEx pattern:
(?: ) creates a group that does not capture what it matches
/Count followed by a space is a literal string
(\d+) is a capturing group: one or more digits, greedy; the captured digits end up in SubMatches(0) as the page count
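If it helps to call this from elsewhere in the app (for example the ListBox double-click handler), a hedged sketch of the same idea wrapped in a function might look like this; GetPdfPageCount is an assumed name and the code is untested. It returns -1 when no /Count is found or an error occurs.

Public Function GetPdfPageCount(ByVal sFileName As String) As Long
    Dim oRegEx As RegExp, oMatches As MatchCollection
    Dim fHndl As Integer, sContents As String

    On Error GoTo Fail
    GetPdfPageCount = -1

    'Read the whole file into a string
    fHndl = FreeFile
    Open sFileName For Binary Access Read As #fHndl
    sContents = String(LOF(fHndl), vbNull)
    Get #fHndl, 1, sContents
    Close #fHndl
    fHndl = 0

    'Pick up the first /Count entry
    Set oRegEx = New RegExp
    oRegEx.Global = True
    oRegEx.Pattern = "(?:/Count )(\d+)"
    Set oMatches = oRegEx.Execute(sContents)
    If oMatches.Count > 0 Then
        If oMatches(0).SubMatches.Count > 0 Then
            GetPdfPageCount = CLng(oMatches(0).SubMatches(0))
        End If
    End If
    Exit Function

Fail:
    If fHndl <> 0 Then Close #fHndl
End Function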

Word VBA: iterating through characters incredibly slow

I have a macro that changes single quotes in front of a number to an apostrophe (or closing single curly quote). Typically when you type something like "the '80s" in Word, the apostrophe in front of the "8" faces the wrong way. The macro below works, but it is incredibly slow (around 10 seconds per page). In a regular language (even an interpreted one) this would be a fast procedure. Any insights into why it takes so long in VBA with Word 2007? Or if someone has find-and-replace skills that can do this without iterating, please let me know.
Sub FixNumericalReverseQuotes()
    Dim char As Range
    Debug.Print "starting " + CStr(Now)
    With Selection
        total = .Characters.Count
        ' Will be looking ahead one character, so we need at least 2 in the selection
        If total < 2 Then
            Exit Sub
        End If
        For x = 1 To total - 1
            a_code = Asc(.Characters(x))
            b_code = Asc(.Characters(x + 1))
            ' We want to convert a single quote in front of a number to an apostrophe
            ' Trying to use all numerical comparisons to speed this up
            If (a_code = 145 Or a_code = 39) And b_code >= 48 And b_code <= 57 Then
                .Characters(x) = Chr(146)
            End If
        Next x
    End With
    Debug.Print "ending " + CStr(Now)
End Sub
Besides the two questions asked (Why...? and How to do it without iterating?), there is an implied one: how to iterate properly through a Word object collection.
The answer is to use the obj.Next property rather than accessing by index.
That is, instead of:
For i = 1 To ActiveDocument.Characters.Count
    'Do something with ActiveDocument.Characters(i), e.g.:
    Debug.Print ActiveDocument.Characters(i).Text
Next
one should use:
Dim ch As Range: Set ch = ActiveDocument.Characters(1)
Do
    'Do something with ch, e.g.:
    Debug.Print ch.Text
    Set ch = ch.Next 'Note the iteration
Loop Until ch Is Nothing
Timing: 00:03:30 vs. 00:00:06, more than 3 minutes vs. 6 seconds.
Found on Google, link lost, sorry. Confirmed by personal exploration.
Modified version of @Comintern's "Array method":
Sub FixNumericalReverseQuotes()
    Dim chars() As Byte
    chars = StrConv(Selection.Text, vbFromUnicode)
    Dim pos As Long
    For pos = 0 To UBound(chars) - 1
        If (chars(pos) = 145 Or chars(pos) = 39) _
            And (chars(pos + 1) >= 48 And chars(pos + 1) <= 57) Then
            ' Make the change directly in the selection so track changes is sensible.
            ' I have to use 213 instead of 146 for reasons I don't understand--
            ' probably has to do with encoding on Mac, but anyway, this shows the change.
            Selection.Characters(pos + 1) = Chr(213)
        End If
    Next pos
End Sub
Maybe this?
Sub FixNumQuotes()
    Dim MyArr As Variant, MyString As String, X As Long, Z As Long
    Debug.Print "starting " + CStr(Now)
    For Z = 145 To 146
        MyArr = Split(Selection.Text, Chr(Z))
        For X = LBound(MyArr) To UBound(MyArr)
            If IsNumeric(Left(MyArr(X), 1)) Then MyArr(X) = "'" & MyArr(X)
        Next
        MyString = Join(MyArr, Chr(Z))
        Selection.Text = MyString
    Next
    Selection.Text = Replace(Replace(Selection.Text, Chr(146) & "'", "'"), Chr(145) & "'", "'")
    Debug.Print "ending " + CStr(Now)
End Sub
I am not 100% sure on your criteria, I have made both an open and close single quote a ' but you can change that quite easily if you want.
It splits the string into an array on Chr(145), checks the first character of each element for a numeral, and prefixes it with a single quote if found.
Then it joins the array back into a string on Chr(145) and repeats the whole thing for Chr(146). Finally it looks through the string for an occurrence of a single quote AND either of those curly quotes next to each other (because that has to be something we just created) and replaces them with just the single quote we want. This leaves any occurrence not next to a number intact.
This final replacement part is the bit you would change if you want something other than ' as the character.
I have been struggling with this for days now. My attempted solution was to use a regular expression on document.text. Then, using the matches in a document.range(start,end), replace the text. This preserves formatting.
The problem is that the start and end in the range do not match the index into text. I think I have found the discrepancy - hidden in the range are field codes (in my case they were hyperlinks). In addition, document.text has a bunch of BEL codes that are easy to strip out. If you loop through a range using the character method, append the characters to a string and print it you will see the field codes that don't show up if you use the .text method.
Amazingly you can get the field codes in document.text if you turn on "show field codes" in one of a number of ways. Unfortunately, that version is not exactly the same as what the range/characters shows - the document.text has just the field code, the range/characters has the field code and the field value. Therefore you can never get the character indices to match.
I have a working version where instead of using range(start,end), I do something like:
Set matchRange = doc.Range.Characters(myMatches(j).FirstIndex + 1)
matchRange.Collapse (wdCollapseStart)
Call matchRange.MoveEnd(WdUnits.wdCharacter, myMatches(j).Length)
matchRange.text = Replacement
As I say, this works but the first statement is dreadfully slow - it appears that Word is iterating through all of the characters to get to the correct point. In doing so, it doesn't seem to count the field codes, so we get to the correct point.
Bottom line, I have not been able to come up with a good way to match the indexing of the document.text string to an equivalent range(start,end) that is not a performance disaster.
Ideas welcome, and thanks.
This is a problem begging for regular expressions. Resolving the .Characters calls that many times is probably what is killing you in performance.
I'd do something like this:
Public Sub FixNumericalReverseQuotesFast()
    Dim expression As RegExp
    Set expression = New RegExp

    Dim buffer As String
    buffer = Selection.Range.Text

    expression.Global = True
    expression.MultiLine = True
    expression.Pattern = "[" & Chr$(145) & Chr$(39) & "]\d"

    Dim matches As MatchCollection
    Set matches = expression.Execute(buffer)

    Dim found As Match
    For Each found In matches
        buffer = Replace(buffer, found, Chr$(146) & Right$(found, 1))
    Next

    Selection.Range.Text = buffer
End Sub
NOTE: Requires a reference to Microsoft VBScript Regular Expressions 5.5 (or late binding).
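With late binding (the alternative mentioned in the note), the typed declarations would presumably become Objects and the RegExp would be created by ProgID instead of through a project reference:

Dim expression As Object, matches As Object, found As Object
Set expression = CreateObject("VBScript.RegExp") 'late-bound RegExp, no reference required
'...the rest of the routine stays the same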
EDIT:
The solution without using the Regular Expressions library is still avoiding working with Ranges. This can easily be converted to working with a byte array instead:
Sub FixNumericalReverseQuotes()
    Dim chars() As Byte
    chars = StrConv(Selection.Text, vbFromUnicode)
    Dim pos As Long
    For pos = 0 To UBound(chars) - 1
        If (chars(pos) = 145 Or chars(pos) = 39) _
            And (chars(pos + 1) >= 48 And chars(pos + 1) <= 57) Then
            chars(pos) = 146
        End If
    Next pos
    Selection.Text = StrConv(chars, vbUnicode)
End Sub
Benchmarks (100 iterations, 3 pages of text with 100 "hits" per page):
Regex method: 1.4375 seconds
Array method: 2.765625 seconds
OP method: (Ended task after 23 minutes)
About half as fast as the Regex, but still roughly 10ms per page.
EDIT 2: Apparently the methods above are not format safe, so method 3:
Sub FixNumericalReverseQuotesVThree()
    Dim full_text As Range
    Dim cached As Long
    Set full_text = ActiveDocument.Range
    full_text.Find.ClearFormatting
    full_text.Find.MatchWildcards = True
    cached = full_text.End
    Do While full_text.Find.Execute("[" & Chr$(145) & Chr$(39) & "][0-9]")
        full_text.End = full_text.Start + 2
        full_text.Characters(1) = Chr$(96)
        full_text.Start = full_text.Start + 1
        full_text.End = cached
    Loop
End Sub
Again, slower than both the above methods, but still runs reasonably fast (on the order of ms).

VBA "out of memory" error when Excel consume only 70MB

Q: Why out of memory, when my system has plenty of it left (and Office is 64-bit)?
Q: Could it be that the data, when split, causes such strange behavior?
Q: If splitting that string causes trouble, then how do I sanitize/restore it just for the operations of storing/restoring that string?
Specs: Win 8.1 Pro + Office 2013 64-bit, 8 GB RAM in the system
And here is the code, which just takes a single LARGE (~1-2 MB) string and splits it into multiple cells, so that the 32k-characters-per-cell limit does no harm:
Public Sub SaveConst(str As String)
    Dim i As Long
    i = 0
    ' Clear prior data
    Do While LenB(Range("ConstJSON").Offset(0, i)) <> 0
        Range("ConstJSON").Offset(0, i) = ""
        i = i + 1
    Loop

    Dim strLen As Long
    With Range("ConstJSON")
        .Offset(0, 0) = Left$(str, 30000)
        i = 1
        strLen = Len(str)
        Debug.Print strLen
        Do While strLen > i * 30000
            .Offset(0, i) = Mid$(str, i * 30000 + 1, 30000)
            Debug.Print i
            i = i + 1
        Loop
    End With
End Sub
Right now Len(str) reports ~270k characters, i goes up to 4 iterations, and then the "Out of memory" error kicks in.
This is the n-th iteration of that bug in this place. I have simplified/modified the code so that it sometimes works, for the exact same data set.
UPDATE:
Thanks to Jean's code, I'm confident that it is SAVING the partial string to the cell that causes the error.
.Offset(0, i) = Mid$(str, i * 30000 + 1, 30000)
Or
Range("ConstJSON").Resize(nPieces).Value2 = v
Both cause errors.
UPDATE 2:
I was saving that string to a single cell without any fuss. But now that the string has grown too big to fit, splitting it sometimes causes that "Out of memory" error.
Example string:
[...]
""ebiZlecenias"":[{""id"":""91a75940-6d3e-06f8-bcf7-28ecd49e85f2"",""lp"":null,""name"":""ZLECENIE
GŁÓWNE"",""date_entered"":""2014-04-15
08:13:18"",""date_modified"":""2014-04-15
08:13:18"",""modified_user_id"":""2"",""budowa_id"":""8614aab5-29da-ffac-4865-e8c5913c729c"",""rodzaj"":""1"",""etap"":""1"",""data_akceptacji"":null,""opis"":null,""user_id"":null,""data_bazowa_od"":null,""data_bazowa_do"":null,""data_rzeczywista_od"":null,""data_rzeczywista_do"":null,""archiwalny"":null,""deleted"":null,""termin_raportowania"":null,""okres_raportowania"":null,
[...]
EDIT: I believe the problem with your specimen string is that some of the substrings begin with a "-". When that happens, Excel thinks the contents is a formula, and that is what causes the error. Pre-formatting the cell as text did not correct the problem, but preceding each entry with a 'single quote', which coerces the entry to text and will not show up except in the formula bar, seems to have corrected the problem in my macros, even when using your specimen string above as the "base" string.
EDIT2: What seems to be happening is that, if the string length is greater than 8,192 characters (the longest allowed in a formula), and also starts with a token that makes Excel think it might be a formula (e.g: -, +, =), the write to the cell will fail with an out of memory error EVEN IF the cell is formatted as text. This does not happen if the single quote is inserted first.
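If you want to reproduce the behavior described in the edits in isolation, a small sketch along these lines should show it (untested; any blank sheet will do):

Sub ReproFormulaCoercion()
    Dim s As String
    s = "-" & String(9000, "x") 'longer than the 8,192-character formula limit and starts with "-"
    On Error Resume Next
    Range("A1").Value = s 'Excel tries to parse this as a formula and the write fails
    If Err.Number <> 0 Then Debug.Print "Plain write failed: " & Err.Description
    On Error GoTo 0
    Range("A2").Value = "'" & s 'the leading apostrophe coerces it to text, so this succeeds
End Sub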
Below is some code that works on much longer strings.
The code below first creates a long string, in this case the string is slightly more than 100,000,000 characters, and then splits it into sequential columns. No errors:
Option Explicit

Sub MakeLongString()
    Dim S As String
    Const strLEN As Long = 100 * 10 ^ 6
    Const strPAT As String = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    S = strPAT
    Do
        S = S & S
    Loop Until Len(S) > strLEN
    Debug.Print Format(Len(S), "#,###")
    SplitString (S)
    Debug.Print Range("a1").End(xlToRight).Column
End Sub

Sub SplitString(STR)
    Dim R As Range
    Dim strLEN As Long
    Set R = [a1]
    Dim I As Long
    strLEN = Len(STR)
    Do Until I > strLEN
        R(1, I / 30000 + 1) = "'" & Mid(STR, I + 1, 30000)
        I = I + 30000
    Loop
End Sub
I just ran a test where the range being written to was a multi-cell range, and the target was set by the Offset method as you did, and it also ran to completion without error, filling in the first four rows.
Sub SplitString(STR)
    Dim R As Range
    Dim strLEN As Long
    Set R = [a1:a4]
    Dim I As Long
    strLEN = Len(STR)
    Do Until I > strLEN
        R.Offset(, I / 30000) = "'" & Mid(STR, I + 1, 30000)
        I = I + 30000
    Loop
End Sub
This is worth a try: first split the string into an array, then slap that entire array onto the sheet at once.
Const pieceLength As Long = 3000

Dim s As String
Dim i As Long
Dim nPieces As Long
Dim v As Variant

s = ... ' whatever your string is...
nPieces = WorksheetFunction.Ceiling(Len(s) / pieceLength, 1)

ReDim v(1 To nPieces, 1 To 1)
For i = 1 To nPieces
    v(i, 1) = Mid(s, pieceLength * (i - 1) + 1, pieceLength)
Next i

Range("ConstJSON").Resize(nPieces).Value2 = v
I haven't tested your code, so can't say exactly what's wrong with it, but I know that writing to (or reading from) individual cells one at a time is slow and expensive; it's usually much better to read/write large swaths of cells to/from arrays, and manipulate the arrays (instead of the cells).

Reading excel rows and simultaneously processing it

I have an Excel sheet and it's humongous: around 200,000 rows that need to be processed.
All I have to do is read it and process each row with a query against a DB2 table. I have written the program, but it takes more than 8 hours to process 5,000 rows.
Is there a way I can read the Excel sheet and execute the queries simultaneously? I want the two to be independent processes. I cannot use Parallel.For, as reading and creating so many thread instances gives no advantage, and pipes and queues were of no use. This uses the DOM method, and it does not read a row, it reads a string; if there is a null value in the row, executing that row throws a null exception. I am fine with BackgroundWorkers and TPL. Any idea or code would be appreciated. No DLLs can be used apart from OpenXML.
Ideally I do not want to add to an array; I want the values in 2 different variables and to process them as they are read:
Read a row (only 2 columns, ignore the other columns).
Create a thread to execute the query for the row and, in parallel, read the next row.
Merge into one single table.
Display the results. Sounds simple, but there are challenges.
Try
    Using spreadsheetDocument As SpreadsheetDocument = spreadsheetDocument.Open(fileName, False)
        Dim workbookPart As WorkbookPart = spreadsheetDocument.WorkbookPart
        Dim worksheetPart As WorksheetPart = workbookPart.WorksheetParts.First()
        Dim sheetData As SheetData = worksheetPart.Worksheet.Elements(Of SheetData)().First()
        For Each r As Row In sheetData.Elements(Of Row)()
            For Each c As Cell In r.Elements(Of Cell)()
                Dim text As String
                Try
                    text = c.CellValue.Text.ToString
                    Debug.Print(text.ToString)
                    If text IsNot Nothing AndAlso Trim(text).Length > 0 Then
                        Arr.Add(text.ToString)
                    End If
                    text = Nothing
                    j += 1
                Catch
                End Try
            Next
            text = Nothing
        Next
    End Using
Catch ex As Exception
    MsgBox("Exception caught: " + ex.Message)
    Debug.Print(ex.Message.ToString)
    End
End Try

myArr = CType(Arr.ToArray(GetType(String)), String())
This is the process that divides the data into the 2 parameters:
For i As Integer = 2 To myArr.Count - 1 Step 2
    If i And 1 Then
        i = i - 1
    Else
        dstr = DateTime.FromOADate(Double.Parse(myArr(i).ToString())).ToShortDateString()
        'Debug.Print(dstr.ToString & "----->" & i.ToString & "TCID--->" & myArr(i + 1).ToString)
        DQueue.Enqueue(DateTime.FromOADate(Double.Parse(myArr(i).ToString())).ToShortDateString())
        Tqueue.Enqueue((myArr(i + 1).ToString()))
        TCArr.Add((myArr(i + 1).ToString()))
        dc.Merge(ProcessQueryODBC(dstr, myArr(i + 1).ToString))
        If dc.Rows.Count > 0 Then
            dt.Merge(dc)
        Else
            nFound.Merge(CreateDT(dstr, myArr(i + 1).ToString()))
        End If
    End If
Next
Instead of opening a DB connection through ODBC, can you export your data to a CSV file and then let DB2 perform the import?
somestring = "import from ""myfile.csv"" of DEL ...."
DoCmd.RunSQL somestring
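A hedged sketch of that approach in VB.NET (the file, table, and column names are assumptions, and the IMPORT statement would normally be run through the DB2 command line processor rather than DoCmd): write the two columns out once, then let DB2 load the whole file in one operation instead of issuing one query per row.

Imports System.IO
Imports System.Collections.Generic

Module CsvExport
    'Hypothetical helper: dump the two columns to a delimited file for a DB2 bulk import
    Sub ExportRows(dates As List(Of String), tcids As List(Of String), csvPath As String)
        Using sw As New StreamWriter(csvPath)
            For i As Integer = 0 To dates.Count - 1
                sw.WriteLine(dates(i) & "," & tcids(i)) 'two columns, comma-delimited
            Next
        End Using
        'Then, from the DB2 command line processor (names assumed):
        '  db2 "IMPORT FROM mydata.csv OF DEL INSERT INTO MYSCHEMA.MYTABLE (DATE_COL, TCID_COL)"
    End Sub
End Module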