VBA Reading From a UCS-2 Little Endian Encoded Text File

I have a whole bunch of text files exported from Photoshop that I need to import into an Excel document. I wrote a macro to get the job done, and it seemed to work just fine for my test document, but when I tried loading some of the actual files produced by Photoshop, Excel started putting each value on its own row, except for the first line.
My code that reads the text file:
Open currentDocPath For Input As stream
Do Until EOF(stream)
    Input #stream, currentLine
    columnContents = Split(currentLine, vbTab)
    For n = 0 To UBound(columnContents)
        ActiveSheet.Cells(row, Chr(64 + colum + n)).Value = columnContents(n)
    Next n
    row = row + 1
Loop
Close stream
The text files I am reading look like this, only with much more data:
"Name" "Data" "Info" "blah"
"Name1" "Data1" "Info1" "blah1"
"Name2" "Data2" "Info2" "blah2"
The problem seemed pretty trivial, but when I load it into Excel, instead of looking like it does above, it looks like this:
ÿþ"Name" "Data" "Info" "blah"
Name1
Data1
Info1
blah1
Name2
Data2
Info2
blah2
Now I am not sure why this is happening. It seems like the first two characters in the first row are a byte order mark: those bytes declare the text encoding. Somehow the first row keeps its formatting while the remaining rows lose their quotation marks and each get moved to a new line.
Could someone who understands UCS-2 Little Endian text encoding explain how I can work around this? When I convert the files to ASCII it works fine.
Cheers!
edit: Okay, so I understand now that the encoding is UTF-16 (I don't know a whole lot about character encodings). My main issue is that it's formatting strangely and I don't understand why, or how to fix it. Thanks!

As I mentioned in my comment, it appears the file you're trying to import is encoded in UTF-16.
In this vbaexpress.com article, someone suggested that the following should work:
Dim GetOpenFile As String
Dim MyData As String
Dim r As Long

GetOpenFile = Application.GetOpenFilename
r = 1
Open GetOpenFile For Input As #1
Do While Not EOF(1)
    Line Input #1, MyData
    Cells(r, 1).Value = MyData
    r = r + 1
Loop
Close #1
Obviously I can't test it myself, but maybe it'll help you.
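If Line Input still hands you mangled bytes, another approach that often works in VBA is ADODB.Stream, which lets you name the encoding explicitly. A minimal sketch, reusing currentDocPath and currentLine from your code (untested against your files, of course):

Dim adoStream As Object
Set adoStream = CreateObject("ADODB.Stream")
adoStream.Charset = "Unicode"    ' UTF-16 LE, i.e. what editors label UCS-2 Little Endian
adoStream.Open
adoStream.LoadFromFile currentDocPath
Do Until adoStream.EOS
    currentLine = adoStream.ReadText(-2)    ' -2 = adReadLine: read one line at a time
    ' ...split on vbTab and write to cells as before...
Loop
adoStream.Close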

Why not just tell Excel to import the file? MS has probably put hundreds of thousands of person-hours into that code. Record the import as a macro to get easy code.
Remember, Excel is a tool for non-programmers to do programming things. Use it instead of trying to replace it.
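For example, a recorded tab-delimited import boils down to something like the sketch below. The file path is a placeholder, and the 1200 code page for UTF-16 is an assumption about what the recorder emits when Excel detects the byte order mark:

With ActiveSheet.QueryTables.Add(Connection:="TEXT;C:\exports\data.txt", Destination:=Range("A1"))
    .TextFilePlatform = 1200    ' code page; 1200 = UTF-16 LE (assumed here)
    .TextFileTabDelimiter = True    ' the Photoshop export is tab-delimited
    .TextFileTextQualifier = xlTextQualifierDoubleQuote
    .Refresh BackgroundQuery:=False
End With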
These are the replacement file functions that you use for new code. Add a reference to Microsoft Scripting Runtime.
Opens a specified file and returns a TextStream object that can be used to read from, write to, or append to the file.
object.OpenTextFile(filename[, iomode[, create[, format]]])
Arguments
object
Required. Object is always the name of a FileSystemObject.
filename
Required. String expression that identifies the file to open.
iomode
Optional. Can be one of three constants: ForReading, ForWriting, or ForAppending.
create
Optional. Boolean value that indicates whether a new file can be created if the specified filename doesn't exist. The value is True if a new file is created, False if it isn't created. If omitted, a new file isn't created.
format
Optional. One of three Tristate values used to indicate the format of the opened file. If omitted, the file is opened as ASCII.
The format argument can have any of the following settings:
Constant             Value   Description
TristateUseDefault    -2     Opens the file using the system default.
TristateTrue          -1     Opens the file as Unicode.
TristateFalse          0     Opens the file as ASCII.
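So, to read a UTF-16 file like the Photoshop export, pass TristateTrue as the format argument. A minimal sketch, assuming the Microsoft Scripting Runtime reference mentioned above and a placeholder file path:

Dim fso As New Scripting.FileSystemObject
Dim ts As Scripting.TextStream
Dim lineText As String

' TristateTrue = open as Unicode (UTF-16), which matches the file's ÿþ byte order mark
Set ts = fso.OpenTextFile("C:\exports\data.txt", ForReading, False, TristateTrue)
Do Until ts.AtEndOfStream
    lineText = ts.ReadLine
    ' ...split on vbTab and write to cells, as in the question's loop...
Loop
ts.Close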


VB writeline writes corrupt lines to text file

Is it possible for the following code to produce NUL values within a text file?
Dim temp_str As String = "123456;1234567"
My.Computer.FileSystem.WriteAllText(Path & "stats.txt", temp_str, False)
It seems simple, but it writes quite often, and I'm seeing several files that get accessed by the application whose strings have been written out as NUL characters when I open them with Notepad++. Some other editors show just squares, and it seems like each character is represented by a block/NUL.
So far I've been unable to reproduce this on my test system. I just find the files on a COMX module's file system that's been running in the field and comes back faulty, but I've been seeing enough of these files to make it a problem that needs to be solved.
Does anyone have an idea to prevent this behaviour?
Hard to say what the problem is without more code, but try this if you want to replace the existing contents of the file:
Dim fileContent = "My UTF-8 file contents"
Using writer As IO.StreamWriter = IO.File.CreateText(fullPathIncludingExtension)
    writer.Write(fileContent)
End Using
Or this if you want to append UTF-8 text:
Dim newLines = "My UTF-8 content to append"
Using writer As IO.StreamWriter = IO.File.AppendText(fullPathIncludingExtension)
    writer.Write(newLines)
End Using
If you want to append Unicode text, you must use a different constructor for StreamWriter:
Using writer As IO.StreamWriter = New IO.StreamWriter("full/path/to/file.txt", True, Text.Encoding.Unicode)
    writer.Write(MyContentToAppend)
End Using
Note that the True argument to the constructor specifies that you want to append text.

VBS Find/ replace double paragraph spacing with single spacing

I wasn't sure how to post a "question" that I found an answer to, but I thought it might be worth sharing my solution to save others the time I spent figuring out how to do this.
Essentially, I have a PDF (with lots of pages/formatting) that I want to strip the text out of and paste into something else. However, a simple copy/paste leaves the text in its columns and automatically inserts paragraph breaks, which you then have to remove by pressing End, Delete, Space, and repeating that sequence indefinitely. Well, that's what programming was made for: doing repeated tasks so you don't have to.
My answer is posted below. If anyone has a better solution please let me know!
Below I pasted the code from a vbscript that I wrote to do so. You will still need to go back through your text file afterwards and fix the bits and pieces that didn't follow the standard template you programmed for.
Also, I'll note that I used Notepad++ to determine how (in Windows) Adobe Reader handles carriage returns versus line feeds (since the distinction is rather blurred today). I referenced this article and the answer by AAT, which helped me understand the difference. The accepted answer is useful when specifically referencing VBS.
REM Set constants, then open file and copy into a buffer (contents)
Const ForReading = 1, ForWriting = 2
Dim fs, txt, contents
Set fs = CreateObject("Scripting.FileSystemObject")
Set txt = fs.OpenTextFile("originalTextFile.txt", ForReading)
contents = txt.ReadAll
txt.Close

REM Replace a double carriage return with un-repeatable text as a placeholder
contents = Replace(contents, vbCrLf & vbCrLf, "$%^&")

REM Then replace leftover carriage returns with nothing
contents = Replace(contents, vbCrLf, "")

REM Finally, restore the original carriage returns for paragraph spacing
contents = Replace(contents, "$%^&", vbCrLf & vbCrLf)

REM Write to file
Set txt = fs.OpenTextFile("textFileRemovedSpaces.txt", ForWriting)
txt.Write contents
txt.Close

MsgBox("Done!")
Step 1: Save the PDF as a text file; this strips out the pictures etc. With Adobe Reader, do File -> Save as other -> Text.
Step 2: Save the above as Something.vbs, and edit the file names in the script as appropriate. Make sure to also create the empty text file for the script to save the edited text in. Note that in VBS, the text "REM" signifies that a comment follows.
Step 3: Run the script.
Step 4: Profit!
I've found this useful, as it saves most of the effort of editing a 300-page PDF that I needed to convert to a Word document.
Again, if anyone has a better solution please let me know!

Validate a csv file

This is my sample file
#%cty_id1,#%ccy_id2,#%cty_src,#%cty_cd3,#%cty_nm4,#%cty_reg5,#%cty_natnl6,#%cty_bus7,#%cty_data8
690,ALL2,,AL,ALBALODMNIA,,,,
90,ALL2,,,AQ,AKNTARLDKCTICA,,,
161,IDR2,,AZ,AZLKFMERBALFKIJAN,,,,
252,LTL2,,BJ,BENLFMIN,,,,
206,CVE2,,BL,SAILFKNT BAFSDRTHLEMY,,,,
360,,,BW2,BOPSLFTSWLSOANA,,,,
The problem is that #%cty_cd3 is a standard column (NOT NULL) with a length of exactly 2 letters, but in SQL Server the record shifts into the wrong column (due to an extra comma in between). How do I validate the CSV file to make sure that a 2-character value only ever appears in the 4th column? There are around 10000 records.
Set of rules defined:
Each row should have the standard set of delimiters.
If not, check the NOT NULL columns for null values.
If a null is found, remove the delimiter at that position, i.e. the 3 commas ,,, are replaced with 2 commas ,,.
UPDATE: Can this be done using a script? I need only a function that operates on records like
90,ALL2,,,AQ,AKNTARLDKCTICA,,,
corrects them using a regex or any other method, and puts them back into the source file!
Your best bet here may be to use the tSchemaComplianceCheck component in Talend.
If you read the file in with a tFileInputDelimited component and then check it with a tSchemaComplianceCheck where you set cty_cd to not nullable, it will reject your Antarctica row simply for the null where you expect no nulls.
From here you can use a tMap and simply map the fields to the one above.
You should be able to tweak this easily as necessary, potentially with further tSchemaComplianceChecks down the reject lines and mappings to suit. This method is a lot more self-explanatory, and you don't have to deal with complicated regexes that need complicated management whenever you want to accommodate a different variation of your file structure, with the benefit that you will always capture all of the well-formatted rows.
You could try to delete the empty field in column 4, if column no. 4 is not a two-character field, as follows:
awk 'BEGIN {FS=OFS=","}
{
    for (i=1; i<=NF; i++) {
        if (!(i==4 && length($4)!=4))
            printf "%s%s",$i,(i<NF)?OFS:ORS
    }
}' file.csv
Output:
"id","cty_ccy_id","cty_src","cty_nm","cty_region","cty_natnl","cty_bus_load","cty_data_load"
6,"ALL",,"AL","ALBANIA",,,,
9,"ALL",,"AQ","ANTARCTICA",,,
16,"IDR",,"AZ","AZERBAIJAN",,,,
25,"LTL",,"BJ","BENIN",,,,
26,"CVE",,"BL","SAINT BARTH�LEMY",,,,
36,,,"BW","BOTSWANA",,,,
41,"BNS",,"CF","CENTRAL AFRICAN REPUBLIC",,,,
47,"CVE",,"CL","CHILE",,,,
50,"IDR",,"CO","COLOMBIA",,,,
61,"BNS",,"DK","DENMARK",,,,
Note: we use length($4)!=4 since we expect two characters in column 4, but we also have to count two extra characters for the double quotes.
The solution is to use a look-ahead regex, as suggested before. To reproduce your issue I used this:
"\\,\\,\\,(?=\\\"[A-Z]{2}\\\")"
which matches three commas followed by two quoted uppercase letters, without including them in the match. Of course, you may need to adjust it a bit for your needs (i.e. an arbitrary number of commas rather than exactly three).
But you cannot use it in Talend directly without tons of errors. Here's how to design your job: read the file line by line, with no fields yet. Then, inside the tMap, do the match and replace, like:
row1.line.replaceAll("\\,\\,\\,(?=\\\"[A-Z]{2}\\\")", ",,")
and finally tokenize the line using "," as separator to get your final schema. You probably need to manually trim out the quotes here and there, since tExtractDelimitedFields won't.
An output example was shown as a screenshot; it needs some cleaning, of course.
You don't need to enter the schema for tExtractDelimitedFields by hand. Use the wizard to record a Delimited File schema into the metadata repository, as you probably already did. You can use this schema as a Generic Schema too, fitting it to the outgoing connection of tExtractDelimitedFields. Not something the purists would endorse, but it works and saves time.
About your UI problems, they are often related to file encodings and locale settings. Don't worry too much, they (usually) won't affect the job execution.
EDIT: here's a sample TOS job which shows the solution, just import in your project: TOS job archive
Coming to the party late, with a VBA-based approach. An alternative to regexes is to parse the file and remove a comma when the 4th field is empty. Using the Microsoft Scripting Runtime, this can be achieved as follows: the code opens the file, then reads each line, copying it to a new temporary file. If the 4th element is empty, it writes the line with the extra comma removed. The cleaned data is then copied back to the original file and the temporary file is deleted. It seems a bit of a long way round, but when I tested it on a file of 14000 rows based on your sample, it took under 2 seconds to complete.
Sub Remove4thFieldIfEmpty()
    Const iNUMBER_OF_FIELDS As Integer = 9
    Dim str As String
    Dim fileHandleInput As Scripting.TextStream
    Dim fileHandleCleaned As Scripting.TextStream
    Dim fsoObject As Scripting.FileSystemObject
    Dim sPath As String
    Dim sFilenameCleaned As String
    Dim sFilenameInput As String
    Dim vFields As Variant
    Dim iCounter As Integer
    Dim sNewString As String

    sFilenameInput = "Regex.CSV"
    sFilenameCleaned = "Cleaned.CSV"
    Set fsoObject = New FileSystemObject
    sPath = ThisWorkbook.Path & "\"

    ' Open the input file and a temporary file for the cleaned rows
    Set fileHandleInput = fsoObject.OpenTextFile(sPath & sFilenameInput)
    If fsoObject.FileExists(sPath & sFilenameCleaned) Then
        Set fileHandleCleaned = fsoObject.OpenTextFile(sPath & sFilenameCleaned, ForWriting)
    Else
        Set fileHandleCleaned = fsoObject.CreateTextFile((sPath & sFilenameCleaned), True)
    End If

    ' Copy each line across, dropping the empty 4th field where present
    Do While Not fileHandleInput.AtEndOfStream
        str = fileHandleInput.ReadLine
        vFields = Split(str, ",")
        If vFields(3) = "" Then
            sNewString = vFields(0)
            For iCounter = 1 To UBound(vFields)
                If iCounter <> 3 Then sNewString = sNewString & "," & vFields(iCounter)
            Next iCounter
            str = sNewString
        End If
        fileHandleCleaned.WriteLine str
    Loop
    fileHandleInput.Close
    fileHandleCleaned.Close

    ' Copy the cleaned data back over the original file
    Set fileHandleInput = fsoObject.OpenTextFile(sPath & sFilenameInput, ForWriting)
    Set fileHandleCleaned = fsoObject.OpenTextFile(sPath & sFilenameCleaned)
    Do While Not fileHandleCleaned.AtEndOfStream
        fileHandleInput.WriteLine fileHandleCleaned.ReadLine
    Loop
    fileHandleInput.Close
    fileHandleCleaned.Close
    Set fileHandleCleaned = Nothing
    Set fileHandleInput = Nothing

    ' Delete the temporary file
    fsoObject.DeleteFile sPath & sFilenameCleaned
    Set fsoObject = Nothing
End Sub
If that's the only problem (and if you never have a comma in the field bt_cty_ccy_id), then you could remove such an extra comma by loading your file into an editor that supports regexes and having it replace
^([^,]*,[^,]*,[^,]*,),(?="[A-Z]{2}")
with \1.
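If you'd rather do that replacement from a script than in an editor, VBScript's RegExp object supports the same look-ahead. A rough sketch, where sourceLine is a placeholder holding one CSV record (the surrounding file handling is up to you):

Dim re As Object, fixedLine As String
Set re = CreateObject("VBScript.RegExp")
' Capture the first three fields, then require a quoted 2-letter code to follow
re.Pattern = "^([^,]*,[^,]*,[^,]*,),(?=""[A-Z]{2}"")"
fixedLine = re.Replace(sourceLine, "$1")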
I would question the source system that is sending you this file as to why there is an extra comma in between for some rows. I guess you are using the comma as the delimiter for importing this .csv file into Talend.
(Another suggestion would be to ask for a semicolon as the column separator in the input file.) For example:
9,"ALL",,,"AQ","ANTARCTICA",,,,
will be
9;"ALL";,;"AQ";"ANTARCTICA";;;;

VB.net will not read text file correctly

I've been trying to use StreamReader to read a log file. I cannot verify what it is encoded in: when I open it in Notepad++ and select ANSI encoding, I get the characters I need, but they are followed by things like [NULL][EOT][SOH][NUL][SI].
When I try to read the file in VB (using StreamReader or ReadAll) with ANSI encoding selected, the resulting string I get back is completely wrong.
How could I read a file like this in VB.net?
You could use the IO.File.ReadAllText("File Location", encoding As System.Text.Encoding) method:
Dim textFromFile As String = IO.File.ReadAllText("C:\Users\Jason\Desktop\login20130417.rdb", System.Text.Encoding.ASCII) 'Or Unicode, UTF32, UTF8, UTF7, BigEndianUnicode or Default. Default is ANSI.
If you still don't get the text you need using the default encoding (ANSI), then you can always try the other six encodings.
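For instance, here is a quick sketch for comparing a few candidate encodings side by side (same path as above; the 50-character preview length is arbitrary):

Dim path As String = "C:\Users\Jason\Desktop\login20130417.rdb"
For Each enc As System.Text.Encoding In New System.Text.Encoding() _
        {System.Text.Encoding.Default, System.Text.Encoding.Unicode, System.Text.Encoding.UTF8}
    Dim text As String = IO.File.ReadAllText(path, enc)
    ' Print the encoding name and a short preview of the decoded text
    Console.WriteLine(enc.EncodingName & ": " & text.Substring(0, Math.Min(50, text.Length)))
Next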
Update...
It appears that your file is corrupt. Using the code below, I was able to get a binary representation of whatever is in the file. I got this:
111111111111110100000111000001000000000000000101000000000001001100000000000010000000000000011110000000000010011000000000001110000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011111111111111010000011100000100000000000000010100000000000100110000000000001000000000000001111000000000001010000000000011111111111111010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011001110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
The massive amount of null data would suggest that the file is corrupt, which would also explain why we are not getting a lot of data whenever we try to read the file.
The code:
Dim fileData As String = IO.File.ReadAllText("C:\Users\Jason\Desktop\login20130417.rdb")
Dim i As Integer = 0
Dim binaryData As String = ""
Dim ch As String = ""
Do Until i = fileData.Length
    ch = fileData.Chars(i)
    ' Append each character's code point as an 8-bit binary string
    binaryData = binaryData & System.Convert.ToString(AscW(ch), 2).PadLeft(8, "0"c)
    i = i + 1
Loop
As @Daniel A. White suggested in his comment, that file does not appear to be encoded like a "normal" text file. A StreamReader will not work in this situation. I would attempt to use a BinaryReader.
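For example, a minimal sketch of pulling the raw bytes out with a BinaryReader so you can inspect what is really in the file (same path as above):

Using reader As New IO.BinaryReader(IO.File.OpenRead("C:\Users\Jason\Desktop\login20130417.rdb"))
    Dim bytes() As Byte = reader.ReadBytes(CInt(reader.BaseStream.Length))
    ' Dump the bytes as hex pairs; any pattern here hints at the real format
    Console.WriteLine(BitConverter.ToString(bytes))
End Using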
Rdb file? Never heard of it. A quick Google makes it less clear: N64 database file, Darkbot, etc.
However, considering the name you have and the general look of the opened file, I would say it's a binary file.
If you want to read the file in VB.net you'll need a library of sorts, and I can't help you with one until you are able to shed some light on what the file may be, or what it was created with.

How to : streamreader in csv file splits to next if lowercase followed by uppercase in line

I am using an ASP.NET MVC application to upload Excel data, in its CSV form, to a database. While reading the CSV file using a StreamReader, if a line contains a lowercase letter followed by an uppercase one, it splits into multiple lines. For example:
Line: "1,This is nothing but the Example to explanationIt results wrong, testing example"
This line splits to:
Line 1: 1,This is nothing but the Example to explanation"
Line 2: ""
Line 3: It results wrong, testing example
whereas the CSV file itself reads correctly as "1,This is nothing but the Example to explanationIt results wrong, testing example"
code :
Dim csvFileReader As New StreamReader("my csv file Path")
While Not csvFileReader.EndOfStream()
    Dim _line = csvFileReader.ReadLine()
End While
Why is this happening, and how do I resolve it?
When a cell in an Excel spreadsheet contains multiple lines and is saved to a CSV file, Excel separates the lines in the cell with a line-feed character (ASCII value 0x0A). Each row in the spreadsheet is separated with the typical carriage-return/line-feed pair (0x0D 0x0A). When you open the CSV file in Notepad, it does not show the lone LF character at all, so it looks like it all runs together on one line. So, in the CSV file, even though Notepad doesn't show it, the line actually looks like this:
' 1,"This is nothing but the Example to explanation{LF}It results wrong",testing example{CR}{LF}
According to the MSDN documentation on the StreamReader.ReadLine method:
A line is defined as a sequence of characters followed by a line feed ("\n"), a carriage return ("\r"), or a carriage return immediately followed by a line feed ("\r\n").
Therefore, when you call ReadLine, it will stop reading at the end of the first line in a multi-line cell. To avoid this, you would need to use a different "read" method and then split on CR/LF pairs rather than on either individually.
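For instance, a rough sketch that reads the whole file and splits only on CR/LF pairs, so the lone LFs inside multi-line cells survive (this still doesn't handle quoted commas, which is why the TextFieldParser below is the better option):

Dim allText As String = IO.File.ReadAllText("my csv file Path")
' Split on the CR/LF pairs that end spreadsheet rows; embedded LFs stay inside their cells
Dim rows() As String = allText.Split(New String() {vbCrLf}, StringSplitOptions.None)
For Each row As String In rows
    ' row may still contain LF characters from multi-line cells
Next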
However, this isn't the only issue you will run into with reading CSV files. For instance, you also need to properly handle the way quotation characters in a cell are escaped in CSV. In such cases, unless it's really necessary to implement it in your own way, it's better to use an existing library to read the file. In this case, Microsoft provides a class in the .NET framework that properly handles reading CSV files (including ones with multi-line cells). The name of the class is TextFieldParser and it's in the Microsoft.VisualBasic.FileIO namespace. Here's the link to a page in the MSDN that explains how to use it to read a CSV file:
http://msdn.microsoft.com/en-us/library/cakac7e6
Here's an example:
Using reader As New TextFieldParser("my csv file Path")
    reader.TextFieldType = FieldType.Delimited
    reader.SetDelimiters(",")
    While Not reader.EndOfData
        Try
            Dim fields() As String = reader.ReadFields()
            ' Process fields in this row ...
        Catch ex As MalformedLineException
            ' Handle exception ...
        End Try
    End While
End Using
End Using