Read from end of file in Fortran 90

My question is how to read a text file beginning from its end.
For example,
do
   read(1,*) i
   print *,i
end do
will read each line from the file connected to unit 1 and print its contents to the terminal. How would I do this starting from the end of the file?

You can achieve what you want by using inquire, access=stream, and the pos tag in the read statement. A quick 10 minute exercise gives the following.
program foo
   implicit none
   integer fd, i, j, m, n
   character, allocatable :: s(:)
   character :: c
   ! Open as a stream and position at the end, so inquire(pos=) returns
   ! one past the last byte of the file.
   open(newunit=fd,file='foo.f90',access='stream',status='old', &
        position='append')
   inquire(fd, pos=n)
   allocate(s(n))
   m = 1
   ! Walk backwards one byte at a time, buffering characters until a
   ! newline (ASCII 10) is hit, then emit the buffered line.
   do i = n-1, 1, -1
      read(fd, pos=i) c
      s(m:m) = c
      if (iachar(c) == 10) then
         do j = m, 1, -1
            write(*,'(A1)',advance='no') s(j)
         end do
         m = 1
      else
         m = m + 1
      end if
   end do
   write(*,*)
   ! Emit whatever remains of the first line (no newline precedes it).
   do j = m-1, 1, -1
      write(*,'(A1)',advance='no') s(j)
   end do
   write(*,*)
   close(fd)
end program foo
Save this into a file named foo.f90, compile, and run. Someone here can make it more elegant. Edit to the original version: array sections aren't needed for s(j).

It is not possible. Reading always starts from a certain point, and proceeds forward. By default it starts from the first byte; but you can jump to some other location.
The critical issue is that in a free-format file there is no way to know where the lines start unless you read the file. You could jump to the last byte, check whether it is a line separator or not, then jump to the byte before it if it wasn't, crawling back until you find a newline, but that would be very inefficient.
A better way would be to read the whole file once, remembering where the newlines were; then iterate this array backwards, jumping to each line's start position to read it. This is not as inefficient, but it is not great, either.
The best way, if you have the memory for it, is to read in the whole file into an array of lines, then reverse the array in-memory.
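For that last approach, a minimal sketch (assuming a hypothetical file foo.txt and a fixed maximum line length of 1024 characters) could look like this:
program reverse_lines
   implicit none
   integer, parameter :: maxlen = 1024   ! assumed maximum line length
   character(len=maxlen), allocatable :: lines(:)
   character(len=maxlen) :: buf
   integer :: u, n, i, ios

   open(newunit=u, file='foo.txt', status='old', action='read')

   ! First pass: count the lines.
   n = 0
   do
      read(u, '(A)', iostat=ios) buf
      if (ios /= 0) exit
      n = n + 1
   end do
   allocate(lines(n))

   ! Second pass: store every line.
   rewind(u)
   do i = 1, n
      read(u, '(A)') lines(i)
   end do
   close(u)

   ! Walk the array backwards, i.e. print the file from the end.
   do i = n, 1, -1
      print '(A)', trim(lines(i))
   end do
end program reverse_lines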


Split a cell in UFT

I have a text document (.txt) with n lines that have to be split up, but the point is I don't have a delimiter. I know the length of each variable, and it doesn't change.
For example, the first variable runs from character 25 to 35; the second one from 36 to 47; then from 48 to 78, then from 79 to 119, and so on until the 360th character of the line.
I guess the solution is a double loop, one over the lines and one over the variables, but I cannot get it working.
If you need more information just ask, I am completely lost.
Thanks,
Steps you need to take:
Open the file
Read a line
Confirm the line is 360 characters
Assign chunks of the line to different variables
Do things with the variables
Read another line and repeat until EOF
1 & 2:
Your workbook needs a reference to the Microsoft Scripting Runtime in order to give you access to the FileSystemObject. I'll let you research that.
Create a FileSystemObject and use that to create a TextStream with the path to your file.
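For example, a minimal sketch of steps 1 and 2 might look like this (the path C:\data\input.txt is just a placeholder):
Dim fso As New Scripting.FileSystemObject
Dim textStream As Scripting.TextStream
' ForReading comes from the Scripting Runtime reference.
Set textStream = fso.OpenTextFile("C:\data\input.txt", ForReading)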
Do Until textStream.AtEndOfStream
    currentLine = textStream.ReadLine()
    If Len(currentLine) = 360 Then
        firstChunk = Mid$(currentLine, 25, 11)    ' characters 25-35
        secondChunk = Mid$(currentLine, 36, 12)   ' characters 36-47
        thirdChunk = Mid$(currentLine, 48, 31)    ' characters 48-78
        fourthChunk = Mid$(currentLine, 79, 41)   ' characters 79-119
        ' Do stuff with chunks
    End If
Loop
In due course you could get fancy and have an array populated with paired items detailing the starting point of a chunk and how many chars it is, something like:
Dim arrChunkPoints As Variant
Dim arrChunks As Variant
arrChunkPoints = Array(25, 11, _
                       36, 12, _
                       48, 31, _
                       79, 41)
ReDim arrChunks(UBound(arrChunkPoints) \ 2) ' integer division gives 4 slots (0 to 3)
This would allow you to step over the items in arrChunkPoints and populate each element of arrChunks with a section of currentLine using Mid$(), but populated with the values from arrChunkPoints. But this is probably for another day.
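For completeness, a rough sketch of that loop (the variable k is just illustrative) might be:
Dim k As Long
For k = 0 To UBound(arrChunkPoints) - 1 Step 2
    ' Each pair is (start position, length); fill the matching chunk slot.
    arrChunks(k \ 2) = Mid$(currentLine, arrChunkPoints(k), arrChunkPoints(k + 1))
Next k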

how to search and display specific line from a text file vb.net

Hi, I am trying to search for a line which contains what the user inputs in a text box and display the whole line. My code below doesn't display a message box after the button has been clicked, and I am not sure whether the record has been found.
Dim filename, sr As String
filename = My.Application.Info.DirectoryPath + "\" + "mul.txt"
Dim file As String()
Dim i As Integer = 0
file = IO.File.ReadAllLines(filename)
Dim found As Boolean
Dim linecontain As Char
sr = txtsr.ToString
For Each line As String In file
    If line.Contains(sr) Then
        found = True
        Exit For
    End If
    i += 1
    If found = True Then
        MsgBox(line(i))
    End If
Next
End Sub
You should be calling ReadLines here rather than ReadAllLines. The difference is that ReadAllLines reads the entire file contents into an array first, before you can start processing any of it, while ReadLines doesn't read a line until you have processed the previous one. ReadAllLines is good if you want random access to the whole file or you want to process the data multiple times. ReadLines is good if you want to stop processing data when a line satisfies some criterion. If you're looking for a line that contains some text and you have a file with one million lines where the first line matches, ReadAllLines would read all one million lines whereas ReadLines would only read the first.
So, here's how you display the first line that contains specific text:
For Each line In File.ReadLines(filePath)
    If line.Contains(substring) Then
        MessageBox.Show(line)
        Exit For
    End If
Next
With regards to your original code, your use of i makes no sense. You seem to be using i as a line counter but there's no point because you're using a For Each loop so line contains the line. If you already have the line, why would you need to get the line by index? Also, when you try to display the message, you are using i to index line, which means that you're going to get a single character from the line rather than a single line from the array. If the index of the line is greater than the number of characters in the line then that is going to throw an IndexOutOfRangeException, which I'm guessing is what's happening to you.
This is what comes from writing code without first knowing what it actually has to do. If you had written out an algorithm before writing the code, it would have been obvious that the code didn't implement the algorithm. If you have no algorithm, though, you have nothing to compare your code against to make sure that it makes sense.

Extract newest log lines from a log file based on timestamp on line start

I have a simple .txt log file to which an application adds lines as it does its work. The lines consist of a timestamp and a variable-length text:
17-06-25 06:37:43 xxxxxxxxxxxxxxx
17-06-25 06:37:46 yyyyyyy
17-06-25 06:37:50 zzzzzzzzzzzzzzzzzzzzzzzzzzzz
...
I need to extract all lines with a timestamp greater than a certain date-time. This typically is about the last, say, 20-40 log entries (lines).
The problem is, that the file is large and growing.
If all lines were of equal length, I'd use a binary search. But they aren't, and so I end up using something like:
Private Sub ExtractNewestLogs(dEarliest As Date)
    Dim sLine As String = ""
    Dim oSRLog As New StreamReader(gsFilLog)
    sLine = oSRLog.ReadLine()
    Do While Not (sLine Is Nothing)
        Debug.Print(sLine)
        sLine = oSRLog.ReadLine()
    Loop
End Sub
which, well, isn't really fast.
Is there a method with which I can read such files "backwards", i.e., last line first? If not, what other option do I have?
The function below returns the last x characters of a file as an array of strings, using a binary reader. You can then pull the last records you want much more quickly than by reading the entire log file. You can fine-tune the number of bytes to read according to a rough approximation of how many bytes the last 20-40 log entries take. On my PC it took <10 ms to read the last 10,000 characters of a 17 MB text file.
Of course, this code assumes that your log file is plain ASCII text.
' Requires Imports System.IO and System.Text.
Private Function ReadLastbytes(filePath As String, x As Long) As String()
    Dim oFileStream As New FileStream(filePath, FileMode.Open, FileAccess.Read)
    Dim oBinaryReader As New BinaryReader(oFileStream)
    Dim lStart As Long   ' Offset at which to start reading.
    Dim lCount As Long   ' Number of bytes to read.
    If oFileStream.Length > x Then
        lStart = oFileStream.Length - x
        lCount = x
    Else
        lStart = 0
        lCount = oFileStream.Length
    End If
    oBinaryReader.BaseStream.Seek(lStart, SeekOrigin.Begin)
    Dim fileData As Byte() = oBinaryReader.ReadBytes(CInt(lCount))
    oBinaryReader.Close()
    oFileStream.Close()
    ' Convert the raw bytes to text; the log is assumed to be plain ASCII.
    Dim tempString As New StringBuilder
    For i As Integer = 0 To fileData.Length - 1
        tempString.Append(Chr(fileData(i)))
    Next
    Return tempString.ToString.Split({vbCrLf}, StringSplitOptions.None)
End Function
I attempted a binary search anyway, even though the file does not have fixed line lengths.
First some considerations, then the code:
Sometimes it is necessary to extract the last n lines of a log file, based on an ascending sort key at the beginning of each line. The key could really be anything, but in log files it typically represents a date-time, usually in the format YYMMDDHHNNSS (possibly with some punctuation).
Log files typically are text-based files consisting of multiple lines, at times millions of them. Often log files feature fixed-length lines, in which case a specific key is quite easy to access with a binary search. However, probably just as often, log files have a variable line width. To access these, one can use an estimate of the average line width to calculate a file position from the end, and then process from there sequentially to the EOF.
But one can employ a binary approach for this type of file as well, as demonstrated here. The advantage grows with the file size. A log file's maximum size is determined by the file system: NTFS theoretically allows for 16 EiB (16 x 2^60 B); in practice, under Windows 8 or Server 2012, it's 256 TiB (256 x 2^40 B).
(What 256 TiB actually means: a typical log file is designed to be readable by a human and rarely exceeds many more than 80 characters per line. Let's assume your log file logs along happily and completely uninterrupted for an astonishing 12 years, for a total of 4,383 days at 86,400 seconds each; then your application is allowed to write 9 entries per millisecond into said log file to eventually hit the 256 TiB limit in its 13th year.)
The great advantage of the binary approach is that n comparisons suffice for a log file of 2^n bytes, so the gain grows rapidly with the file size: whereas 10 comparisons are required for a file of 1 KiB (1 per 102.4 B), only 20 comparisons are needed for 1 MiB (1 per ~51 KiB), 30 for 1 GiB (1 per ~34 MiB), and a mere 40 comparisons for a file of 1 TiB (1 per ~26 GiB).
To the function. These assumptions are made: the log file is encoded in UTF-8, the log lines are separated by a CR/LF sequence, and the timestamp is located at the beginning of each line in ascending order, probably in the format [YY]YYMMDDHHNNSS, possibly with some punctuation in between. (All of these assumptions could easily be modified and handled by overloaded function calls.)
In an outer loop, binary narrowing is done by comparing against the provided earliest date-time to match. As soon as a new position within the stream has been found binarily, an independent forward search is made in an inner loop to locate the next CR/LF sequence. The byte after this sequence marks the start of the record key being compared. If this key is greater than or equal to the one we are searching for, it is ignored. Only if the found key is smaller than the one we are searching for is its position treated as a possible candidate for the record just before the one we want. We end up with the last record of the largest key that is smaller than the searched key.
In the end, all log records except the ultimate candidate are returned to the caller as a string array.
The function requires the import of System.IO.
Imports System.IO
'This function expects a log file organized in lines of varying lengths,
'delimited by CR/LF. At the start of each line is a sort criterion of any
'kind (in log files typically YYMMDD HHMMSS), by which the lines are sorted
'in ascending order (newest log line at the end of the file). The earliest
'match allowed to be returned must be provided; from this the sort key's
'length is inferred. The key need not necessarily exist in the file. If it
'does, it can occur multiple times, as can all other sort keys. The returned
'string array contains all lines that are larger than the last line found to
'be smaller than the provided sort key.
Public Shared Function ExtractLogLines(sLogFile As String,
                                       sEarliest As String) As String()
    Dim oFS As New FileStream(sLogFile, FileMode.Open, FileAccess.Read,
                              FileShare.Read)   'The log file as file stream.
    Dim lMin, lPos, lMax As Long                'Examined stream window.
    Dim i As Long                               'Iterator to find CR/LF.
    Dim abEOL(0 To 1) As Byte                   'Bytes to find CR/LF.
    Dim abCRLF() As Byte = {13, 10}             'Search for CR/LF.
    Dim bFound As Boolean                       'CR/LF found.
    Dim iKeyLen As Integer = sEarliest.Length   'Length of sort key.
    Dim sActKey As String                       'Key of examined log record.
    Dim abKey() As Byte                         'Reading the current key.
    Dim lCandidate As Long                      'File position of promising candidate.
    Dim sRecords As String                      'All wanted records.

    'The byte array accepting the records' keys is as long as the provided
    'key.
    ReDim abKey(0 To iKeyLen - 1)               '0-based!

    'We search for the last log line whose sort key is smaller than the
    'sort key provided in sEarliest.
    lMin = 0                                    'Start at stream start.
    lMax = oFS.Length - 1 - 2                   '0-based, and without terminal CRLF.
    Do
        lPos = (lMax - lMin) \ 2 + lMin         'Position to examine now.
        'Although the key to be compared with sEarliest is located after
        'lPos, it is important that lPos itself is not modified when
        'searching for the key.
        i = lPos                                'Iterator for the CR/LF search.
        bFound = False
        Do While i < lMax
            oFS.Seek(i, SeekOrigin.Begin)
            oFS.Read(abEOL, 0, 2)
            If abEOL.SequenceEqual(abCRLF) Then 'CR/LF found.
                bFound = True
                Exit Do
            End If
            i += 1
        Loop
        If Not bFound Then
            'Between lPos and lMax no more CR/LF could be found. This means
            'that the search is over.
            Exit Do
        End If
        i += 2                                  'Skip CR/LF.
        oFS.Seek(i, SeekOrigin.Begin)           'Read the key after the CR/LF
        oFS.Read(abKey, 0, iKeyLen)             'into a string.
        sActKey = System.Text.Encoding.UTF8.GetString(abKey)
        'Compare the actual key with the earliest key. We want to find the
        'largest key just before the earliest key.
        If sActKey >= sEarliest Then
            'Not interested in this one, look for an earlier key.
            lMax = lPos
        Else
            'Possibly interesting, remember this.
            lCandidate = i
            lMin = lPos
        End If
    Loop While lMin < lMax - 1

    'lCandidate is the position of the first record to be taken into account.
    'Note that we need the final CR/LF here, so that the search for the
    'next CR/LF sequence below will match a valid first entry even in case
    'there are no entries to be returned (sEarliest being larger than the
    'last log line).
    ReDim abKey(CInt(oFS.Length - lCandidate - 1))  '0-based.
    oFS.Seek(lCandidate, SeekOrigin.Begin)
    oFS.Read(abKey, 0, CInt(oFS.Length - lCandidate))
    'We're done with the stream.
    oFS.Close()

    'Convert into a string, but omit the first line, then return as a
    'string array split at CR/LF, without the empty last entry.
    sRecords = (System.Text.Encoding.UTF8.GetString(abKey))
    sRecords = sRecords.Substring(sRecords.IndexOf(Chr(10)) + 1)
    Return sRecords.Split(ControlChars.CrLf.ToCharArray(),
                          StringSplitOptions.RemoveEmptyEntries)
End Function
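A hypothetical call (assuming the function lives in a class named LogTools, a log at C:\logs\app.log, and the timestamp format shown in the question) might look like:
Dim newLines As String() =
    LogTools.ExtractLogLines("C:\logs\app.log", "17-06-25 06:30:00")
For Each sLine As String In newLines
    Debug.Print(sLine)
Next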

How do I do user inputs on the same line in Lua?

Here is my code:
while true do
    opr = io.read() txt = io.read()
    if opr == "print" then
        print(txt)
    else
        print("wat")
    end
end
What I'm trying to do is make it where you type print and then whatever you want like this:
print text
And it'll print text but I can't seem to do it on the same line without having to press enter after typing print. I always end up having to write it like:
print
text
If anyone knows how I can fix this please answer.
When called without arguments, io.read() reads a whole line. You could read the line and get the words using pattern matching:
input = io.read()
opr, txt = input:match("(%S+)%s+(%S+)")
The above code assumes that there is exactly one word for opr and one word for txt. If there might be zero or more words after opr, try this:
while true do
    local input = io.read()
    local i, j = input:find("%S+")
    local opr = input:sub(i, j)
    local others = input:sub(j + 1)
    local t = {}
    for s in others:gmatch("%S+") do
        table.insert(t, s)
    end
    if opr == "print" then
        print(table.unpack(t))
    else
        print("wat")
    end
end
Well, that is because io.read() actually reads an entire line.
What you have to do is read a line:
command = io.read()
and then analyze the string.
For what you want to do, the best is probably to iterate the string looking for spaces to separate each word and save it into a table. Then you can pretty much do with it whatever you want.
You can also interpret the command on the fly while iterating:
Read in the first word;
if it is "print" then read in the rest of the line and print it;
if it is "foo", read in the next 3 words as parameters and call bar();
etc.
For now I am leaving the implementation to you. If you need help with that, leave a comment.

Interface between csh and fortran code

I have a script (csh) which calls a fortran executable.
Each time the script calls the fortran code a counter should be incremented and using that counter I have to create a new output file.
Can I pass a variable to my Fortran code, or is there a simple way of doing the same?
I have tried this code:
program callsave
c
implicit none
integer i,j
c
do j = 1, 10
call trysave(i)
print *, i
end do
stop
end
c
subroutine trysave(i)
integer k
data k /1/
save k
i = k
k = k + 1
end subroutine
c
This works fine on its own. But when I call this subroutine separately in my Fortran code through the script, the counter is not getting incremented. It just keeps the initial value '1', and the output files are being overwritten.
Any type of help/suggestion would greatly be appreciated.
Thanks
Praveen.
Is it that you're looking for a way of passing an integer value from the shell script to the Fortran code?
In that case, one way would be to use command line arguments, see e.g. here: http://gcc.gnu.org/onlinedocs/gfortran/GETARG.html
I'm not sure what the official status of the getarg() routine is, but in my experience it works fine with the gfortran, Intel, and PGI compilers.
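A minimal sketch of the Fortran side (using the standard get_command_argument; the getarg() extension from the link above works similarly, and the output file name pattern is just an example) could be:
program read_counter
   implicit none
   character(len=32) :: arg
   character(len=64) :: outfile
   integer :: counter

   ! The csh script passes the counter on the command line, e.g. ./read_counter $count
   call get_command_argument(1, arg)
   read(arg, *) counter

   ! Build a counter-dependent output file name such as output_003.dat.
   write(outfile, '(A,I3.3,A)') 'output_', counter, '.dat'
   open(unit=10, file=trim(outfile), status='replace')
   write(10, *) 'run number ', counter
   close(10)
end program read_counter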