ftell/fseek fail when near end of file

I am reading a text file (which happens to be a PDS member, FB 80):
hFile = fopen(filename, "r");
I have reached the point in the file where there is only an empty line left:
FilePos = ftell(hFile);
Then I read the last line, which contains only a '\n' character. After that,
fseek(hFile, FilePos, SEEK_SET);
fails with:
errno=(27) EDC5027I The position specified to fseek() was invalid.
The position specified to fseek() is the value returned by ftell() a few lines earlier; in the specific error case I have seen, it is 841. Checking through the debugger confirms that this is indeed the value ftell() returned and that it has not been corrupted.
The same code works at other positions in the file; it fails only when the position is remembered at the point where a single empty line is left to read.
My understanding of how ftell/fseek should work is succinctly captured by another answer on SO:
The value returned from ftell on a text stream has no predictable relationship to the number of characters you have read so far. The only thing you can rely on is that you can use it subsequently as the offset argument to fseek or fseeko to move back to the same file position.
It would seem that I cannot rely on the one thing I should be able to rely on.
My question is: why does fseek fail in this way?
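Putting the fragments together, a minimal self-contained sketch of the failing pattern looks like this (the file name and buffer size are illustrative):
#include <stdio.h>

int main(void)
{
    char line[128];
    FILE *hFile = fopen("member.txt", "r");     /* text mode, as in the question */
    if (hFile == NULL)
        return 1;

    /* ... reads occur until only the final empty line is left ... */

    long FilePos = ftell(hFile);                /* returns 841 in the failing case */
    fgets(line, sizeof line, hFile);            /* the last line: just '\n' */

    if (fseek(hFile, FilePos, SEEK_SET) != 0)   /* fails with EDC5027I */
        perror("fseek");

    fclose(hFile);
    return 0;
}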

As z/OS has some file formats that are unique, you might find the answer in this Knowledge Center article.
Given that you are processing a PDS member, I suspect this is record-level I/O, which is handled differently from the stream I/O that is more common in distributed implementations.

I do not know why fseek fails in this way, but if your common usage pattern is to use ftell to get the position and then fseek to go to that position, I strongly suggest using fgetpos and fsetpos instead for data set I/O. Not only will you avoid the problem you are seeing, but the fgetpos/fsetpos pair also performs better for certain data set characteristics.
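As an illustration of that pattern, here is a minimal sketch in standard C (the function and its names are my own, not from the original answer):
#include <stdio.h>

/* Remember the current position, read one line, then return to the
   remembered position. fpos_t is an opaque type, which leaves the
   library free to store any record-oriented state it needs. */
int peek_line(FILE *fp, char *buf, size_t len)
{
    fpos_t pos;

    if (fgetpos(fp, &pos) != 0)           /* remember where we are */
        return -1;
    if (fgets(buf, (int)len, fp) == NULL) /* read the next line */
        return -1;
    if (fsetpos(fp, &pos) != 0)           /* go back to the remembered position */
        return -1;
    return 0;
}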

Related

Cannot get the same solution from SNOPT when solving a nonlinear program in Drake with the same initial guess

I am trying to solve a nonlinear program with direct collocation in Drake, and I want to reproduce the solution after SNOPT has solved the program successfully. First I saved the initial guess of each decision variable in a .txt file; then I read the initial guess back in, set the decision variables with SetInitialGuess(), and changed nothing else, but I did not get the same solution. Why?
Moreover, when I run it more times, the later solutions are the same, e.g.
solution1 != solution2; solution2 == solution3; solution3 == solution4; ...
I have checked each initial guess to make sure they are the same. Are there options in SNOPT, or initial settings in the nonlinear program, that should be set besides the initial guess of the decision variables to get the same solution?
Without knowing more about your program, one possible reason is that when you save the initial guess to a.txt, the floating-point numbers get truncated when printed to the text file. So in the second run the initial guess is not exactly the same as in the first run, and this tiny difference in the initial guess causes SNOPT to find a different solution. For solutions 2, 3, and 4, do they all load the initial guess from a.txt?
To print the floating-point numbers to the text file without losing precision, you can use std::setprecision.
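For example, a minimal sketch in plain C++ (the file name and values are illustrative, and this is ordinary iostream code rather than Drake API):
#include <fstream>
#include <iomanip>
#include <limits>
#include <vector>

int main() {
    std::vector<double> initial_guess = {0.1, 0.2, 0.3};  // illustrative values

    std::ofstream out("a.txt");
    // max_digits10 (17 for IEEE double) guarantees that writing a double
    // as decimal text and reading it back reproduces the same bits.
    out << std::setprecision(std::numeric_limits<double>::max_digits10);
    for (double v : initial_guess)
        out << v << '\n';
}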

Fortran read statement reading sequence

Suppose I have a file in which each line contains an array index followed by the array value:
i array(i)
Can I read in the data with just a naive read(unit=10, *) i, array(i)? Will Fortran always read i first and then use that value of i to assign array(i)? Will certain read specifications or compiler flags influence this behavior?
The data transfer statement
read(unit=10,*) i, array(i)
is a legitimate one, and its behaviour is as desired: the value for i is read from the record first, and that value is then used to identify the element array(i) into which the second value is read.
This is a requirement of the Fortran specification (Fortran 2018, 12.6.4.5.1):
All values needed to determine which entities are specified by an input/output list item are determined at the beginning of the processing of that item.
Of course, although this data transfer statement works, that does not mean it is desirable in anything but the simplest cases where you trust the input data. In particular, it is not possible to do any bounds checking during this read statement: if the value read for i corresponds to an invalid array element specification, the program is broken. You may want to read into an intermediate variable for the array element, merely to handle potential problems with the input file, as in the sketch below.
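A minimal sketch of that defensive pattern (the unit number, file name, and bounds are illustrative):
program safe_read
  implicit none
  integer, parameter :: n = 100
  real :: array(n)
  integer :: i, istat
  real :: tmp

  open(unit=10, file='data.txt', status='old')
  do
    ! Read the index and the value into scalars first.
    read(unit=10, fmt=*, iostat=istat) i, tmp
    if (istat /= 0) exit              ! end of file or malformed record
    ! Check the index before touching the array.
    if (i < 1 .or. i > n) then
      print *, 'invalid index in input:', i
      cycle
    end if
    array(i) = tmp
  end do
  close(10)
end program safe_read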

Preventing VB.NET's multiline/verbatim string from destroying your entire code file?

Let's say you have a module that's several hundred lines long. At the very top of your code file, you start a string, so you type a quote. Total wreckage ensues while the string remains unterminated: everything within your entire code file becomes subject to erratic encapsulation by your string (see image for an actual example of all the errors generated). No big deal, right? You just finish your string and all the errors will go away. While that's true, you may find the IDE has had its way with other strings in your document. For example, these lines...
oLog.writeLogFile("Starting System Update and Version Update ")
oLog.writeLogFile("Starting Script for Fetching Data from Source to Dest")
...get changed to this:
oLog.writeLogFile("Starting System Update And Version Update ")
oLog.writeLogFile("Starting Script For Fetching Data from Source To Dest")
Notice how "and" changes to "And", "for" to "For", and "to" to "To". What's happening here is that, as other strings in the document become... eh... "destrung"... some of the words that were once part of a string are now interpreted as keywords by the IDE. Because it's VB, it modifies capitalization automatically. When you finally terminate your string, all the other strings further down in the document become properly terminated as well, but the jarring effects remain.
Is there a way to prevent this from occurring?
Why not type both double quotes first, and then go back between them to start typing your string? I do this all the time to prevent exactly this problem. I find that the short delay between typing the first " and the moment the IDE starts capitalizing keywords is long enough for me to (remember to) type the second ".

SQL*Loader error 00626

How do I avoid the "characterset conversion buffer overflow" error (SQL*Loader-00626) in SQL*Loader?
I am not able to find anything about this on the internet; please suggest a solution.
What is the character set of the input datafile? You might try specifying the character set in the control file:
CHARACTERSET char_set_name LENGTH SEMANTICS CHARACTER
By default, if not specified, Oracle uses byte-length semantics. Thus, if you define a field length in your control file as VARCHAR(20), under byte semantics you get a 20-byte buffer, but under character-length semantics you might need a 40-byte buffer (for example, with a two-byte character set). This would be my guess as to the source of the error.
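As a hedged sketch, a control file using character-length semantics might begin like this (the table, field, file name, and character set are illustrative):
LOAD DATA
CHARACTERSET UTF8 LENGTH SEMANTICS CHAR
INFILE 'input.dat'
INTO TABLE target_table
FIELDS TERMINATED BY ','
(
  description CHAR(20)  -- 20 characters rather than 20 bytes under CHAR semantics
)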
It's not a lot of help, but here's what the Oracle error manual has to say about that error:
SQL*Loader-00626: Character set conversion buffer overflow.
Cause: A conversion from the datafile character set to the client character set required more space than that allocated for the conversion buffer. The size of the conversion buffer is limited by the maximum size of a VARCHAR2 column.
Action: The input record is rejected. The data will not fit into the column.
It sounds like there isn't any way to work around this within SQL*Loader. If it affects a small number of records, it may be easiest to simply handle those manually. If it affects many records, then you probably need to find or create a different loading tool.
Just a few ideas for you to think about:
You could try to load different parts of the string into different fields in the database; maybe that way you can work around the limitation.
You could try to do the character set conversion in a different tool (some text editors may give you options for this) and then load the file without it requiring the conversion.
Not sure if there's any merit in these ideas, but hopefully you can work something out.
Thanks for all your help. This problem has been resolved: we split the file, loaded it in chunks, and it worked fine.

Asc(Chr(254)) returns 116 in .NET 1.1 when the language is Hungarian

I set the culture to the Hungarian language, and Chr() seems to be broken.
System.Threading.Thread.CurrentThread.CurrentCulture = "hu-US"
System.Threading.Thread.CurrentThread.CurrentUICulture = "hu-US"
Chr(254)
This returns "ţ" when it should be "þ"
However, Asc("ţ") returns 116.
This: Asc(Chr(254)) returns 116.
Why would Asc() and Chr() be different?
I checked, and the 'wide' functions do work correctly: AscW(ChrW(254)) = 254.
Chr(254) interprets its argument in a system-dependent way, by looking at the System.Globalization.CultureInfo.CurrentCulture.TextInfo.ANSICodePage property. See the MSDN article about Chr. You can check whether that value is what you expect; "hu-US" (the Hungarian locale as used in the US) might do something strange there.
As a side note, the current documentation for Asc() makes no promise about the code page used (that note was there until 3.0).
Generally I would stick to the Unicode variants (ending in -W) if at all possible, or use the Encoding class to specify the conversions explicitly.
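A minimal VB.NET sketch of both suggestions (code page 1250, the Central European ANSI code page Windows uses for Hungarian, is named explicitly here rather than taken from the culture):
' The -W variants work on Unicode code points directly and ignore
' the thread's ANSI code page entirely.
Dim c As Char = ChrW(254)                ' "þ" (LATIN SMALL LETTER THORN)
Dim code As Integer = AscW(c)            ' 254, round-trips reliably

' Explicit conversion with the Encoding class, naming the code page
' instead of relying on CurrentCulture.
Dim enc As System.Text.Encoding = System.Text.Encoding.GetEncoding(1250)
Dim bytes As Byte() = New Byte() {254}
Dim s As String = enc.GetString(bytes)   ' yields "ţ" under Windows-1250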
My best guess is that your Windows tries to represent Chr(254) = "ţ" as a combined letter, where the first letter is Chr(116) = "t" and the second (a combining mark, "¸" or something like that) cannot be returned because Chr() only returns one letter.
Unicode text should not be handled character-by-character.
It sounds like you need to set the code page for the current thread -- the current culture shouldn't have any effect on Asc and Chr.
Both the Chr docs and the Asc docs have this line:
The returned character depends on the code page for the current thread, which is contained in the ANSICodePage property of the TextInfo class. TextInfo.ANSICodePage can be obtained by specifying System.Globalization.CultureInfo.CurrentCulture.TextInfo.ANSICodePage.
I have seen several problems in VBA on the Mac where characters over 127 and some control characters are not treated properly.
This includes paragraph marks (especially in text copied from the internet or scanned), "¥", and "Ω".
They cannot always be searched for, and they cannot be used in file names (though they could be in the past); when tested, they come up as another ASCII number. I have had to write algorithms to change these when files open, as they often look like the right character but then crash some of my macros by acting strangely. The character will look and act right when I save the file, but may be changed when the file is reopened.
I will eventually try to switch to Unicode, but I am not sure whether that will help with this issue.
This may not be the issue that you are observing, but I would not rule out isolated problems with certain characters like this. I have sent notes to MS about this in the past but have received no joy.
If you cannot find another solution and the character looks correct when you type it in, then I recommend using a macro snippet like the one below, which I run when updating tables. You will of course have to set up theRange as the area you are looking at; a whole file can take a while.
Dim aChar As Long
For aChar = 1 To theRange.Characters.Count
    ' Select each character in the range individually.
    theRange.Characters(aChar).Select
    ' A mangled character reports Asc = 95; replace it unless it really is "_".
    If Asc(Selection.Text) = 95 And Selection.Text <> "_" Then Selection.TypeText "Ω"
Next aChar