I came upon this line of code:
fprintf(stdout, "message", fflush(stdout));
Note that the message does not contain any %-tag.
Is that safe in Visual C++? fflush() returns 0 on success and EOF on failure. What will fprintf() do with this extra parameter?
I first thought that this was a strange hack to add a fflush() call without needing an extra line. But written like this, the fflush() call will be executed before the fprintf() call so it does not flush the message being printed right now but the ones waiting to be flushed, if any... am I right?
It's safe. Here's what C (C99 at least, paragraph 7.19.6.1) says about it:
If the format is exhausted while arguments remain, the excess arguments shall be evaluated but are otherwise ignored.
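To illustrate the rule, here's a minimal sketch (side_effect is a hypothetical function added just to make the evaluation visible):
#include <stdio.h>

int side_effect(void) {
    puts("the extra argument was evaluated");  /* visible side effect */
    return 42;
}

int main(void) {
    /* no %-conversion in the format, so the extra argument is
       evaluated but otherwise ignored, per the paragraph above */
    printf("message\n", side_effect());
    return 0;
}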
If the goal was to avoid an extra line, I'd rather write
fflush(stdout); fprintf(stdout, "message");
if for nothing else than to keep the person who later reads that code from hunting me down with a bat.
fprintf doesn't know the exact number of parameters; it only tries to load one argument per '%' conversion. If you provide fewer arguments than there are '%' conversions, the result is undefined behavior; if you provide more, the extra arguments are ignored.
As for the second question: yes, this would only flush the messages already queued; the new message won't be flushed.
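A minimal sketch of that ordering (assuming stdout is fully buffered, e.g. redirected to a file):
#include <stdio.h>

int main(void) {
    fprintf(stdout, "first");   /* sits in the stdio buffer */
    /* fflush(stdout) is evaluated before fprintf runs, so it flushes
       the pending "first" but not the "second" about to be printed */
    fprintf(stdout, "second", fflush(stdout));
    return 0;                   /* "second" is only flushed at exit */
}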
I think fprintf uses varargs to process its parameters, so any extra parameters should be safely ignored (not that it's good practice or anything). And you are right that fflush will be called before fprintf, so this is kind of a pointless hack.
With enough warning flags enabled (like -Wall for gcc) you will get a warning.
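For example, compiling the line above with gcc -Wall should produce a diagnostic along these lines (the file name and line number here are made up, and the exact wording varies by version):
$ gcc -Wall example.c
example.c:5: warning: too many arguments for format [-Wformat-extra-args]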
I am writing an interpreter for CIL code, and now it is time to interpret a leave instruction inside a filter block.
How should I handle it?
My first idea was that this is an invalid instruction inside a filter block, so such code can't be executed. But then I looked at the standard and found something strange.
The standard ECMA-335 says that leave can be used (!) to exit a filter block:
The leave instruction is similar to the br instruction, but the former can be used to exit a try, filter, or catch block whereas the ordinary branch instructions can only be used in such a block to transfer control within it.
At the same time:
Control cannot be transferred out of a filter block except through the use of a throw instruction or executing the final endfilter instruction. In particular, it is not valid to execute a ret or leave instruction within a filter block.
That seems to be a contradiction.
Today I learned that in Pharo the execution of:
[v := 1] ensure: [self halt. v := 2]
will end up setting v = 2, even when we abandon the process at the halt window(!).
I find this debatable. To me, the semantics of #ensure: mean that the sequence
self halt. v := 2
must be executed regardless of the circumstances in the receiver block, not regardless of the logic of the argument block. And since the logic of #halt includes the possibility of terminating the process, I find the obstinate evaluation of the second statement intrusive.
Next I tried the following:
[v := 1] ensure: [1 / 0. v := 2]
When the ZeroDivide exception popped up, I closed the debugger and still the value of v was 2 (same as with #halt).
Finally, I evaluated:
[v := 1] ensure: [n := 1 / 0. v := v + n]
and closed the debugger on the ZeroDivide exception. This time the value of v was 1, but I got no exception from the fact that v + n cannot be evaluated. In other words, the error passed silently.
So my question is: what's the rationale behind this behavior? Shouldn't the process just terminate at the point where it would terminate under "normal" circumstances, i.e., with no #ensure: involved?
Interesting one. It seems that your answer lies in the method BlockClosure>>valueNoContextSwitch, which is called by #ensure:. If you read the comment there, it says that it creates an exact copy of BlockClosure>>value (in a primitive), and the return value of that copy gets returned, not the return value of the original block containing your halt which you terminated. So the copy gets executed (apparently ignoring the copied halt), even if the original doesn't get to finish.
My guess is that this is intended to ensure (no pun intended) that the ensure: block always runs, but has the unintended side effect of ignoring the termination of the original block. I agree with you that this is not only counter-intuitive, but also probably not what was intended.
I guess this behavior is not fully defined by any (ANSI) standard, but correct me if I am wrong.
Other Smalltalks seem to behave differently. I tried it in Smalltalk/X, where the debugger offers 3 options: "Continue" (i.e. proceed), "Abort" (i.e. unwind) and "Terminate" (i.e. kill the process). I guess "Terminate" corresponds to what Squeak does when you close the debugger.
With "Abort" and "Terminate", the rest of the ensure block is NOT executed, with "Continue" it is. I guess that is ok, and what you would expect.
On Abort and Terminate (which are both unwinds to corresponding exception handlers), it should not try to reevaluate or proceed the potentially wrong/bad/failing ensure block.
It should be the choice of the handler (which the debugger basically is) whether it wants to proceed or not. If not, it should get out of the ensure block and continue executing any other ensure blocks that may be above it in the calling chain.
This is consistent with the behavior of exception handling blocks, which are also not reevaluated or proceeded if the same exception is raised within them. In ST/X, there is explicit code in the exception classes that handles this situation, so it is definitely on purpose and not a side effect.
My guess is that this is wrong in Squeak and the Squeak developers should be told.
I'm wrestling with the concept of "order of execution" in code, and so far my research has come up short. I'm not sure if I'm phrasing it incorrectly; it's possible there is a more appropriate term for the concept. I'd appreciate it if someone could shed some light on my various stumbling blocks below.
I understand that if you call one method after another:
[self generateGrid1];
[self generateGrid2];
Both methods are run, but generateGrid2 doesn't necessarily wait for generateGrid1 to finish. But what if I need it to? Say generateGrid1 does some complex calculations (that take an unknown amount of time) and populates an array that generateGrid2 uses for its own calculations? This needs to be done every time an event is fired; it's not just a one-time initialization.
I need a way to call methods sequentially, but have some methods wait for others. I've looked into callbacks, but the concept is always married to delegates in every example I've seen.
I'm also not sure how to determine when I can't reasonably expect a line of code to have finished executing in time for its result to be used. For example:
float myVar = [self complexFloatCalculation];
if (myVar <= 10.0f) {} else {}
How do I determine whether something will take long enough that I should implement checks for "is this other thing done before I start my thing"? Just trial and error?
Or what if I pass a method as a parameter of another method? Does it wait for the arguments to be evaluated before executing the method?
[self getNameForValue:[self getIntValue]];
I understand that if you call one method after another:
[self generateGrid1];
[self generateGrid2];
Both methods are run, but generateGrid2 doesn't necessarily wait for generateGrid1 to finish. But what if I need it to?
False. generateGrid1 will run, and then generateGrid2 will run. This sequential execution is the very basis of procedural languages.
Technically, the compiler is allowed to rearrange statements, but only if the end result would be provably indistinguishable from the original. For example, look at the following code:
int x = 3;
int y = 4;
x = x + 6;
y = y - 1;
int z = x + y;
printf("z is %d", z);
It really doesn't matter whether the x+6 or the y-1 line happens first; the code as written does not make use of either of the intermediate values other than to calculate z, and that can happen in either order. So if the compiler can for some reason generate more efficient code by rearranging those lines, it is allowed to do so.
You'd never be able to see the effects of such rearranging, though, because as soon as you try to use one of those intermediate values (say, to log it), the compiler will recognize that the value is being used, and get rid of the optimization that would break your logging.
So really, the compiler is not required to execute your code in the order provided; it is only required to generate code that is functionally identical to the code you provided. This means that you actually can see the effects of these kinds of optimizations if you attach a debugger to a program that was compiled with optimizations in place. This leads to all sorts of confusing things, because the source code the debugger is tracking does not necessarily match up line-for-line with the compiled code the compiler generated. This is why optimizations are almost always turned off for debug builds of a program.
Anyway, the point is that the compiler can only do these sorts of tricks when it can prove that there will be no effect. Objective-C method calls are dynamically bound, meaning that the compiler has absolutely no guarantee about what will actually happen at runtime when a method is called. Since the compiler can't make any guarantees about what will happen, it will never reorder Objective-C method calls. But again, this just falls back to the same principle I stated earlier: the compiler may change the order of execution, but only if the change is completely imperceptible to the user.
In other words, don't worry about it. Your code will always run top-to-bottom, each statement waiting for the one before it to complete.
In general, method calls written in the style you described are synchronous; that means they'll have the effect you desire, running in the order the statements were coded, where the second call only runs after the first call finishes and returns.
Also, when a method takes parameters, its parameters are evaluated before the method is called.
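The same holds at the C level: an argument expression is fully evaluated before the called function begins. A minimal sketch (the functions are hypothetical, mirroring the example above):
#include <stdio.h>

int get_int_value(void) {
    puts("inner call runs first");
    return 7;
}

void get_name_for_value(int value) {
    printf("outer call runs second, value = %d\n", value);
}

int main(void) {
    /* get_int_value() must finish before get_name_for_value() starts */
    get_name_for_value(get_int_value());
    return 0;
}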
I know how to do this in gdb. I'd attach, and follow with:
break myfunction
commands
return
cont
end
cont
I'm wondering if there's a way of doing this in C? I already have my code working for reading and writing memory addresses, and it automatically finds the pid and does related stuff. I'm stuck on implementing that use of breakpoints.
If you are talking about some sort of hand-written debugger, you can use the IP (instruction pointer) value to set a breakpoint: when the IP hits a certain value, you stop the program being debugged and run some routine of your own (for example, handing control to the debugger process). To use function names, you need symbol tables, as is done in GDB.
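On Linux/x86-64, for instance, this is typically done with ptrace: plant an INT3 (0xCC) at the target address, wait for the resulting SIGTRAP, then restore the original byte and rewind the instruction pointer. A minimal sketch under those assumptions (the address would come from the symbol table; error handling omitted):
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

/* Plant an INT3 at addr in the (already attached) tracee and
   return the original text word so it can be restored later. */
static long plant_breakpoint(pid_t pid, uintptr_t addr) {
    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
    long patched = (orig & ~0xFFL) | 0xCC;      /* INT3 opcode */
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
    return orig;
}

/* After waitpid() reports a SIGTRAP: restore the original byte,
   rewind RIP past the INT3, and resume the tracee. */
static void resume_from_breakpoint(pid_t pid, uintptr_t addr, long orig) {
    struct user_regs_struct regs;
    ptrace(PTRACE_GETREGS, pid, NULL, &regs);
    regs.rip = addr;                            /* RIP is one past the INT3 */
    ptrace(PTRACE_SETREGS, pid, NULL, &regs);
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)orig);
    ptrace(PTRACE_CONT, pid, NULL, NULL);
}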
It's not quite clear what you are trying to achieve.
The GDB sequence you've shown will simply make myfunction return immediately.
Assuming you want your mini-debugger to have the same effect, simply write the opcode for ret (0xC3 on x86) to the address of myfunction; there's no need for a breakpoint at all.
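Under those assumptions (Linux, an already attached tracee, the address of myfunction resolved from the symbol table), the patch might look like this sketch:
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* Make the function at func_addr return immediately by overwriting
   its first byte with RET (0xC3). pid must already be ptrace-attached. */
static void patch_to_ret(pid_t pid, uintptr_t func_addr) {
    long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)func_addr, NULL);
    word = (word & ~0xFFL) | 0xC3;              /* RET opcode */
    ptrace(PTRACE_POKETEXT, pid, (void *)func_addr, (void *)word);
}
Note that this only mimics GDB's return for calling conventions where the caller cleans up the arguments (as in the SysV ABI), and the return value will be whatever happens to be in the return register.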
I've been banging my head a lot over this. The way $etrap (the error-handling special variable) was conceived, you must be careful to really trap all errors. I've been partially successful in doing this, but I'm still missing something: when run in user mode (application mode), there are internal Cache library errors that still halt the application.
What I did was:
ProcessX(var)
set sc=$$ProcessXProtected(var)
w !,"after routine call"
quit sc
ProcessXProtected(var)
new $etrap
;This stops Cache from processing the error before this context. Code
; will resume at the line [w !,"after routine call"] above
set $etrap="set $ECODE = """" quit:$quit 0 quit"
set sc=1
set sc=$$ProcessHelper(var)
quit sc
ProcessHelper(var)
new $etrap
; this code tells Cache to keep unwinding error handling contexts up
; to the previous error handler.
set $etrap="quit:$quit 0 quit"
do AnyStuff^Anyplace(var)
quit 1
AnyStuff(var)
; Call anything, which might in turn call many sub routines
; The important point is that we don't know how many contexts
; will be created from now on. So we must trap all errors, in any
; case.
;Call internal Cache library
quit
After all this, I can see that when I call the program from a prompt it works! But when I call it from a Cache Terminal script (application mode, I was told) it fails and aborts the program (the error-trapping mechanism doesn't work as expected).
Is it possible that an old-style error trap ($ZTRAP) is being set only in user mode?
The documentation on this is pretty good, so I won't repeat it all here, but a key point is that $ZTRAP isn't New-ed in the same way as $ETRAP. In a way, it is "implicitly new-ed", in that its value only applies to the current stack level and subsequent calls. It reverts to any previous value once you Quit up past the level it was set in.
Also, I'm not sure if there's a defined order of precedence between $ETRAP and $ZTRAP handlers, but if $ZTRAP is of higher precedence, that would override your $ETRAPs.
You could try setting $ZTRAP yourself right before you call the library function. Set it to something different than $ETRAP so you can be sure which one was triggered.
Even that might not help though. If $ZTRAP is being set within the library function, the new value will be in effect, so this won't make a difference. This would only help you if the value of $ZTRAP came from somewhere further up the stack.
You didn't mention what library function caused this. My company has source code for some library functions, so if you can tell me the function name I'll see what I can find. Please give me the value of $ZVersion too so I can be sure we're talking about the same version of Cache.