Can I detect integer overflow flaws with Valgrind?

Can I detect integer overflow flaws with Valgrind, and if so, which of its tools can do that?

Valgrind has no tool that can detect integer overflow.
You might be able to catch these bugs with the gcc option -ftrapv, which generates traps for signed overflow on addition, subtraction, and multiplication operations.
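As a rough illustration (a minimal sketch with a made-up file name; behaviour depends on your gcc version, and constant folding at higher optimization levels can remove the check), a signed overflow compiled with -ftrapv aborts at run time instead of silently wrapping:

/* overflow.c -- compile with: gcc -O0 -ftrapv overflow.c -o overflow */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int a = INT_MAX;
    int b = 1;
    /* With -ftrapv this addition traps and the program aborts (SIGABRT);
       without the flag the result typically wraps around silently. */
    int c = a + b;
    printf("%d\n", c);
    return 0;
}

Note that -ftrapv only covers signed arithmetic; unsigned wrap-around is well-defined in C and is never trapped.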

Related

from net471 to .NET Standard noob questions [closed]

I have a very old VB project that I am looking to modernize, so I created a new solution (I will convert the code to C# later on) and am re-adding the libraries and web projects to it. This should eliminate the old project's stale .publishproj file, its references to mscorlib 2.0 (which persisted despite best efforts to resolve them by re-adding references) and several other issues that will likely go away.
In the process, I figured I would try to target .NET Standard, since a standardized PCL replacement would allow for future use with Xamarin, Mono, etc. I am not fully versed in .NET Standard, so I need some input (I am attempting 2.0, based on what I have read about how well 2.0 scales down).
The problems I am running into are:
1) I have several basic CORE functions from the .NET Framework that are not recognized in .NET Standard:
IsNumeric, IsNothing, IsDBNull, IIf
Any suggestions as to why this is?
Thank you to jmcilhinney for answering :-)
All four of IsNumeric, IsNothing, IsDBNull and IIf are VB-specific. They can't be part of .NET Standard given that they've never been accessible to other languages without referencing the Microsoft.VisualBasic assembly. You really shouldn't have been using any of them previously anyway, as they are holdovers from VB6.
In the case of IsNumeric, it uses Double.TryParse internally anyway. In fact, Double.TryParse was written specifically to improve the performance of IsNumeric. You should be using the TryParse method of the appropriate numeric type yourself if you want to know whether a String can be converted to that type.
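For example (a rough sketch with made-up variable names), instead of:
If IsNumeric(myString) Then
you could use something like:
Dim myNumber As Double
If Double.TryParse(myString, myNumber) Then
where myNumber then holds the parsed value whenever the call returns True.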
In the case of IsNothing, you should simply be comparing your reference to Nothing, e.g. instead of:
If IsNothing(myThing) Then
you should be using:
If myThing Is Nothing Then
In the case of IsDBNull, you should be doing much as above, e.g. instead of:
If IsDBNull(myThing) Then
you should be using:
If myThing Is DBNull.Value Then
That said, both a DataRow and a data reader have their own dedicated methods to tell you whether one of their fields is NULL.
In the case of IIf, it has always had its issues, because it is a method that people often tried to treat like an operator. I think it was VB 2008 that introduced an If operator that works much like the C# ternary operator, so you should have been using that since then anyway, e.g. instead of:
myVar = IIf(someBooleanExpression, someValue, someOtherValue)
you should have been using:
myVar = If(someBooleanExpression, someValue, someOtherValue)
There are some subtle differences between IIf and If but I'll leave you to read about how If works for yourself.

SIGSEGV in optimized ifort

If I compile with -O0 in ifort, the program runs correctly. But as soon as I turn on an optimization option, like -O, -O3 or -fast, a SIGSEGV segmentation fault appears.
The error occurs in a subroutine named maketable(). These are the symptoms:
(1) I call the fftw library in this subroutine. If I comment out the statements related to fftw, everything is fine. But I don't think fftw is at fault, because I also use fftw in other places in this code and those calls are fine.
(2) fftw is called inside a loop, and the loop runs several times before the program crashes. The segfault does not happen the first time the loop is entered.
(3) I considered a stack overflow, but I no longer think that is it. I have an executable compiled by others a long time ago and it runs on my computer, which suggests the problem is not a system stack overflow.
The version of ifort is 10.0, and of fftw is fftw-2.1.5. The CPU is an Intel Xeon 5130. Thanks a lot.
There are two common causes of segmentation faults in Fortran programs:
Attempting to access an element outside the bounds of an array.
Mismatching actual and dummy arguments in a procedure call.
Both are relatively easy to find:
Your compiler will have an option to generate code which performs array bounds checking at run time (an example ifort command line is sketched after this list). Check your compiler documentation, rebuild your code and rerun it. If this is the cause of the problem you will get an error message identifying where your code goes awry.
Provide explicit interfaces for any subroutines and functions in your program, or use modules so that the compiler generates such interfaces for you, or use a compiler option (see the documentation) to check that argument types match at compile time.
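As a concrete sketch (the source file name is a placeholder, and exact option spellings vary between ifort versions, so check your documentation), a debug build with both kinds of checking enabled might look like:

ifort -O0 -g -traceback -check bounds -warn interfaces maketable.f90 -o maketable

Here -check bounds enables run-time array bounds checking, -traceback prints a usable stack trace at the point of failure, and -warn interfaces asks the compiler to compare actual arguments against the interfaces it knows about (generated from modules or, in newer versions, via -gen-interfaces).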
It's not unusual that such errors (seem to) arise only when optimisation is turned up high.
EDIT
Note that I'm not suggesting that optimisation causes the error you observe, but that it causes the error to affect the execution of your program and become evident.
It's not unknown for incorrect programs to run many times apparently without fault only for, say, recompilation with a new compiler version to create an executable which crashes every time.
Your wish to switch off optimisation only for the subroutine where the segmentation fault seems to arise is, I suggest, completely wrong-headed. I expect my programs to execute correctly at any level of optimisation (save for clear evidence of a compiler bug, such things are not unknown). I think that by turning off optimisation you are sweeping a real problem with your program under the carpet, as it were.

Why is the LaTeX / pdflatex compiler so 'funky', with multiple compiles necessary and bogus error messages, etc., compared to C++? [closed]

Is there a simple explanation for why the latex / pdflatex compiler is funky in the following two ways:
1) N multiple compiles are necessary until you reach a "steady state" version. N seems to grow up to around 5 or 6 if I use many packages and references.
2) Error messages are almost always worthless. The actual error is not flagged. Example:
\begin{itemize} % Line 499
\begin{enumerate}
% Comment: error: forgot to close the enumerate block
\item This is a bullet point.
\end{itemize} % Line 503
result: "Error on line 1 while scanning \begin{document}", not very useful.
I realize there is a separate "tex exchange" but I'm wondering if someone knowledgeable about c++, java, or other compilers can provide some insight on how those seem to support single-compile and proper error localization.
Edit: this document seems like a rant justifying the hacks in latex's implementation, but what is it about latex's syntax/language properties that makes the weird implementation necessary? http://tug.org/texlive/Contents/live/texmf-dist/doc/generic/knuth/tex/tex.pdf
From a LaTeX point of view:
You should need at most 3 (maybe 4) compiles to reach a steady state. This depends not on the number of packages, but on possible layout changes within your document. Layout changes cause references to move, and those references need to be correct (hence the recompiles until they stop moving).
Nesting of environments is allowed (although this does not address your problem directly). Also, macro definitions act as replacement text for your input. So, even though you write \end{itemize}, it is actually transformed into a bunch of other/different (primitive) macros, removing the obvious-to-humans structure; that is why some of the error messages are so difficult to interpret.
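For what it's worth, a corrected version of the snippet from the question, with the nested environment placed after an \item and every environment closed in the right order, would be (the text of the nested item is invented, since the original had none):

\begin{itemize}
  \item This is a bullet point.
  \begin{enumerate}
    \item A nested, numbered point.
  \end{enumerate}
\end{itemize}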
Regarding point (2):
Considering that most of the errors are picked up while parsing macro definitions that get expanded, my guess is that the errors wouldn't be useful to the user even if they contained the location and specific causes, because they don't translate well into what you see when you view the code.
Still, it would be useful if they were just a little bit more explicit :/

LGPL grammar file licensing [closed]

Given an LGPL'ed grammar file, is the source generated by a compiler-compiler for that grammar a derivative work? What about if the grammar file was modified before it was given as input to the compiler-compiler? There isn't any linking, at least not in the conventional sense.
If the output is a derivative work, must I then simply provide the (modified) grammar sources, making best efforts to ensure the grammar will function without dependencies imposed by the program/library using it? Or are there more restrictions which must be resolved?
1) Since the grammar contains the essence of the resulting code, it definitely belongs to "all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities" and is not a part of "the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work". In brief, LGPLv3 applies.
So, you need to convey the "Minimal Corresponding Source" (the one used to build the version in the Combined Work) according to sec.4 d) 0) or GPLv3 sec.6, mark it as modified if it is and possibly include custom tools if required by GPL's definition of "Corresponding Source". (In general, as sec.0 says, LGPLv3 is effectively GPLv3 with a few additional provisions.)
2) It might be a derivative work of the generator used as well if the latter inserts parts of itself into the code (see FSF FAQ#Can I use GPL-covered tools... to compile...?) - check the generator's workings and licensing terms if necessary. If it is, you'll have to satisfy both LGPLv3 and the generator's terms that apply to the results of its work.
The best answer, and the one everyone should be giving you, is as follows:
Contact a lawyer
Disclaimer: IANAL and if you want something "official" you should talk to one. That said...
A common-sense approach says that yes, the result of compilation of something that is compilable is a derivative work. For instance, the compiled version of an LGPL library is still LGPL - you can't say that you obtained a compiled version of the library and never compiled it yourself and somehow dodge providing the source code that way.
Thus, the LGPL would require you to distribute the (potentially modified) source of the original LGPL work, such that if an individual wanted to further modify the work, they could.

What is the cost of using exceptions in Objective-C?

I mean in the current implementation of clang, or in the gcc version.
C++ and Java guys always tell me that exceptions do not cost any performance unless they are thrown. Is the same true for Objective-C?
Short Answer
Only in 64-bit OS X and iOS.
They're not entirely free. To be more precise, the model is optimized to minimize costs during regular execution (moving consequences elsewhere).
Detailed Answer
On 32-bit OS X and iOS, exceptions have runtime costs even if they are not thrown. These architectures do not use Zero Cost Exceptions.
In 64-bit OS X, ObjC moved over to borrow C++'s "Zero Cost Exceptions". Zero Cost Exceptions have very, very low execution overhead unless thrown. Zero Cost Exceptions effectively move execution cost to binary size. This was one of the primary reasons they were not initially used in iOS. Enabling C++ exceptions and RTTI can increase the binary size by more than 50% -- of course, I would expect those numbers to be far lower in pure ObjC simply because there is less to execute when unwinding.
In arm64, the exception model was changed from setjmp/longjmp to Itanium-derived Zero Cost Exceptions (judging by the assembly).
However, idiomatic ObjC programs are not written or prepared to recover from exceptions, so you should reserve their use for situations you do not intend to recover from (if you decide to use them at all). More details are in the Clang manual on ARC, and in other sections of the referenced page.
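To make the cost model concrete, here is a minimal sketch (file and variable names are made up) of the kind of @try block being discussed. On 32-bit, merely entering the @try costs setjmp-style bookkeeping; on 64-bit, entering it is essentially free and the cost is paid only when something is actually thrown:

// exceptions.m -- compile with: clang -framework Foundation exceptions.m
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        @try {
            // On 64-bit OS X/iOS, reaching this point adds no runtime work;
            // the unwind information lives in tables inside the binary.
            @throw [NSException exceptionWithName:@"Demo"
                                           reason:@"deliberately thrown"
                                         userInfo:nil];
        }
        @catch (NSException *e) {
            // Throwing and unwinding is the expensive path in both models.
            NSLog(@"caught: %@", e.reason);
        }
    }
    return 0;
}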
According to some 2007 release notes for the Objective-C runtime in Mac OS X v10.5, Apple re-wrote the 64-bit implementation of Objective-C exceptions to provide "zero-cost" try blocks and interoperability with C++.
Apparently, these "zero-cost" try blocks incur no time penalty when entering a try, unlike their 32-bit counterparts, which must call setjmp() and other functions. Apparently, throwing them is "much more expensive".
This is the only bit of information I can find in Apple's release notes, so I would have to assume that it still applies in today's runtimes, and as such: 32-bit exceptions = expensive, 64-bit exceptions = "zero-cost".