"Bad permissions for mapped region at address" Valgrind error for memset - valgrind

I am running into a problem that appears to be due to a stack overflow. When I run the application under Valgrind, I get the following errors:
Thread 75:
Invalid write of size 4
at 0x833FBF6: <Class Name>::<Method Name>(short, short&) (<File Name>:692)
Address 0x222d75c0 is on thread 75's stack
Process terminating with default action of signal 11 (SIGSEGV): dumping core
Bad permissions for mapped region at address 0x222D6000
at 0x4022BA3: memset (mc_replace_strmem.c:586)
by 0x833FC80: <Class Name>::<Method Name>(short, short&) (<File Name>:708)
If I open the core file in gdb, go to frame 1 where the memset is being called, and do an "info registers", it shows that $esp = 0x222d5210 and $ebp = 0x222d75c8.
Doesn't that seem to indicate that the stack would include memory at address 0x222D6000? If that's true, then why would we get the "Bad permissions" error?
The other odd thing is that line 692 of the source file is the very first line of the method (i.e., "void <Class Name>::<Method Name>(short var1, short &var2)"). So, why would we get an invalid write at that point?
As I said, it seems to be a case of running out of stack space, but even if we use the "limit stacksize" command to increase the amount of allocated stack space, we still encounter the same problem.
I've been beating my head against the wall for several days trying to debug this problem. Any advice would be appreciated.

It turns out that this problem was due to a stack overflow after all. I didn't realize that the code that spawned the thread causing the problem explicitly set the size of the stack to be used by the thread. That's why changing the value used by the "limit stacksize" command didn't make a difference. Once I modified the code that set the stack size to increase the amount of memory allocated, the problem went away.
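For reference, the override happens at thread-creation time. Assuming POSIX threads, a minimal sketch of the pattern (the worker function and the 64 KiB figure are made up for illustration):

#include <pthread.h>

/* Hypothetical thread body standing in for the real one. */
static void *worker(void *arg)
{
    (void)arg;
    /* ... deep recursion or large locals can overflow a small stack ... */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* An explicit size here pins the thread's stack at 64 KiB no matter
       what "limit stacksize" says; increasing this value is what makes
       the overflow go away. */
    pthread_attr_setstacksize(&attr, 64 * 1024);
    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}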

What you could do is activate the Valgrind gdbserver and attach to your program running under Valgrind using gdb+vgdb. You can then use the various Valgrind monitor commands to get more information about the problem, e.g. look again at the register values, or use 'monitor v.info scheduler' to see each thread's stack trace and stack size.
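For example (./myprog stands in for your program; --vgdb-error=0 makes Valgrind stop at startup and wait for gdb to attach):
In one terminal:
valgrind --vgdb=yes --vgdb-error=0 ./myprog
In a second terminal:
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor v.info scheduler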
The full list of monitor commands for Memcheck and the Valgrind core can be found at
http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
and
http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.valgrind-monitor-commands

Related

"NVM_E_INTEGRITY_FAILED" Error was detected at startup during "NVM_ReadAll"

Due to a CRC (AUTOSAR) issue for a particular NvBlock, the "NVM_E_INTEGRITY_FAILED" error was observed during "NVM_ReadAll()".
I tried to debug but couldn't root-cause the issue.
Out of all the blocks, only one NvBlock has the CRC issue, which obviously causes NVM_ReadAll to fail ("NVM_REQ_NOT_OK").
Please suggest the best method to debug this issue.
Thank you Lundin and Kesselhaus. It seems the SPI driver has an issue reading the data from the EEPROM for that particular block (block size greater than 1k). The calculated CRC differs from the actual CRC value, so the NVM integrity error is set.

Is there a way to get current number of tokens parsed in stack in yacc

I am running into a parser stack overflow in yacc. I am not sure how the current parser stack size is determined. Is there a way to get the current parser stack size, so that once the number of tokens reaches the maximum stack depth, an error can be reported? Is there a variable in yacc that holds this information?
There is no standard way to get the parser stack size, although obviously it is internally available since the parser is capable of producing a stack overflow error (without segfaulting or otherwise invoking undefined behaviour). You don't need to check this yourself; you simply need to print the error message provided to yyerror; if the stack overflows, the error message will mention that fact.
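A minimal sketch of such a yyerror, assuming a plain C parser:

#include <stdio.h>

/* The generated parser calls this with a descriptive message; on an
   overflow it reads something like "memory exhausted" (bison) or
   "yacc stack overflow" (byacc). */
void yyerror(const char *msg)
{
    fprintf(stderr, "parse error: %s\n", msg);
}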
There are a few ways you can end up with a version of yacc which doesn't resize the stack. One is the use of the public domain Berkeley yacc, often called byacc; the version I have kicking around (from 1993) sets the default stack size to 500.
Another possibility is to use GNU bison, compiling the result with a C++ compiler; by default, this will make the stack non-relocatable, since bison doesn't know whether the semantic value union is trivially copyable. (Newer versions of bison might not have this restriction.) By default, the initial bison stack size is 200.
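If you are stuck with a non-resizing stack, bison at least lets you raise the limits with macros in the grammar prologue; the values below are arbitrary examples:

%{
/* YYINITDEPTH: initial stack size (default 200).
   YYMAXDEPTH: ceiling beyond which the stack will not grow (default 10000). */
#define YYINITDEPTH 2000
#define YYMAXDEPTH  100000
%}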
A common way to blow up stacks is to use right recursion for long lists. A particularly bad one is some variant on the following:
program: /* empty */
| statement program
;
which will overflow the parser stack if the "program" is too long, because no reduction can happen until the last statement has been shifted. It's usually sufficient to just change that to left recursion:
program: /* empty */
| program statement
;

SciLab - Stack size exceeded

So, I have this project for school in which I have to write code in SciLab to solve a puzzle (Tents). The code is getting longer and longer as it gets better and better, but I suddenly got an error stating "stack size exceeded".
Error log:
!--error 17
stack size exceeded!
Use stacksize function to increase it.
Memory used for variables: 28875
Intermediate memory needed: 59987764
Total memory available: 10000000
I tried using this line
stacksize('max')
And this one
stacksize(1e8)
Neither of which works; all that happens is SciLab shutting itself down without any warning at all.
How did I exceed my stacksize? Is there a way to prevent this? How can I continue further?
I figured out how to solve this problem myself. Here's what I did wrong, for people with the same problem:
Within a function I used the line
[m,n] = [x,y]
to save the coordinates of an object from a matrix. This was called within a loop using x and y to browse through the matrix.
Apparently this caused the "stack size exceeded" error, and here's how I wrote it afterwards:
m = x
n = y
I have no idea why this line caused this error, but this is how I've solved it.

suppress warnings related to certain library

How can I tell valgrind to stop showing any kind of error related to a certain library? I got lots of reports that look like this:
==24152== Invalid write of size 8
==24152== at 0xD9FF876: ??? (in /usr/lib64/dri/fglrx_dri.so)
==24152== by 0x110647AF: ???
==24152== Address 0x7f3c98553f20 is not stack'd, malloc'd or (recently) free'd
I could prune them by address (0x7fxxxxxxxxxx is not something that is allocated in userland), but my valgrind build seems not to accept --ignore-ranges=0x7f0000000000-0x7fffffffffff
You can generate suppression lists using --gen-suppressions=all. Then you can add those to a .supp file under lib/valgrind.
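For the error shown above, a hand-written suppression might look like the following sketch ("ignore-fglrx" is an arbitrary name; Memcheck:Addr8 matches an invalid read or write of size 8 whose innermost frame falls in that object):

{
   ignore-fglrx
   Memcheck:Addr8
   obj:/usr/lib64/dri/fglrx_dri.so
}

Then pass the file to Valgrind with --suppressions=/path/to/fglrx.supp.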

How to really trap all errors with $etrap in Intersystems Caché?

I've been banging my head a lot because of this. Given the way $etrap (the error-handling special variable) was conceived, you must be careful to really trap all errors. I've been partially successful in doing this, but I'm still missing something, because when run in user mode (application mode) there are internal Cache library errors that still halt the application.
What I did was:
ProcessX(var)
set sc=$$ProcessXProtected(var)
w !,"after routine call"
quit sc
ProcessXProtected(var)
new $etrap
;This stops Cache from processing the error before this context. Code
; will resume at the line [w !,"after routine call"] above
set $etrap="set $ECODE = """" quit:$quit 0 quit"
set sc=1
set sc=$$ProcessHelper(var)
quit sc
ProcessHelper(var)
new $etrap
; this code tells Cache to keep unwinding the error-handling context up
; to the previous error handler.
set $etrap="quit:$quit 0 quit"
do AnyStuff^Anyplace(var)
quit 1
AnyStuffFoo(var)
; Call anything, which might in turn call many sub routines
; The important point is that we don't know how many contexts
; will be created from now on. So we must trap all errors, in any
; case.
;Call internal Cache library
quit
After all this, I can see that when I call the program from a prompt it works! But when I call from Cache Terminal Script (application mode, I was told) it fails and aborts the program (the error trapping mechanism doesn't work as expected).
Is it possible that an old-style error trap ($ZTRAP) is being set only in user mode?
The documentation on this is pretty good, so I won't repeat it all here, but a key point is that $ZTRAP isn't New-ed in the same way as $ETRAP. In a way, it is "implicitly new-ed", in that its value only applies to the current stack level and subsequent calls. It reverts to any previous value once you Quit up past the level it was set in.
Also, I'm not sure if there's a defined order of precedence between $ETRAP and $ZTRAP handlers, but if $ZTRAP is of higher precedence, that would override your $ETRAPs.
You could try setting $ZTRAP yourself right before you call the library function. Set it to something different than $ETRAP so you can be sure which one was triggered.
Even that might not help though. If $ZTRAP is being set within the library function, the new value will be in effect, so this won't make a difference. This would only help you if the value of $ZTRAP came from somewhere further up the stack.
You didn't mention what library function caused this. My company has source code for some library functions, so if you can tell me the function name I'll see what I can find. Please give me the value of $ZVersion too so I can be sure we're talking about the same version of Cache.