Scilab - Stack size exceeded - variables

So, I have this project for school in which I have to write code in Scilab to solve a puzzle (Tents). The code keeps getting longer as it improves, but I suddenly got an error stating "stack size exceeded".
Error log:
!--error 17
stack size exceeded!
Use stacksize function to increase it.
Memory used for variables: 28875
Intermediate memory needed: 59987764
Total memory available: 10000000
I tried using this line
stacksize('max')
And this one
stacksize(1e8)
Neither works; all that happens is Scilab shutting itself down without any warning.
How did I exceed my stack size? Is there a way to prevent this? How can I continue?

I figured out how to solve this problem myself. Here's what I did wrong, for anyone with the same problem:
Within a function I used the line
[m,n] = [x,y]
to save the coordinates of an object from a matrix. This was called within a loop that used x and y to step through the matrix.
Apparently this caused the "stack size exceeded" error, and here's how I wrote it afterwards:
m = x
n = y
I don't know for certain why that line caused the error, but a plausible explanation is that [x,y] allocates a temporary matrix on every iteration, and those intermediates are what exhausted the stack (note the huge "Intermediate memory needed" value in the error log). Either way, this change solved it.

Related

Working with double variables in OptaPlanner

I am working on a problem that requires optimizing a double variable. I wrote a simple code to try it out that tries to find a number given an upper bound (maximize X while X < upper bound), but I get the following error, which I did not understand:
2022-11-13 09:59:07,003 [main] INFO Solving started: time spent (119), best score (-1init/0hard/0soft), environment mode (REPRODUCIBLE), move thread count (NONE), random (JDK with seed 0).
Exception in thread "main" java.lang.ClassCastException: class org.optaplanner.core.impl.domain.valuerange.buildin.primdouble.DoubleValueRange cannot be cast to class org.optaplanner.core.api.domain.valuerange.CountableValueRange (org.optaplanner.core.impl.domain.valuerange.buildin.primdouble.DoubleValueRange and org.optaplanner.core.api.domain.valuerange.CountableValueRange are in unnamed module of loader 'app')
at org.optaplanner.core.impl.heuristic.selector.value.FromSolutionPropertyValueSelector.iterator(FromSolutionPropertyValueSelector.java:127)
at org.optaplanner.core.impl.heuristic.selector.value.FromSolutionPropertyValueSelector.iterator(FromSolutionPropertyValueSelector.java:120)
at org.optaplanner.core.impl.heuristic.selector.value.decorator.ReinitializeVariableValueSelector.iterator(ReinitializeVariableValueSelector.java:58)
at org.optaplanner.core.impl.heuristic.selector.common.iterator.AbstractOriginalChangeIterator.createUpcomingSelection(AbstractOriginalChangeIterator.java:35)
at org.optaplanner.core.impl.heuristic.selector.common.iterator.AbstractOriginalChangeIterator.createUpcomingSelection(AbstractOriginalChangeIterator.java:10)
at org.optaplanner.core.impl.heuristic.selector.common.iterator.UpcomingSelectionIterator.hasNext(UpcomingSelectionIterator.java:27)
at org.optaplanner.core.impl.constructionheuristic.placer.QueuedEntityPlacer$QueuedEntityPlacingIterator.createUpcomingSelection(QueuedEntityPlacer.java:45)
at org.optaplanner.core.impl.constructionheuristic.placer.QueuedEntityPlacer$QueuedEntityPlacingIterator.createUpcomingSelection(QueuedEntityPlacer.java:31)
at org.optaplanner.core.impl.heuristic.selector.common.iterator.UpcomingSelectionIterator.hasNext(UpcomingSelectionIterator.java:27)
at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.solve(DefaultConstructionHeuristicPhase.java:45)
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:83)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:193)
at SimpleApp.main(SimpleApp.java:42)
The variable X has a range between 1 and 300, and the upper bound is an arbitrary 10.548.
OptaPlanner intentionally avoids working with doubles. The documentation explains why, and also describes better ways of dealing with the issue.
That said, the exception you mention still shouldn't be happening, or there should at least be a more descriptive one. I'll eventually look into it. But my advice is not to rely on doubles in your scoring function.

Matlab Interface Issue - seg fault?

For my problem, Matlab crashes with a segfault in the fifth iteration of the main loop in worhp.cpp, in this subroutine:
if (GetUserAction(&cnt, callWorhp))
{
Worhp(&opt, &wsp, &par, &cnt);
// No DoneUserAction!
}
It would be great if you could help me debug by providing information on the opt, wsp, par and cnt structs and what to look for.
Thanks and best regards
I am currently having trouble with a similar issue. In one case, I was able to find an error by checking whether the solver terminated without an error when using approximated Hessian matrices instead of providing them myself. The error in that case occurred because I gave the library the wrong number of nonzero Hessian entries, and my Hessian function accessed entries of the Hessian matrix that weren't allocated by the solver. Maybe your error is caused by a similar problem.
Kind regards,
Jan

"NVM_E_INTEGRITY_FAILED" Error was detected at startup during "NVM_ReadAll"

Due to a CRC (AUTOSAR) issue for a particular NvBlock, an "NVM_E_INTEGRITY_FAILED" error was observed during "NVM_ReadAll()".
I tried to debug it but couldn't root-cause the issue.
Out of all the blocks, only one NvBlock has the CRC issue, which is obviously causing NVM_ReadAll to fail ("NVM_REQ_NOT_OK").
Please suggest the best method to debug this issue.
Thank you Lundin and Kesselhaus. It seems the SPI driver has an issue reading the data from the EEPROM for that particular block (block size greater than 1 KB). The calculated CRC differs from the actual CRC value, so the NvM integrity error is set.

"Bad permissions for mapped region at address" Valgrind error for memset

I am running into a problem that appears to be due to a stack overflow. When I run the application under Valgrind, I get the following errors:
Thread 75:
Invalid write of size 4
at 0x833FBF6: <Class Name>::<Method Name>(short, short&) (<File Name>:692)
Address 0x222d75c0 is on thread 75's stack
Process terminating with default action of signal 11 (SIGSEGV): dumping core
Bad permissions for mapped region at address 0x222D6000
at 0x4022BA3: memset (mc_replace_strmem.c:586)
by 0x833FC80: <Class Name>::<Method Name>(short, short&) (<File Name>:708)
If I open the core file in gdb, go to frame 1 where the memset is being called, and do an "info registers", it shows that $esp = 0x222d5210 and $ebp = 0x222d75c8.
Doesn't that seem to indicate that the stack would include memory at address 0x222D6000? If that's true, then why would we get the "Bad permissions" error?
The other odd thing is that line 692 of the source file is the very first line of the method (i.e., "void ::(short var1, short &var2)"). So why would we get an invalid write at that point?
As I said, it seems to be a case of running out of stack space, but even if we use the "limit stacksize" command to increase the amount of allocated stack space, we still encounter the same problem.
I've been beating my head against the wall for several days trying to debug this problem. Any advice would be appreciated.
It turns out that this problem was due to a stack overflow after all. I didn't realize that the code that spawned the problematic thread explicitly set the size of the stack to be used by that thread. That's why changing the value used by the "limit stacksize" command made no difference: the limit is ignored once the thread's stack size is set explicitly. Once I modified the code that sets the stack size to allocate more memory, the problem went away.
What you could do is activate the Valgrind gdbserver and attach to your program, running under Valgrind, using gdb+vgdb. You can then use various Valgrind monitor commands to get more information about the problem: e.g. look again at the register values, or use 'monitor v.info scheduler' to see the stack trace and the stack size of each thread.
The full list of monitor commands for memcheck+Valgrind can be found at
http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
and
http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.valgrind-monitor-commands
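As a rough sketch of what that session looks like (the program name is a placeholder):

```
# terminal 1: run under Valgrind with the embedded gdbserver enabled,
# stopping at startup so gdb can attach before the crash
valgrind --vgdb=yes --vgdb-error=0 ./myprog

# terminal 2: attach gdb through vgdb
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor v.info scheduler
```

The v.info scheduler output lists each thread's stack range and usage, which makes an undersized thread stack easy to spot.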

Error due to variable size data in Simulink Matlab function block

I'm working with the Simulink MATLAB Function block and I'm having problems with the bounds of the variables that I define in it.
This is the part of the code where I'm running into trouble:
function P_S1_100= fcn(SOC_S1_100,S1_AGENTS_10,time_CAP_100)
assert(time_CAP_100(1)<100)
tcharging_a1_1=[0:0.05:time_CAP_100(1)]
tcharging_a1_2=[time_CAP_100(1):0.05:time_CAP_100(1)*2]
tcharging_a1=[0:0.05:time_CAP_100(1)]
(where time_CAP_100 is a 1x6 vector)
And this is the error that I'm getting:
Computed maximum size of the output of function 'colon' is not bounded.
Static memory allocation requires all sizes to be bounded.
The computed size is [1 x :?].
Function 'Subsystem1/Slow Charge/S1/MATLAB Function5' (#265.262.302), line 8, column 16:
"[time_CAP_100(1):0.05:time_CAP_100(1)*2]"
Could anyone give me an idea of how to solve this error?
Thanks in advance.
For each of your variable-size data inputs/outputs, you need to define what the upper bound is. See http://www.mathworks.co.uk/help/simulink/ug/declare-variable-size-inputs-and-outputs.html for more details.
The only workaround I can think of is to manually write a loop with fixed bounds to expand [time_CAP_100(1):0.05:time_CAP_100(1)*2]. That expression is what causes the problem: code generation cannot bound the length of the resulting vector, so you need to know its maximum length yourself. Then you can write a loop something like
% max_size is the maximum length possible for tcharging_a1_2
tcharging_a1_2 = zeros(1, max_size);
tcharging_a1_2(1) = time_CAP_100(1);
for ii = 2:max_size
    next_val = tcharging_a1_2(ii-1) + 0.05;
    if next_val <= time_CAP_100(1)*2
        tcharging_a1_2(ii) = next_val;
    else
        break
    end
end