How to tell whether elaboration has completed at a breakpoint? - systemc

When I hit a breakpoint in a VLAB script, how can I find out whether elaboration has finished yet?
My script reaches a statement that raises an error:
Error: (E529) insert module failed: elaboration done
(The command that causes this is vlab.instantiate("stim", "stim"))
So elaboration had evidently already completed, which I did not expect. I need to go back through the process and find out where that happened - so I need some way of asking "is elaboration complete?" at the breakpoints I set earlier in the script.

SystemC provides the following function to query the current phase of elaboration or simulation.
sc_status sc_get_status();
It returns either SC_ELABORATION, SC_BEFORE_END_OF_ELABORATION, SC_END_OF_ELABORATION, SC_START_OF_SIMULATION, SC_RUNNING, SC_PAUSED, SC_STOPPED, or SC_END_OF_SIMULATION.
See section 4.5.8 in the SystemC Language Reference Manual for more details. Note that this function was only added in the most recent version of the standard, IEEE Standard 1666-2011.
In VLAB, the SystemC API is available from the sysc Python package, so the following script can be used to test whether the current phase is elaboration:
import sysc
print "Is elaboration phase:", sysc.sc_get_status() == sysc.SC_ELABORATION

Related

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I only cloned and ran it. I import and solve, and it throws this error.
Do you know what could be happening?
I am applying it in the development of a variant of VRP with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming #CollectionPlanningVariable etc.). That being said, multiple users and customers have used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly (before calling solve()), which is likely if it fails before the first CH step in FULL_ASSERT, or one of the custom moves corrupted the model.
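To make that invariant concrete, here is a small illustrative sketch (plain Python, not OptaPlanner API; the attribute names mirror the ones in the error message and are hypothetical) of a consistency check you could run over the imported dataset before calling solve():

# Any (a, b) pair returned breaks the A.previous = B => B.next = A rule.
def find_broken_links(visits):
    broken = []
    for a in visits:
        b = getattr(a, "previousStandstill", None)
        if b is not None and getattr(b, "nextVisit", None) is not a:
            broken.append((a, b))
    return broken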

How to call a package successfully

I got an error when calling a package. The error is:
Error starting at line : 1 in command -
PKG_Generate_GRNo.GenerateGR(TO_NUMBER(:P164_APP_ID,
'9999999'),:APP_USER,:P164_FIRST_NAME,:P164_LAST_NAME,:P164_EMAIL,:P164_SKYPE_ID,:P164_COUNTRY,:P164_DATE_OF_BIRTH)
Error report - Unknown Command
PKG_Generate_GRNo.GenerateGR(TO_NUMBER(:P164_APP_ID,
'9999999'),:APP_USER,:P164_FIRST_NAME,:P164_LAST_NAME,:P164_EMAIL,
:P164_SKYPE_ID,:P164_COUNTRY,:P164_DATE_OF_BIRTH);
"Session state protection violation" is definitely an Oracle APEX error, relating to your page settings. It seems your package is trying to change the state of a read-only page. See this other question.
The item identifier in the error message (P164_COURSECOUNT) has the same prefix as the parameters you pass to the package (:P164_APP_ID), so presumably they relate to the same page. We know nothing about your application or its architecture, so it's hard to offer concrete advice. Maybe you need to change the page or item settings; maybe you need to change what the package does. Only you can tell the right course of action.
As you didn't post the whole command, a note: you have to enclose the call in a BEGIN-END block (the trailing slash tells SQL*Plus or SQL Developer to execute it), e.g.
BEGIN
   PKG_Generate_GRNo.GenerateGR (TO_NUMBER (:P164_APP_ID, '9999999'),
                                 :APP_USER,
                                 :P164_FIRST_NAME,
                                 :P164_LAST_NAME,
                                 :P164_EMAIL,
                                 :P164_SKYPE_ID,
                                 :P164_COUNTRY,
                                 :P164_DATE_OF_BIRTH);
END;
/

How do you use the benchmark flags for the go (golang) gocheck testing framework?

How does one use the flag options for benchmarks with the gocheck testing framework? In the link that I provided, the only example they give is running go test -check.b, but they provide no additional comments on how it works, so it's hard to use. I could not even find -check in the Go documentation when I ran go help test or go help testflag. In particular, I want to know how to use the benchmark framework better and control how long it runs for, or how many iterations it runs for. For example, in the code they provide:
func (s *MySuite) BenchmarkLogic(c *C) {
    for i := 0; i < c.N; i++ {
        // Logic to benchmark
    }
}
There is the variable c.N. How does one specify that variable? Is it set in the program itself, or through go test and its flags on the command line?
On a side note, the documentation from go help testflag does talk about the -bench regexp, -benchmem, and -benchtime t options; however, it does not mention the -check.b option. I did try running those options as described there, but they didn't do anything I could notice. Does gocheck work with the original options for go test?
The main problem I see is that there is no clear documentation for how to use the gocheck tool or its commands. I accidentally gave it a wrong flag, and it threw an error message listing the flags I needed (with limited descriptions):
-check.b=false: Run benchmarks
-check.btime=1s: approximate run time for each benchmark
-check.f="": Regular expression selecting which tests and/or suites to run
-check.list=false: List the names of all tests that will be run
-check.v=false: Verbose mode
-check.vv=false: Super verbose mode (disables output caching)
-check.work=false: Display and do not remove the test working directory
-gocheck.b=false: Run benchmarks
-gocheck.btime=1s: approximate run time for each benchmark
-gocheck.f="": Regular expression selecting which tests and/or suites to run
-gocheck.list=false: List the names of all tests that will be run
-gocheck.v=false: Verbose mode
-gocheck.vv=false: Super verbose mode (disables output caching)
-gocheck.work=false: Display and do not remove the test working directory
-test.bench="": regular expression to select benchmarks to run
-test.benchmem=false: print memory allocations for benchmarks
-test.benchtime=1s: approximate run time for each benchmark
-test.blockprofile="": write a goroutine blocking profile to the named file after execution
-test.blockprofilerate=1: if >= 0, calls runtime.SetBlockProfileRate()
-test.coverprofile="": write a coverage profile to the named file after execution
-test.cpu="": comma-separated list of number of CPUs to use for each test
-test.cpuprofile="": write a cpu profile to the named file during execution
-test.memprofile="": write a memory profile to the named file after execution
-test.memprofilerate=0: if >=0, sets runtime.MemProfileRate
-test.outputdir="": directory in which to write profiles
-test.parallel=1: maximum test parallelism
-test.run="": regular expression to select tests and examples to run
-test.short=false: run smaller test suite to save time
-test.timeout=0: if positive, sets an aggregate time limit for all tests
-test.v=false: verbose: print additional output
Is writing wrong commands the only way to get some help with this tool? Doesn't it have a help flag or something?
I'm 5 years late, but to specify how many iterations (N) to run, use the option -benchtime Nx.
Example:
go test -bench=. -benchtime 100x
BenchmarkTest 100 ... ns/op
Please read more about all go testing flags here.
See the Description_of_testing_flags:
-bench regexp
Run benchmarks matching the regular expression.
By default, no benchmarks run. To run all benchmarks,
use '-bench .' or '-bench=.'.
-check.b works the same way as -test.bench.
E.g. to run all benchmarks:
go test -check.b=.
to run a specific benchmark:
go test -check.b=BenchmarkLogic
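For context, here is a minimal sketch of the harness gocheck needs before -check.b can find any benchmarks (assuming the gopkg.in/check.v1 import path; the package and suite names are placeholders). It also answers the c.N question above: gocheck assigns c.N itself, not your program or a flag.

package mypkg

import (
    "testing"

    . "gopkg.in/check.v1"
)

// Hook gocheck into the go test runner; without this, -check.b finds nothing.
func Test(t *testing.T) { TestingT(t) }

type MySuite struct{}

// Register the suite so gocheck discovers its Test*/Benchmark* methods.
var _ = Suite(&MySuite{})

// gocheck grows c.N itself until the loop has run for roughly
// -check.btime (default 1s); you never assign c.N in your own code.
func (s *MySuite) BenchmarkLogic(c *C) {
    for i := 0; i < c.N; i++ {
        // logic to benchmark
    }
}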
More information about testing in Go can be found here.

Best way to detect a "data loss" publish action when calling SSDT's SQLPackage.exe

When calling SQLPackage.exe (syntax described here) with the publish action /a:Publish, there are cases where data loss occurs and execution is halted; this behavior is controlled by the parameter /p:BlockOnDataLoss (which defaults to 'true').
I need to know whether my publish action has succeeded or has failed due to 'data loss'.
Currently, on success the exit code is 0, and on failure it is 1. The exit code alone cannot tell whether a failure was caused by data loss. How can we identify this?
Somewhere in the console output there is a line containing "... is being dropped, data loss could occur." I intend to scan every output line as it is printed (see the sketch below), but I guess there should be a better way to do this.
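For illustration, a minimal sketch of that line-scanning approach in Python; the SQLPackage.exe arguments are hypothetical placeholders, and the marker string comes from the output quoted above:

import subprocess

DATA_LOSS_MARKER = "data loss could occur"

proc = subprocess.run(
    ["SQLPackage.exe", "/a:Publish",
     "/sf:db.dacpac", "/tcs:<target-connection-string>"],
    capture_output=True, text=True,
)
# The warning may go to either stream, so scan both.
output = proc.stdout + proc.stderr
saw_data_loss = any(DATA_LOSS_MARKER in line for line in output.splitlines())
if proc.returncode != 0 and saw_data_loss:
    print("publish failed: blocked on potential data loss")
elif proc.returncode != 0:
    print("publish failed for another reason")
else:
    print("publish succeeded")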
Hope to hear what you think.

CTest build ID not set

I have CDash configured to accept posts for automatic builds and tests. However, when any system attempts to post results to CDash, the following error is produced. The result is that each result gets posted four times (presumably the original posting attempt plus the three retries).
Can anyone give me a hint as to what sets this mysterious build ID? I found some code that seems to produce a similar error, but still no lead on what might be happening.
Build::GetNumberOfErrors(): BuildId not set
Build::GetNumberOfWarnings(): BuildId not set
Submit failed, waiting 5 seconds...
Retry submission: Attempt 1 of 3
Server Response:
The buildid for CDash is computed from the site name, the build name, and the build stamp of the submission. You should have a Build.xml file in a Testing/20110311-* directory in your build tree. Open it up and see if any of those fields (near the top) is empty. If so, you need to set BUILDNAME and SITE with -D args when configuring with CMake, or set CTEST_BUILD_NAME and CTEST_SITE in your ctest -S script.
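For illustration (the site and build names here are hypothetical), the cache variables can be set at configure time:

cmake -DSITE=mybuildbox -DBUILDNAME=linux-gcc-debug /path/to/source

or the equivalent variables set in a ctest -S script:

set(CTEST_SITE "mybuildbox")
set(CTEST_BUILD_NAME "linux-gcc-debug")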
If that's not it, then this is a mystery. I've not seen this error occur before...
I'm having the same issue, though Site and Buildname are present in Test.xml and are visible on CDash (4 times). I can see the job count increment when refreshing between retries, so it seems the submission succeeds yet reports a timeout.
Update: this seems to have started when I added the -j(nprocs) switch to the ctest command. Changing CtestSubmitRetryDelay to 20 (was 5) let a server response through, which indicates my CDash version may not be able to handle the multi-proc option; I'll have to look into that for my issue. Perhaps setting CtestSubmitRetryDelay to a larger number will get you a server response, as it did for me. Good luck!
Out of range value for column 'processorclockfrequency'