I am starting to use SystemC, and I am studying the example mentioned in
Using existing unit test frameworks with SystemC
I do not really understand why forking is required, especially when one fork only waits for the other. Is there something in the kernel that needs this kind of multi-process handling (especially for the event handling)? In other words: do I need this type of handling ONLY when I need concurrent testing, or also in the simpler cases?
Another question: in the linked answer, the logo appears at the beginning
SystemC 2.2.0 --- Feb 24 2011 15:01:50
Copyright (c) 1996-2006 by all Contributors
ALL RIGHTS RESERVED
Running main() from gtest_main.cc
while in my case the logo is at the end of the output
Running main() from gtest_main.cc
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[ PASSED ] 0 tests.
Hello World.
SystemC 2.3.1-Accellera --- Mar 28 2017 21:08:36
Copyright (c) 1996-2014 by all Contributors,
ALL RIGHTS RESERVED
Is it just because of the different versions, or is it expected timing behavior?
The way the SystemC kernel is designed in its current form, it does not fully support dynamic creation/removal of simulation modules. Once you have started the SystemC simulation with sc_start() and stopped it with sc_stop(), you need to completely destroy the SystemC simulation context.
Kindly look at the updated answer here for a better solution.
Note: It is advisable to have a separate executable for each simulation context, as the simulator does not guarantee the results of simulations run in this kind of environment; kindly refer to the SystemC LRM for more details.
As for the SystemC logo differences, they are due to different buffering of stdout and stderr on your system. You will see different results for every execution of the binary. (If the order is consistent, then maybe the Google Test framework is handling the stdout and stderr streams internally.)
I would go with using the registry design pattern. I have already posted how you can do that with SystemC and GoogleTest in Using existing unit test frameworks with SystemC.
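To make the fork question above concrete, here is a hedged sketch of per-test process isolation with GoogleTest: each test forks, the child runs a fresh SystemC elaboration and simulation, and the parent only waits for the verdict, so every test starts from a clean simulation context. The helper names (run_in_child, simulate_for_100ns) are invented for the sketch, and it assumes a build that links gtest_main alongside the Accellera SystemC library.

#include <systemc>
#include <gtest/gtest.h>
#include <sys/wait.h>
#include <unistd.h>

// Depending on how your SystemC library is built, it may still expect an
// sc_main symbol even though gtest_main provides main(); a stub satisfies it.
int sc_main(int, char**) { return 0; }

// Run fn in a forked child; the parent waits and reports whether the child
// exited cleanly. The fork gives each test its own, never-started copy of
// the SystemC simulation context, which cannot be reset in-process.
static bool run_in_child(void (*fn)()) {
    pid_t pid = fork();
    if (pid == 0) {        // child: run the simulation, then exit
        fn();
        _exit(0);          // kernel state dies with the child
    }
    int status = 0;        // parent: only waits for the result
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

static void simulate_for_100ns() {
    // Hypothetical: elaborate your design under test here, then run it.
    sc_core::sc_start(100, sc_core::SC_NS);
}

TEST(SystemCTest, SimulationRunsToCompletion) {
    EXPECT_TRUE(run_in_child(simulate_for_100ns));
}

This also shows why one fork "only waits": the parent must outlive the child to collect its exit status, and it must not touch the SystemC kernel itself, or it could not start a fresh simulation for the next test.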
I am doing an academic project about porting AUTOSAR OS to a microcontroller. After reading papers and information about AUTOSAR, Arctic Core and Arctic Studio, I have some questions:
I previously ported FreeRTOS to a microcontroller and it was very easy: I just included some *.h and *.c files of FreeRTOS, and then used the FreeRTOS functions to build my application on the chip. Can I do the same with AUTOSAR? If it is possible, which files should I include in my main.c?
Second question: in FreeRTOS, I only need to use the xTaskCreate() function (a FreeRTOS function) to set a task's priority, and then I call vTaskStartScheduler() to run the tasks in the queue; however, I cannot see these kinds of functions in AUTOSAR OS. Can someone tell me which functions in AUTOSAR have the same functionality as the ones I mentioned?
When I program Texas Instruments chips, there is always a main function which contains the main program that we build for the chip. However, I don't see any main function in the Arctic Core examples. How can the chip run the program without a main function?
Please help me answer these questions!
3. You are not able to see the main function in Arctic Core:
AUTOSAR does not define start-up code. You are expected to write the main function yourself. The kernel in AUTOSAR OS gets initialized from the ECUM module, so if you want to boot your OS, you must have the ECUM module. You should also have the BSWM module to start schedule tables: create a rule in BSWM for RTE start-up and it will start your schedule table.
You have to handcode the start-up code (RAM/register/etc. initialization) and, from that, call the handcoded main function. Call EcuM_Init from the main function; this way your OS will boot.
2. You are not able to locate functions to set task priority and activate tasks:
AUTOSAR does not support dynamic task priorities. You have to set all priorities in the configuration. To run a task you can use ActivateTask(). One quick trick to start a task at startup is to set the parameter OsTaskAutostart for that task; a task for which you have set OsTaskAutostart will be invoked as soon as the kernel is initialized.
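As a hedged illustration of the contrast with FreeRTOS: priorities live entirely in the static OS configuration, and at run time you only activate tasks. The task names below are invented; DeclareTask(), TASK(), ActivateTask() and TerminateTask() are the standard OSEK/AUTOSAR OS API pulled in through the generated Os.h.

#include "Os.h"               /* generated by the OS configuration tool */

DeclareTask(WorkerTask);      /* task IDs come from the configuration   */

TASK(InitTask) {              /* configured with OsTaskAutostart        */
    ActivateTask(WorkerTask); /* activation replaces "create task"      */
    TerminateTask();
}

TASK(WorkerTask) {            /* priority is fixed in the configuration */
    /* cyclic or event-driven work goes here */
    TerminateTask();
}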
Your start-up code will be target-specific.
ECUM does the initialisation part for all the SW modules within the ECU.
Remember to call ECUM from your Main.c.
ECUM does the initialisation of BSWM, the drivers and the SW modules.
Once the RTE is initialised, there is a part called SchM within the RTE which schedules the Mainfunctions of each module.
The Mainfunctions of each SW module are known to the RTE through the BSWMD and SWCD files.
Read the RTE SWS, ECUM SWS and SYSTEMTemplate SWS for more info.
I guess your academic project has already ended; however, porting an AUTOSAR OS to a specific microcontroller is not a suitable scope for an academic project.
Firstly, from your question, I cannot tell whether the OS is ARCCORE or another one. Secondly, from my experience with FreeRTOS, only a limited amount of that knowledge applies to AUTOSAR OS, and creating tasks (2.) is application-level work rather than porting. Thirdly, the majority of AUTOSAR OSes rely on specialised embedded compilers, e.g. GHS or DIAB, which are not commonly available in academia.
I have not ported AUTOSAR OS myself, but I suggest taking a look at a ported version: the architecture and file structure, and then the start-up routines, vector tables, peripheral code, etc. Complexity might be reduced when porting within the same MCU architecture, say Renesas machines or ARM.
To answer your question 3., you will not find the main() within the ARCCORE examples. main() is located in os_init.c and looks like this:
extern void EcuM_Init(void);

int main(void)
{
    EcuM_Init();
}
Then, EcuM_Init() [EcuM.c] calls InitOS();
I am evaluating different multiprocessing libraries for a fault-tolerant application. I basically need any process to be allowed to crash without stopping the whole application.
I can do this using the fork() system call. The limitation here is that processes can be created on the same machine only.
Can I do the same with MPI? If a process created with MPI crashes, can the parent process keep running and eventually create a new process?
Is there any alternative (possibly multiplatform and open source) library to get the same result?
As reported here, MPI 4.0 will have support for fault tolerance.
If you want collectives, you're going to have to wait for MPI-3.something (as High Performance Mark and Hristo Iliev suggest).
If you can live with point-to-point, and you are a patient person willing to raise a bunch of bug reports against your MPI implementation, you can try the following:
disable the default MPI error handler
carefully check every single return code from your MPI programs
keep track in your application of which ranks are up and which are down. Oh, and when they go down they can never come back. But you're unable to use collectives anyway (see my opening statement), so that's not a huge deal, right? (A sketch of this pattern follows the list.)
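A minimal sketch of that pattern, assuming an implementation that honours MPI_ERRORS_RETURN well enough to actually hand failures back to you (historically a big assumption):

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Replace the default MPI_ERRORS_ARE_FATAL handler so a failed call
    // returns an error code instead of aborting the whole job.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload = 42;
        // Check every single return code; the application, not MPI,
        // keeps the books on which ranks are still alive.
        int rc = MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS)
            std::fprintf(stderr, "rank 1 unreachable (rc=%d); marking it down\n", rc);
    } else if (rank == 1) {
        int payload = 0;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}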
Here's an old paper (from back when Bill still worked at Argonne; I think it's from 2003):
http://www.mcs.anl.gov/~lusk/papers/fault-tolerance.pdf. It lays out the kinds of fault-tolerant things one can do in MPI. Perhaps such a "constrained MPI" might still work for your needs.
If you're willing to go for something research quality, there are two implementations of a potential fault tolerance chapter for a future version of MPI (MPI-4?). The proposal is called User Level Failure Mitigation. There's an experimental version in MPICH 3.2a2 and a branch of Open MPI that also provides the interfaces. Both are far from production quality, but you're welcome to try them out. Just know that since this isn't in the MPI Standard, the function prefixes are not MPI_*. For MPICH, they're MPIX_*; for the Open MPI branch, they're OMPI_* (though I believe they'll be changing theirs to MPIX_* soon as well).
As Rob Latham mentioned, there will be lots of work you'll need to do within your app to handle failures, though you don't necessarily have to check all of your return codes. You can/should use MPI error handlers as a callback function to simplify things. There is information and examples in the proposed spec, available along with the Open MPI branch.
I read about just-in-time compilation (JIT) and, as I understand it, there are two approaches for this, an interpreter and a JIT, both of which interpret bytecode at runtime.
Why not just compile all the bytecode to machine code up front, and only then start the process, with no further need for an interpreter?
Another reason for late JIT compiling has to do with optimization: At run-time the VM can detect more/other patterns it may optimize than the compiler could ever do at compile-time. JIT pre-compiling at startup will always have to be static, and the same could have been done by the compiler already, but through analysis of the actual run-time behaviour the VM may have more information on possible optimizations and may therefore produce better optimization results.
For example, the VM can detect that a single piece of code is actually run a million times at run-time and perform appropriate optimizations which the compiler may have no information about, not unlike the branch prediction that's done at runtime in modern CPUs.
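As a toy illustration of that "detect hot code, then optimize it" idea (the counter, threshold and function names here are all invented; a real VM does this on bytecode and emits machine code rather than swapping function pointers):

#include <cstdio>

static long slow_path(long x);
static long fast_path(long x);

// Entry point the toy "VM" dispatches through; it starts on the slow path.
static long (*dispatch)(long) = slow_path;
static long call_count = 0;

static long slow_path(long x) {
    // A real VM would interpret bytecode here while gathering profile data.
    if (++call_count == 10000) {
        // Hot spot detected: a JIT would now emit optimized machine code.
        dispatch = fast_path;
    }
    return x * 2;
}

static long fast_path(long x) {
    return x << 1;  // the "optimized" version of the same computation
}

int main() {
    long sum = 0;
    for (long i = 0; i < 1000000; ++i)
        sum += dispatch(i);  // every call goes through the patchable entry
    std::printf("%ld\n", sum);
    return 0;
}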
More information can be found in the Wikipedia article on "Adaptive optimization".
Simple: because it takes time to precompile everything to machine code, and users don't want to wait for the application to start. Remember, the precompilation would have to make a lot of optimizations, which takes time.
The server version of the JVM is more aggressive in precompiling and optimizing code up front, because server-side code tends to be executed more often and for a longer period before the process is shut down.
However, a solution (for .NET) is an application called NGen, which performs the precompilation up front so that it isn't needed after that point. You only have to run it once.
Not all VMs include an interpreter. For instance, Chrome and the CLR (.NET) always compile to machine code before running. However, they have multiple levels of optimization to reduce the startup time.
I found a link showing how runtime recompilation can optimize performance and save extra CPU cycles, for example (a small before/after sketch follows this list):
Inlining expansion: to decrease the cost of procedure calls.
Removing redundant loads: when two pieces of compiled code contain duplicate loads, the duplicates can be removed and further optimised by recompilation at run time.
Copy propagation
Eliminating dead code
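A hedged before/after sketch of what copy propagation plus dead-code elimination leave behind (a source-level illustration only; a JIT performs these passes on its intermediate representation):

// Before optimization: naive code as a front end might produce it.
int before(int a) {
    int b = a;        // copy: b is just an alias of a
    int c = b + 1;    // becomes a + 1 after copy propagation
    int unused = 42;  // never read: removed by dead-code elimination
    return c;
}

// After optimization: what effectively remains.
int after(int a) {
    return a + 1;
}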
Here is another link with the same explanation as the one given above.
Is there a possibility to show what's going on inside a specified process in Linux?
For example, I run an SQL query -> select evil_function();
and notice that the process on Linux uses all the CPU.
So is there something with which I can see what's going on inside this process?
What I want is to see what queries are running under this process.
Thanks!
strace will tell you what system calls the process is making (for example, strace -p <pid> attaches to an already running process).
To see which called routines are taking the most CPU, you need to run a profiling tool, and make sure the executable of the process you are inspecting is compiled correctly (sometimes it needs to be instrumented during compilation for profiling; sometimes it just needs to be compiled with debug symbols, or not stripped of them after compilation).
You might want to look at oprofile, valgrind and gprof for starters on free tools; there are also commercial products available.
Here are a few links:
http://www.pixelbeat.org/programming/profiling/
http://en.wikipedia.org/wiki/List_of_performance_analysis_tools
You are mixing a whole bunch of things.
If you are talking about MySQL, do:
show processlist;
For info specifically about Linux processes, you can strace the process to get a list of the system functions it calls. Unless you are experienced with Linux, this will be useless to you.
If the process is paused, you can find out what function it is stopped in, but that's probably not what you want, since you say the process is running.
There are also various tools that can give you info on what parts of the disk the process is reading, and how much memory it's allocating.
And finally, you can use gdb to break into the process and single-step your way through it to see exactly what it's doing. This will also likely be useless to you, since an SQL server does a LOT of things, far too many to understand by this method.
I've been experimenting with creating an interpreter for Brainfuck, and while quite simple to make and get up and running, part of me wants to be able to run tests against it. I can't seem to fathom how many tests one might have to write to test all the possible instruction combinations to ensure that the implementation is proper.
Obviously, with Brainfuck, the instruction set is small, but I can't help but think that as more instructions are added, your test code would grow exponentially. More so than your typical tests at any rate.
Now, I'm about as newbie as you can get in terms of writing compilers and interpreters, so my assumptions could very well be way off base.
Basically, where do you even begin with testing on something like this?
Testing a compiler is a little different from testing some other kinds of apps, because it's OK for the compiler to produce different assembly-code versions of a program as long as they all do the right thing. However, if you're just testing an interpreter, it's pretty much the same as any other text-based application. Here is a Unix-centric view:
You will want to build up a regression test suite. Each test should have
Source code you will interpret, say test001.bf
Standard input to the program you will interpret, say test001.0
What you expect the interpreter to produce on standard output, say test001.1
What you expect the interpreter to produce on standard error, say test001.2 (you care about standard error because you want to test your interpreter's error messages)
You will need a "run test" script that does something like the following
function fail {
    echo "Unexpected differences on $1:"
    diff "$2" "$3"
    exit 1
}

for testname
do
    tmp1=$(mktemp)   # tempfile is deprecated; mktemp does the same job
    tmp2=$(mktemp)
    brainfuck "$testname.bf" < "$testname.0" > "$tmp1" 2> "$tmp2"
    # cmp must be run directly; wrapping it in [ ... ] would test its
    # arguments as strings instead of running the comparison.
    cmp -s "$testname.1" "$tmp1" || fail "stdout" "$testname.1" "$tmp1"
    cmp -s "$testname.2" "$tmp2" || fail "stderr" "$testname.2" "$tmp2"
done
You will find it helpful to have a "create test" script that does something like
brainfuck $testname.bf < $testname.0 > $testname.1 2> $testname.2
You run this only when you're totally confident that the interpreter works for that case.
You keep your test suite under source control.
It's convenient to embellish your test script so you can leave out files that are expected to be empty.
Any time anything changes, you re-run all the tests. You probably also re-run them all nightly via a cron job.
Finally, you want to add enough tests to get good test coverage of your compiler's source code. The quality of coverage tools varies widely, but GNU Gcov is an adequate coverage tool.
Good luck with your interpreter! If you want to see a lovingly crafted but not very well documented testing infrastructure, go look at the test2 directory for the Quick C-- compiler.
I don't think there's anything 'special' about testing a compiler; in a sense it's almost easier than testing some programs, since a compiler has such a basic high-level summary - you hand in source, it gives you back (possibly) compiled code and (possibly) a set of diagnostic messages.
Like any complex software entity, there will be many code paths, but since it's all very data-oriented (text in, text and bytes out) it's straightforward to author tests.
I’ve written an article on compiler testing, the original conclusion of which (slightly toned down for publication) was: It’s morally wrong to reinvent the wheel. Unless you already know all about the preexisting solutions and have a very good reason for ignoring them, you should start by looking at the tools that already exist. The easiest place to start is Gnu C Torture, but bear in mind that it’s based on Deja Gnu, which has, shall we say, issues. (It took me six attempts even to get the maintainer to allow a critical bug report about the Hello World example onto the mailing list.)
I’ll immodestly suggest that you look at the following as a starting place for tools to investigate:
Software: Practice and Experience, April 2007. (Payware, not available to the general public; a free preprint is at http://pobox.com/~flash/Practical_Testing_of_C99.pdf.)
http://en.wikipedia.org/wiki/Compiler_correctness#Testing (Largely written by me.)
Compiler testing bibliography (Please let me know of any updates I’ve missed.)
In the case of Brainfuck, I think testing should be done with Brainfuck scripts. I would test the following, though:
1: Are all the cells initialized to 0?
2: What happens when you decrement the data pointer while it's pointing to the first cell? Does it wrap? Does it point to invalid memory?
3: What happens when you increment the data pointer while it's pointing at the last cell? Does it wrap? Does it point to invalid memory?
4: Does output function correctly?
5: Does input function correctly?
6: Does the [ ] construct work correctly?
7: What happens when you increment a byte more than 255 times? Does it wrap to 0 properly, or is it incorrectly treated as an integer or other wider value?
More tests are possible too, but this is probably where I'd start. I wrote a BF compiler a few years ago, and it had a few extra tests. In particular I tested the [ ] construct heavily, by putting a lot of code inside the block, since an early version of my code generator had issues there (on x86, using a jxx instruction, I had issues when the block produced more than 128 bytes or so of code, resulting in invalid x86 asm).
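As a hedged sketch of how the checks above could look as unit tests, assuming the interpreter is wrapped in a hypothetical run_bf(program, input) -> output helper with wrapping byte cells:

#include <gtest/gtest.h>
#include <string>

// Hypothetical entry point into the interpreter under test.
std::string run_bf(const std::string& program, const std::string& input);

TEST(Brainfuck, CellsStartAtZero) {
    // '.' on a fresh cell should emit the byte 0 (check 1).
    EXPECT_EQ(run_bf(".", ""), std::string(1, '\0'));
}

TEST(Brainfuck, IncrementWrapsAt256) {
    // 256 '+' then '.' prints byte 0 if cells are wrapping bytes (check 7).
    EXPECT_EQ(run_bf(std::string(256, '+') + ".", ""), std::string(1, '\0'));
}

TEST(Brainfuck, LoopSkipsWhenCellIsZero) {
    // '[' on a zero cell must jump past the matching ']' (check 6).
    EXPECT_EQ(run_bf("[+.]", ""), "");
}

TEST(Brainfuck, EchoesOneInputByte) {
    // ",." copies one byte from input to output (checks 4 and 5).
    EXPECT_EQ(run_bf(",.", "A"), "A");
}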
You can test with some already written apps.
The secret is to:
Separate the concerns
Observe the law of Demeter
Inject your dependencies
Well, software that is hard to test is a sign that the developer wrote it like it's 1985. Sorry to say that, but by applying the three principles I presented here, even line-numbered BASIC would be unit-testable (it IS possible to inject dependencies into BASIC, because you can do "goto variable").