Let me say up front that I'm a beginner; here is my problem.
My model works perfectly when I run it as a normal simulation. Now I'm trying to optimize some parameters using the optimization experiment. I've followed all the steps of the official tutorial, but it doesn't work: I get "Exception during discrete event execution:
Truncated class file". The strange thing is that, looking at the console output for the error, I see that some lines refer to an old version of my model, for example:
java.lang.ClassFormatError: Truncated class file
at coffe_maker.Main._m1_1_delayTime_xjal(Main.java:14070)
The current model's name is coffee_maker_v2_6, so I don't understand why I get this kind of error. Do you know if this is normal? What am I doing wrong?
The most likely cause is that you have Java code left in an 'unused' configuration of a Delay block's "Delay time" expression (e.g., it now has a static value but you had Java code in the now-switched-out dynamic value).
Unfortunately, AnyLogic sometimes still includes the switched-out code in the compiled class, and this can cause strange runtime errors such as this one.
If this does look to be the case, temporarily switch back to the offending configuration, delete the leftover code, and then switch back to the correct one.
I have resolved it: the problem was that, in every Delay block of my model, the delay time was linked to a database reference (type Code). I am now typing the probability distributions directly into the Delay blocks, and the optimization works.
I read about just-in-time (JIT) compilation, and as I understand it, there are two approaches: an interpreter and a JIT compiler, both of which process the bytecode at runtime.
Why not simply translate all the bytecode to machine code ahead of time, and only then start the process, with no more need for an interpreter?
Another reason for late JIT compiling has to do with optimization: at run time the VM can detect more (or different) patterns it can optimize than a compiler could ever find at compile time. JIT pre-compiling at startup would always have to be static, and the compiler could have done the same work already; by analysing the actual run-time behaviour, however, the VM has more information about possible optimizations and can therefore produce better optimization results.
For example, the VM can detect that a single piece of code is actually run a million times at run-time and perform appropriate optimizations which the compiler may have no information about, not unlike the branch prediction that's done at runtime in modern CPUs.
More information can be found in the Wikipedia article on "Adaptive optimization".
Simple: because it takes time to precompile everything to machine code, and users don't want to wait for the application to start. Remember, the precompilation would have to make a lot of optimizations, which takes time.
The server version of the JVM is more aggressive about precompiling and optimizing code up front, because server-side code tends to be executed more often and for a longer period before the process is shut down.
However, a solution (for .NET) is a tool called NGen, which performs the precompilation up front so that it isn't needed after that point. You only have to run it once.
Not all VMs include an interpreter. For instance, Chrome's V8 engine and the CLR (.NET) always compile to machine code before running. However, they use multiple levels of optimization to reduce startup time.
I found a link showing how runtime recompilation can optimize performance and save extra CPU cycles:
Inline expansion: to decrease the cost of procedure calls.
Removing redundant loads: when compiled code loads the same value more than once, the duplicate loads can be removed, and run-time recompilation can optimise this further.
Copy propagation: replacing uses of a variable that is a plain copy with the original value.
Eliminating dead code: removing code whose results are never used.
Here is another link with the same explanation as the one above.
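None of these transformations are JIT-specific; a static compiler performs them too, and a JIT simply gets to apply them with run-time knowledge. As a rough, language-independent illustration (a hypothetical fragment, sketched here in Fortran), consider what an optimizer can do with this:

program opt_demo
  implicit none
  integer :: i, a, b, total
  integer :: x(1000)
  x = 1
  total = 0
  do i = 1, 1000
     a = x(i)            ! redundant load: x(i) is read again below, so the
                         ! optimizer can keep the value in a register
     b = a               ! copy propagation: later uses of b become uses of a
     total = total + square(b) + x(i)
     a = 0               ! dead code: a is overwritten before it is read again
  end do
  print *, total
contains
  integer function square(n)   ! small enough to be inlined at the call site,
    integer, intent(in) :: n   ! removing the cost of the procedure call
    square = n * n
  end function square
end program opt_demo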
If I compile with -O0 in ifort, the program runs correctly. But as soon as I turn on optimization (-O, -O3, -fast), I get a SIGSEGV segmentation fault.
The error occurs in a subroutine named maketable(). Here is what I have observed:
(1) I call the fftw library in this subroutine. If I comment out the fftw calls, the program is fine. But I don't think fftw is at fault, because I also use it in other places in this code without problems.
(2) fftw is called in a loop, and the loop runs several times before the program crashes. The segfault does not happen the first time the loop is entered.
(3) I considered a stack overflow, but I no longer think that is the cause. I have an executable compiled by others a long time ago, and it runs on my computer; I think that suggests the problem is not a system stack overflow.
The version of ifort is 10.0 and of fftw is 2.1.5. The CPU is an Intel Xeon 5130. Thanks a lot.
There are two common causes of segmentation faults in Fortran programs:
Attempting to access an element outside the bounds of an array.
Mismatching actual and dummy arguments in a procedure call.
Both are relatively easy to find:
Your compiler will have an option to generate code which performs array bounds checking at run time (for ifort this is -check bounds). Check your compiler documentation, rebuild your code and rerun it. If this is the cause of the problem, you will get an error message identifying where your code goes awry.
Write explicit interfaces for any subroutines and functions in your program, or put them in modules so that the compiler generates the interfaces for you, or use a compiler option (see the documentation) to check that actual and dummy argument types match at compile time.
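To make both failure modes concrete, here is a minimal hypothetical sketch (the program and subroutine names are made up, and it is not your code) that exhibits each; the options named in the comments are ifort's, and other compilers have equivalents:

! (1) out-of-bounds access: caught at run time with  ifort -check bounds
! (2) actual/dummy argument mismatch: caught at compile time with an
!     interface-checking option such as -warn interfaces (names vary by version)
program segv_demo
  implicit none
  real :: table(10)
  integer :: i
  do i = 1, 11            ! (1) writes one element past the end of table
     table(i) = 0.0
  end do
  call fill(table, 10)    ! (2) the caller passes a real array...
  print *, table(1)
end program segv_demo

subroutine fill(arr, n)
  implicit none
  integer :: n
  double precision :: arr(n)   ! ...but the dummy argument is double precision
  arr(1) = 1.0d0
end subroutine fill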
It's not unusual for such errors to (seem to) arise only when optimisation is turned up high.
EDIT
Note that I'm not suggesting that optimisation causes the error you observe, but that it causes the error to affect the execution of your program and become evident.
It's not unknown for incorrect programs to run many times apparently without fault only for, say, recompilation with a new compiler version to create an executable which crashes every time.
Your wish to switch off optimisation only for the subroutine where the segmentation fault seems to arise is, I suggest, completely wrong-headed. I expect my programs to execute correctly at any level of optimisation (save for clear evidence of a compiler bug; such things are not unknown). I think that by turning off optimisation you are sweeping a real problem with your program under the carpet, as it were.
This code hummed along merrily for a long time, until we recently discovered an edge case where it fails silently, with no errors returned.
The failure is apparently pretty subtle. We can get the code to run uneventfully in the edge case by:
1) compiling with any set of options that includes -traceback or debugging (-g or -gopt);
2) compiling with -fast -Mnounroll;
3) compiling with an optimization level below 2;
4) adding WRITE statements to the code to determine the location of the failure.
In other words, most of the tools useful for debugging the failure actually make the failure disappear.
I am probing for any information on failures related to loop unrolling or other optimizations, and on how they were resolved.
Thank you all in advance.
I'm not familiar with pgf (heck, it's been 10 years since I used any Fortran), but here are some general suggestions for tracking down (potential) compiler bugs:
Simplify to a reproducible case. That is, try to reproduce the problem with a similar-looking piece of code that has all the superfluous details removed. This is helpful because (a) you'll be less hesitant to release the code publicly, and (b) anyone who attempts to diagnose the problem will have less surrounding material to wade through.
Talk to the experts: if you have a support contract for pgf, use it! There's a support request form on their site. If not, there's a User Forums section where you might be able to post your information; someone else may have a better workaround, or an employee there may be able to log your problem.
Double-check your code. Is it possible that you're relying on some sort of unspecified behavior? That's the sort of thing that would cause your program to change behavior when the optimization level changes. I'm not saying compiler bugs are impossible, but it could be a latent bug in your code too.
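A classic instance of that last point, as a purely hypothetical sketch: a variable that is read before it is ever assigned. Fortran leaves such a variable undefined, so at -O0 it may happen to land in a zeroed stack slot and the program "works", while at higher optimization levels it may live in a register full of garbage, changing the program's behavior even though the compiler is blameless:

program uninit_demo
  implicit none
  real :: data(100)
  real :: scale          ! never assigned: its value is undefined
  integer :: i
  do i = 1, 100
     data(i) = real(i) * scale   ! result depends on whatever 'scale' holds,
  end do                         ! which can differ between -O0 and -O2
  print *, data(1), data(100)
end program uninit_demo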
Hope that's helpful.
The Personal Software Process (PSP) is designed to allow software engineers to understand and improve their performance. The PSP uses scripts to guide a practitioner through the process. Each script defines the purpose, the entry criteria, the steps to perform, and the exit criteria. PSP0 is designed to be a framework that allows for starting a personal process.
One of the scripts used in PSP0 is the Development Script, which guides development. This script is used once you have a requirements statement and a project plan summary, the time and defect recording logs have been created, and a defect type standard has been established. The activities of this script are design, code, compile, and test. The script is exited when you have a thoroughly tested application and complete time and defect logs.
In the Design phase, you review the requirements and produce a design, recording any requirements defects in the log and tracking your time. In the Code phase, you implement the design, again recording defects and time. In the Compile phase, you compile, fix any compile-time errors, and repeat until the program compiles, recording any defects and time. Finally, in the Test phase, you test until all tests run without error and all defects are fixed, while recording time and defects.
My concerns are with how to manage the code, compile, and test phases when using modern programming languages (especially interpreted languages like Python, Perl, and Ruby) and IDEs.
My questions:
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
It seems like the PSP, at least the PSP0 baseline process, is designed to be used with a compiled language and small applications written using a text editor (and not an IDE). In addition to my questions, I would appreciate the advice and commentary of anyone who is using or has used the PSP.
In general, as the PSP is a personal improvement process, the answers to your actual questions do not matter much, as long as you pick one answer and apply it consistently. That way you will be able to measure the time you take in each defined phase, which is what the PSP is after. If your team is collectively using the PSP, then you should all agree on which scripts to use and on how to answer these questions.
My takes on the actual questions are (not that they are relevant):
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
To me, test time is the time when the actual tests run, and nothing else. In this case I'd record both the errors and the execution time as 'compile' time: time spent generating and running the code.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
Syntax errors are code defects.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
If the IDE is part of your toolchain, then its spotting errors is just like you spotting them yourself, and thus they are code errors. If you don't use the IDE regularly, then I'd count them as compile errors.
I've used PSP for years. As others have said, it is a personal process, and you will need to evolve PSP0 to improve your development process. Nonetheless, our team (all PSP-trained) grappled with these issues on several fronts. Let me give you an idea of the components involved, and then I'll say how we managed.
We had a PowerBuilder "tier"; the PowerBuilder IDE prevents you from even saving your code until it compiles and links correctly. Part of the system used JSP, though the quantity of Java was minor and boilerplate, so in practice we didn't count it at all. A large portion of the system was in JS/JavaScript; this was done before the wonderful Ajax libraries came along, and it represented a large portion of the work. The other large portion was Oracle PL/SQL, which has a somewhat more traditional compile phase.
When working in PowerBuilder, the compile (and link) phase started when the developer saved the object. If the save succeeded, we recorded a compile time of 0. Otherwise, we recorded the time it took for us to fix the error(s) that caused the compile-time defect. Most often, these were defects injected in coding, removed in compile phase.
The forced compile/link behaviour of the PowerBuilder IDE meant we had to move the code review phase to after compiling. Initially, this caused us some distress, because we weren't sure how, or if, such a change would affect the meaning of the data. In practice, it became a non-issue. In fact, many of us also moved our Oracle PL/SQL code reviews to after the compile phase, because we found that when reviewing the code we would often gloss over some syntax errors that the compiler would report.
There is nothing wrong with a compile time of 0, any more than there is anything wrong with a test time of 0 (meaning your unit test passed without detecting errors, and ran significantly quicker than your unit of measure). If those times are zero, then you don't remove any defects in those phases, and you won't encounter a div/0 problem. You could also record a nominal minimum of 1 minute, if that makes you more comfortable, or if your measures require a non-zero value.
Your second question is independent of the development environment. When you encounter a defect, you record which phase you injected it in (typically design or code) and the phase you removed it (typically design/code review, compile or test). That gives you the measure called "leverage", which indicates the relative effectiveness of removing a defect in a particular phase (and supports the "common knowledge" that removing defects sooner is more effective than removing them later in the process). The phase the defect was injected in is its type, i.e., a design or coding defect. The phase the defect is removed in doesn't affect its type.
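To put hypothetical numbers on that: if a code review removes 8 defects in 2 hours (4 defects per hour) and testing removes 3 defects in 3 hours (1 defect per hour), then the leverage of code review relative to test is 4/1 = 4, i.e. review was four times as effective per hour at removing defects.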
Similarly, with JS/JavaScript, the compile time is effectively immeasurable. We didn't record any times for compile phase, but then again, we didn't remove any defects in that phase. The bulk of the JS/JavaScript defects were injected in design/coding and removed in design review, code review, or test.
It sounds, basically, like your formal process doesn't match your practice process. Step back, re-evaluate what you're doing and whether you should choose a different formal approach (if in fact you need a formal approach to begin with).
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
The errors should be categorized according to when they were created, not when you found them.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
Same as above: always go back to the earliest point in time. If the syntax error was introduced while coding, then it belongs to the coding phase; if it was introduced while fixing a bug, then it belongs to the defect-fix work.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
I believe that should not be identified. It's just time spent on writing the code.
As a side note, I've used the Process Dashboard tool to track PSP data and found it quite nice. It's free and Java-based, so it should run anywhere. You can get it here:
http://processdash.sourceforge.net/
After reading the replies by Mike Burton, Vinko Vrsalovic, and JRL, and re-reading the appropriate chapters in PSP: A Self-Improvement Process for Software Engineers, I've come up with my own takes on these problems. Better still, I found a section in the book that I had originally missed because two pages were stuck together.
In interpreted languages, there is no compile time. However, there might be problems in execution. Is executing the script, outside of the unit (and other) tests, considered "compile" or "test" time? Should errors with execution be considered "compile" or "test" errors when tracking defects?
The book says that "if you are using a development environment that does not compile, then you should merely skip the compile step." However, it also says that if you have a build step, "you can record the build time and any build errors under the compile phase".
This means that for interpreted languages, you will either remove the compile phase from your tracking or replace compilation with your build scripts. Because PSP0 is generally used with small applications (similar to what you would expect in a university lab), I would expect that you would not have a build process and would simply omit the step.
If a test case encounters a syntax error, is that considered a code defect, a compile defect, or a test defect? The test actually found the error, but it is a code problem.
I would record errors where they are located.
For example, if a test case has a defect, that would be a test defect. If the test ran, and an error was found in the application being tested, that would be a code or design defect, depending on where the problem actually originated.
If an IDE identifies an error that would prevent compilation before actually compiling, should that be identified? If so, should it be identified and tracked as a compile error or a code error?
If the IDE identifies a syntax error, it is the same as you spotting the error yourself before execution. If you use an IDE properly, there are few excuses for letting through defects that would affect execution (that is, defects that cause errors during execution other than logic/implementation errors).