I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but it is only in SystemVerilog and won't work in my Verilog code.
Does anyone know of an alternative?
I also wanted to ask, as a side note: I'm having some trouble with my test bench, in the sense that when I update a value in the file, that change is not taken into account when I simulate. I've been told I might be using an old test bench, but the one I keep simulating is the only one available in this project.
EDIT:
To give you an idea of the problem: my code has a "start" signal, and when it is set to 1 the operation starts; otherwise it stays idle. I began writing the test bench with start=0, tested and simulated it, then edited the test bench to set start to 1. But when I simulate, the start signal remains 0 in the waveform. I checked whether I was using another test bench, but this is the only test bench in the project.
Given that I was on a deadline, I reworked the code so that it would adapt to the "frozen" test bench. I am now getting all the results I want, but I wanted to test some other features of my code, so I created a new project and copy-pasted the code into new files (including the same test bench). But when I ran a simulation, the waveform displayed wrong results (even though I was using exactly the same code in all modules and the test bench). Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get in ModelSim's student edition. In short, you have to write C code and dynamically link it to the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize, which is what you want, vpiLeftRange, vpiRightRange, and so on.
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
Note on Icarus: the developers have pushed lots of SystemVerilog stuff back into the main language. If you take advantage of this, you may find that your code is not portable.
Second part of your question: you need to be specific on what your problem actually is.
I often work on very small pieces of code, at most about 100 lines, especially when I am learning something new and just playing with the code, or when I am debugging.
Because I frequently change code and want to see how that changes the contents of my variables and output, it is tedious to either
1) hit the debug button, wait for the debugger to start (in my case I use PyCharm as an IDE) and then inspect the output
or
2) insert some prints for the variables that I want to observe and compile the code (slightly faster than starting the debugger).
To eliminate this time-consuming workflow, where I constantly hit the compile or debug button every few seconds: is there an IDE where I can set a watch on a few variables and then, each time I change a single character in my source code (or, alternatively, every half second), the IDE automatically compiles everything and I see the new values of my variables?
(Of course, while I am in the middle of changing the code, the compilation will give errors, but that is OK. This feature would be a big time saver. Maybe PyCharm has it already implemented? If not, ideally I would hope for a language-agnostic IDE, similar to PyCharm, where variants for Java etc. also exist. If not, since I code in Python, a Python IDE would also be great.)
This might not be exactly what you are looking for but PyCharm (and IntelliJ and probably others) can run tests automatically when code changes.
In the PyCharm Run toolbar look for "Toggle auto-test" button.
For example, in PyCharm you can create test cases that just run the code you're interested in and print the variables you need.
Then create a run configuration that runs only those tests and set it to run automatically.
For more details see PyCharm documentation on rerunning tests.
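For illustration only (compute_stats and its values are made up), such a test can be as small as the snippet below; hooked up to an auto-rerun configuration, it re-executes and reprints the values on every code change.
import unittest

def compute_stats(values):
    # Stand-in for whatever code you are currently experimenting with.
    return {"n": len(values), "mean": sum(values) / len(values)}

class WatchMyVariables(unittest.TestCase):
    def test_show_values(self):
        stats = compute_stats([1, 2, 3, 4])
        print(stats)                                # printed in the run window on every auto-rerun
        self.assertAlmostEqual(stats["mean"], 2.5)  # optional sanity check

if __name__ == "__main__":
    unittest.main()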
The Scala plugin for IntelliJ has exactly what you need in the form of "worksheets", where every expression is automatically re-evaluated whenever it, or anything it references, is changed.
Since (based on your usage of PyCharm) I assume you're using Python primarily, I think a Jupyter notebook is your best bet. Jupyter is language-agnostic but began as Python-specific (it was called IPython Notebook for this reason).
I have not tried it, but this guide purports to show how to get Jupyter to work with PyCharm.
EDIT: Here is another possibility called vim worksheet; I haven't tried it, but it purports to do the same thing as Scala worksheets, but in vim, and for a number of languages, including Python.
The Spyder Python IDE (it comes with Anaconda) has this feature. When you hit Run, you can see all of the variables at the top right of the screen and click on them to see their values (this is very helpful with NumPy arrays too!).
If your interest is in the actual workflow improvement:
I used to program like you, watching what my variables changed to and designing or debugging my code based on those changes. However, it is far too inefficient and costly to set up which variables to watch over and over again, and besides, when something breaks you have to go through the whole debugging process again.
I changed my design process to improve my workflow and adopted Test-Driven Development (TDD). You can look at tools for your specific implementation or IDE, but the principles and workflow stay with you. With TDD you stop watching how the variables changed and instead focus on what the functions should do, which means faster iteration (with real-time testing tools), easier debugging, and far better, safer refactoring.
My favorite tool for this is Cucumber, an IDE- and language-agnostic tool which helps you test your code scenarios and document your features at the same time.
Hope it helps. I know it's a very opinionated answer, but it's honest advice for improving one's workflow.
You should try Thonny. It is developed by the Institute of Computer Science at the University of Tartu.
The four features which might be of help to you are below (verbatim from the website):
No-hassle variables.
Once you're done with hello-worlds, select View → Variables and see how your programs and shell commands affect Python variables.
Simple debugger.
Just press Ctrl+F5 instead of F5 and you can run your programs step-by-step, no breakpoints needed. Press F6 for a big step and F7 for a small step. Steps follow program structure, not just code lines.
Step through expression evaluation. If you use small steps, then you can even see how Python evaluates your expressions. You can think of this light-blue box as a piece of paper where Python replaces subexpressions with their values, piece-by-piece.
Faithful representation of function calls.
Stepping into a function call opens a new window with separate local variables table and code pointer. Good understanding of how function calls work is especially important for understanding recursion.
My LabVIEW program works like a charm until I look at the block diagram. No changes are made and I do not save; just Ctrl+E (show the block diagram) and then Ctrl+R (run).
Now it does not work properly. Only a restart of LabVIEW fixes the problem.
My program controls two scanner arrays for laser cutting simultaneously. To force parallel operation, I use the error handler and loops that wait for a signal from the scanner. But suddenly some loops run more often than they should.
What happens in LabVIEW when I open the block diagram that could mess with my code?
Edit:
It's hard to describe what is happening without violating my non-disclosure agreement.
I'm controlling two independent mirror arrays for laser cutting. While one is running a cutting job, the other is supposed to run the other jobs, just very fast. When the first is finished, they meet at the same position and run the same geometry at the same slow speed. The jobs are provided as *.XML files and stored as .NET objects. The device only runs the most recent job and overwrites it when it gets a new one.
I can check whether a job is still running; while this is true, I run a while loop for the other jobs. Now this loop runs a few times too often and even ignores Wait blocks to a degree. It also skips the part where it reads the XML job file, changes the speed back to fast, and saves it. It only runs fast once.
@Joe: No, it does not. It only runs well once; afterwards it does not.
Youtube links
The way it is supposed to move
The wrong way
There is exactly one thing I can think of that changes solely by opening the block diagram.
When the block diagram opens, any commented-out or unreachable-code-compiler-eliminated sections of code will load their subVIs. If one of those commented-out sections of code were to somehow interfere with your running code, you might have an issue.
There are only two ways I know of for that to interfere... both of them are fairly improbable.
a) You have some sort of "check for all VIs in memory" or "check for all types in memory" that you're using as a plug-in system. When the commented-out sections load, that would change the VIs in memory. Such systems are not uncommon when parsing XML, so maybe.
b) You are using Run VI method for some dynamically invoked VI to execute as a top-level VI, but by loading the diagram, it discovers that it is a subVI of your current program. A VI cannot simultaneously be top-level and a subVI, so the call to Run VI returns an error.
That's it. I can't think of anything else. Both ideas seem unlikely, but given your claim and a lack of a block diagram, I figured I'd post it as a hypothesis.
In the improbable case that someone has a similar problem: the problem was an XML file that was read at run time. Sometimes multiple instances tried to access it, and this produced the error.
Quick point to check: are Debug and "retain data in wires" disabled? It may not change the computations, but it can certainly change the timing of very tight loops, and that was one of the unexpected program behaviors the OP was referring to.
I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with the wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; once I fix the issue, I immediately get a notification that the test fails. If I disable it, I won't see it failing, and it will probably stay disabled until someone rediscovers the test.
Are there any other ways doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming that you use a clear variable name / method name / comment to make it obvious that the test is a placeholder and not the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date that is a few weeks/months out from now that I expect to be able to get back to it or have it fixed by. If I end up having to push the date out 2 or 3 times I end up deleting the test because it must not be that important.
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests are not only there to test but also to document. So write a test that documents how your program actually works; if necessary, add a comment to the test. And don't write tests that are ignored - it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group so that you can instantly exclude them when needed.
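The answer doesn't name a framework; purely as an illustration, pytest markers can serve as such groups (the marker name and the buggy function below are made up, and the marker should be registered in pytest.ini to silence warnings).
import pytest

def buggy_add(a, b):
    return a - b                  # stand-in for code with a known, unfixed bug

@pytest.mark.known_bug            # hypothetical group/marker name
def test_add_two_numbers():
    assert buggy_add(2, 2) == 4   # the correct expectation, currently failing

# Exclude the group while the bug is open:  pytest -m "not known_bug"
# Run only the known-bug tests:             pytest -m known_bug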
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, etc. Tests are not meant to track and remind you about issues; that's the job of the issue tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
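The same trick exists outside JUnit as well. Purely as an illustration (not part of the answer above), Python's unittest has an expectedFailure decorator with similar behavior: while the bug exists the test is reported as an expected failure, and once the code is fixed it shows up as an unexpected success, reminding the developer to remove the decorator.
import unittest

def buggy_upper(s):
    return s                      # stand-in for code with the known, unfixed bug

class KnownBugTest(unittest.TestCase):
    @unittest.expectedFailure     # remove this once the bug is fixed
    def test_upper(self):
        self.assertEqual(buggy_upper("abc"), "ABC")   # the correct expectation

if __name__ == "__main__":
    unittest.main()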
The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, such as classes and their variables. Is there a straightforward way to do this? Can it be done by scripting gdb, or does it require modifying gdb?
OK, the reason for this question is an idea for a debugging tool. What it does is this: given two different inputs to a program, one causing an incorrect output and the other a correct one, it tells you which parts of the control flow differ between them.
So what I think is needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A crude version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
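As a rough sketch of that idea (the script and log file names here are mine, not an existing tool), a GDB Python script can single-step the program and write every function and source line it visits to a log; two such logs from the "good" and "bad" inputs can then be fed to diff.
# trace.py -- run with:  gdb -q -x trace.py ./your_program
import gdb

gdb.execute("set pagination off")
gdb.execute("start")                      # run to main()

with open("trace.log", "w") as log:
    while True:
        try:
            frame = gdb.selected_frame()  # raises gdb.error once the program has exited
            sal = frame.find_sal()
            source = sal.symtab.filename if sal.symtab else "??"
            log.write("%s %s:%d\n" % (frame.name(), source, sal.line))
            gdb.execute("step", to_string=True)
        except gdb.error:
            break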
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help: "help tracepoint". You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
How to print the next N executed lines automatically in GDB?
I've been experimenting with creating an interpreter for Brainfuck, and while it was quite simple to make and get up and running, part of me wants to be able to run tests against it. I can't seem to fathom how many tests one might have to write to cover all the possible instruction combinations and ensure that the implementation is correct.
Obviously, with Brainfuck, the instruction set is small, but I can't help but think that as more instructions are added, your test code would grow exponentially. More so than your typical tests at any rate.
Now, I'm about as newbie as you can get in terms of writing compilers and interpreters, so my assumptions could very well be way off base.
Basically, where do you even begin with testing on something like this?
Testing a compiler is a little different from testing some other kinds of apps, because it's OK for the compiler to produce different assembly-code versions of a program as long as they all do the right thing. However, if you're just testing an interpreter, it's pretty much the same as any other text-based application. Here is a Unix-centric view:
You will want to build up a regression test suite. Each test should have
Source code you will interpret, say test001.bf
Standard input to the program you will interpret, say test001.0
What you expect the interpreter to produce on standard output, say test001.1
What you expect the interpreter to produce on standard error, say test001.2 (you care about standard error because you want to test your interpreter's error messages)
You will need a "run test" script that does something like the following
function fail {
  echo "Unexpected differences on $1:"
  diff $2 $3
  exit 1
}

for testname
do
  tmp1=$(mktemp)    # scratch files capturing the interpreter's stdout and stderr
  tmp2=$(mktemp)
  brainfuck $testname.bf < $testname.0 > $tmp1 2> $tmp2
  cmp -s $testname.1 $tmp1 || fail "stdout" $testname.1 $tmp1
  cmp -s $testname.2 $tmp2 || fail "stderr" $testname.2 $tmp2
done
You will find it helpful to have a "create test" script that does something like
brainfuck $testname.bf < $testname.0 > $testname.1 2> $testname.2
You run this only when you're totally confident that the interpreter works for that case.
You keep your test suite under source control.
It's convenient to embellish your test script so you can leave out files that are expected to be empty.
Any time anything changes, you re-run all the tests. You probably also re-run them all nightly via a cron job.
Finally, you want to add enough tests to get good test coverage of your compiler's source code. The quality of coverage tools varies widely, but GNU Gcov is an adequate coverage tool.
Good luck with your interpreter! If you want to see a lovingly crafted but not very well documented testing infrastructure, go look at the test2 directory for the Quick C-- compiler.
I don't think there's anything 'special' about testing a compiler; in a sense it's almost easier than testing some programs, since a compiler has such a basic high-level summary - you hand in source, it gives you back (possibly) compiled code and (possibly) a set of diagnostic messages.
Like any complex software entity, there will be many code paths, but since it's all very data-oriented (text in, text and bytes out) it's straightforward to author tests.
I’ve written an article on compiler testing, the original conclusion of which (slightly toned down for publication) was: It’s morally wrong to reinvent the wheel. Unless you already know all about the preexisting solutions and have a very good reason for ignoring them, you should start by looking at the tools that already exist. The easiest place to start is Gnu C Torture, but bear in mind that it’s based on Deja Gnu, which has, shall we say, issues. (It took me six attempts even to get the maintainer to allow a critical bug report about the Hello World example onto the mailing list.)
I’ll immodestly suggest that you look at the following as a starting place for tools to investigate:
Software: Practice and Experience, April 2007 (payware, not available to the general public; a free preprint is at http://pobox.com/~flash/Practical_Testing_of_C99.pdf).
http://en.wikipedia.org/wiki/Compiler_correctness#Testing (Largely written by me.)
Compiler testing bibliography (Please let me know of any updates I’ve missed.)
In the case of brainfuck, I think testing it should be done with brainfuck scripts. I would test the following, though:
1: Are all the cells initialized to 0?
2: What happens when you decrement the data pointer while it's pointing to the first cell? Does it wrap? Does it point to invalid memory?
3: What happens when you increment the data pointer while it's pointing at the last cell? Does it wrap? Does it point to invalid memory?
4: Does output function correctly?
5: Does input function correctly?
6: Does the [ ] looping construct work correctly?
7: What happens when you increment a byte more than 255 times? Does it wrap to 0 properly, or is it incorrectly treated as an integer or other value?
More tests are possible too, but this is probably where I'd start. I wrote a BF compiler a few years ago, and it had a few extra tests. In particular, I tested the [ ] handling heavily by putting a lot of code inside the block, since an early version of my code generator had issues there (on x86, using short jxx jumps, I had issues when the block produced more than 128 bytes or so of code, resulting in invalid x86 asm).
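As a hedged sketch of how a few of the items above could look as unit tests: run_bf is a made-up name for your interpreter's entry point, and the EOF convention for , varies between implementations, so adjust to match yours.
import unittest
from myinterpreter import run_bf   # hypothetical: run_bf(source, stdin_bytes) -> stdout_bytes

class BrainfuckTests(unittest.TestCase):
    def test_cells_start_at_zero(self):            # item 1
        self.assertEqual(run_bf(".", b""), b"\x00")

    def test_byte_wraps_at_256(self):              # item 7
        self.assertEqual(run_bf("+" * 256 + ".", b""), b"\x00")

    def test_loops_copy_input(self):               # item 6: ,[.,] echoes input (assuming EOF reads as 0)
        self.assertEqual(run_bf(",[.,]", b"abc"), b"abc")

if __name__ == "__main__":
    unittest.main()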
You can test with some already written apps.
The secret is to:
Separate the concerns
Observe the law of Demeter
Inject your dependencies
Well, software that is hard to test is a sign that the developer wrote it like it's 1985. Sorry to say that, but by applying the three principles presented here, even line-numbered BASIC would be unit-testable (it IS possible to inject dependencies into BASIC, because you can do "goto variable").
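As a minimal Python sketch of the third point (the class and method names are invented for illustration), injecting the I/O streams instead of reaching for sys.stdin and sys.stdout directly is what lets a test drive the code with in-memory fakes.
import io

class Interpreter:
    def __init__(self, stdin, stdout):
        self.stdin = stdin            # injected dependencies instead of the real console
        self.stdout = stdout

    def echo_one_char(self):
        self.stdout.write(self.stdin.read(1))

# A test needs no real terminal:
fake_in, fake_out = io.StringIO("x"), io.StringIO()
Interpreter(fake_in, fake_out).echo_one_char()
assert fake_out.getvalue() == "x"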