Are there any code coverage analysis tools for InterSystems Caché ObjectScript?

If there is some sort of debugging API that would allow someone to write their own code coverage tool, that would be acceptable as well. I don't believe this is a poll question, because the exhaustive list of such tools is probably quite small.

There are no ready-made coverage tools (AFAIK). However, you might be able to utilize the MONLBL facilities (here). These were introduced to identify performance hotspots.
It would be interesting to look into the code of these, as the hooks needed to create a code coverage tool are quite similar to the ones needed for performance monitoring. If you have access to the WRC, I strongly recommend raising this with ISC; it definitely sounds like something many people could use!
HTH
Edit: come to think of it, since MONLBL gives you the number of times each line is executed, it's code coverage with benefits ;) So the answer is: yes, there are.

Debugging in Caché Studio.
There is also ^PERFMON.


Does anyone have Parasoft .Test or JTest experience?

First, I have no experience with Parasoft .Test or JTest. I have read in the datasheet that the product can automatically generate unit tests,
but I am wondering how useful the auto-generated unit tests are. Do they really require no additional effort from the developer?
Any shared experience is welcome.
Thanks a lot!
We used JTest for our product recently. We didn't use the standard product, we used the Eclipse Plugin. The standard product is built on the OSGI framework (read: it's like Eclipse), but you have to import and create your projects. We were already using Eclipse, so it made sense for us to simply use the plugin, which has all of the same capabilities.
While there are many things that JTest can do for you, there are also many irritating things about it. JTest's static analysis tool is what is really worthwhile, IMHO: it can look for lots of errors and has a pretty good reporting system. Unit test generation is okay, but I think I spent as much or more time fixing and enhancing the generated tests than I would have spent just writing them myself. Administering JTest is also somewhat complicated and involved.
The built-in mechanisms for making unit tests, stub objects, parameterized unit tests, etc. are not well documented. At least, my little brain couldn't make good use of them in the two years we used the product. On top of that, a lot of their super awesome features (GUI tracing, the command-line interface, the Bug Detective, the reporting system, etc.) all require extra, very expensive licenses.
Really, JTest just gives you an easy way to manage the execution of static analysis and unit testing. But it's really expensive; I can't believe they charge thousands of dollars per license for that stuff. You'll also find that they will want to train you, which you almost need because the documentation is pretty bad, which is odd given that the user's guide is some 900 pages long.
But here's a big hint: you can do it for free. If I had to do it over, I would have pushed hard for using these products (which, oddly enough, look and feel very similar to JTest):
http://code.google.com/javadevtools/codepro/doc/index.html
I wouldn't get JTest thinking that it will be a small something to add to your developers' routine. JTest can become a huge time and process sink.
JTest is very, very useful. Yes, it generates its own test cases, which require quite a bit of effort to fix. I use it in a different way: I delete all the unnecessary generated test cases and maintain a separate file that creates the database connection and sets various other parameters. After that configuration, the code works without mocking if all of it is ready; if it is not ready, you can stub the required methods. Other strong points:
The static code analyzer is good (e.g., for catching potential null pointer exceptions).
Checking code conventions is very good.
You can write your custom code guidelines as rules and execute them against your code.
Code coverage.
Debugging while testing.
The auto-generated unit tests still need a developer to decide which results are correct, so you have to sit down and do the job. A lot of the boilerplate code is auto-generated, of course, so that's a small time saver. I haven't used it much, but I did evaluate JTest for an earlier employer. Seemed like a great product, if I remember correctly. :)
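To illustrate what that job looks like: a generated test usually pins down whatever value the code returned at generation time, and the developer still has to decide whether that value is actually correct. A minimal JUnit 4 sketch (PriceCalculator and its numbers are hypothetical stand-ins, not anything JTest itself produces):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical example of the kind of test a generator produces.
    public class PriceCalculatorGeneratedTest {
        @Test
        public void testTotalWithDiscount() {
            PriceCalculator calc = new PriceCalculator();
            // The generator captured 85.0 because that is what the code
            // returned when the test was created. A developer still has to
            // confirm 85.0 is actually correct, not merely what the
            // (possibly buggy) code happens to produce.
            assertEquals(85.0, calc.total(100.0, 0.15), 0.001);
        }
    }

    // Stand-in for the class under test.
    class PriceCalculator {
        double total(double price, double discountRate) {
            return price * (1.0 - discountRate);
        }
    }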
Alas, there will never be a silver bullet that addresses all unit testing requirements, but JTest & .Test (& C++Test, for that matter) are about as close as you will get. Uggwar is correct that the developer will still need to verify outcomes for the basic auto-generated tests; however, there is a whole lot more to it.
These tools can be used to create basic regression tests; these are there to tell you when something has changed, not whether what they are testing is right or wrong. You can also trace a running application and then generate JUnit/NUnit/CppUnit tests that recreate what was going on in the application. Those tend to be far more useful tests, which are used as regression tests for items of functionality.
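To make the distinction concrete, here is a sketch of such a characterization test in JUnit 4 (ReportFormatter and the recorded output are invented for illustration): the expected value is simply whatever the traced run produced, so the test flags change rather than judging correctness.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Sketch of a trace-generated regression (characterization) test.
    public class ReportFormatterRegressionTest {
        @Test
        public void formatMatchesTheBehaviorRecordedFromTheTrace() {
            // "TOTAL: 3 items" is whatever the traced application produced;
            // the test doesn't claim it is right, only that it hasn't changed.
            assertEquals("TOTAL: 3 items", new ReportFormatter().format(3));
        }
    }

    // Stand-in for the traced production class.
    class ReportFormatter {
        String format(int itemCount) {
            return "TOTAL: " + itemCount + " items";
        }
    }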
Other functionality includes the ability to generate stubs, use spreadsheets as data sources, and provide an object repository. There is a whole lot more, too...
Give them a try.
http://www.parasoft.com

How do you organize your business requirements and tests?

As of right now I'm working at a place where there's a lot of legacy code and pretty much no useful documentation,
and most of the time we just treat the business requirements as whatever was already implemented previously.
I'm looking for any tools or useful methods to keep all the requirements around for future use, mostly for regression testing.
I'm thinking of maybe linking them up to tests/unit tests too, so that the business requirements are tied directly to the code logic.
Any good tools or resources to get me started?
Thanks!
Updates
As of now I'm keeping things simple for myself by writing use cases, creating some simple use case diagrams using this awesome tool, and then converting each use case into a test plan. The test plan is meant for the end user, so I just make it a simple step-by-step flow. I had plans to automate this part using Selenium, but it wasn't working that well on our website and was taking too long. It's a bit TDD, but I think it creates a simple, understandable goal for both the end user and the developer, I hope.
So for now it's just Excel and doc files, lugged into the project doc folder and checked into CVS/SVN, doomed to be outdated and forgotten :P
Business requirements can be captured well in FitNesse tests. As for unit tests, they sure help; put both together in continuous integration (e.g., Hudson) to detect regressions ASAP.
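To give a flavor of how a requirement becomes an executable FitNesse test: the business-readable table lives on a wiki page, and a small Java fixture binds it to the code. This is only a sketch under assumed names (AverageFixture and the averaging requirement are made up; fit.ColumnFixture is the standard Fit base class):

    import fit.ColumnFixture;

    // Hypothetical Fit fixture. A FitNesse wiki page holds the matching
    // requirement table, e.g.:
    //   |AverageFixture           |
    //   |input1|input2|average() |
    //   |10    |20    |15        |
    //   |0     |5     |2.5       |
    // Each row is one executable business requirement; running the suite
    // from CI on every build turns the requirements into regression tests.
    public class AverageFixture extends ColumnFixture {
        public double input1;
        public double input2;

        public double average() {
            return (input1 + input2) / 2.0; // delegate to the real system here
        }
    }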
PS: Sorry pretty much all links go to some articles I wrote because I'm also interested in that subject.
Here are some methods/systems that I have used
HP Quality Center
Big and bulky. Not very agile, but it works and has a lot of features.
It's used in many larger corporations, and if you can afford it you can get great support from HP.
https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-127-24%5E1131_4000_100__
Bugzilla-Testopia
An open source test case management extension for Bugzilla, managed by Mozilla, which is good enough in my book to give it a try.
http://www.mozilla.org/projects/testopia/
Excel/Open Office Calc
Just do everything in spreadsheets and link between them.
Very flexible, everybody knows how to use them, and you probably have the software in your organization already.
Other Open Source Solutions
List of 15+ Open Source Test Management Tools
http://www.jayphilips.com/2009/09/10/15-open-source-test-management-tools/

Real-time scripting language + MS DLR?

For starters, I should let you guys know what I'm trying to do. The project I'm working on has a requirement for a custom scripting system. It will be used by non-programmers working with the application and should be as close to natural language as possible. For example, if the user needs to run a custom simulation and plot the output, the code they write would need to look like:
variable input1 is 10;
variable input2 is 20;
variable value1 is AVERAGE(input1, input2);
variable condition1 is true;
if condition1 then PLOT(value1);
It might not make a lot of sense, but it's just an example. AVERAGE and PLOT are functions we'd like to define; users shouldn't be allowed to change them, or really even see how they work. Is something like this possible with the DLR? If not, what other options would we have (start with ANTLR to define the grammar and then move on)? In the future this may need to run under XBAP and WPF too, so that is also something we need to consider, but I haven't seen much, if anything, on the DLR & XBAP. Thanks, and hopefully this all makes sense.
Lua is not an option, as it is too different from what they are already accustomed to.
Ralf, it's going to be reactive, and to be honest the timeframe for when the results should get back to the user may be anywhere from 1/100 of a second up to two weeks or a month (very complex mathematical functions).
Basically, they already purchased a system that does some of what they need and includes a custom scripting language that does what I mentioned above. They don't want to have to learn a new one; they basically just want us to copy it and add functionality. I think I'll just start with ANTLR and go from there.
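For what it's worth, the core of a language like the one shown above is small enough to sketch by hand before committing to ANTLR. Here is a minimal, hypothetical interpreter (written in Java purely for illustration; the same shape carries over to .NET) that handles "variable x is ...;" assignments, a fixed built-in function table that script authors can call but never see or modify, and "if ... then ...;" statements:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Minimal, hypothetical interpreter for the example language above. It
    // handles only the constructs shown; a real implementation would use a
    // proper grammar (e.g. ANTLR) for nesting, precedence, and error reporting.
    public class MiniScript {
        private static final Pattern ASSIGN =
                Pattern.compile("variable\\s+(\\w+)\\s+is\\s+(.+)");
        private static final Pattern IF_THEN =
                Pattern.compile("if\\s+(\\w+)\\s+then\\s+(.+)");
        private static final Pattern CALL =
                Pattern.compile("(\\w+)\\((.*)\\)");

        private final Map<String, Object> vars = new HashMap<>();

        public void run(String script) {
            for (String raw : script.split(";")) {
                String stmt = raw.trim();
                if (stmt.isEmpty()) continue;
                Matcher assign = ASSIGN.matcher(stmt);
                Matcher ifThen = IF_THEN.matcher(stmt);
                if (assign.matches()) {
                    vars.put(assign.group(1), eval(assign.group(2)));
                } else if (ifThen.matches()) {
                    if (Boolean.TRUE.equals(vars.get(ifThen.group(1)))) {
                        eval(ifThen.group(2));
                    }
                } else {
                    throw new IllegalArgumentException("Cannot parse: " + stmt);
                }
            }
        }

        private Object eval(String expr) {
            expr = expr.trim();
            if (expr.equals("true") || expr.equals("false")) return Boolean.valueOf(expr);
            if (expr.matches("-?\\d+(\\.\\d+)?")) return Double.valueOf(expr);
            Matcher call = CALL.matcher(expr);
            if (call.matches()) return apply(call.group(1), call.group(2));
            if (vars.containsKey(expr)) return vars.get(expr);
            throw new IllegalArgumentException("Unknown expression: " + expr);
        }

        // Built-in functions live here, out of the script author's reach;
        // scripts can call AVERAGE and PLOT but never see or change them.
        private Object apply(String name, String argList) {
            List<Object> args = new ArrayList<>();
            if (!argList.trim().isEmpty()) {
                for (String a : argList.split(",")) args.add(eval(a)); // no nested calls
            }
            switch (name) {
                case "AVERAGE":
                    double sum = 0;
                    for (Object a : args) sum += (Double) a;
                    return sum / args.size();
                case "PLOT":
                    System.out.println("PLOT -> " + args); // stand-in for real plotting
                    return null;
                default:
                    throw new IllegalArgumentException("Unknown function: " + name);
            }
        }

        public static void main(String[] args) {
            new MiniScript().run(
                    "variable input1 is 10;"
                    + "variable input2 is 20;"
                    + "variable value1 is AVERAGE(input1, input2);"
                    + "variable condition1 is true;"
                    + "if condition1 then PLOT(value1);");
        }
    }

Running it prints "PLOT -> [15.0]", matching the example script.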
Lua
it's small, fast, easy to embed, portable, extensible, and fun!
Lua is definitely the best choice for soft real-time systems (like computer games).
See http://shootout.alioth.debian.org/ for detailed benchmarks.
However, last time I checked, Lua used a mark-and-sweep garbage collector, which can lead to deadline violations and non-deterministic jitter in real-time systems.
I believe that you could theoretically use the DLR, but I'm unsure about support in an XBAP (partially trusted?) scenario.
If you host the DLR you would quickly be able to take advantage of IronRuby or IronPython scripting. You would want to look at these implementations when creating your own language implementation. If you post your question to the IronPython mailing list I'm sure you would get a better reply around the XBAP scenario, and some of the developers there created ToyScript.
What kind of real-time requirement are you trying to fulfill? Is the simulation a hard real-time simulation (some kind of hardware-in-the-loop simulation ==> deadline is less than 1/1000 second)?
Or do you want the scripting system to be "reactive" to user input? Then 1/10 of a second should be sufficient.
I am no expert regarding the MS DLR, but as far as I know, it does not support hard real-time systems. You may want to take a look at the Real-Time Specification for Java (RTSJ).
Firstly, I think that defining your own language is not the way to go.
That's primarily because the biggest productivity gains you can give programmers or non-programmers are the development tools, and you (and 99.9% of the rest of us) are not going to write tools as good as what is out there.
Language design is hard.
Language support and documentation: also hard.
I would recommend looking for a pre-built solution. If you could find a language that can lock down some functionality, that would be a good starting point. MATLAB would be the first that comes to my mind.
Lastly, ditch the natural language part; BASIC, COBOL, and YA-TDWTF-Lang all tried and failed at it.
Full disclosure: I work for a company that is developing a generalized domain-specific language "system". It's targeted at data-in/text-out applications, so it's not apropos, and it's not yet in beta. The result is that I'm somewhat knowledgeable and biased.

Test planning/documentation/management tools

I'm looking for a good, preferably free, test planning and documentation tool. Ideally something that will keep track of which tests have been run against which software version, with reporting ability. There's a whole bunch of tools listed here, but are there any others, and which ones have you had the best experience with? (You do run tests, right?)
UPDATE 2008-01-29
So far TestLink and FitNesse have been mentioned. A related question also yielded a link to the ReadySet project, an open collection of software planning documentation templates.
I have used TestLink and found it okayish, but I cannot say I enjoyed using it. Has anyone had any experience with FitNesse? Or are there any other free tools out there that you have used and found satisfactory?
You should definitely try out Klaros-Testmanagement http://www.klaros-testmanagement.com which has a free, unrestricted Community Edition.
Yes, we do run tests, but not nearly as many as I'd like!
I highly recommend TestLink - the list of tools that you linked to shows that it's had more downloads than all of the other tools put together.
I've used QualityCenter/TestDirector for a long time.
I'm now using TestLink, and I must say that I prefer QualityCenter/TestDirector by far, even if it is based on some buggy ActiveX controls.
QualityCenter/TestDirector is much easier to use, and the interface is quite a bit better.
TestLink and QualityCenter/TestDirector are mainly for manual test cases (however, you can use QuickTest Pro with QualityCenter/TestDirector to automate your tests).
FitNesse is another kind of tool in my mind: basically, you write your test case on a wiki and link it to a JUnit test. Other tools like that are GreenPepper, Concordion, etc.
PractiTest is a very good option. Not free, but very affordable - http://www.practitest.com
A little late on this one, but I would suggest you try TestLodge for your manual test management. It works in a similar way to TestLink, but it has a more professional interface and is something that we also allow our clients to use for their UAT phase.
We use the Quality Center / Test Director stuff. It's expensive as far as I know, and it's not that great.
I've heard good things about FitNesse, but I don't know how good its test tracking is.
I know I just recently saw a slick looking test tracker for Trac or something, but I can't find it now...
Quality Center. It's expensive, but it is the best.
I'm with Patrick - good ol' office tools :)
I just write mine in Microsoft Word
This is the structure I developed: Writing a System Test Plan.
One thought, and perhaps not a good one, would be to have every test submit a ticket to your ticketing system when it's run, indicating the test name, build version, date, and test results.
That would make the results searchable later on.
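If you went down that road with JUnit 4, a run listener could file the ticket automatically. Here is a rough sketch; TicketClient is a made-up stand-in for whatever API your ticketing system actually exposes:

    import java.util.Date;
    import java.util.HashSet;
    import java.util.Set;
    import org.junit.runner.Description;
    import org.junit.runner.notification.Failure;
    import org.junit.runner.notification.RunListener;

    // Hypothetical sketch: file a ticket for every executed test so results
    // are searchable later. Register it via JUnitCore, e.g.:
    //   JUnitCore core = new JUnitCore();
    //   core.addListener(new TicketReportingListener());
    //   core.run(MyTestSuite.class);
    public class TicketReportingListener extends RunListener {
        private final String build = System.getProperty("build.version", "unknown");
        private final Set<String> failed = new HashSet<>();

        @Override
        public void testFailure(Failure failure) {
            String name = failure.getDescription().getDisplayName();
            failed.add(name);
            TicketClient.submit(name, build, new Date(), "FAIL: " + failure.getMessage());
        }

        @Override
        public void testFinished(Description test) {
            // testFinished also fires for failed tests, so skip those here.
            if (!failed.contains(test.getDisplayName())) {
                TicketClient.submit(test.getDisplayName(), build, new Date(), "PASS");
            }
        }
    }

    // Stand-in for the real ticketing system API (HTTP call, email, etc.).
    class TicketClient {
        static void submit(String test, String build, Date when, String result) {
            System.out.printf("ticket: %s | build %s | %s | %s%n", test, build, when, result);
        }
    }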
We use a home-grown Access database.
This database keeps track of our requirements, test cases, test plan, and test runs. We're able to produce an up-to-date RVTM (requirements verification traceability matrix), keep track of progress against the plan, and assign tasks to testers. We integrated it with Outlook, so each tester is assigned a task from the plan by the QA lead. When they're complete, they just tick it off in Outlook and it updates the database.
For our small team of testers it works nicely, and we're free to customize it however we want.

Anyone Using Executable Requirements?

In my limited experience with them executable requirements (i.e. specifying all requirements as broken automated tests) have proven to be amazingly successful. I've worked on one project in which we placed a heavy emphasis on creating high-level automated tests which exercised all the functionality of a given use case/user story. It was really amazing to me how much easier development became after we began this practice. Implementing features became so much easier after writing a test and we were able to make major architectural changes to the system with all the confidence in the world that everything still worked the same as it did yesterday.
The biggest problem we ran into was that the tools for managing these types of tests aren't very good. We used FitNesse quite a bit, and as a result I now hate the Fit framework.
I'd like to know 1) whether anyone else has experience developing with this type of test-driven requirement definition, and 2) what tools you all used to facilitate it.
The primary tool I've used was also FitNesse. I've used it at several companies, with very good results. We had test cases numbering in the many thousands, and we had to be very disciplined in how we organized and used them.
I've tried some other tools, including writing my own DSL (domain-specific language) and using things like RSpec. I really like RSpec, but it is certainly more of a developer tool than a business one.
I know Rick Mugridge has been working on a tool called ZiBreve (http://www.zibreve.com/visit.php?page=index) which is supposed to have stronger refactoring support. I haven't used it myself, but I know Rick and have talked to him several times. I know there was discussion at Agile 2008 on some different ways to deal with the Fitnesse tests in general.
Other than that, I haven't seen a lot of good tools out there. Even tools like WinRunner are fine for QA-type tests, but for exploratory testing of requirements by the business, FitNesse or a custom DSL seems to be the way to go right now.
You might want to take a look at Robot Framework (http://robotframework.org). It's FIT-like but hopefully easier to integrate with different testing tools, version control, and continuous integration. The different abstraction levels in the test data also make the data easier to maintain, and when the separate test data editor gets more mature, maintenance gets even easier. The quick start guide introduces the most important features of the framework and also acts as an executable demo.
I've had to use, test, and set up both FitNesse and one of its competitors, GreenPepper, for my work, and what I can say is:
GreenPepper is a Confluence plugin (Confluence is an enterprise wiki from Atlassian) and has many of the things you need in an "enterprise"-level tool with little to no additional work required:
A more user-friendly rich-text wiki syntax (makes it easier to work with for non-technical people)
It integrates very well with many development tools: Eclipse, VB, Maven 2 and NAnt plugins; I tested most and was very pleased
Users and access rights are managed by Confluence, which is to say it's good and uses the database of your liking (which might be mandatory depending on where you work)
Many other functionalities that may or may not be required: SSL support, remote execution (install the wiki on Unix, execute on Windows if you are working on a C# project, or the reverse)
Looks way better :D
Big downsides for GreenPepper: configuration is quite hard and documentation is poor (although they seem to be working on it, and they answer quite fast on their forum), and it is not free; you have to pay for both Confluence and GreenPepper, which might add up to quite a lot.
FitNesse is very basic in my opinion: very easy to set up, and it works, but that's it. You can use some of the FitNesse plugins developed by the open source community, and even some Fit plugins, such as the Eclipse plugin (it builds the skeleton of the fixture from a FitNesse test file, provided it has a .fit extension; very useful). Integration is not ideal, and authentication and access rights management are poor, but it's FREE, and if you need something you can build it, because it's open source.
I've found that using contracts is a great approach. Metaprogramming contracts are generally lower-level than the types of integration tests you describe, but the two are certainly not mutually exclusive. I find contracts help keep documentation, implementation, and testing all in sync; this is a major problem with TDD (not that it isn't a problem without TDD).
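To make that concrete, here is a minimal sketch of contract-style checks in plain Java (the Account class is invented for illustration): the preconditions and postcondition act as documentation, runtime verification, and an obvious target for tests all at once.

    // Hypothetical example of lightweight contracts in plain Java.
    public class Account {
        private long balanceCents;

        public void deposit(long amountCents) {
            // Precondition doubles as executable documentation.
            if (amountCents <= 0) throw new IllegalArgumentException("amount must be > 0");
            balanceCents += amountCents;
        }

        public void withdraw(long amountCents) {
            // Preconditions: the contract a caller must honor.
            if (amountCents <= 0) throw new IllegalArgumentException("amount must be > 0");
            if (amountCents > balanceCents) throw new IllegalStateException("insufficient funds");
            long before = balanceCents;
            balanceCents -= amountCents;
            // Postcondition: checked when the JVM runs with -ea.
            assert balanceCents == before - amountCents : "balance must drop by the amount";
        }

        public long balanceCents() {
            return balanceCents;
        }
    }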
I've tried FitNesse and it's really awful (particularly the integration with SVN).
Our company develops a similar open-source tool with a Fit engine: FitPro.
Another brilliant tool I've used is Concordion. Its only disadvantage is that the requirements are written in HTML format.
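For a sense of what that looks like, here is a minimal hypothetical Concordion fixture; the requirement itself is an ordinary HTML page instrumented with Concordion attributes (the greeting example follows the pattern from Concordion's own documentation, with invented names):

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    // Hypothetical Concordion fixture. The requirement lives in an HTML page
    // (Greeting.html) next to this class, instrumented with Concordion
    // attributes, e.g.:
    //   <p>Greeting the user
    //      <span concordion:execute="#greeting = greetingFor(#TEXT)">World</span>
    //      yields <span concordion:assertEquals="#greeting">Hello World!</span>.
    //   </p>
    @RunWith(ConcordionRunner.class)
    public class GreetingTest {
        public String greetingFor(String name) {
            return "Hello " + name + "!"; // call into the real system here
        }
    }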
My experience is limited to personal projects, and I found much the same advantages you mentioned. I recommend http://metacpan.org/pod/Test::Simple::Tutorial, which was my inspiration for trying out testing-based development. The Perl testing modules seem pretty useful and flexible, though I have nothing to compare them to.
I also believe tests are vital for the maintenance period of a project. If you have good tests to begin with, it saves a lot of time and mistakes later on. I wish I had put more work into tests on my current project.