Existing solutions to test an NSIS script

Does anyone know of an existing solution to help write tests for a NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.

Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that the test scripts would be an xcopy deployment if they needed to run on another machine. I also like the idea of using a BDD-based solution, because a domain-specific language that reads almost like plain text means that others could more easily understand, and if necessary modify, the test specification.
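As a small illustration of that kind of check (written in Python rather than Ruby purely because it needs nothing beyond the standard library), here is a minimal post-install smoke test. The install directory, file names and registry key below are hypothetical and would be replaced by whatever your NSIS script actually writes:

    # Minimal post-install verification sketch (Python unittest).
    # All paths, file names and registry values here are hypothetical.
    import os
    import unittest
    import winreg  # Windows-only; assumes the installer writes HKLM keys

    INSTALL_DIR = r"C:\Program Files\MyApp"          # hypothetical
    EXPECTED_FILES = ["MyApp.exe", "uninstall.exe"]  # hypothetical
    REG_KEY = r"SOFTWARE\MyCompany\MyApp"            # hypothetical

    class InstallerSmokeTest(unittest.TestCase):
        def test_expected_files_are_present(self):
            for name in EXPECTED_FILES:
                self.assertTrue(
                    os.path.isfile(os.path.join(INSTALL_DIR, name)),
                    "missing installed file: " + name,
                )

        def test_registry_install_path_matches(self):
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, REG_KEY) as key:
                value, _type = winreg.QueryValueEx(key, "InstallDir")
            self.assertEqual(value, INSTALL_DIR)

    if __name__ == "__main__":
        unittest.main()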
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure that the tests you would need could be as easily expressed... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.

Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.

Another solution would be AutoIt.
You can compile your installer using Jenkins and the NSIS command-line compiler, set up an AutoIt test script, and have Jenkins run the test.
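To make that more concrete, here is a rough sketch of what such a Jenkins build step could run. Python is used only as glue here; every file name is hypothetical, and /S is the standard NSIS silent-install switch:

    # Hypothetical Jenkins build step: compile the installer, install it
    # silently, then run a compiled AutoIt test script against the result.
    import subprocess
    import sys

    # 1. Compile the installer with the NSIS command-line compiler.
    subprocess.run(["makensis", "installer.nsi"], check=True)

    # 2. Run the freshly built installer silently (/S is NSIS's silent switch).
    subprocess.run([r".\MyAppSetup.exe", "/S"], check=True)

    # 3. Run the compiled AutoIt test; its exit code decides pass/fail in Jenkins.
    result = subprocess.run([r".\tests\install_checks.exe"])
    sys.exit(result.returncode)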

Related

Generate Lint targets using cmake (Flexelint/Linux)

I am working on a C/C++ product that only builds in the Linux environment. It is a massive code base, and generating lint targets manually is going to be incredibly painful. I know that you can integrate Lint into CMake so that CMake generates these targets for you while it builds the code. CMake has a macro called add_pc_lint (https://cmake.org/Wiki/PC-Lint) which does this for you. I wanted to know if there is something similar that could be used for FlexeLint.
I currently have a PC-Lint license and wanted to ask this question before spending $998 on a Flexelint license. Thanks!
FlexeLint and PC-lint share the same manual so I'm pretty sure they are fully compatible on the command line. You should be able to use the same make files for both, or with minor changes. Otherwise they do offer a 30-day money-back guarantee.
Another option might be to run PC-lint under Wine. I tried this once and I got it working, but then I never used it much so I'm not sure how well it worked.
So I did get a FlexeLint license yesterday, and now I am trying to integrate it into my CMakeLists. I am looking at the source code of CMake's add_pc_lint function and trying to modify it to work for FlexeLint. If anyone has played around with this before, please comment. The FlexeLint manual is not at all helpful.

How can a modified Julia package be used natively?

So, there is this cool package I've found but it leaves a lot to be desired. Since it made more sense to modify it, rather than build a new one myself, I changed the code in the corresponding source directory (C:\Users\[my username]\.julia\v0.4\[package name]\src). I made sure to modify not just the base.jl file, but also the [name of package].jl one so that there are no issues with dependencies or the new functions I added. I tried running the package several times to ensure that Julia doesn't spit out any errors or exceptions (the original package had some deprecated stuff, which I also remedied). Still, I fail to use the additional functionality of the package that I augmented. Any help would be greatly appreciated.
I'm using Julia ver 0.4.2, on a Windows 7 machine. As an IDE I use Notepad++. Thanks
I'm not exactly sure what you tried, but here's a guess as to what's going on: if you've already loaded the package in your julia session, edits to the source files won't take effect unless you explicitly reload the package. There are some good workflow tips here, and more explanation of the module system here.
However, for a newbie the easiest thing might be to quit julia and restart.
As far as making changes to a package, as Gnumic commented, your best approach is to make a branch and commit your changes there. Once you become convinced your changes represent an improvement, consider sharing your changes with the rest of the world.

Is there a way to 'test run' an ant build?

Is there a way to run an ant build such that you get an output of what the build would do, but without actually doing it?
That is to say, it would list all of the commands that would be submitted to the system, output the expansion of all filesets, etc.
When I've searched 'ant' and 'test', I get overwhelming hits for running tests with ant. Any suggestions on actually testing ant build files?
It seems that you are looking for a "dry run".
I googled it a bit and found no evidence that this is supported.
Here's a Bugzilla request for that feature, which explains things a bit:
https://issues.apache.org/bugzilla/show_bug.cgi?id=35464
This is impossible in theory and in practice. In theory, you cannot test a program meaningfully without actually running it (basically the halting problem).
In practice, since individual ant tasks very often depend on each other's output, this would be quite pointless for the vast majority of Ant scripts. Most of them compile some source code and build JARs from the class files - but what would the fileset for the JAR contain if the compiler didn't actually run?
The proper way to test an Ant script is to run it regularly, but on a test system, possibly a VM image that you can restore to the original state easily.
Here's a problem: You have target #1 that builds a bunch of stuff, then target #2 that copies it.
You run your Ant script in test mode, it pretends to do target #1. Now it comes to target #2 and there's nothing to copy. What should target #2 return? Things can get even more confusing when you have if and unless clauses in your ant targets.
I know that Make has a command-line parameter (-n, or --dry-run) that tells it to go through the motions without actually running the commands, but I never found it all that useful. Maybe that's why Ant doesn't have one.
Ant does have a -k parameter to tell it to keep going if something failed. You might find that useful.
As Michael already said, that's what test systems are for; this is where VMs come in handy.
From my Ant bookmarks: some years ago a tool called "Virtual Ant" was announced. I never tried it, so don't regard it as a tip, just as something someone heard of.
From what the site says:
"With Virtual Ant you no longer have to get your hands dirty with XML to create or edit Ant build scripts. Work in a completely virtualized environment similar to Windows Explorer and run your tasks on a Virtual File System to see what they do, in real time, without affecting your real file system*. The actual Ant build script is generated in the background."
Hm, sounds too good to be true ;-)
"...without affecting your real file system..." might be what you asked for!?
They provide a 30-day trial license, so you won't lose any money, only the time it takes to have a look.

Build script - how to do it

About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (for example, ProjectName PV 1.2)
Get files from SourceSafe to a specific directory
Build VB6/C++/C# projects (yes, there are all kinds of them)
Build InstallShield setups
This is currently partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do this better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible: one click on one build script (Spolsky is god) and all is done, with no need to increment the version, set where to get files, and similar stuff.
Is batch scripting the best way to go? Should I do some of the functionality with MSBuild? Are there any other options?
Specific code is not needed; for now I just need ideas on how to improve this, although specific code wouldn't hurt.
Thanks,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
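That said, while you evaluate a CI server, it may help to see what the "one click" driver could look like. Below is a very rough sketch; Python is used only as glue, and every command line is a hypothetical placeholder for the real SourceSafe, compiler and InstallShield invocations that already live in your batch scripts:

    # Hypothetical one-click build driver: label, get, build, package.
    # Each step shells out to the existing tools; check=True aborts the
    # whole build as soon as any step fails.
    import subprocess

    VERSION = "1.2"  # would normally be read and incremented automatically

    steps = [
        # label and get sources (hypothetical ss.exe arguments)
        ["ss", "Label", "$/ProjectName", "-LProjectName PV " + VERSION],
        ["ss", "Get", "$/ProjectName", "-R"],
        # build the projects (hypothetical solution path)
        ["msbuild", r"c:\build\src\ProjectName.sln", "/p:Configuration=Release"],
        # build the InstallShield setup (hypothetical .ism path)
        ["IsCmdBld.exe", "-p", r"c:\build\src\setup\ProjectName.ism"],
    ]

    for step in steps:
        print("running:", " ".join(step))
        subprocess.run(step, check=True)  # stop on the first failure

    print("build finished for version", VERSION)

A CI server would essentially run something like this for you, and add versioning, logging and notifications on top.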

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, like classes and their variables. Is there a straightforward way to do this? Or can this be done by some scripting over gdb, or does it require modification of gdb?
OK, the reason for this question is an idea for a debugging tool. What it does is this: given two different inputs to a program, one causing an incorrect output and the other a correct one, it will tell you what parts of the control flow differ between them.
So what I think will be needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool, with many features built on top of it.
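Assuming the two runs have already been dumped to plain-text trace files (one executed source location per line), the diff-engine part is almost trivial. A rough sketch in Python, with hypothetical file names:

    # Compare two control-flow traces (one executed source line per text line).
    # trace_good.txt / trace_bad.txt are hypothetical dump files produced by
    # whatever instrumentation or debugger script records the two runs.
    import difflib

    with open("trace_good.txt") as f:
        good = f.readlines()
    with open("trace_bad.txt") as f:
        bad = f.readlines()

    for line in difflib.unified_diff(good, bad,
                                     fromfile="good run", tofile="bad run"):
        print(line, end="")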
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A crude version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
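To give an idea of what that can look like, here is a rough sketch of a gdb Python script (assuming a gdb built with Python support) that single-steps the program and writes one source location per line to a file; a dump like this could feed the diff idea from the question:

    # trace.py -- load inside gdb with:  (gdb) source trace.py
    # Assumes the program has already been loaded (gdb ./myprog).
    # Single-steps the inferior and appends "file:line function" for each stop
    # to trace.txt; very slow, but enough to produce a control-flow dump.
    import gdb

    with open("trace.txt", "w") as out:
        gdb.execute("start", to_string=True)          # run to main
        while True:
            try:
                gdb.execute("step", to_string=True)   # one source-level step
                frame = gdb.selected_frame()
                sal = frame.find_sal()
                if sal.symtab is not None:
                    out.write("%s:%d %s\n" % (sal.symtab.filename,
                                              sal.line,
                                              frame.name() or "?"))
            except gdb.error:
                break                                 # inferior exited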
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
How to print the next N executed lines automatically in GDB?