My product owner has asked me to make some comparison logic configurable so the process engineers can change things without making code changes. Currently the code is a SELECT CASE statement with various IF THEN statements that are fairly standard. The problem I can't seem to find a way around is that he wants the configuration to be able to AND/OR a variable number of comparisons in the IF THEN statements. His idea is that the configuration would work like a limited query builder for the process engineers. The only solution I've come up with is to build a function in a string and use the VBCodeProvider to compile it at runtime. Is there a better way to approach this?
One way to do it is to just store the booleans in your configuration file, load them up at run time, and use them in your code like any other boolean.
A better way would be to keep the configuration as close to his problem domain as possible, then derive the proper booleans from it for use in your code.
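As a minimal sketch of that approach (assuming appSettings entries in app.config and a reference to System.Configuration.dll; the key name and the ConfigFlags helper are illustrative, not existing code):

    Imports System.Configuration

    Module ConfigFlags
        ' Reads e.g. <add key="RequireTemperatureCheck" value="true"/> from app.config.
        Function Flag(key As String) As Boolean
            Return Boolean.Parse(ConfigurationManager.AppSettings(key))
        End Function
    End Module

Then something like If ConfigFlags.Flag("RequireTemperatureCheck") AndAlso temperature >= limit Then reads like any other boolean check, with temperature and limit standing in for whatever your real values are.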
You could use expression trees to accomplish this. With them you can build up a conditional expression and its comparisons, then compile and run the whole thing at runtime.
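To sketch what that could look like (a rough example only: the Rule class, its property names, BuildPredicate, and the assumption that the configured fields are Double properties on the target type are all mine, not an existing API), each configured comparison becomes an Expression node and the nodes are chained with AndAlso/OrElse before compiling:

    Imports System.Collections.Generic
    Imports System.Linq.Expressions

    ' One configured comparison, e.g. Field="Temperature", Op=">=", Value=180, Combine="And".
    Public Class Rule
        Public Property Field As String
        Public Property Op As String
        Public Property Value As Double
        Public Property Combine As String   ' "And" or "Or" with the previous rule
    End Class

    Public Module PredicateBuilder
        ' Builds and compiles a Func(Of T, Boolean) from the configured rules.
        Public Function BuildPredicate(Of T)(rules As IEnumerable(Of Rule)) As Func(Of T, Boolean)
            Dim param = Expression.Parameter(GetType(T), "x")
            Dim body As Expression = Nothing

            For Each r In rules
                Dim left = Expression.Property(param, r.Field)           ' x.Field
                Dim right As Expression = Expression.Constant(r.Value)   ' configured literal
                Dim comparison As Expression

                Select Case r.Op
                    Case ">"
                        comparison = Expression.GreaterThan(left, right)
                    Case ">="
                        comparison = Expression.GreaterThanOrEqual(left, right)
                    Case "<"
                        comparison = Expression.LessThan(left, right)
                    Case "<="
                        comparison = Expression.LessThanOrEqual(left, right)
                    Case Else
                        comparison = Expression.Equal(left, right)
                End Select

                If body Is Nothing Then
                    body = comparison
                ElseIf String.Equals(r.Combine, "Or", StringComparison.OrdinalIgnoreCase) Then
                    body = Expression.OrElse(body, comparison)
                Else
                    body = Expression.AndAlso(body, comparison)
                End If
            Next

            If body Is Nothing Then body = Expression.Constant(True)
            Return Expression.Lambda(Of Func(Of T, Boolean))(body, param).Compile()
        End Function
    End Module

If the type carrying the values is, say, Batch, then Dim check = PredicateBuilder.BuildPredicate(Of Batch)(rulesFromConfig) gives you a compiled delegate you can call inside the existing SELECT CASE instead of the hard-coded IF THEN chains (Batch and rulesFromConfig being whatever your real type and loaded configuration are).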
I have certain Modules that I would like to set up to be referenceable by multiple solutions, as the code always behaves in basically the same manner (e.g. code for logging errors). They make no sense as classes, so it seems like a class library is out, and I haven't seen any other suggestions for sharing code between solutions.
So I'm left wondering what would be the best way to create one thing that I can just use across multiple solutions to avoid having to rewrite the same code?
It sounds like a class library is exactly what you want; a class library project can contain VB Modules as well as classes. Build it, reference it from each solution, and code against it. One source, multiple solutions running off that code.
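For example (a sketch only, with illustrative names), a Module compiled into a class library project such as "SharedUtilities" that every solution references:

    Imports System.IO

    ' Lives in the shared class library; callers in any solution just write ErrorLogger.Log(ex).
    Public Module ErrorLogger
        ' Appends a timestamped entry to a log file.
        Public Sub Log(ex As Exception, Optional logPath As String = "errors.log")
            File.AppendAllText(logPath,
                String.Format("{0:u}  {1}: {2}{3}",
                              Date.UtcNow, ex.GetType().Name, ex.Message, Environment.NewLine))
        End Sub
    End Module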
You could also implement the functionality as a separate piece, such as an API. This obviously depends on what the code does, but logging errors is a good use case.
I maintain an old VB.NET project that I didn't write, and I was wondering if there's an easy way to determine which parts of the software are still used today by the staff where I work.
I would like to log all function calls without having to edit each one of them if possible.
The project has 27 forms and 6 modules.
Any ideas?
Thanks!
There is no way to determine with 100% certainty everything that is used by the system. VB.NET supports dynamic invocation of methods and properties, so you can't even do tricks like deleting some code and seeing if it still compiles: even if it compiles, the deleted code could still have been invoked dynamically at run time.
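For example (the class and method names here are made up), CallByName never mentions the target method by anything the compiler checks:

    Public Class ReportHelper
        Public Sub ArchiveBatch(year As Integer)
            Console.WriteLine("Archiving " & year)
        End Sub
    End Class

    Module Demo
        Sub Main()
            ' The method is named only as a string, so deleting ArchiveBatch would
            ' still compile, and then blow up at run time.
            Dim helper As New ReportHelper()
            CallByName(helper, "ArchiveBatch", CallType.Method, 2024)
        End Sub
    End Module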
One way to get a sense of what code is used is to profile the application. Start up the profiler, run the app, and go through all of the ways in which the app is used. The resulting profile should give you a good sense of which parts are exercised. It's very possible, though, that this approach will still miss code.
I am developing a command line utility that has a LOT of flags. A typical command looks like this:
mycommand --foo=A --bar=B --jar=C --gnar=D --binks=E
In most cases, a 'success' message is printed but I still want to verify against other sources like an external database to ensure actual success.
I'm starting to create integration tests and I am unsure of the best way to do this. My main concerns are:
There are many, many flag combinations; how do I know which combinations to test? If you do the math for the 10+ flags that can be used together...
Is it necessary to test permutations of flags?
How to build a framework capable of automating the tests and then verifying results.
How to keep track of a large number of flags and provide an ordering, so it is easy to tell which combinations have been covered and which have not.
The thought of manually writing out individual cases and verifying results in a unit-test like format is daunting.
Does anyone know of a pattern that can be used to automate this type of test? Perhaps even software that attempts to solve this problem? How did people working on GNU command-line tools test their software?
I think this is very specific to your application.
First, how do you determine the success of the execution of your application? Is it a result code? Is it something printed to the console?
For question 2, it depends on how you parse those flags in your application. Most of the time, the order of flags isn't important, but there are cases where it is. I hope you don't need to test permutations of flags, because that would add a lot of cases to test.
In the general case, you should analyse the impact of each flag. It is possible that a flag doesn't interfere with the others, in which case it only needs to be tested once. This is also the case for flags that are meant to be used alone (--help or --version, for example). You also need to analyse which values you should test for each flag. Usually, you want to try each kind of possible valid value and each kind of possible invalid value.
I think a simple bash script could be written to perform the tests, or any scripting language, like Python. Using nested loops, you could try, for each flag, the possible values, including invalid values and the case where the flag isn't set. This will produce a multidimensional matrix of results that should be analysed to see whether the results conform to what is expected. A rough sketch of that kind of loop follows.
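Here is that nested-loop idea sketched in VB.NET; mycommand, the flag values, and the "success" string are placeholders for whatever your tool actually uses, and each additional flag just becomes another loop:

    Imports System.Collections.Generic
    Imports System.Diagnostics

    Module FlagMatrixTests
        Sub Main()
            Dim fooValues() As String = {"A", "bad-value", ""}   ' "" means the flag is omitted
            Dim barValues() As String = {"B", ""}

            For Each foo In fooValues
                For Each bar In barValues
                    Dim args As New List(Of String)
                    If foo <> "" Then args.Add("--foo=" & foo)
                    If bar <> "" Then args.Add("--bar=" & bar)

                    Dim psi As New ProcessStartInfo("mycommand", String.Join(" ", args)) With {
                        .RedirectStandardOutput = True,
                        .UseShellExecute = False
                    }
                    Using proc = Process.Start(psi)
                        Dim output = proc.StandardOutput.ReadToEnd()
                        proc.WaitForExit()
                        ' Record exit code and printed output; the external-database check happens separately.
                        Console.WriteLine("{0,-40} exit={1} printed-success={2}",
                                          String.Join(" ", args), proc.ExitCode, output.Contains("success"))
                    End Using
                Next
            Next
        End Sub
    End Module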
When I write apps (in scripting languages), I have a function that parses a command line string. I source the file that I'm developing and unit test that function directly rather than involving the shell.
Does anyone know of an existing solution to help write tests for a NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution, because a domain-specific language that is very close to readable text means that others could more easily understand, and if necessary modify, the test specification.
If, however you are concerned about the user experience (what progress messages are shown, etc.) then I'm not sure that the tests you would need could be as easily expressed... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your install using Jenkins and the NSIS command line compiler, set up an AutoIT test script and have Jenkins run the test.
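For the compile-check half of that, here is a minimal sketch a Jenkins job could run; the makensis path, the /V2 verbosity level, and installer.nsi are assumptions about a typical setup, and it relies on makensis returning a non-zero exit code when the script has errors:

    Imports System.Diagnostics

    Module NsisCompileCheck
        Function Main() As Integer
            ' Run the NSIS command-line compiler and surface its output in the build log.
            Dim psi As New ProcessStartInfo("C:\Program Files (x86)\NSIS\makensis.exe", "/V2 installer.nsi") With {
                .RedirectStandardOutput = True,
                .UseShellExecute = False
            }
            Using proc = Process.Start(psi)
                Console.WriteLine(proc.StandardOutput.ReadToEnd())
                proc.WaitForExit()
                Return proc.ExitCode   ' non-zero on compile errors, so Jenkins marks the build failed
            End Using
        End Function
    End Module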
We have a bunch of VBScript snippets that are stored in a database. They are created by our users and are used during some complex calculations.
We are using the Microsoft ScriptControl to execute them. As we are switching to 64-bit applications we cannot use the ScriptControl anymore, and therefore we are going to start using CodeDom and VB.NET instead.
The problem is that we still need to support all those legacy VBScripts until they have been converted to VB.NET scripts.
The scripts only contain simple functions that take an arbitrary number of parameters and do some calculations on them. As I'm a C# developer I don't have that much experience with VBScript versus VB.NET syntax.
Is it easy to convert VBScript code to VB.NET (using regex or similar)? Got any pointers or things that I should think of? Or should I just wait until all the scripts have been converted by the users (which may take a while)?
With Option Explicit Off in VB.NET, quite a lot of VBScript code will be OK. One problem you'd have, though, is that in VB.NET you can't just execute a script by itself; even if the code works without any conversion, you won't be able to run the scripts in the same way, since you'll need to compile them into executables first. If each script can be executed independently of the others, then you'll either need to compile one executable per script, or have one master executable with a big Select Case in it to call the relevant code depending on command-line parameters.
I'd suggest that it might be worth waiting for the users to convert them, though, and also keeping Option Explicit On and letting the users go through the scripts and add data types and the like, since it's quite possible they'll find quite a few bugs in the scripts along the way.
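For the CodeDom side mentioned in the question, the runtime compilation itself is not much code. This is only a sketch: the UserScript wrapper class, the Calc function name, and the parameter handling are my assumptions about how each converted snippet might be wrapped.

    Imports System.CodeDom.Compiler
    Imports Microsoft.VisualBasic

    Module ScriptRunner
        ' Compiles a converted VB.NET function (e.g. "Public Function Calc(a As Double, b As Double) As Double ... End Function")
        ' at runtime and invokes it with the supplied arguments.
        Function RunScript(functionSource As String, ParamArray args As Object()) As Object
            Dim source =
                "Public Class UserScript" & Environment.NewLine &
                functionSource & Environment.NewLine &
                "End Class"

            Using provider As New VBCodeProvider()
                Dim options As New CompilerParameters() With {.GenerateInMemory = True}
                Dim results = provider.CompileAssemblyFromSource(options, source)
                If results.Errors.HasErrors Then
                    Throw New InvalidOperationException(results.Errors(0).ToString())
                End If

                Dim scriptType = results.CompiledAssembly.GetType("UserScript")
                Dim instance = Activator.CreateInstance(scriptType)
                Return scriptType.GetMethod("Calc").Invoke(instance, args)
            End Using
        End Function
    End Module

Each legacy snippet would still need its VBScript-isms (untyped Dims, Set statements, CreateObject calls and so on) converted before it will compile this way, which is where the manual effort comes in.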