TFS SDK Code metrics

I am making some kind of wrapper around the TFS SDK.
I would like to query the code metrics from a given project on a given URL.
MS Visual Studio has a feature to calculate the code metrics inside the IDE. Is it possible to do this on a TFS server and query the results from the Warehouse? I would be interested in:
cyclomatic complexity
depth of inheritance
class coupling
lines of code
maintainability index
and even more metrics, if possible.
Thanks in advance

By using this utility, you can generate a "metrics.xml" for each project/solution. I am using TFS 2010 and have inserted this as a build step, following this guide by J. Ehn. Those results are not persisted into TFS in any way, but in theory you could set up a database of your own that this output gets inserted into. From then on, you could query it however you wish.
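As a rough sketch of that last idea (the metrics.exe name and switches, the report's element names, and the database schema are all assumptions to check against your installation), a build step could shell out to the tool and load the report into a small database:

import sqlite3
import subprocess
import xml.etree.ElementTree as ET

# Assumed: the Code Metrics PowerTool's metrics.exe is on the PATH and takes
# /f: (input assembly) and /o: (output report); verify against your version.
subprocess.run(["metrics.exe", "/f:MyAssembly.dll", "/o:metrics.xml"],
               check=True)

# Assumed report shape: <Metric Name="..." Value="..."/> elements somewhere
# under the root; adjust the lookup to the schema your tool actually emits.
conn = sqlite3.connect("metrics.db")
conn.execute("CREATE TABLE IF NOT EXISTS metrics (name TEXT, value TEXT)")
root = ET.parse("metrics.xml").getroot()
for metric in root.iter("Metric"):
    conn.execute("INSERT INTO metrics VALUES (?, ?)",
                 (metric.get("Name"), metric.get("Value")))
conn.commit()

From there, pulling cyclomatic complexity or maintainability index per build is an ordinary SQL query against your own store.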

Related

Oracle SQL Automation

I have a question regarding automation. In my project we have 35 SQL scripts with the same logic, differing only in 4 parameters. How can I automate these in Toad for Oracle?
Depending on your version of Toad, there could be an 'Automation Designer' under the 'Utilities' menu item. This will allow you to run scripts automatically, based on a bit of logic. It also supports running with parameters.
The tool 'Toad for Data Analysts' can also be used to model automated scripts, and run them with specific parameters.
If you have any of these tools available, I would suggest giving them a try, or at least reading up on their documentation. If you don't have access to them, let me know so I can try to think of a different solution.
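If you do end up scripting this outside Toad, the underlying pattern is straightforward. Here is a minimal sketch (the connection string, script name, and parameter values are placeholders) that feeds each parameter set to one shared script via SQL*Plus substitution variables:

import subprocess

# One tuple per run; inside shared_logic.sql these arrive as &1..&4.
param_sets = [
    ("PARAM_A1", "PARAM_B1", "PARAM_C1", "PARAM_D1"),
    # ... 35 entries in total
]

for params in param_sets:
    # sqlplus passes positional arguments after the script name into it
    subprocess.run(
        ["sqlplus", "-S", "user/password@mydb", "@shared_logic.sql", *params],
        check=True)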

WSO2 BAM Incremental Analysis

According to the documentation here, this feature is experimental, but I would like to know if anyone is using it successfully. I already have some data, so I am trying use case 4.
I tried to run an update Hive query with the @Incremental annotation, but with it nothing goes into my RDB anymore.
If I remove it, everything works, but I want to take advantage of this feature because of the large amount of stored data, which makes query execution very slow.
Any suggestion or help is greatly appreciated.
The incremental analysis feature works fine in the partially distributed setup, but it wasn't thoroughly tested with an external Hadoop cluster, hence it was marked as 'experimental'. Anyhow, if you find any bugs you can report them in JIRA.
To answer your question, you need to enable incremental processing for your stream first and then add the incremental annotation. The detailed steps are as follows:
1) Add the property 'streams.definitions.defn1.enableIncrementalIndex=true' in the streams.properties file as explained here, and create a toolbox which consists of only the stream definition artefact, as explained here.
2) Install the toolbox - this will register the stream definition you mentioned in the toolbox with incremental analysis. From this point onwards, the incoming data will be incrementally processed.
3) Now add the @Incremental annotation to the query (see the sketch after these steps). The first iteration will consider all the available data, since you enabled incremental analysis in the middle of processing, but from the next iteration onwards it will only consider the new batch of data.
This feature is labelled experimental because there may still be some critical bugs; a more stable version of it will ship with the next BAM release.
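As a rough illustration of steps 1 and 3 (the stream, table, and annotation names below are made up, and the exact annotation parameters should be checked against the BAM samples for your version), the property and the annotated Hive query could look like:

streams.definitions.defn1.enableIncrementalIndex=true

@Incremental(name="summaryAnalysis", tables="myStreamTable")
insert overwrite table summaryTable
select hostName, count(*) from myStreamTable group by hostName;

With the annotation in place, each scheduled run of the script should only scan rows that arrived since the previous run.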

Compare two fxcop results

I'm going to analyse two different versions of the same DLL with FxCop.
I would like to display only the differences between these two reports.
Does anyone know if this is possible?
Thanks for your time.
Yes, it's possible, but there are no built-in tools available for this. One fairly simple approach would be to use a diff tool to compare the two reports. If the result is too noisy for you, another approach would be to roll your own tool to compare the XML of the two reports.
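A minimal sketch of such a tool (the Message/Issue element names and attributes below are assumptions about the report schema; adjust them to what your FxCop version actually emits):

import xml.etree.ElementTree as ET

def issue_keys(path):
    # Assumed schema: <Message CheckId="..." TypeName="..."> elements
    # wrapping <Issue> elements; collect one hashable key per issue.
    keys = set()
    for message in ET.parse(path).getroot().iter("Message"):
        for issue in message.iter("Issue"):
            keys.add((message.get("CheckId"),
                      message.get("TypeName"),
                      (issue.text or "").strip()))
    return keys

old = issue_keys("OldVersionReport.xml")
new = issue_keys("NewVersionReport.xml")
print("fixed:", len(old - new), "introduced:", len(new - old))
for key in sorted(new - old):
    print("NEW:", key)

Set difference on (rule, target, message) triples is crude, but usually enough to separate fixed violations from newly introduced ones.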
Are you using the UI or the command line?
With the command line tool, you have a number of options. One of them is to import an old report to be used as a baseline, then set the FxCop project to report only new errors: Report Status="Active, Absent" NewOnly="True"
The command line will be something like this: fxcopcmd.exe /i:OldVersionReport.xml /out:NewVersionReport.xml /p:FXCopProject.fxcop /f:mydll.dll
The new report will contain only the new active errors, plus a list of missing (i.e. fixed) errors from the old version.
While this will work for the most part, you need to understand that the difference will not be 100% accurate. FxCop does its best to match the old report to the new version of the DLL, but sometimes it fails. For example, if you fixed a particular violation somewhere in the code but added the same type of violation in another place, FxCop will most likely miss this and show no difference.
For FxCop for VS 2010, all you need is to specify /saveMessagesToReport:Absent along with the previously generated FxCop file, /import:"OldFile.xml".
For example:
fxcopcmd.exe /import:"c:\Old.xml" /summary "/file:c:\*.dll" /saveMessagesToReport:Absent /out:"c:\Output.xml"

SQL Server Version Updating Tables

I am part of a software development company looking for a good way to update my SQL Server tables when I put out a new version of the software. I know the answer is probably to use scripts in one form or another.
I am considering writing my own .NET program that runs the scripts to make it a bit easier and more user-friendly. I was wondering if there are any tools out there along those lines. Any input would be appreciated.
I suggest you look at Red Gate's SQL Compare.
What kind of product are you using for your software installation? Products like InstallShield now often include SQL steps as an optional part of your install script.
Otherwise, you could look at using isql/osql to run your script from the command line through a batch file.
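For example, a batch file could apply each upgrade script in order using a trusted connection (the server, database, and script names here are placeholders):

osql -S MyServer -d MyDatabase -E -i Upgrade_1.2_to_1.3.sql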
One of the developers where I'm currently consulting wrote a rather nifty SQL installer. I'll ask him when he gets in how he went about it.
I am using Red Gate's SQL Compare all the time. Also you need to make sure to provide a rollback script in case you need to go back to the previous version.
Have a look at DB Ghost Packager Plus.
It packages your source database and the compare-and-sync engine into a simple EXE for deployment. The installer EXE will automatically update any target schema to match the source on the fly at installation time.
Use Red Gate's SQL Compare to generate the change script, and Red Gate's Multi Script to send it to multiple SQL Server databases at the same time.

Test framework for black box regression testing

I am looking for a tool for regression testing a suite of equipment we are building.
The current concept is that you create an input file (text/csv) to the tool specifying inputs to the system under test. The tool then captures the outputs from the system and records the inputs and outputs to an output file.
The output is in the same format as the original input file and can be used as an input for following runs of the tool, with the measured outputs matched with the values from the previous run.
The results of two runs will not be exact matches; there are some timing differences that depend on the state of the battery, or on other internal state of the equipment.
We would have to write our own interfaces to pass the commands from the tool to the equipment and to capture the output of the equipment.
This is a relatively simple task, but I am looking for an existing tool / package / library so I can avoid re-inventing the wheel, or at least steal lessons from it.
I recently built a system like this on top of git (http://git.or.cz/). Basically, write a program that takes all your input files, sends them to the server, reads the output back, and writes it to a set of output files. After the first run, commit the output files to git.
For future runs, your success is determined by whether the git repository is clean after the run finishes:
test 0 -eq "$(git diff data/output/ | wc -l)"
As a bonus, you can use all the git tools to compare differences, and commit them if it turns out the differences were an improvement, so that future runs will pass. It also works great when merging between branches.
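Accepting an intentional change as the new baseline is then just an ordinary commit, for example:

git add data/output && git commit -m "accept improved outputs"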
I'm not sure there will be a single package that exactly suits your needs. You have a few considerations to make:
How to pass data to the equipment and how to collect it back. This is very application specific, but a usually good option is the good old serial port (RS-232), for which an easy interface exists in almost any programming language.
How to run the tests. A unit-testing framework can definitely help you here. The existing frameworks have a lot of the basic features implemented - selecting tests to run, selecting the detail-level of the report (very important for detailed debugging at first and production-stage PASS/FAIL analysis later on). I've had good experience using the test frameworks of both Perl and Python from testing embedded devices.
You also have to decide how to make the comparisons. As you correctly noted, the results won't be equal. This is where your domain knowledge comes in: it is usually implemented using error margins that are applicable in your domain. You won't be able to use a basic diff tool, and will have to write an intelligent script (see the sketch below).
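A minimal sketch of such a script, assuming both runs write CSV files of the same shape (the file names and the tolerance value are placeholders to adapt):

import csv

TOLERANCE = 0.05  # 5% relative error margin; pick what your domain allows

def rows(path):
    with open(path, newline="") as f:
        return list(csv.reader(f))

def within_margin(expected, actual):
    try:
        e, a = float(expected), float(actual)
    except ValueError:
        return expected == actual  # non-numeric fields must match exactly
    return abs(a - e) <= TOLERANCE * max(abs(e), 1e-9)

for i, (exp_row, act_row) in enumerate(zip(rows("previous_run.csv"),
                                           rows("current_run.csv"))):
    for j, (e, a) in enumerate(zip(exp_row, act_row)):
        if not within_margin(e, a):
            print(f"row {i}, column {j}: expected {e}, got {a}")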
You can just use any test framework. The hard part is writing the tools to send/retrieve the data from your test system, not the actual string comparisons.
Your tests would all look something like this (a Python-flavoured sketch; the helpers for reading files and talking to the system under test are the part you write yourself):
inputs = read_input_file(ifilename)        # stimulus to send to the device
expected = read_expected_data(ofilename)   # outputs captured on a prior run
send_input_file_to_server(inputs)          # drive the system under test
actual = read_output_from_server()
checkequal(expected, actual)               # or compare within error margins