CDash Custom Dynamic Analysis - cmake

I'm trying to integrate custom dynamic analysis tools, such as KWStyle, CppCheck and Visual Leak Detector, into CDash.
I've figured out that I need to generate a DynamicAnalysis.xml file from CTest scripts and submit it to CDash.
I think I know how to run the external tool as part of the CTest script,
either by using these variables to change how ctest_memcheck() works:
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
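For the execute_process() route, what I have in mind is roughly this sketch (the CppCheck options and the report file name are just placeholders for illustration):
execute_process(
  COMMAND cppcheck --enable=all --xml "${CTEST_SOURCE_DIRECTORY}"
  OUTPUT_VARIABLE cppcheck_stdout
  # cppcheck writes its XML report to stderr
  ERROR_VARIABLE cppcheck_report
  RESULT_VARIABLE cppcheck_result)
file(WRITE "${CTEST_BINARY_DIRECTORY}/cppcheck-report.xml" "${cppcheck_report}")
That gets me the raw report, but not in a form CDash understands.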
The main problem, I think, is how to extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml file to submit.
The extreme solution I see is that I'd need to write a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer on Google, and even the XML schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not so certain that I should use dynamic analysis. Maybe a custom build type (alongside Continuous / Experimental / Nightly) would work better, like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?

If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
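A minimal dashboard script for that flow looks roughly like this (the valgrind path and directories are placeholders to adjust for your machine):
set(CTEST_SOURCE_DIRECTORY "/path/to/source")
set(CTEST_BINARY_DIRECTORY "/path/to/build")
set(CTEST_MEMORYCHECK_COMMAND "/usr/bin/valgrind")
set(CTEST_MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full")
ctest_start(Experimental)
ctest_configure()
ctest_build()
ctest_memcheck()
ctest_submit()
The ctest_memcheck() call is the step that parses the valgrind output and writes DynamicAnalysis.xml under the Testing/<tag>/ directory before ctest_submit() sends it.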
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" XML elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".

From what I've learned so far, for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the way to go.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter.
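For example (the file name here is hypothetical):
ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/MyCustomAnalysis.xml")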
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others.
And that you can set the builds from certain machines to be automatically transferred under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash, but I can't remember where anymore.

Upload image diff using CTest and CDash

For running automated tests in a C++ application, I would like the application to dump an image and compare it against a baseline image. I saw several examples of this on various CDash dashboards, e.g. this one (link might not be valid for long).
https://open.cdash.org/testDetails.php?test=660365465&build=5407474
My google-fu has failed me on this one; what is the correct way to get this functionality?
The easiest way to attach ordinary files to test results is by listing those files in either the ATTACHED_FILES or ATTACHED_FILES_ON_FAIL test properties. This is not the mechanism being used here though.
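For completeness, attaching files that way is just a test property; a sketch with a hypothetical test and file name:
set_tests_properties(RenderTest PROPERTIES
  ATTACHED_FILES_ON_FAIL "${CMAKE_BINARY_DIR}/Testing/Temporary/RenderTest.png")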
According to this mailing list post, you can write special content like that shown below to the test's stdout, and it results in the named files being uploaded. The sample CDash results page you linked to follows a similar pattern as the example from the mailing list, which I've reproduced here for reference (I've made one small correction to change DifferenceImage to DifferenceImage2):
<DartMeasurement name="BaselineImage" type="text/string">Standard</DartMeasurement>
<DartMeasurementFile name="TestImage" type="image/png">C:/Users/.../Testing/Temporary/BoxWidget.png</DartMeasurementFile>
<DartMeasurementFile name="DifferenceImage2" type="image/png">C:/Users/.../Testing/Temporary/BoxWidget.diff.png</DartMeasurementFile>
<DartMeasurementFile name="ValidImage" type="image/png">C:/Users/.../VTKData/Baseline/Widgets/BoxWidget.png</DartMeasurementFile>
I've checked through the CTest source code and it scans the test output looking for <DartMeasurement> and <DartMeasurementFile> tags here and here. These are uploaded as discrete measurement items to CDash, which also looks for these particular names and presents them specially as in the example CDash links in the question.
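In other words, the test executable itself just has to print those tags; a minimal C++ sketch (the report_images helper and the paths passed to it are hypothetical):
#include <cstdio>

// Print the measurement tags that CTest scans for in the test's stdout.
void report_images(const char* test_png, const char* diff_png, const char* baseline_png)
{
  std::printf("<DartMeasurement name=\"BaselineImage\" type=\"text/string\">Standard</DartMeasurement>\n");
  std::printf("<DartMeasurementFile name=\"TestImage\" type=\"image/png\">%s</DartMeasurementFile>\n", test_png);
  std::printf("<DartMeasurementFile name=\"DifferenceImage2\" type=\"image/png\">%s</DartMeasurementFile>\n", diff_png);
  std::printf("<DartMeasurementFile name=\"ValidImage\" type=\"image/png\">%s</DartMeasurementFile>\n", baseline_png);
}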

Is there a way to export the predefined macros from a Keil build configuration?

Context:
I'm trying to automate some of the more mundane tasks in embedded development with Keil. The end result I'm aiming for is that clicking build in a Keil project will run a pre-build step that runs all the code through Uncrustify (a source code beautifier) to ensure it conforms to the company style-guide, and a post-build step which then runs the code through pc-lint (a static code analyser) to highlight any potentially unsafe code that it might find. I've written a PC utility that searches through the .uvproj file for the #define macros, the include paths and the file-paths all of which are needed for both tools and then modifies the pre and post-build user commands to call up my batch files which will manage both steps. The uncrustify part is working fine and the lint part is producing some sensible messages, but the signal-to-noise ratio isn't that great.
My problem:
Lint keeps on producing messages that seem to relate to macros that the Keil compiler is aware of, but that Lint isn't. I'm trying to find a way to plug that gap. I found a table of predefined macros documented on the Keil website, which seems like a good start, but rather than manually copying them into a static .lnt file, I'd like to find a way of grabbing the up-to-date values at the time the project gets built. This way, the "__ARMCC_VERSION" macro, for instance, would be updated whenever the developer updates his/her Keil compiler, rather than being stuck at a point in time whenever I manually copied it.
I'd love it if someone can answer my question directly, but I'd be equally pleased if someone has a viable suggestion for a more straightforward alternative approach I could try instead. Many thanks!
I am assuming you're using the Keil ARM Compiler.
From the Compiler User Guide:
To list macros that are defined on the command line, predefined by the compiler, and found in header and source files, use --list_macros with a non-empty source file.
To list only macros predefined by the compiler and specified on the command line, use --list_macros with an empty source file.
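In practice you could script that as part of your pre-build step, e.g. on Windows (this assumes armcc from the MDK toolchain is on the PATH; the --cpu option is just an example to match to your target):
type NUL > empty.c
armcc --cpu=Cortex-M3 --list_macros empty.c > predefined_macros.txt
The resulting predefined_macros.txt can then be converted into -d options for your .lnt file so Lint sees the same macros as the compiler.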
EDIT:
It looks like your SDK also adds a few macros.
From the µVision User's Guide:
The following control strings are added, depending on the use of MDK:
__UVISION_VERSION:
Major and minor version of µVision. For example: -D__UVISION_VERSION="520".
_RTE_:
Set when RTE is in use. For example: -D_RTE_.
__RTX:
Set when RTX Kernel has been selected in Options for Target - Target - Operating System. Not set when using RTE. For example: -D__RTX.
__MICROLIB:
Set when Use MicroLIB has been enabled in Options for Target - Target. For example: -D__MICROLIB.
__EVAL:
µVision runs in evaluation mode. License MDK-Lite. For example: -D__EVAL.
device header name:
Device header name.

Adding two new phases to an Xcode framework project

I am building a project on GitHub written in Objective-C. It resolves MAC addresses down to manufacturer details. The lookup table is currently stored as a text file, manuf.txt (from the Wireshark project), which is parsed at run-time; this is costly. I would prefer to compile this down to archived objects at build-time, and load those instead.
I would like to amend the build phases such that I:
Build a simple compiler
Run the compiler, parsing manuf.txt and outputting archived objects
Build the framework
Copy the archived objects into the framework
I am looking for wisdom on how to achieve steps 1 and 2 using Xcode 7.3, as Xcode provides only a Copy Files phase or a Run Script phase. An example of other projects achieving similar goals would be inspiring.
I suspect that what you are asking is possible, but tricky. The reason is that you will need to write a bunch of class files and then dynamically add them to the project.
Firstly you will need to employ a Run Script phase to run various tools from the command line to parse your file and generate a number of class files from it. I would suggest looking into various templating engines. For example, appledoc uses Mustache templates to generate API documentation files. You could use the same technique to generate header and implementation files.
Next, rather than generating archived objects and trying to import them into a framework, I think you may be better off generating raw source code, adding it to the project, and compiling it into the framework. That's probably simpler in the long run.
To automatically include the generated code I would look into (which means I haven't actually tried this :-) adding a folder reference to the project rather than an Xcode group. Folder references are an option in the 'Add files to ...' dialog.
Folder references refer to a directory and automatically add the entire contents of that directory to a project. So you can use one to point to the directory where you have generated the source code. This is a much better option than trying to manipulate the project or injecting things into an established framework.
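As a sketch of the Run Script phase for step 2 (generate_manuf_sources is a hypothetical generator tool; adjust paths to your project layout):
# Run Script build phase: regenerate source files from manuf.txt on every build
GENERATED_DIR="${SRCROOT}/Generated"
mkdir -p "${GENERATED_DIR}"
"${SRCROOT}/tools/generate_manuf_sources" "${SRCROOT}/manuf.txt" --output-dir "${GENERATED_DIR}"
The folder reference would then point at that Generated directory so the new files are picked up automatically.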
I would prefer to parse the file at runtime. After launch you can look for already existing output; otherwise, parse it once.
However, I have to do something similar at Objective-Cloud. I simply added a run script build phase and put the compiler call into it.

Is it possible to pass a variable from the build process to Visual Basic code?

My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found the way you can pass data from the MsBuild Arguments directly into the code. An alternative is to use the Condition Attribute in your project files to make certain property groups optional, it allows you to even include specific files conditionally. You can control conditions by passing in /p:ConditionalProperty=value on the MsBuild command. This at least ensures people use a set of values that make sense together.
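As a sketch, such a conditional group in the .vbproj could look like this (the ConditionalProperty name and the constants are placeholders):
<!-- Only applied when the property is passed on the MsBuild command line -->
<PropertyGroup Condition="'$(ConditionalProperty)' == 'Test'">
  <DefineConstants>DEBUG=False,TRACE=True,ENV_TEST=True</DefineConstants>
</PropertyGroup>
which you would select with msbuild MySolution.sln /p:ConditionalProperty=Test.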
The problem is that when MsBuild is running in incremental mode it is likely not to process your changes (as you've noticed); the reason for this is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that when building that Configuration/Platform combination, incremental builds work as intended.
I do want to echo some of the comments from JerryM and Daniel Mann. Some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration. The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files and use MsDeploy, specifying a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through WiX) and merge the final configuration at install time (similar to the MsDeploy approach). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, Tokenization Task, or Colin's ALM Corner Build and Release Tasks, plus a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually it involves overriding the AssemblyInfo.cs file before compilation. Your code could then use reflection to read the value at runtime. You can use the AssemblyInformationalVersion to specify something like you do in the example above which contains .xxx or other text. It also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.
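For the read-back side, a rough VB.NET sketch (it reuses the BarStaticItemVersion control from the snippet above and assumes the informational version attribute has been stamped into the assembly):
' Read the informational version that the build stamped into the assembly.
Dim asm = System.Reflection.Assembly.GetExecutingAssembly()
Dim attrs = asm.GetCustomAttributes(GetType(System.Reflection.AssemblyInformationalVersionAttribute), False)
Dim infoVersion = DirectCast(attrs(0), System.Reflection.AssemblyInformationalVersionAttribute).InformationalVersion
BarStaticItemVersion.Caption = String.Format("Version: {0}", infoVersion)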

How can I create a single Clojure source file which can be safely used as a script and a library without AOT compilation?

I’ve spent some time researching this, and though I’ve found some relevant info, none of it has answered the question satisfactorily. Here’s what I’ve found:
SO question: “What is the clojure equivalent of the Python idiom if __name__ == '__main__'?”
Some techniques at RosettaCode
A few discussions in the Clojure Google Group, most from 2009
My Clojure source code file defines a namespace and a bunch of functions. There’s also a function which I want to be invoked when the source file is run as a script, but never when it’s imported as a library.
So: now that it’s 2012, is there a way to do this yet, without AOT compilation? If so, please enlighten me!
I'm assuming by run as a script you mean via clojure.main as follows:
java -cp clojure.jar clojure.main /path/to/myscript.clj
If so then there is a simple technique: put all the library functions in a separate namespace like mylibrary.clj. Then myscript.clj can use/require this library, as can your other code. But the specific functions in myscript.clj will only get called when it is run as a script.
As a bonus, this also gives you a good project structure, as you don't want script-specific code mixed in with your general library functions.
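To make that concrete, a sketch (the namespace, file and function names are placeholders):
;; mylibrary.clj - an ordinary library namespace, safe to require from anywhere
(ns mylibrary)
(defn do-the-work [args]
  (println "working on" args))

;; myscript.clj - only ever passed to clojure.main, so this top-level call
;; runs only in the script case
(require 'mylibrary)
(mylibrary/do-the-work *command-line-args*)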
EDIT:
I don't think there is a robust way, within Clojure itself, to determine whether a single file was launched as a script or loaded as a library; from Clojure's perspective, there is no difference between the two (it all gets loaded in the same way via Compiler.load(...) in the Clojure source, for anyone interested).
Options if you really want to detect the manner of the launch:
Write a main class in Java which sets a static flag and then launches the Clojure script. You can easily test this flag from Clojure.
Use AOT compilation to implement a Clojure main class which sets a flag
Use *command-line-args* to indicate script usage. You'll need to pass an extra parameter like "script" on the command line.
Use a platform-specific method to determine the command line (e.g. from the environment variables in Windows)
Use the --eval option in the clojure.main command line to load your clj file and launch a specific function that represents your script (see the sketch after this list). This function can then set a script-specific flag if needed.
Use one of the methods for detecting the Java main class at runtime
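For option 5, the invocation might look like this (the classpath and function name are placeholders):
java -cp clojure.jar:src clojure.main --eval "(require 'myscript) (myscript/run-as-script)"
run-as-script would do the script-only work, so merely requiring the file as a library never triggers it.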
I’ve come up with an approach which, while deeply flawed, seems to work.
I identify which namespaces are known when my program is running as a script. Then I can compare that number to the number of namespaces known at runtime. The idea is that if the file is being used as a lib, there should be at least one more namespace present than in the script case.
Of course, this is extremely hacky and brittle, but it does seem to work:
(defn running-as-script
  "This is hacky and brittle but it seems to work. I’d love a better
  way to do this; see http://stackoverflow.com/q/9027265"
  []
  (let [known-namespaces
        #{"clojure.set"
          "user"
          "clojure.main"
          "clj-time.format"
          "clojure.core"
          "rollup"
          "clj-time.core"
          "clojure.java.io"
          "clojure.string"
          "clojure.core.protocols"}]
    (= (count (all-ns)) (count known-namespaces))))
This might be helpful: the github project lein-oneoff describes itself as "dependency management for one-off, single-file clojure programs."
This lets you define everything in one file, but you do need the oneoff plugin installed in order to run it from the command line.