Loading .fsx files dynamically in an FSX script

We are sharing a build script for FAKE across a set of projects. We want to keep this one build script the same everywhere, but make it possible to extend it with other targets. One way I could think of doing this is by loading .fsx files that fit a specific naming pattern, such as all files that match build-*.fsx; however, I can't seem to think of a way to load these files dynamically. Any suggestions on how to do this, or how to accomplish the desired result some other way, are all good as answers.
If I could, I would have done something like:
#load "build-*.fsx"

It's not completely clear to me why you want to do this but maybe this will help. Refer to a single script in each project:
#load "load-build-scripts.fsx"
And then in a single load-build-scripts.fsx:
#load "build-1.fsx"
#load "build-2.fsx"
#load "build-3.fsx"
...
This second file you will need to change whenever you add a new script.
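If changing that file by hand gets tedious, a small helper script could regenerate it from whatever build-*.fsx files are present. A minimal sketch (note that #load is resolved at compile time, so this would have to run as a separate step before the main build script):

// Regenerate load-build-scripts.fsx from the build-*.fsx files next to this script.
open System.IO

let loaderLines =
    Directory.GetFiles(__SOURCE_DIRECTORY__, "build-*.fsx")
    |> Array.sort
    |> Array.map (fun path -> sprintf "#load \"%s\"" (Path.GetFileName path))

File.WriteAllLines(Path.Combine(__SOURCE_DIRECTORY__, "load-build-scripts.fsx"), loaderLines)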
Doing this isn't generally recommended, because if these separate scripts refer to each other, some scripts will be loaded more than once. Scripts aren't really meant to be used for cases this complex.
Another option is to use FAKE as a console project instead of using scripts and the fake-cli tool. Then you can use normal .NET project dependencies.

Related

Adding two new phases to an Xcode framework project

I am building a project on GitHub written in Objective-C. It resolves MAC addresses down to manufacturer details. The lookup table is currently stored as a text file, manuf.txt (from the Wireshark project), which is parsed at run-time, and that parsing is costly. I would prefer to compile it down to archived objects at build-time and load those instead.
I would like to amend the build phases such that I:
1. Build a simple compiler
2. Run the compiler, parsing manuf.txt and outputting archived objects
3. Build the framework
4. Copy the archived objects into the framework
I am looking for wisdom on how to achieve steps 1 and 2 using Xcode v7.3 as Xcode provides only a Copy Files phase or a Run Script phase. An example of other projects achieving similar goals would be inspiring.
I suspect that what you are asking is possible, but tricky. The reason is that you will need to write a bunch of class files and then dynamically add them to the project.
Firstly, you will need to employ a run script phase to run various tools from the command line to parse your file and generate a number of class files from it. I would suggest looking into various templating engines. For example, appledoc uses Mustache templates to generate API documentation files. You could use the same technique to generate header and implementation files.
Next, rather than generating archived objects and trying to import them into a framework, I think you may be better off generating raw source code, adding it to a project, and compiling it into a framework. That is probably simpler in the long run.
To automatically include the generated code I would look into (which means I haven't actually tried this :-) adding a folder reference to the project rather than an Xcode group. Folder references are an option in the 'Add files to ...' dialog.
Folder references refer to a directory and automatically add the entire contents of that directory to a project. So you can use one to point to the directory where you have generated the source code. This is a much better option than trying to manipulate the project or injecting things into an established framework.
I would prefer to parse the file at runtime. After launch you can look for already existing output; otherwise, parse the file once.
However, I had to do something similar at Objective-Cloud. I simply added a run script build phase and put the compiler call into it.
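For illustration, such a Run Script phase could be as small as the following shell snippet; the manuf-compiler tool and the paths are hypothetical placeholders, and only SRCROOT is a standard Xcode build setting:

# Hypothetical Run Script build phase: regenerate the lookup data before compiling.
GENERATED_DIR="${SRCROOT}/Generated"
mkdir -p "${GENERATED_DIR}"
"${SRCROOT}/tools/manuf-compiler" "${SRCROOT}/Resources/manuf.txt" -o "${GENERATED_DIR}"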

How to reference the absolute directory of a project in Autoconf (to call custom scripts in a portable way)?

I'm writing a custom check for installed libraries in autoconf:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON autotools/check-ghc-version-range ....)
...
])
where my Python script that actually performs the check resides in the autotools/ sub-directory of the project.
However, this is not portable; for example, make distcheck fails because the autoconf tools are then called from a different directory. How can I reference the absolute path to my Python script so that it gets called properly no matter what the current directory is?
ac_top_srcdir or ac_abs_top_srcdir should work in this case:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON $ac_top_srcdir/autotools/check-ghc-version-range ....)
...
])
EDIT: I don't think this approach will work -- it seems that $ac_top_srcdir isn't evaluated until later (AC_OUTPUT?).
What I think might work in this instance is to do something similar to what the runtime C tests do: blast a configuration test to a temporary file (conftest.py instead of conftest.c in this case) and run it. Unfortunately, there are not (yet) any built-in macros or other automake/autoconf tools that directly assist with this task.
Fortunately, it seems that a clever person has written at least a couple of different ways to do this. The first one is GNU pyconfigure, which seems to have facilities for writing Python test code as I described above. The second one is more of an ad hoc macro collection that he used for his project.
You can use $srcdir.
It's not necessarily an absolute path, but it's a path that points from the top of the build tree to the top of the source tree.
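Applied to the macro from the question, that would look roughly like this (a sketch; the quoting around $srcdir is just a precaution against paths with spaces):

AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON "$srcdir"/autotools/check-ghc-version-range ....)
...
])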

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools, such as KWStyle, CppCheck and Visual Leak Detector, into CDash.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml that I submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer from Google, and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to valgrind, for instance. I'm not that certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling around with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
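As a rough sketch, a minimal CTest dashboard script for that flow might look like the following; the site, build name, paths, and generator are placeholders to replace with your own:

set(CTEST_SITE "my-machine")
set(CTEST_BUILD_NAME "valgrind-memcheck")
set(CTEST_SOURCE_DIRECTORY "/path/to/source")
set(CTEST_BINARY_DIRECTORY "/path/to/build")
set(CTEST_CMAKE_GENERATOR "Unix Makefiles")
set(CTEST_MEMORYCHECK_COMMAND "/usr/bin/valgrind")

ctest_start(Experimental)
ctest_configure()
ctest_build()
ctest_memcheck()   # writes DynamicAnalysis.xml from the valgrind output
ctest_submit()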
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is the thing you need.
I found out that I can submit those files from the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others.
And that you can set the builds from certain machines to be automatically placed under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.
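For reference, submitting an extra hand-generated file that way might look like this (the file name is just an example):

ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/MyCustomDynamicAnalysis.xml")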

Is it possible to override the behavior of a merge module

Suppose I have a merge module that installs a file "MyFile.txt" to a certain location, and that I wish to use that merge module, but I want to supply a different copy of "MyFile.txt" from the one supplied with the merge module.
Is it possible to do this? (And for bonus points, how can I do this using WiX?)
Update: Roughly speaking, MyFile.txt is part of a packaged-up component of installable items that we provide to others; they then combine these components with their own to produce an installer.
In the ideal world they would only need to add new files to the output. However, this is a replacement for an existing system where they currently have the ability to modify or even replace items (such as MyFile.txt) in the end installer, so without the ability to do the same with the merge module the migration path will be difficult.
The packaged up component doesn't need to be a merge module if there is a better solution, however merge modules seemed like the sensible choice and in all other respects provide a very nice re-usable package of installer logic.
It's possible, but every technique that I know of is a bit of a hack and doesn't scale very well. Can you tell me more about what type of file MyFile.txt is and what the intent of the different flavors of the file is? Usually my goal is to never have the same filename twice (darn component rules) and then to design variation points to support the needs. Sometimes upstream changes to the application are required to do this correctly.

Haskell IO Testing

I've been trying to figure out if there is already an accepted method for testing file IO operations in Haskell, but I have yet to find any information that is useful for what I am trying to do.
I'm writing a small library that performs various file system operations (recursively traverse a directory and return a list of all files; sync multiple directories so that each directory contains the same files, using inodes as the equality test and hardlinks...) and I want to make sure that these operations actually work, but the only way I can think of to test them is to create a temporary directory with a known structure and compare the results of the functions executed on this temporary directory with the known results. The thing is, I would like to get as much test coverage as possible while keeping the tests mainly automated: I don't want to have to create the directory structure by hand.
I have searched Google and Hackage, but the packages that I have seen on Hackage do not use any testing -- maybe I just picked the wrong ones -- and anything I find on Google does not deal with IO testing.
Any help would be appreciated
Thanks, James
Maybe you can find a way to make this work for you.
EDIT:
the packages that I have seen on hackage do not use any testing
I have found a unit testing framework for Haskell on Hackage. Using this framework, maybe you could write assertions to verify that the files you require are present in the directories where you want them to be and that they correspond to their intended purpose.
HUnit is the usual library for IO-based tests. I don't know of a set of properties/combinators for file actions -- that would be useful.
There is no reason why your test code cannot create a temporary directory, and check its contents after running your impure code.
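As a minimal sketch of that approach with HUnit (withSystemTempDirectory comes from the temporary package; MyFsLib and listFilesRecursive are hypothetical stand-ins for the library under test):

import Test.HUnit
import System.Directory (createDirectoryIfMissing)
import System.FilePath ((</>))
import System.IO.Temp (withSystemTempDirectory)  -- from the "temporary" package
import MyFsLib (listFilesRecursive)              -- hypothetical library under test

-- Build a small known directory tree, run the impure code, check the result.
testFindsAllFiles :: Test
testFindsAllFiles = TestCase $
  withSystemTempDirectory "fsops-test" $ \dir -> do
    createDirectoryIfMissing True (dir </> "sub")
    writeFile (dir </> "a.txt") "a"
    writeFile (dir </> "sub" </> "b.txt") "b"
    files <- listFilesRecursive dir
    assertEqual "finds both files" 2 (length files)

main :: IO ()
main = runTestTT testFindsAllFiles >> return ()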
If you want mainly automated testing of monadic code, you might want to look into Monadic QuickCheck. You can write down properties that you think should be true, such as
If you create a file with read permission, it will be possible to open the file for reading.
If you remove a file, it won't open.
Whatever else you think of...
QuickCheck will then generate random tests.
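A minimal sketch of one such property with Test.QuickCheck.Monadic (again using the temporary package for a scratch directory; the property is a simplified write-then-read round trip):

import Test.QuickCheck (Property, quickCheck)
import Test.QuickCheck.Monadic (monadicIO, run, assert)
import System.FilePath ((</>))
import System.IO.Temp (withSystemTempDirectory)
import Control.DeepSeq (force)      -- force the result before the temp dir is removed
import Control.Exception (evaluate)

-- Property: whatever we write to a fresh file can be read back unchanged.
prop_writeThenRead :: String -> Property
prop_writeThenRead contents = monadicIO $ do
  readBack <- run $ withSystemTempDirectory "qc-test" $ \dir -> do
    let path = dir </> "file.txt"
    writeFile path contents
    evaluate . force =<< readFile path
  assert (readBack == contents)

main :: IO ()
main = quickCheck prop_writeThenRead

In practice you would probably restrict the generated contents to characters your locale can encode, but the shape of the test stays the same.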