How to reference the absolute directory of a project in Autoconf (to call custom scripts in a portable way)?

I'm writing a custom check for installed libraries in autoconf:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON autotools/check-ghc-version-range ....)
...
])
where my Python script that actually performs the check resides in the autotools/ sub-directory of the project.
However, this is not portable; for example, make distcheck fails because the configure script is then run from a different directory. How can I reference the absolute path to my Python script so that it gets called properly no matter what the current directory is?

ac_top_srcdir or ac_abs_top_srcdir should work in this case:
AC_DEFUN([AC_GHC_PKG_CHECK],[
...
GHC_PKG_RESULT=$($PYTHON $ac_top_srcdir/autotools/check-ghc-version-range ....)
...
])
EDIT: I don't think this approach will work -- it seems that $ac_top_srcdir isn't evaluated until later (AC_OUTPUT?).
What I think might work in this instance is to do something similar to what the runtime C tests do: blast a configuration test to a temporary file (conftest.py instead of conftest.c in this case) and run it. Unfortunately, there are (as yet) no builtin macros or other automake/autoconf tools that directly assist with this task.
Fortunately it seems that a clever person has written at least a couple different ways to do this. The first one is GNU pyconfigure which seems to have facilities for writing Python test code as I described above. The second one is more of an ad hoc macro collection that he used for his project.
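For illustration, here is a minimal hand-rolled sketch of the conftest.py idea (the macro name and the version test are made up; as noted, there is no builtin support for this):
AC_DEFUN([MY_PYTHON_CONFTEST_CHECK],[
dnl Write the test program to a temporary file, mirroring what
dnl the C checks do with conftest.c.
cat > conftest.py <<'EOF'
import sys
sys.exit(0 if sys.version_info >= (2, 6) else 1)
EOF
AS_IF([$PYTHON conftest.py],
      [my_python_check=yes],
      [my_python_check=no])
rm -f conftest.py
])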

You can use $srcdir.
It's not necessarily an absolute path, but it's a path that points from the top of the build tree to the top of the source tree.
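Since $srcdir may be relative, you can make it absolute yourself before calling the script (a small sketch, reusing the script path from the question):
my_abs_srcdir=`cd "$srcdir" && pwd`
GHC_PKG_RESULT=$($PYTHON "$my_abs_srcdir/autotools/check-ghc-version-range" ....)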

Related

How to keep CMake generated files?

I'm using add_custom_command() to generate some files. ninja clean removes them, as it should. One of the files is intended as a default/example implementation, to be modified by the user. It is only generated if it does not already exist. I would like for ninja clean not to remove this file.
I have tried a number of things but without success:
add_custom_target(): CMake complains about the missing file unless I name it in BYPRODUCTS, but doing this also leads to removal on clean
set_source_files_properties(... PROPERTIES GENERATED FALSE) doesn't work because CMake complains about the file missing.
set_directory_properties() failed in a similar way: "folder doesn't exist or not yet processed" (it does exist)
I previously generated the example implementation and just let the user copy it or model their code on it. This works, but isn't entirely satisfactory. Is my use-case so unlikely that CMake doesn't support it?
I am afraid your requirement (conceptually, to have make create something which make clean does not remove) is rather unusual. I can think of two potential solutions/workarounds.
One, move the file's generation to CMake time. That is, create it using execute_process() instead of add_custom_command(). This may or may not be possible, based on whether the file-generation process (the current custom command) depends on the rest of the build or not.
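A minimal sketch of option one, assuming a hypothetical generator script gen_example.py that writes example_impl.cpp:
# Configure-time generation: this runs when CMake itself runs, so the
# build system (and therefore 'ninja clean') never knows about the file.
if(NOT EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/example_impl.cpp")
  execute_process(
    COMMAND python "${CMAKE_CURRENT_SOURCE_DIR}/gen_example.py"
            "${CMAKE_CURRENT_SOURCE_DIR}/example_impl.cpp"
    RESULT_VARIABLE _gen_result)
  if(NOT _gen_result EQUAL 0)
    message(FATAL_ERROR "generating example_impl.cpp failed")
  endif()
endif()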
Two, totally hide the example file's existence from CMake. That is, have the custom command also generate some other file (maybe just a timestamp file) and have its driving custom target depend on that one instead. Do not list the example file as either the custom command's dependency, output, or byproduct. That way, nothing will depend on it, and neither CMake nor Ninja should care whether it exists or not, so they will not complain or try to clean it up.
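And a sketch of option two with the same hypothetical generator; only the stamp file is ever declared to CMake, so the example file itself is invisible to the clean rule:
# The custom command officially produces only the stamp file; the example
# file is written as a side effect and never mentioned to CMake.
add_custom_command(
  OUTPUT example.stamp
  COMMAND python "${CMAKE_CURRENT_SOURCE_DIR}/gen_example.py"
          "${CMAKE_CURRENT_SOURCE_DIR}/example_impl.cpp"
  COMMAND ${CMAKE_COMMAND} -E touch example.stamp
  COMMENT "Generating example implementation")
add_custom_target(generate_example ALL DEPENDS example.stamp)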
If it is an example for the user, it should not be in your build folder, but in the install folder. I don't see why you would need add_custom_command or the other commands you listed.
Therefore, you have to provide install() instructions.
You can then call make install. Cleaning will not remove those and only installing again will overwrite them if necessary.
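A sketch of such an install rule (the paths are assumptions; adjust to your layout):
# Install the generated example alongside the other installed files;
# cleaning never touches the install tree.
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/example_impl.cpp"
        DESTINATION share/myproject/examples)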
For those who come here a long time after the original question was asked (like me), I'll write my solution:
The tool called in add_custom_command generates two files with identical content:
one that is saved in the source tree, never mentioned anywhere
and one that's marked as a byproduct and then depended on
So the first one is the file we wanted in the first place.
And the second one is actually used in build process, and gets deleted on clean.
For me the issue is that I actually want to keep the generated files in VCS so I can track changes. And this approach gives me what I need.
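A sketch of this two-file trick (mygen stands in for the actual tool):
# Byproduct copy: used by the build, removed on clean.
# Source-tree copy: tracked in VCS, never mentioned anywhere else.
add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/generated.cpp"
  COMMAND mygen "${CMAKE_CURRENT_BINARY_DIR}/generated.cpp"
                "${CMAKE_CURRENT_SOURCE_DIR}/generated.cpp"
  COMMENT "Generating sources (build copy + tracked copy)")
add_custom_target(run_mygen ALL
  DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/generated.cpp")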

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position independent (-fPIC) because I'll link them to a dynamic lib of my own later.
I found this answer, which suggests the use of a Makefile included in the project.
I successfully used it to replace the libtensorflow_cc.so but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That will avoid modularizing the library and thus remove the dependency on libtensorflow_framework.so (see tools/bazel.rc).
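For reference, the invocation looks something like this (the exact target name may differ between TensorFlow versions):
bazel build --config=monolithic //tensorflow:libtensorflow_cc.so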
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow had to be compiled beforehand in order to compile this final dummy program.
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf, and now Nsync too). I put all of this in a build area I have prepared before.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers, and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and bazel's verbose output (the -s switch), you can even create a link command to statically link these intermediate static libraries. I automated this process for Windows/Linux/macOS and included it in the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

Why is cmake file GLOB evil?

The CMake doc says about the command file(GLOB):
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
Several discussion threads on the web second the view that globbing source files is evil.
However, to make the build system know that a source has been added or removed, it's sufficient to say
touch CMakeLists.txt
Right?
Then that's less effort than editing CMakeLists.txt to insert or delete a source file name. Nor is it more difficult to remember. So I don't see any good reason to advise against file GLOB.
What's wrong with this argument?
The problem is when you're not alone working on a project.
Let's say project has developer A and B.
A adds a new source file x.c. He doesn't change CMakeLists.txt and commits after he's finished implementing x.c.
Now B does a git pull, and since there have been no modifications to CMakeLists.txt, CMake isn't run again and B gets linker errors when compiling, because x.c has not been added to its source files list.
2020 Edit: CMake 3.12 introduces the CONFIGURE_DEPENDS argument to file(GLOB), which makes globbing scan for new files: https://cmake.org/cmake/help/v3.12/command/file.html#filesystem
This is however not portable (Visual Studio and Xcode generators don't support the feature), so please only use it as a first approximation, or other people may have trouble building your CMake files under their IDE of choice!
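With CMake 3.12+ and a supported generator, that looks like:
# Re-checks the glob at build time and reruns CMake when the list changes
file(GLOB sources CONFIGURE_DEPENDS "src/*.cpp")
add_executable(myapp ${sources})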
It's not inherently evil - it has advantages and disadvantages, covered relatively well in this answer here on StackOverflow. But if you use it carelessly, you could end up ignoring dependency changes and requiring clean rebuilds of large parts of your codebase.
I'm personally in favor of using it - in smaller projects, or on certain subdirectories in larger ones - to avoid having to enter every file manually into the build files. Edit: My preference has changed and I currently tend to avoid it.
On top of the reasons other people here posted, IMHO the worst issue with glob is that it can yield DIFFERENT file lists on different platforms. As I see it, that's a bug. On OS X, glob ignores case sensitivity; on an Ubuntu box, it doesn't.
Globbing breaks all code inspection in tools like CLion, which otherwise understand limited subsets of CMakeLists.txt and do not (and never will) support globbing, as it is unsafe.
Write a script to dump the globbed list and paste it in; it's very simple, and then CLion can actually find the referenced files and treat them as useful. Maybe even put such a script into the tree so that other devs can either run it or set up git hooks to make it happen.
In no case should some random file dropped into some directory ever get automatically linked; that's how trojans happen.
Also, CLion without context jumping to known definitions and whatnot is like hiking barefoot - why bother?

How to create a Wix patch in combination with Heat?

I'm a developer on a big system (>100 projects in the solution, >100,000 LOC, >10 services, ...) and built the installer for this system with WiX in the past, which worked fine. Now I need a way to patch (minor upgrade) parts of the system and have run into several issues.
My Current Wix Setup is as following:
I have VS2010, the WiX 3.6 toolset, and TFS2012 to build the whole thing and get an installer
I'm using a Setup Library Project Type per Service
I'm using exactly one Setup Project to bundle things together and get one installer for the whole system.
It's not possible to change this setup.
The Setup Library Projects are set up as following:
I use the HeatDirectory MSBuild task to generate the components and files, and I'm using preprocessor variables to modify the file paths.
I need to modify the file paths because it must be possible to build the installer both on a local developer system and on the TFS build system, whose folder structures differ.
TFS always uses the same directory to compile subsequent versions of the software and moves the output to a unique folder structure after a successful compilation.
Now I need a patch.
I created the Patch.wxs and called candle and light for it. I called torch to get the difference file. And finally want to create the patch with pyro.
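For reference, the sequence looks roughly like this (file names are placeholders; the -t RTM baseline name must match the PatchBaseline Id in Patch.wxs):
candle.exe patch.wxs
light.exe patch.wixobj -out patch.wixmsp
torch.exe -p -xi 1.0\product.wixpdb 1.1\product.wixpdb -out diff.wixmst
pyro.exe patch.wixmsp -t RTM diff.wixmst -out patch.msp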
Everything worked fine with a simple test project, but on the big system pyro has the problem that it can't find the files to install.
Because of my setup (see above), I must use preprocessor variables and have fully qualified paths in my WiX output (for example: C:\builds\myproduct\product.exe as the file source). After the TFS output has been moved to another location, this path is no longer valid. I tried to use the -bt and -bu switches for pyro, but these only work for relative paths or for named bindpaths.
Now I wanted to change my WiX project setup to use named bindpaths rather than preprocessor variables, but it seems that this is not possible.
heat can only use preprocessor variables or WiX variables; it seems not to be possible to use bindpath variables. heat provides a switch, -wixvar, which should create binder variables instead of preprocessor variables, but it does exactly nothing.
Then I tried to use no WiX and no preprocessor variables in heat and tell light via the -bu and -bt switches where to find the files. But if I do not set a preprocessor variable, the resulting paths look like Sources\product.exe, and I can't get rid of this Sources prefix. I know that I can transform all the XML with XSLT and remove the Sources, but that's a workaround which I would only implement if no other solution is possible. It would also mean that there is a problem in the WiX toolchain.
It looks like pyro only supports bindpath variables, while heat only supports preprocessor and WiX variables. This seems really crazy, because how are they supposed to work together?
How can I create a patch if I use lit, light, candle, heat, torch, and pyro, if the original build paths have changed (which is very common on a build system), and if the file paths are created with heat and are therefore fixed or use preprocessor or WiX variables?
As you've found, heat wasn't designed to be used in the patching scenario. It was only in recent versions of the WiX toolset that the generated GUIDs got to a point where there was even a chance that heat could successfully build output that would be patchable. We still need to do work there to make patching where heat is used work well.
Ultimately, I believe the answer is to simplify the "original source" problem. It is challenging to get all the bindpaths set up correctly and that makes patching, which is a hard problem, even harder. We've kicked around a few ideas but nothing has come together yet.
You could always use admin image based patching. It's slower but can be easier to get the "original source" and "target" laid out. That path does lose filtering though.
Basically, we need to do a bit more work in patching scenarios to make it much easier.
PS: "Source" in the path for a File/@Source attribute is an alias for the "default bindpath". You can use bindpaths there.

How can I create a single Clojure source file which can be safely used as a script and a library without AOT compilation?

I’ve spent some time researching this and found some relevant info. Here’s what I’ve found:
SO question: “What is the clojure equivalent of the Python idiom if __name__ == '__main__'?”
Some techniques at RosettaCode
A few discussions in the Clojure Google Group - most from 2009
But none of it answers the question satisfactorily.
My Clojure source code file defines a namespace and a bunch of functions. There’s also a function which I want to be invoked when the source file is run as a script, but never when it’s imported as a library.
So: now that it’s 2012, is there a way to do this yet, without AOT compilation? If so, please enlighten me!
I'm assuming by run as a script you mean via clojure.main as follows:
java -cp clojure.jar clojure.main /path/to/myscript.clj
If so then there is a simple technique: put all the library functions in a separate namespace like mylibrary.clj. Then myscript.clj can use/require this library, as can your other code. But the specific functions in myscript.clj will only get called when it is run as a script.
As a bonus, this also gives you a good project structure, as you don't want script-specific code mixed in with your general library functions.
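A minimal sketch of that layout (the namespace and function names are made up):
;; mylibrary.clj -- pure library code, safe to require from anywhere
(ns mylibrary)

(defn greet [who]
  (str "Hello, " who "!"))

;; myscript.clj -- the script; its top-level forms only run when this
;; file itself is loaded, e.g. via: java -cp clojure.jar clojure.main myscript.clj
(ns myscript
  (:require [mylibrary]))

(println (mylibrary/greet "world"))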
EDIT:
I don't think there is a robust way within Clojure itself to determine whether a single file was launched as a script or loaded as a library - from Clojure's perspective, there is no difference between the two (it all gets loaded the same way via Compiler.load(...) in the Clojure source, for anyone interested).
Options if you really want to detect the manner of the launch:
Write a main class in Java which sets a static flag and then launches the Clojure script. You can easily test this flag from Clojure.
Use AOT compilation to implement a Clojure main class which sets a flag
Use *command-line-args* to indicate script usage. You'll need to pass an extra parameter like "script" on the command line (see the sketch after this list).
Use a platform-specific method to determine the command line (e.g. from the environment variables in Windows)
Use the --eval option in the clojure.main command line to load your clj file and launch a specific function that represents your script. This function can then set a script-specific flag if needed
Use one of the methods for detecting the Java main class at runtime
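As an example of the *command-line-args* option above, where the "script" marker is just a convention you pick yourself:
;; Bottom of myscript.clj; runs only when "script" was passed, e.g.
;; java -cp clojure.jar clojure.main myscript.clj script
(when (= (first *command-line-args*) "script")
  (println "running as a script"))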
I’ve come up with an approach which, while deeply flawed, seems to work.
I identify which namespaces are known when my program is running as a script. Then I can compare that number to the number of namespaces known at runtime. The idea is that if the file is being used as a lib, there should be at least one more namespace present than in the script case.
Of course, this is extremely hacky and brittle, but it does seem to work:
(defn running-as-script
  "This is hacky and brittle but it seems to work. I’d love a better
  way to do this; see http://stackoverflow.com/q/9027265"
  []
  (let [known-namespaces
        #{"clojure.set"
          "user"
          "clojure.main"
          "clj-time.format"
          "clojure.core"
          "rollup"
          "clj-time.core"
          "clojure.java.io"
          "clojure.string"
          "clojure.core.protocols"}]
    (= (count (all-ns)) (count known-namespaces))))
This might be helpful: the github project lein-oneoff describes itself as "dependency management for one-off, single-file clojure programs."
This lets you define everything in one file, but you do need the oneoff plugin installed in order to run it from the command line.