Portable makefile creation of directories - cross-platform

I'm looking to save myself some effort further down the line by making a fairly generic makefile that will put together relatively simple C++ projects for me with minimal modifications required to the makefile.
So far I've got it so it will use all .cpp files in the same directory and specified child directories, place the resulting objects within a matching structure in an obj subdir, and place the resulting binary in another subdir called bin. Pretty much what I want.
However, getting the required obj and bin directories created if they don't exist is proving awkward to get working cross-platform - specifically, I'm just testing with Windows 7 & Ubuntu (can't remember the version), and I can't get it to work on both at the same time.
Windows misreads mkdir -p dir and creates a directory named -p, and the two platforms use \ and / respectively as the path separator - I get errors when using the wrong one.
Here are a few selected portions of the makefile that are relevant:
# Manually edited directories (in this example with forward slashes)
SRC_DIR = src src/subdir1 src/subdir2
# Automagic object directories + the "fixed" bin directory
OBJ_DIR = obj $(addprefix obj/,$(SRC_DIR))
BIN_DIR = bin
# Example build target
debug: checkdirs $(BIN)
# At actual directory creation
checkdirs: $(BIN_DIR) $(OBJ_DIR)
$(BIN_DIR):
	@mkdir $@
$(OBJ_DIR):
	@mkdir -p $@
This has been put together by me over the last week or so from things I've been reading (mostly on Stack Overflow), so if it happens that I'm following some horribly bad practice or anything of that nature, please let me know.
Question in a nutshell:
Is there a simple way to get this directory creation to work from a single makefile in a way that provides as much portability as possible?

I don't know autoconf. Every experience I've had with it has been tedious. The problem with zwol's solution is that on Windows mkdir returns an error if the directory already exists, unlike mkdir -p on Linux. This can break your make rule. The workaround is to ignore the error with the - prefix before the command, like this:
-mkdir dir
The problem with this is that make still throws an ugly warning for the user. The workaround for this is to run an "always true" command after the mkdir fails as described here, like this:
mkdir dir || true
The problem with this is that Windows and Linux have different syntax for true.
Anyway, I spent too much time on this. I wanted a makefile that worked in both POSIX-like and Windows environments. In the end I came up with the following:
ifeq ($(shell echo "check_quotes"),"check_quotes")
WINDOWS := yes
else
WINDOWS := no
endif
ifeq ($(WINDOWS),yes)
mkdir = mkdir $(subst /,\,$(1)) > nul 2>&1 || (exit 0)
rm = $(wordlist 2,65535,$(foreach FILE,$(subst /,\,$(1)),& del $(FILE) > nul 2>&1)) || (exit 0)
rmdir = rmdir $(subst /,\,$(1)) > nul 2>&1 || (exit 0)
echo = echo $(1)
else
mkdir = mkdir -p $(1)
rm = rm $(1) > /dev/null 2>&1 || true
rmdir = rmdir $(1) > /dev/null 2>&1 || true
echo = echo "$(1)"
endif
The functions/variables are used like so:
rule:
	$(call mkdir,dir)
	$(call echo, CC $@)
	$(call rm,file1 file2)
	$(call rmdir,dir1 dir2)
Rationale for the definitions:
mkdir: Fix up the path and ignore any errors.
rm: In Windows, del doesn't delete any files if one of the files is specified to be in a directory that doesn't exist. For example, if you try to delete a set of files and dir/file.c is in the list, but dir doesn't exist, no files will be deleted. This implementation works around that issue by invoking del once for each file.
rmdir: Fix up the path and ignore any errors.
echo: The output's appearance is preserved and doesn't show the extraneous "" in Windows.
I spent a lot of time on this. Perhaps I would have been better off spending my time learning autoconf.
See also:
OS detecting makefile

Windows mkdir always does what Unix mkdir does with the -p switch on. And you can deal with the backslash problem with $(subst). So, on Windows, you want this:
$(BIN_DIR) $(OBJ_DIR):
	mkdir $(subst /,\\,$@)
and on Unix you want this:
$(BIN_DIR) $(OBJ_DIR):
	mkdir -p -- $@
Choosing between these is not practical to do within a makefile. This is what Autoconf is for.
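If you do go the Autoconf route, the relevant macro is AC_PROG_MKDIR_P (available since Autoconf 2.60), which substitutes an MKDIR_P variable that behaves like mkdir -p on the build platform. A minimal sketch, with the project name purely illustrative:
# configure.ac
AC_INIT([myproject], [1.0])
AC_PROG_MKDIR_P
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
# Makefile.in
MKDIR_P = @MKDIR_P@
$(BIN_DIR) $(OBJ_DIR):
	$(MKDIR_P) $@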
As a side note, never, ever use the @command feature in your makefiles. There will come a day when you need to debug your build process on a machine you do not have direct access to, and on that day, you will regret it.

I solved the portability problem by creating a Python script called mkdir.py and calling it from the Makefile. A limitation is that Python must be installed, but this is most likely already the case on any version of UNIX.
#!/usr/bin/env python
# Cross-platform mkdir command.
import os
import sys
if __name__ == '__main__':
    if len(sys.argv) != 2:
        sys.exit('usage: mkdir.py <directory>')
    directory = sys.argv[1]
    try:
        os.makedirs(directory)
    except OSError:
        pass
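The makefile can then call the script instead of a platform-specific mkdir; a minimal sketch, assuming mkdir.py sits next to the makefile and python is on the PATH:
$(OBJ_DIR):
	python mkdir.py $@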


CMake incremental compilation through toolchain upgrade

I am trying to find a way to enable incremental compilation with CMake through a toolchain upgrade. Here is the problematic scenario:
Branch main uses g++-9 (using CMAKE_CXX_COMPILER=g++-9)
A new branch uses g++-10 (using CMAKE_CXX_COMPILER=g++-10)
Commits are happening on both branches
Incremental builds on one branch work fine
Switching to the other branch and explicitly invoking CMake fails
My question is the following: I'm looking for the proper way to make the invocation of CMake succeed and rebuild the whole project from scratch when a toolchain change happens.
Here is a script that will make it quick and easy to reproduce the problem. This script requires Docker. It will create folders Sources and Build at the location where it is executed to avoid littering your filesystem. It then creates Dockerfiles to build docker containers with both g++ and cmake. It then creates a dummy Hello World C++ CMake project. Finally, it creates a folder for build artifacts and then executes the build with g++-9 and then g++-10. The second build fails because CMake generates an error.
#!/bin/bash
set -e
mkdir -p Sources
mkdir -p Build
# Creates a script that will be executed inside the docker container to perform builds
cat << EOF > Sources/Compile.sh
cd /Build \
&& cmake /Sources \
&& make \
&& ./IncrementalBuild
EOF
# Creates a Dockerfile that will be used to have both gcc-9 and cmake
cat << EOF > Sources/Dockerfile-gcc9
FROM gcc:9
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-9
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a Dockerfile that will be used to have both gcc-10 and cmake
cat << EOF > Sources/Dockerfile-gcc10
FROM gcc:10
RUN apt-get update && apt-get install -y cmake
RUN ln -s /usr/local/bin/g++ /usr/local/bin/g++-10
ADD Compile.sh /Compile.sh
RUN chmod +x /Compile.sh
ENTRYPOINT /Compile.sh
EOF
# Creates a dummy C++ program that will be compiled
cat << EOF > Sources/main.cpp
#include <iostream>
int main()
{
    std::cout << "Hello World!\n";
}
EOF
# Creates CMakeLists.txt that will be used to compile the dummy C++ program
cat << EOF > Sources/CMakeLists.txt
cmake_minimum_required(VERSION 3.9)
project(IncrementalBuild CXX)
add_executable(IncrementalBuild main.cpp)
set_target_properties(IncrementalBuild PROPERTIES CXX_STANDARD 17)
EOF
# Build the docker images with both Dockerfiles created earlier
docker build -t cmake-gcc:9 -f Sources/Dockerfile-gcc9 Sources
docker build -t cmake-gcc:10 -f Sources/Dockerfile-gcc10 Sources
# Run a build with g++-9
echo ""
echo "### Compiling with g++-9 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-9 cmake-gcc:9
echo ""
# Run a build with g++-10
echo "### Compiling with g++-10 and then running the result..."
docker run --rm --user $(id -u):$(id -g) -v $(pwd)/Sources:/Sources -v $(pwd)/Build:/Build -e CXX=g++-10 cmake-gcc:10
echo ""
# Print success if we reach this point
echo "SUCCESS!"
I'm looking for the proper way to make the invocation of CMake succeed and rebuild all the project from scratch when a toolchain change happens.
The proper way is to use a fresh binary directory. Either remove the binary directory when the toolchain changes and let CMake recreate it, or use a separate directory for each toolchain.
Use a Build/gcc10 binary directory for gcc-10 builds and Build/gcc9 for gcc-9 builds.
With modern CMake there is no need to cd Build and mkdir - use cmake -S . -B Build. Also, do not invoke make directly - prefer cmake --build Build so you can switch generators later.
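A minimal sketch of that workflow, reusing the Sources and Build folders from the reproduction script above (the directory layout is illustrative):
# Each toolchain configures and builds in its own binary directory
CXX=g++-9 cmake -S Sources -B Build/gcc9
cmake --build Build/gcc9
CXX=g++-10 cmake -S Sources -B Build/gcc10
cmake --build Build/gcc10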
"If you change the toolchain, you should start with a fresh build. There are too many things that assume the toolchain doesn’t change and while you may be able to find workarounds which appear to work, I recommend you always use a fresh build tree for a different toolchain. This same logic also applies if you update the existing toolchain in-place (e.g. you update to a newer version of GCC on Linux, a newer version of Xcode on macOS, etc.). CMake queries compiler capabilities and caches the results. If you change the toolchain in a way that CMake can’t catch, then you end up with stale cached capabilities being used for the new/updated toolchain. Please don’t do that." - Craig Scott
So essentially I don't think it's possible. You just need to blow away your build. The best thing you can do is alert users if CMake isn't doing it for you.
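As a hedged sketch of such an alert, you could record the compiler path in the CMake cache on the first configure and fail loudly when it changes later (the CACHED_CXX_COMPILER variable name is made up for this illustration):
# In CMakeLists.txt, before any targets are defined
if(DEFINED CACHED_CXX_COMPILER AND NOT CACHED_CXX_COMPILER STREQUAL "${CMAKE_CXX_COMPILER}")
    message(FATAL_ERROR "Toolchain changed (${CACHED_CXX_COMPILER} -> ${CMAKE_CXX_COMPILER}); please use a fresh build directory.")
endif()
set(CACHED_CXX_COMPILER "${CMAKE_CXX_COMPILER}" CACHE INTERNAL "compiler recorded at first configure")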
Perhaps reply on this also:
https://discourse.cmake.org/t/how-to-change-toolchain-without-breaking-developer-workflows/1166
Or start another Discourse thread.

Adding home-brew to PATH

I just installed Homebrew and now I'm trying to insert the Homebrew directory at the top of my PATH environment variable by typing in two commands inside my terminal. My questions are these:
What is a PATH environment variable?
Are the two commands provided to me correct?
echo "export Path=/usr/local/bin:$PATH" >> ~/.bash_profile && source ~/.bash_profile
After this I am to type in brew doctor. Nothing is happening as far as I can see.
Can anyone offer me some advice or direction?
I installed brew on my new M1 Mac and it asked me to put /opt/homebrew/bin in the PATH, so the right command in this case is:
echo "export PATH=/opt/homebrew/bin:$PATH" >> ~/.bash_profile && source ~/.bash_profile
TL;DR
echo "export PATH=/usr/local/bin:$PATH" >> ~/.bash_profile && source ~/.bash_profile
is what you want.
To answer your first question: in order to run (execute) a program (executable), the shell must know exactly where it is in your filesystem. The PATH environment variable is a list of directories that the shell uses to search for executables. When you use a command that is not built into the shell, it will search through these directories in order and execute the first matching executable it finds.
For example, when you type mv foo bar, the shell is almost certainly actually using an executable located in the /bin directory. Thus the full command is
/bin/mv foo bar
The PATH environment variable therefore saves you some extra typing. You can see what is in your PATH currently (as you can with all environment variables) by entering:
echo $<NAME OF VARIABLE>
So in this instance:
echo $PATH
As I mentioned earlier, ordering is important. Adding /usr/local/bin to the beginning of PATH means that the shell will search there first and so if you have an executable foo in that folder it will be used in preference to any other foo executables you may have in the folders in your path. This means that any executables you install with brew will be used in preference to the system defaults.
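You can see which match wins for a given name with which -a, which lists every candidate on your PATH in search order; the output below is purely illustrative:
$ which -a python
/usr/local/bin/python
/usr/bin/python
Here the brew-installed /usr/local/bin/python shadows the system one.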
On to your second question. What the command you have provided is trying to do is add a line to your .bash_profile and then source it. The .bash_profile is a text file stored in your home directory that is sourced (read) every time bash (your shell) starts. The mistake in the line you've provided is that only the first letter of PATH is capitalised. To your shell Path and PATH are very different things.
To fix it you want:
echo "export PATH=/usr/local/bin:$PATH" >> ~/.bash_profile && source ~/.bash_profile
To explain
echo "export PATH=/usr/local/bin:$PATH"
simply prints or echoes what follows to stdout, which in the above instance is the terminal. (stdout, stderr and stdin are very important concepts on UNIX systems, but rather off topic here.) Running this command produces the result:
export PATH=/usr/local/bin:/opt/local/sbin:/opt/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
on my system because using $PATH within double quotes means bash will substitute it with its value. >> is then used to redirect stdout to the end of the ~/.bash_profile file. ~ is shorthand for your home directory. (NB be very careful as > will redirect to the file and overwrite it rather than appending.)
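A quick illustration of the difference (the file name is arbitrary):
echo "first" > demo.txt    # creates or truncates demo.txt
echo "second" >> demo.txt  # appends to demo.txt
cat demo.txt               # prints "first" then "second"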
&& means run the next command if the previous one succeeded, and
source ~/.bash_profile
simply carries out the actions contained in that file.
As per the latest documentation, you need to do this:
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> /home/dhruv/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
Now you should be able to run brew from anywhere.
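For context, brew shellenv simply prints export statements for your shell, roughly like the following (abridged; the exact output depends on your Homebrew version and prefix), and eval applies them to the current session:
export HOMEBREW_PREFIX="/home/linuxbrew/.linuxbrew"
export PATH="/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:$PATH"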
When you type a program name and press enter, the shell checks certain locations to see if that program exists there.
Linuxbrew uses locations different from the normal Linux programs, so we add these locations to the ~/.profile file, which sets the paths.
Run this in your terminal, and it will place the correct code in the .profile file, automatically.
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
Don't use .bash_profile because when you use something different from bash, like zsh, it may not work. .profile is the correct location.

No such file or directory from sh script

Looking for the origin of this error message:
Processing: +([^_]).flv
date: +([^_]).flv: No such file or directory
I started getting this at some point in the last few months (can't say when as I wasn't logging my cron output. I know, I know!).
When I originally wrote this, it worked ok for at least two months. I'm wondering if there was an sh update that broke it?
The script runs via crontab and gets all .flv files in the current directory without an underscore and processes each one. It then checks the modified date for files that have been created in the last 24 hours and runs the yamdi meta tag injector for .flv files.
It seems to me like it's not recognizing the pattern as a pattern, and is instead looking for it as an actual file. If I run this script from an ssh shell it works fine; it's only when running via cron that it gives this error.
shopt -s extglob
now=$(date +"%s")
for f in +([^_]).flv; do
    echo "Processing: $f"
    age=$(date -r "$f" +"%s")
    calc=$(( (now - age) / 60 / 60 ))
    if (( calc < 24 )); then
        echo "$f age=$calc"
        yamdi -i "$f" -o "$f".seek
        rm "$f"
        cp "$f".seek "$f"
        touch -d @$age "$f"
    fi
done
This is most likely a problem of the wrong shell being used; make sure your script's first line represents the right shell:
#!/bin/bash
for bash, or whatever shell you wrote this for. You might also want to check the environment variables that cron sets (that's a very common problem -- one assumes everything is set up correctly, but the environment that cron offers to the scripts it executes is different).
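If you would rather fix it in the crontab than in the script, you can set the shell there instead; a sketch (the script path is hypothetical):
SHELL=/bin/bash
0 * * * * /path/to/process-flv.sh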

Build kernel module into a specific directory

Is there a way to set an output directory when building kernel modules from my makefile?
I want to keep my source directory clean of build files.
KBUILD_OUTPUT and O= did not work for me and were failing to find the kernel headers when building externally.
My solution is to symlink the source files into the bin directory and dynamically generate a new Makefile in the bin directory. This allows all build files to be cleaned up easily, since the dynamic Makefile can always just be recreated.
INCLUDE=include
SOURCE=src
TARGET=mymodule
OUTPUT=bin
EXPORT=package
SOURCES=$(wildcard $(SOURCE)/*.c)
# Depends on bin/include bin/*.c and bin/Makefile
all: $(OUTPUT)/$(INCLUDE) $(subst $(SOURCE),$(OUTPUT),$(SOURCES)) $(OUTPUT)/Makefile
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD)/$(OUTPUT) modules
# Create a symlink from src to bin
$(OUTPUT)/%: $(SOURCE)/%
	ln -s ../$< $@
# Generate a Makefile with the needed obj-m and mymodule-objs set
$(OUTPUT)/Makefile:
	echo "obj-m += $(TARGET).o\n$(TARGET)-objs := $(subst $(TARGET).o,, $(subst .c,.o,$(subst $(SOURCE)/,,$(SOURCES))))" > $@
clean:
	rm -rf $(OUTPUT)
	mkdir $(OUTPUT)
If you are building inside the kernel tree you can use the O variable:
make O=/path/to/mydir
If you are compiling outside the kernel tree (a module, or any other kind of program) you need to change your Makefile to output into a different directory. Here is a little example of a Makefile rule which outputs into the MY_DIR directory:
$(MY_DIR)/test: test.c
	gcc -o $@ $<
and then write:
$ make MY_DIR=/path/to/build/directory
The same here, but I used a workaround that worked for me:
Create a sub-directory for every arch name (e.g. "debug_64").
Under "debug_64", create symbolic links to all the .c and .h files, keeping the same structure.
Copy the makefile to "debug_64" and set the right flags for a 64-bit debug build, e.g.
ccflags-y := -DCRONO_DEBUG_ENABLED
ccflags-y += -I$(src)/../../../lib/include
KBUILD_AFLAGS += -march=x86_64
Remember to set the relative directory paths one level down, e.g. ../inc becomes ../../inc.
Repeat the same for every arch/profile.
Now we have one source tree, different folders, and different makefiles.
By the way, creating profiles inside makefiles for kernel module builds is not an easy job, so I preferred to create a copy of the makefile for every arch.

How to ignore certain files when branching / checking out?

I'd like to compare a few files from the bazaar branch lp:ubuntu/nvidia-graphics-drivers. I'm mainly interested in the debian subdirectory inside that branch, but due to the binary blob in http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files, it takes ages to get just the text files. I've already downloaded 555MB and it's still counting.
Is it possible to retrieve a bazaar branch, including or excluding certain files by one of the following properties:
file size
file extension
file name (include only debian/ for example)
I do not need to push back any changes, nor do I need to view the history of a file. I just want to compare two files in the debian/ directory, files with the .in extension and files without.
As far as I'm aware, no. You're downloading the branch history, not just the individual files. And each file is an integral part of the branch's history.
On the bright side, you only have to check it out once. Unless those binary files change, they'll be skipped the next time you pull from Launchpad.
Depending on the branch's history, you may be able to cut down on the download size if you use a lightweight checkout (bzr checkout --lightweight). But of course, that may come back and bite you later, as it means you won't get a local copy of the branch, only the checked-out files. So it'll work much like SVN, where every operation has to go through the server. And as long as you don't need to look at the branch history, or commit your changes, that should serve you just fine, I believe.
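For illustration, a lightweight checkout of the branch in question would look like this (the target directory name is arbitrary):
bzr checkout --lightweight lp:ubuntu/nvidia-graphics-drivers nvidia-files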
I ended up doing some dirty grepping on the HTTP response, since bzr info "$branch" and bzr ls -d "$branch" "$directory" did not provide enough information to me.
The Bash script below relies on the workings of Launchpad's front end Loggerhead. It recursively downloads from a given URL. Currently, it ignores *.run files. Save it as bzrdl in a directory available from $PATH and run it with bzrdl http://launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files/head:/debian/. All files will be saved in the current directory; be sure that it's empty to avoid conflicts.
#!/bin/bash
max_retries=5
rooturl="$1"
if ! [[ $rooturl =~ /$ ]]; then
    echo "Usage: ${0##*/} URL"
    echo "URL must end with a slash. Example URL:"
    echo "http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-graphics-drivers/oneiric/files/head:/"
    exit 1
fi
tmpdir="$(mktemp -d)"
target="$(pwd)"
# used for holding HTTP response before extracting data
tmp="$(mktemp)"
# url_filter reads download URLs from stdin (piped)
url_filter() {
    grep -v '\.run$'
}
get_files_from_dir() {
    local slash=/
    local dir="$1"
    # to avoid name collision: a/b/c/ -> a.d/b.d/c.d/
    local storedir="${dir//$slash/.d${slash}}"
    mkdir -p "$tmpdir/$storedir" "$target/$dir"
    local i subdir
    for ((i=0; i<$max_retries; i++)); do
        if wget -O "$tmp" "$rooturl$dir"; then
            # store file list
            grep -F -B 1 '<img src="/static/images/ico_file_download.gif" alt="Download File" />' "$tmp" |\
                grep '^<a' | cut -d '"' -f 2 | url_filter \
                > "$tmpdir/$storedir/files"
            IFS=$'\n'
            for subdir in $(grep -F -B 1 '<img src="/static/images/ico_folder.gif" ' "$tmp" | \
                grep -F '<a ' | rev | cut -d / -f 2 | rev); do
                IFS=$' \t\n'
                get_files_from_dir "$dir$subdir/"
            done
            return
        fi
    done
    echo "Failed to download directory listing of: $dir" >> "$tmpdir/errors"
}
download_files() {
    local slash=/
    local dir="$1"
    # to avoid name collision: a/b/c/ -> a.d/b.d/c.d/
    local storedir="${dir//$slash/.d${slash}}"
    local done=false
    local subdir
    cd "$tmpdir/$storedir"
    for ((i=0; i<$max_retries; i++)); do
        if wget -B "$rooturl$dir" -nc -i files -P "$target/$dir"; then
            done=true
            break
        fi
    done
    $done || echo "Failed to download all files from $dir" >> "$tmpdir/errors"
    for subdir in *.d; do
        download_files "$dir${subdir%%.d}/"
    done
}
get_files_from_dir ''
# make *.d expand to nothing if no directories are found
shopt -s nullglob
download_files ''
echo "TMP dir: $tmpdir"
echo "Errors : $(wc -l 2>/dev/null < "$tmpdir/errors" || echo 0)"
The temporary directory and file are not removed afterwards; that must be done manually. Any errors (failures to download) will be written to $tmpdir/errors.
It's confirmed to work with:
bzrdl http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-settings/oneiric/files/head:/debian/
Feel free to correct any mistakes or add improvements.
There is no way to selectively check out a specific directory from a Bazaar branch at the moment, although we do have plans to add such support in the future.
There is definitely too much traffic for the clone you are doing, considering the size of the branch. It's probably a bug in the client implementation.
Here on bzr 2.4 it is still quite slow but not too bad (60s):
localhost:/tmp% bzr branch http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/oneiric/nvidia-settings/oneiric
Most recent Ubuntu Oneiric version: 275.09.07-0ubuntu1
Packaging branch status: CURRENT
Branched 37 revision(s).
From the log:
[11866] 2011-07-31 00:56:57.007 INFO: Branched 37 revision(s).
56.786 Transferred: 5335kB (95.8kB/s r:5314kB w:21kB)