Is there a better way to avoid rebuilding a Singularity image if the definition file hasn't changed? - singularity-container

I'm building Singularity images in a CI/CD pipeline. I'd like to avoid rebuilding the image if the definition file hasn't changed. So far, the best way that I can see to do this would be to check for changes using something like this:
if diff my_img.def <(singularity inspect -d my_img.sif) > /dev/null; then
... do something ...
fi
Is there a built-in or better way to do this?

Depending on the CI software you're using, you can have certain jobs run only when specific files have changed. I use GitLab CI, which has the only/except: changes rules. There is probably something similar for most other CI platforms, but you'll have to check their docs.
Otherwise, your solution is probably the simplest.
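A variant of that check which doesn't depend on singularity inspect at all is to cache a checksum of the definition file at build time and compare against it on the next run. A minimal sketch, where the file names and the commented-out build command are illustrative:

```shell
#!/bin/bash
# Skip the build when the definition file's checksum matches the one
# recorded at the last successful build (file names are illustrative).
needs_rebuild() {
    def="$1"
    stamp="$1.sha256"
    if [ -f "$stamp" ] && sha256sum --check --status "$stamp" 2>/dev/null; then
        return 1                                  # definition unchanged: skip
    fi
    # singularity build "${def%.def}.sif" "$def"  # the actual (slow) build step
    sha256sum "$def" > "$stamp"                   # record what was just built
}
```

Calling needs_rebuild my_img.def in the pipeline returns success when it (re)built and recorded the stamp, and failure when the definition was unchanged, so it slots directly into an if.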

Related

How to convince Ansible to run a block of tasks only once?

We have a really large playbook for our systems. The whole playbook runs on every update cycle, which is painful.
A lot of the tasks are things like installing basic apt packages (like git), which only need to run once until the task itself changes, e.g. if we change the list of packages to be installed.
I thought about creating a block of those tasks and storing a version file somewhere; before running the block again, I would compare the contents of that file to the version of the block, because running the block unconditionally takes a really long time.
On the other hand, I could also imagine something like an incremental update playbook which only updates what is really needed, based on the previous runs.
To me this issue feels very basic, so I wonder whether something like it is already implemented in Ansible in one way or another - I would like to avoid reinventing the wheel.
If you move those tasks out into another YAML file, you have to call it with the
include_tasks
module when you are using
run_once: true
on it accordingly.
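The version-file idea from the question can be expressed as ordinary tasks. A minimal sketch as a tasks file, where the stamp path, the version string, and the package list are assumptions:

```yaml
# Version-file gate for a block of base-setup tasks. The stamp path,
# the version string, and the package list are illustrative.
- name: Read the last applied version of the base-setup block
  ansible.builtin.slurp:
    src: /etc/ansible-base-setup.version
  register: applied_version
  ignore_errors: true        # first run: the stamp file does not exist yet

- name: Base setup (runs only when the block version changed)
  vars:
    block_version: "3"       # bump whenever the tasks below change
  when: applied_version is failed or
        (applied_version.content | b64decode | trim) != block_version
  block:
    - name: Install basic apt packages
      ansible.builtin.apt:
        name: [git]
        state: present

    - name: Record the applied block version
      ansible.builtin.copy:
        content: "{{ block_version }}"
        dest: /etc/ansible-base-setup.version
```

Note that modules like apt with state: present are idempotent anyway; the gate mainly removes the per-task connection overhead of re-checking dozens of such tasks on every cycle.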

Bamboo custom deployment-project variables

In Bamboo, I want to take a build release and deploy it on a target host. The target host should be variable.
As far as I know, it is not possible to run deployment projects with customized deployment variables (the way plan variables can be overridden on custom builds). My question is: is that true, and if so, what is the best way to achieve what I want?
Here are some thoughts I had during research regarding this issue:
I could use a plan variable "host" in my build job and always customize it as needed. Then I would write this variable into a file declared as a build artifact, and in my deployment tasks use the "Inject Bamboo variables configuration" task to read it back. This solution has the disadvantage that I always have to run a build, even if the artifacts do not change.
Global variables are not feasible because they are not build-dependent: another build could overwrite them before my deployment runs, so I cannot use them for this task.
Are there any better solutions/thoughts on this task?
target host should be variable
No, each host is a separate environment; otherwise the notion of "promoting an environment" breaks apart. This may be a lot of work to set up by hand, so I strongly advise using Bamboo Specs (in Java).
it is not possible to run deployment-projects with customized deployment-variables
I confirm: it's not possible. And again, it would break the notion of environment promotion. Rule of thumb: your environment setup should be immutable, with no variable variance. The only difference between runs is the artifacts that are to be deployed.
You can set variables under 'Other environment settings' in the deployment project while configuring the environment. This way you will not have to run a build when the artifacts don't change: just change the variable value before deploying the artifact.

test-framework/quickcheck/cabal: Passing options to testfunction with 'cabal test'

I'm using cabal to build and test my projects with the commands:
cabal configure --enable-tests
cabal build
cabal test
As a framework I use test-framework (https://batterseapower.github.io/test-framework/).
Everything works; however, the number of QuickCheck tests defaults to 50, which in my use case is far too few because I have to filter the generated data to fit certain properties.
Is there any possibility to pass something like
--maximum-generated-tests=5000
to the test-executable via cabal? I tried things like
cabal test --test-options='maximum-generated-tests=5000'
but no luck so far. Is there any possibility to achieve this?
Many thanks in advance!
jules
You missed the dashes:
cabal test --test-options=--maximum-generated-tests=5000
Also, if too few generated tests satisfy your property, you may have better luck with SmallCheck. It's not random and thus will find all inputs satisfying the condition in the given search space. (Disclosure: I'm the maintainer of SmallCheck.)

How can I have my web server automatically rebuild the files it serves?

I sometimes write client-side-only web applications, and I like a simple and fast development cycle. This leads me to project layouts which explicitly have no build step, so that I can just edit source files and then reload in the web browser. This is not always a healthy constraint to place on the design. I would therefore like the following functionality to use in my development environment:
Whenever the server receives a GET /foo/bar request and translates it to a file /whatever/foo/bar, it executes cd /whatever && make foo/bar before reading the file.
What is the simplest way to accomplish this?
The best form of solution would be an Apache configuration, preferably one entirely within .htaccess files; failing that, a lightweight web server which has such functionality already; or one which can be programmed to do so.
Since I have attracted quite a lot of “don't do that” responses, let me reassure you on a few points:
This is client-side-only development — the files being served are all that matters, and the build will likely be very lightweight (e.g. copying a group of files into a target directory layout, or packing some .js files into a single file). Furthermore, the files will typically be loaded once per edit-and-reload cycle — so interactive response time is not a priority.
Assume that the build system is fully aware of the relevant dependencies — if nothing has changed, no rebuilding will happen.
If it turns out to be impractically slow despite the above, I'll certainly abandon this idea and find other ways to improve my workflow — but I'd like to at least give it a try.
You could use http://aspen.io/.
Presuming the makefile does some magic, you could write a simplate that does:
import subprocess
RESULT_FILE = "/whatever/foo/bar"
^L
# rebuild first, then serve the resulting file
subprocess.call(['make'])
with open(RESULT_FILE) as f:
    response.body = f.read()
^L
but if it's less magic, you could of course encode the build process into the simplate itself (using something like https://code.google.com/p/fabricate/) and get rid of the makefile entirely.
Since I'm a lazy sod, I'd suggest something like this:
Options +FollowSymlinks
RewriteEngine On
RewriteBase /
RewriteRule ^(.*)\/(.*)(/|)$ /make.php?base=$1&make=$2 [L,NC]
A regex pattern like that provides the two required parameters, which can then be passed to a shell script by the PHP function exec().
You might need to add exclusions above the pattern or make the pattern stricter (e.g. constrain the first back-reference to word characters), since otherwise it would also match other requests which shouldn't be rewritten.
The parameters can be accessed in the shell script like this:
#!/bin/bash
# $1 = directory under the web root, $2 = make target
cd "/var/www/$1" || exit 1
make "$2"
This seems like a really bad idea. You will add a HUGE amount of latency to each request waiting for the "make" to finish. In a development environment that is okay, but in production you are going to kill performance.
That said, you could write a simple apache module that would handle this:
http://linuxdocs.org/HOWTOs/Apache-Overview-HOWTO-12.html
I would still recommend considering whether this is what you want to do. In development, you know when you have changed a file and need to rebuild. In production, it is a performance killer.
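As for the "lightweight web server which can be programmed to do so" option from the question, Python's standard library is enough for a sketch. The make invocation and the docroot handling here are assumptions, and there is deliberately no path sanitising, so this is strictly for local development:

```python
import subprocess
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class MakeHandler(SimpleHTTPRequestHandler):
    """Run `make <target>` for the requested path before serving it."""

    def do_GET(self):
        # Map GET /foo/bar onto the make target foo/bar (query string dropped).
        target = self.path.lstrip("/").split("?", 1)[0]
        if target:
            try:
                # If the target is up to date (or unknown to make), this is
                # a no-op and we simply serve whatever is on disk.
                subprocess.run(["make", target], cwd=self.directory)
            except FileNotFoundError:
                pass  # make not installed; serve the file as-is
        super().do_GET()

def make_server(directory, port=8000):
    # Root the handler at `directory` and bind to localhost only.
    handler = partial(MakeHandler, directory=directory)
    return ThreadingHTTPServer(("127.0.0.1", port), handler)

# Usage: make_server("/whatever").serve_forever()
```

This keeps the latency cost confined to the dev loop: each GET pays for one make invocation, which is cheap when the build system's dependency tracking says nothing changed.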

Existing solutions to test an NSIS script

Does anyone know of an existing solution to help write tests for an NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit-testing tool would probably meet your needs. I'd probably use something like RSpec 2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of a BDD-based solution, because a domain-specific language close to readable text means that others could more easily understand, and if necessary modify, the test specification.
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure the tests you would need could be expressed as easily, or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIt.
You can compile your installer with Jenkins and the NSIS command-line compiler, set up an AutoIt test script, and have Jenkins run the test.