How to convince Ansible to run a block of tasks only once?

We have a really large playbook for our systems, and the whole playbook runs on every update cycle, which is painful.
A lot of tasks are things like installing basic apt packages (like git), which only need to run once until the task itself actually changes, e.g. when we change the list of packages to be installed.
I thought about grouping those tasks into a block and storing a version file somewhere; before running the block again I would compare the content of that file to the version of the block, and skip the block (which otherwise takes a really long time) when they match.
On the other hand, I could also imagine something like an incremental update playbook which only updates the things that are really needed, based on the previous runs.
To me this feels like a very basic problem, so I wonder whether something like it is already implemented in Ansible in one way or another; I would like to avoid reinventing the wheel.

If you put those tasks into another YAML file and call it, you have to use the include_tasks module and apply run_once: true accordingly.
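
For the version-file idea from the question, a minimal sketch could look like the following; the marker path, file names and the packages_block_version value are all assumptions, not an established Ansible feature:

# main.yml - read the marker left by the last successful run
- name: Read the version marker of the package block
  ansible.builtin.slurp:
    src: /etc/ansible-markers/packages.version
  register: packages_marker
  ignore_errors: true

- name: Include the expensive package tasks only when the block version changed
  ansible.builtin.include_tasks: packages.yml
  vars:
    packages_block_version: "3"
  when: packages_marker is failed or
        (packages_marker.content | b64decode | trim) != packages_block_version

# packages.yml - the expensive block itself, ending with the new marker
# (the /etc/ansible-markers directory is assumed to exist)
- name: Install the basic packages
  ansible.builtin.apt:
    name: [git, curl]
    state: present

- name: Record the version of the block that was just applied
  ansible.builtin.copy:
    dest: /etc/ansible-markers/packages.version
    content: "{{ packages_block_version }}"

On the next run the slurp succeeds, the versions match, and the whole include is skipped; bumping packages_block_version forces exactly one re-run.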

Related

How can a modified Julia package be used natively?

So, there is this cool package I've found but it leaves a lot to be desired. Since it made more sense to modify it, rather than build a new one myself, I changed the code in the corresponding source directory (C:\Users\[my username]\.julia\v0.4\[package name]\src). I made sure to modify not just the base.jl file, but also the [name of package].jl one so that there are no issues with dependencies or the new functions I added. I tried running the package several times to ensure that Julia doesn't spit out any errors or exceptions (the original package had some deprecated stuff, which I also remedied). Still, I fail to use the additional functionality of the package that I augmented. Any help would be greatly appreciated.
I'm using Julia ver 0.4.2, on a Windows 7 machine. As an IDE I use Notepad++. Thanks
I'm not exactly sure what you tried, but here's a guess as to what's going on: if you've already loaded the package in your julia session, edits to the source files won't take effect unless you explicitly reload the package. There are some good workflow tips here, and more explanation of the module system here.
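
Concretely, in a Julia 0.4 session that might look like this sketch (MyPackage stands in for your package's actual name):

using MyPackage            # the first load pulls in the old code
# ... edit the files under .julia/v0.4/MyPackage/src ...
reload("MyPackage")        # re-evaluates the package's module

Note that names you brought in with using before the reload may still refer to the old module; calling the functions as MyPackage.foo(...) after the reload is the safest way to see the new behavior.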
However, for a newbie the easiest thing might be to quit julia and restart.
As far as making changes to a package, as Gnumic commented, your best approach is to make a branch and commit your changes there. Once you become convinced your changes represent an improvement, consider sharing your changes with the rest of the world.

Is a new process created when a syscall is made in Minix?

For example, when we call write(...) in a program in Minix, is a new process created (as with fork()), or is it done within the current process?
Is it efficient to make a lot of syscalls?
Process creation is strictly fork's / exec's job. What kind of process could a system call like write possibly spawn?
Now, Minix is a microkernel, meaning that things like file systems run in userland processes. Writing to a file could therefore possibly spawn a new process somewhere else, but that depends on your file system driver. I haven't paid attention to the MinixFS driver so far, so I can't tell you whether that happens -- but it's not very likely, process creation still being relatively expensive.
It's almost never efficient to make a lot of syscalls (context switches being involved). However, "performant", "efficient" and "a lot" are all very relative things, so I can't tell you something you probably don't know already.
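If you want to convince yourself of that first point, a minimal C sketch (not Minix-specific) shows that the PID is unchanged across the call and no child ever appears:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    pid_t before = getpid();
    write(STDOUT_FILENO, "hello\n", 6);   /* a plain syscall, no fork */
    pid_t after = getpid();
    printf("pid before: %d, pid after: %d\n", (int)before, (int)after);
    return 0;
}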

How can I have my web server automatically rebuild the files it serves?

I sometimes write client-side-only web applications, and I like a simple and fast development cycle. This leads me to project layouts which explicitly have no build step, so that I can just edit source files and then reload in the web browser. This is not always a healthy constraint to place on the design. I would therefore like the following functionality to use in my development environment:
Whenever the server receives a GET /foo/bar request and translates it to a file /whatever/foo/bar, it executes cd /whatever && make foo/bar before reading the file.
What is the simplest way to accomplish this?
The best form of solution would be an Apache configuration, preferably one entirely within .htaccess files; failing that, a lightweight web server which has such functionality already; or one which can be programmed to do so.
Since I have attracted quite a lot of “don't do that” responses, let me reassure you on a few points:
This is client-side-only development — the files being served are all that matters, and the build will likely be very lightweight (e.g. copying a group of files into a target directory layout, or packing some .js files into a single file). Furthermore, the files will typically be loaded once per edit-and-reload cycle — so interactive response time is not a priority.
Assume that the build system is fully aware of the relevant dependencies — if nothing has changed, no rebuilding will happen.
If it turns out to be impractically slow despite the above, I'll certainly abandon this idea and find other ways to improve my workflow — but I'd like to at least give it a try.
You could use http://aspen.io/.
Presuming the makefile does some magic, you could make a simplate that does:
import subprocess
RESULTFILE = "/whatever/foo/bar"   # the file that make produces
^L
subprocess.call(['make'])          # rebuild before serving the request
response.body = open(RESULTFILE).read()
^L
but if it's less magic, you could of course encode the build process into the simplate itself (using something like https://code.google.com/p/fabricate/) and get rid of the makefile entirely.
Since I'm a lazy sod, I'd suggest something like this:
Options +FollowSymlinks
RewriteEngine On
RewriteBase /
RewriteRule ^(.*)\/(.*)(/|)$ /make.php?base=$1&make=$2 [L,NC]
A regex pattern like that provides the two required parameters, which can then be passed to a shell script by the PHP function exec():
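A minimal make.php for that might look like the sketch below; build.sh and the document root path are assumptions, and the parameters have to be sanitized before they get anywhere near a shell:

<?php
// make.php - minimal sketch; reject anything but plain path segments
$base = isset($_GET['base']) ? $_GET['base'] : '';
$make = isset($_GET['make']) ? $_GET['make'] : '';
if (!preg_match('/^[\w\/.-]+$/', $base) || !preg_match('/^[\w.-]+$/', $make)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}
// rebuild, then serve the freshly built file
exec('/var/www/build.sh ' . escapeshellarg($base) . ' ' . escapeshellarg($make));
readfile("/var/www/$base/$make");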
You might need to add exclusions above the pattern or inclusions to the pattern, e.g. define the first back-reference by words; otherwise it would match other requests as well (which shouldn't be rewritten).
The parameters can then be accessed in the shell script like this:
#!/bin/bash
cd /var/www/$1   # first parameter: the base directory
make $2          # second parameter: the target to build
This seems like a really bad idea. You will add a HUGE amount of latency to each request waiting for the "make" to finish. In a development environment that is okay, but in production you are going to kill performance.
That said, you could write a simple apache module that would handle this:
http://linuxdocs.org/HOWTOs/Apache-Overview-HOWTO-12.html
I would still recommend considering if this is what you want to do. In development you know if you change a file that you need to remake. In production it is a performance killer.

Design principles as to how Linux package managers update themselves?

I know there are other applications as well, but consider yum/apt-get/aptitude/pacman as the core package managers for Linux distributions.
Today I saw on my fedora 13 box:
(7/7): yum-3.2.28-4.fc13_3.2.28-5.fc13.noarch.drpm | 42 kB 00:00
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
Perhaps this question is too general, but I felt SO was more appropriate than programmers.SE for such a question, it being more technical in nature. If there is a more appropriate place for it, feel free to let me know and I can close it, or a moderator can move it.
Thanks.
I've no idea how those particular systems work, but...
Modern unix systems will generally tolerate overwriting a running executable without a hiccup, so in theory you could just do it.
You could do it in a chroot jail and then move the result into place, or something similar, to reduce the time during which the system is vulnerable. Add a journalling filesystem and this is a little safer still.
It occurs to me that the package manager needs to hold the package access database in memory as well, to insure against a race condition there. Again, the chroot-jail-and-copy option is available as a lower-risk alternative.
And I started to wonder: how does such a package update itself? What design is needed to ensure a program can update itself?
It's like a lot of things: you don't need to "design" specifically to solve this problem, but you do need to be aware of certain "gotchas".
For instance, Unix helps by reference counting inodes, so "you" can delete a file you are still using and it's fine. However, this implies a few things you have to do; for instance, if you have plugins then you need to load them all before you start a transaction, even if a plugin would only run at the end of the transaction (because you might have a different version of it by the end).
There are also some things that you need to do to make sure that anything you are updating keeps working, like: put new files down before removing old files, and don't truncate old files, just unlink them. But those rules also help you :).
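As a minimal sketch of that rule in C (paths made up, most error handling trimmed): write the new file under a temporary name in the same directory, then rename() it over the old path. rename() is atomic within a filesystem, and any process still using the old file keeps its open inode.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *tmp = "/tmp/.tool.new";   /* the new version, temporary name */
    const char *dst = "/tmp/tool";        /* the path being "updated" */
    const char body[] = "#!/bin/sh\necho new version\n";

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0755);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, body, sizeof body - 1);     /* put the new file down first */
    fsync(fd);                            /* make sure it reached the disk */
    close(fd);

    /* the old file is never truncated; rename() replaces it atomically */
    if (rename(tmp, dst) < 0) { perror("rename"); return 1; }
    return 0;
}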
Using external programs, which you communicate with, can be tricky (because you can't exec a new copy of the old version after it's been updated). But this isn't often done, and when it is, it's for things like downloading, which can fairly easily be made to happen before any updates.
There are also things which aren't a concern in the command line clients like yum/apt; for instance, if you have a program which is going to run 2+ "updates", then you can have problems if the first update was to the package manager itself. Downgrades make this even more fun :).
Also, daemon-like processes should basically never "load" the package manager, but as with the other gotchas, you tend to want to follow this rule anyway, for other reasons.

Build script - how to do it

About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (example: ProjectName PV 1.2)
Get files from SourceSafe into a specific directory
Build VB6/C++/C# projects (yes, there are all kinds of them)
Build InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do it better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible: one click on one build script (Spolsky is god) and all is done, with no need to increment the version, set where to get the files, and similar stuff.
Is batch scripting the best way to go? Should I do some of the functionality with MSBuild? Are there any other options?
Specific code is not needed; for now I just need a way to improve the process, even though code wouldn't hurt.
Thanks,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
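
For a sense of what that looks like, a single CC.Net project that pulls from SourceSafe and runs MSBuild can be described in one configuration block roughly like this sketch; every name and path below is an assumption:

<cruisecontrol>
  <project name="ProjectName">
    <!-- pull the sources from SourceSafe -->
    <sourcecontrol type="vss">
      <executable>C:\Program Files\Microsoft Visual SourceSafe\ss.exe</executable>
      <project>$/ProjectName</project>
      <workingDirectory>C:\builds\ProjectName</workingDirectory>
    </sourcecontrol>
    <tasks>
      <!-- build the solution; the VB6 and InstallShield steps would become
           further <exec> tasks calling their command-line builders -->
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>ProjectName.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>

Dependencies between projects can then be expressed with CC.Net's project triggers instead of hand-maintained batch ordering.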