Suppose I have written five different recipes and I have five different target nodes.
How can I fire a single bootstrap command that addresses all five nodes at once and executes the specified recipes (one per node) in parallel on each target node?
There is a way to bootstrap multiple cookbooks on multiple machines at a time using spiceweasel. First install the spiceweasel package on the machine you run knife from. Then create a YAML file and load your data into it: the node IPs, cookbook names, and so on.
Go through this link; it will give you an idea:
https://github.com/mattray/spiceweasel#cookbooks
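A minimal manifest along these lines should do it (the IPs, recipe names, and knife options here are hypothetical; check the README above for the exact schema):
# infrastructure.yml -- hypothetical spiceweasel manifest
cookbooks:
- recipe_one:
- recipe_two:
nodes:
- 10.0.0.101:
    run_list: recipe[recipe_one]
    options: -x ubuntu -i ~/.ssh/key.pem --sudo
- 10.0.0.102:
    run_list: recipe[recipe_two]
    options: -x ubuntu -i ~/.ssh/key.pem --sudo
If I recall the README correctly, running spiceweasel against the manifest prints the generated knife bootstrap commands so you can review them before executing them.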
When configuring a Slurm cluster you need to have a copy of the configuration file slurm.conf on all nodes. These copies are identical. In the situation where you need to use GPUs in your cluster, you have an additional configuration file, gres.conf, that you need to have on all nodes. My question is: will this file be different on each node, depending on the configuration of that node, or will it be identical on all nodes (like slurm.conf)? Assume that the nodes have different GPU configurations and are not identical.
Since Slurm version 14.03.0, gres.conf accepts a NodeName parameter so that the same file can be set up on all nodes.
From the NEWS file:
gres.conf - Add "NodeName" specification so that a single gres.conf file
can be used for a heterogeneous cluster.
It will thus look something like this:
NodeName=node001 Name=gpu File=/dev/nvidia0
NodeName=node002 Name=gpu File=/dev/nvidia[0-1]
...
Before that, the gres.conf file had to be distinct for each node.
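Note that each node's GRES must also be declared in slurm.conf; a sketch consistent with the gres.conf above (node names and GPU counts are assumptions):
# slurm.conf (excerpt)
NodeName=node001 Gres=gpu:1
NodeName=node002 Gres=gpu:2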
I am using Ansible Tower 3.4.3.
As part of one of my jobs, I need to generate a log file, and the log file name should contain the Tower job ID so I can easily recognize which log was generated by which Tower job.
I guessed there would be a global variable like "ansible_tower_job_id", but I have been unable to find any documentation or the variable name.
Can someone help me capture the currently running job ID in Ansible Tower?
The callback link contains the ID in it.
From the docs: "The ‘1’ in this sample URL is the job template ID in Tower."
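Separately, Tower injects a set of extra variables into each job run. If your version exposes tower_job_id (as the Tower 3.x docs describe), a task along these lines would name the log file after it; this is a sketch, and the destination path is an assumption:
# Hypothetical task; tower_job_id is assumed to be injected by Tower,
# and the default() guard covers runs outside Tower.
- name: Write a log file named after the current Tower job
  copy:
    content: "Log for Tower job {{ tower_job_id | default('unknown') }}\n"
    dest: "/var/log/myapp/job_{{ tower_job_id | default('unknown') }}.log"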
We have some scripts to help us set up VPCs with up to 6 VMs in AWS. Now I want to log in to each of these machines. For security reasons we can only access one of them via SSH and then tunnel/proxy through that to the other machines. So in our inventory we have the IP address of the SSH host (we call it Redcarpet) and some other hosts like Elasticsearch, Mongodb and Worker:
#inventory/hosts
[redcarpet]
57.44.113.25
[services]
10.0.1.2
[worker]
10.0.5.77
10.0.1.200
[elasticsearch]
10.0.5.30
[mongodb]
10.0.1.5
Now I need to tell each of the groups EXCEPT redcarpet to use certain SSH settings. If these were applicable to all groups, I would put them in inventory/group_vars/all.yml, but now I will have to put them in:
inventory/group_vars/services.yml
inventory/group_vars/worker.yml
inventory/group_vars/elasticsearch.yml
inventory/group_vars/mongodb.yml
This leads to duplication. Therefore I would like to use an include or include_vars to pull one or two variables in from a common file (e.g. inventory/common.yml). However, when I try to do this in any of the group_vars files above, the variables are not picked up. What is the best practice for variables that are common to multiple groups?
If you want to go with the group_vars approach, I would suggest you add another group and make the dependent groups children of that group.
#inventory/hosts
[redcarpet]
57.44.113.25
[services]
10.0.1.2
[worker]
10.0.5.77
10.0.1.200
[elasticsearch]
10.0.5.30
[mongodb]
10.0.1.5
[redcarpet_deps:children]
mongodb
elasticsearch
worker
services
Now you can have a group_vars file called redcarpet_deps.yml, and the hosts in those groups will pick up the vars from there.
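For instance, the shared SSH settings could then live in one place (a sketch; the remote user and the use of the redcarpet host as a bastion are assumptions based on the question):
# inventory/group_vars/redcarpet_deps.yml
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p ubuntu@57.44.113.25"'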
Alright, I got myself into a deadlock with Mercurial and sub-repos... Here's what happened:
I had a large Mercurial repo that I served via Apache and hgweb.cgi.
Due to the size of the repo, I decided to move to sub-repositories and share these with hgwebdir.cgi.
Using the convert tool with the filemap option I created several sub-repositories:
/main/foo
/main/bar
Then I created entries for the sub-repositories in .hgsub:
foo = foo
bar = bar
And set hgwebdir.cgi up to show $/** as the root folder.
Now when I went to my site (foo.com/hg) I saw my sub-repositories with one empty repository among them (no name, no content), which I could not download ("archive location unknown"):
Screenshot: http://img707.imageshack.us/img707/8237/emptysubrepo.png
That was all right until I added a new sub-repository.
I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir.
The only method that currently works for me is to switch from hgwebdir to hgweb, commit .hgsubstate, and switch back to hgwebdir.
Does someone have a good setup for such a mess?
On the webserver your main and its subrepos should appear as siblings -- not with the subrepos inside main.
Main
ASCII
AlignDistribute
And the URLs in your .hgsub should look like:
ASCII = ../ASCII
AlignDistribute = ../AlignDistribute
Then you'll be able to push/pull to http://foo.com/hg/Main, and when you clone it, the clone/update will automatically attach and clone down the separate subrepos.
From what I've read on https://www.mercurial-scm.org/wiki/PublishingRepositories#multiple
The keys (on the left) and the values (on the right) are both filesystem paths
The keys should be prefixes of the values and are "subtracted" from the values in order to generate the URL paths to each repository
What I'm guessing happened is that in your hgweb(dir) configuration you specified the same value for the collection as both key and value, so during the subtraction the repository ends up with a blank name and no way to reach it.
When I used [collections] to point /a/full/path = /a/full/path directly at a repo, it ended up blank too, because that folder was read as a single repo (it is a repo) instead of each sub-directory being treated as an individual repo. After I removed the .hg folder, .hgsub, and the rest from the root of my collection entry, all the subfolders started showing up properly.
I originally used /path/to/my/project = /path/to/my/project under [paths]; since that is a single referenced repository, the value is subtracted from the key, leaving you once again with ''. Using project = /path/to/my/project instead, it came out as 'project'.
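In other words, a configuration along these lines should give every repository a visible name (the filesystem paths are assumptions):
# hgweb.config
[paths]
Main = /srv/hg/Main
ASCII = /srv/hg/ASCII
AlignDistribute = /srv/hg/AlignDistribute
Each left-hand key becomes the URL path, so the repositories appear as siblings at foo.com/hg/Main, foo.com/hg/ASCII, and so on.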
Hopefully that URL or these descriptions will get you out of your pickle!
I am trying to branch a file in ClearCase Remote Client.
I have the branch set up and the config spec is updated to handle the branch.
But I can't find the option, and the googling isn't helping much.
The way I understand your question, it sounds like you want to select a command from a ClearCase RC menu and have the branch explicitly created(?)
ClearCase has no explicit "Generate Branch for this File" command; you would want the "Checkout" command in this case. Branching is indirect: it is the result of checking out a version of a file in a view whose config spec contains a '-mkbranch <branch-type>' rule. I.e. the following config spec will create the dev_1.0_branch once I check a file out (for any and all VOBs and files):
element * CHECKEDOUT
element * .../dev_1.0_branch/LATEST
element * /main/LATEST -mkbranch dev_1.0_branch
The first line is standard for views in which you are doing development; line 2 assures that I see any file that has a dev_1.0_branch (particularly important for the checkout+mkbranch to work as expected :-); and line 3 selects the latest version of any file that does not have a dev_1.0_branch, creating the branch if (and only if) the file version selected by that rule is checked out.
Please let me know if any of the above sounds Greek to you, particularly any of the config spec rules. Having worked with ClearCase for a long time, I tend to use a lot of its terminology and concepts as if they were common knowledge :-P.
One thing of note: if you checkout the file, then immediately uncheckout the file, you will leave an empty branch on that file (i.e. in the above you would have a file with a version such as foo.c@@/main/dev_1.0_branch/0, but no /main/dev_1.0_branch/1 version). Many sites prefer to keep the version tree clean and remove empty branches (a script for doing so can be found in an IBM Rational technical article).
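For instance, an empty branch left behind that way can be removed with something like the following (the path and branch name follow the example above):
cleartool rmbranch -force foo.c@@/main/dev_1.0_branch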
Just to be clear, I'm familiar with ClearCase Base & ClearCase MultiSite, but have not worked with the Remote Client yet.
--- 2009-Jun-29 Update
In response to Paul's comment below, if you want to be selective in what files are branched, you can modify the "*" to be more specific. For example, if you want to only branch foo.c in the FOODEV VOB, but leave everything else on main:
UNIX config spec:
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/foo.c -mkbranch my_dev_branch
element * /main/LATEST
(For Windows, you would want to use Windows conventions, i.e. \FOODEV\src\foo.c.)
You can also select a directory and all elements below the directory (again UNIX config spec):
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/mycomponent/... -mkbranch my_dev_branch
element * /main/LATEST
The man page for config_spec (cleartool man config_spec from the command line on Windows or UNIX) provides decent guidance in the "Pattern" section on how to write the element/version selector (the 2nd column).
You can do a lot of complex version selection with the config specs. Please let me know if you would like more details or specifics.
Here's a config spec that I used for fixing a particular bug, with names changed to disguise some of the guilty.
element * CHECKEDOUT
element * .../TEMP.bugnum171238.jleffler/LATEST
mkbranch -override TEMP.bugnum171238.jleffler
include /clearcase/cspecs/project/version-1.23.45
To create the branch type in each VOB, I used a command:
ct mkbrtype -c 'Branch for bug 171238' TEMP.bugnum171238.jleffler@/vobs/project
Previously, we used config specs with -mkbranch rules appended to the various element lines.
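Such a rule looked something like this (reusing the branch type from the config spec above):
element * /main/LATEST -mkbranch TEMP.bugnum171238.jleffler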