Terraform template_file: pass all received variables through

Is there a way in Terraform's template_file to pass all the received variables through to another place?
I mean something similar to $@ in bash.
For example:
resource "template_file" "some_template" {
template = "my_template.tpl")}"
vars {
var1 = "value1"
var2 = "value2"
}
}
and then from the rendered file:
#!/bin/bash
echo "Var1: ${var1}"
echo "Var2: ${var2}"
echo "But I want it in some way similar to this:"
for v in $@; do
  echo "$v";
done

According to the documentation, no.
From https://www.terraform.io/docs/providers/template/d/file.html
Variables for interpolation within the template. Note that variables
must all be primitives. Direct references to lists or maps will cause
a validation error.
Primitives in Terraform are string, number, and boolean.
So you cannot pass a map or a list to group all the variables into one.
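For illustration, this is the kind of thing the quoted note rules out (a sketch; var.my_list is a hypothetical list variable):

data "template_file" "will_fail" {
  template = "${file("my_template.tpl")}"
  vars {
    all_of_them = "${var.my_list}" # direct reference to a list -> validation error
  }
}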

A workaround: use join to pass all the variables as one string, then parse/split it within the script (with tr/IFS tricks):
join("; ", [var.myvar1, var.myvar2, var.myvar3])
and then
IN="${allvars}"
IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[#]}"; do
echo "$i"
done
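Putting the two halves together, a minimal sketch (variable and file names are illustrative). Note that inside the template any literal bash ${...} expansion must be escaped as $${...} so Terraform doesn't try to interpolate it:

data "template_file" "some_template" {
  template = "${file("my_template.tpl")}"
  vars {
    allvars = "${join(";", [var.myvar1, var.myvar2, var.myvar3])}"
  }
}

and in my_template.tpl:

#!/bin/bash
IN="${allvars}"                # rendered by Terraform
IFS=';' read -ra ADDR <<< "$IN"
for i in "$${ADDR[@]}"; do     # $${...} keeps this expansion for bash
  echo "$i"
done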

Channel checks as empty even if it has content

I am trying to have a process that is launched only if a combination of conditions is met, but when checking whether a channel has a path to a file, it always returns it as empty. I am probably doing something wrong; in that case please correct my code. I tried to follow some of the suggestions in this issue, but with no success.
Consider the following minimal example:
process one {
    output:
    file("test.txt") into _chProcessTwo

    script:
    """
    echo "Hello world" > "test.txt"
    """
}
// making a copy so I can first check whether something is in the channel or not
// avoids raising a MultipleInputChannel exception
_chProcessTwo.into{
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}
//print contents of channel
println "Channel contents: " + _chProcessTwoView.toList().view()
process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (!_chProcessTwoCheck.toList().isEmpty())

    script:
    def test = _chProcessTwoUse.toList().isEmpty() ? "I'm empty" : "I'm NOT empty"
    println "The outcome is: " + test
}
I want to have process two run if and only if there is a file in the _chProcessTwo channel.
If I run the above code I obtain:
marius@dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [infallible_gutenberg] - revision: 9f57464dc1
[c8/bf38f5] process > one [100%] 1 of 1 ✔
[- ] process > two -
[/home/marius/pipeline/work/c8/bf38f595d759686a497bb4a49e9778/test.txt]
where the last line are actually the contents of _chProcessTwoView
If I remove the when directive from the second process I get:
marius@mg-dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [modest_descartes] - revision: 5b2bbfea6a
[57/1b7b97] process > one [100%] 1 of 1 ✔
[a9/e4b82d] process > two [100%] 1 of 1 ✔
[/home/marius/pipeline/work/57/1b7b979933ca9e936a3c0bb640c37e/test.txt]
with the contents of the second worker .command.log file being: The outcome is: I'm empty
I also tried without toList().
What am I doing wrong? Thank you in advance.
Update: a workaround would be to check _chProcessTwoUse.view() != "", but that is pretty dirty.
Update 2: as requested by @Steve, I've updated the code to reflect a bit more closely the actual conditions I have in my own pipeline:
def runProcessOne = true

process one {
    when:
    runProcessOne

    output:
    file("inputProcessTwo.txt") into _chProcessTwo optional true
    file("inputProcessThree.txt") into _chProcessThree optional true

    script:
    // this would replace the probability that output is not created
    def outputSomething = false
    """
    if ${outputSomething}; then
        echo "Hello world" > "inputProcessTwo.txt"
        echo "Goodbye world" > "inputProcessThree.txt"
    else
        echo "Sorry. Process one did not write to file."
    fi
    """
}
// making a copy so I can first check whether something is in the channel or not
// avoids raising a MultipleInputChannel exception
_chProcessTwo.into{
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (runProcessOne)

    script:
    """
    echo "The outcome is: ${myInput}"
    """
}
process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(inputFromProcessTwo) from _chProcessThree

    script:
    def extra_parameters = _chProcessThree.isEmpty() ? "" : "--extra-input " + inputFromProcessTwo
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
As @Steve mentioned, I should not even need to check whether a channel is empty; Nextflow should know better and not initiate the process. But I think with this construct I will have to.
Marius
I think part of the problem here is that process 'one' creates only optional outputs. This makes dealing with the optional inputs in process 'three' a bit tricky. I would try to reconcile this if possible. If this can't be reconciled, then you'll need to deal with the optional inputs in process 'three'. To do this, you'll basically need to create a dummy file, pass it into the channel using the ifEmpty operator, then use the name of the dummy file to check whether or not to prepend the argument's prefix. It's a bit of a hack, but it works pretty well.
The first step is to actually create the dummy file. I like shareable pipelines, so I would just create this in your baseDir, perhaps under a folder called 'assets':
mkdir assets
touch assets/NO_FILE
Then pass in your dummy file if your '_chProcessThree' channel is empty:
params.dummy_file = "${baseDir}/assets/NO_FILE"
dummy_file = file(params.dummy_file)

process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(optfile) from _chProcessThree.ifEmpty(dummy_file)

    script:
    def extra_parameters = optfile.name != 'NO_FILE' ? "--extra-input ${optfile}" : ''
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
Also, these lines are problematic:
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
Calling view() will emit all values from the channel to stdout. You can ignore whatever value it returns. Unless you enable DSL2, the channel will then be empty. I think what you're looking for here is a closure:
_chProcessTwoView.view { "Found: $it" }
Be sure to append -ansi-log false to your nextflow run command so the output doesn't get clobbered (see the example below). HTH.
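For example, with the script from the question:

nextflow run test.nf -ansi-log false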

How to use conditions in .gitlab-ci.yml variables?

I want to know if it's possible to set a custom GitLab CI variable from an if-else conditional statement.
In my .gitlab-ci.yml file I have the following:
variables:
  PROJECT_VERSION: (if [ "${CI_COMMIT_TAG}" == "" ]; then "${CI_COMMIT_REF_NAME}-${CI_PIPELINE_ID}"; else ${CI_COMMIT_TAG}; fi);
Trying to set project version:
image: php:7.1-cli
stage: test
script:
  # this echoes the correct string (e.g. "master-2794")
  - (if [ "${CI_COMMIT_TAG}" == "" ]; then echo "${CI_COMMIT_REF_NAME}-${CI_PIPELINE_ID}"; else echo ${CI_COMMIT_TAG}; fi);
  # this echoes something like "(if [ "" == "" ]; then "master-2794"; else ; fi);"
  - echo $PROJECT_VERSION
Can this be done? If so, what have I missed? Thanks
This is expected behavior.
CI_COMMIT_TAG is only set to a value in a GitLab job when building tags. From https://docs.gitlab.com/ee/ci/variables/README.html:
CI_COMMIT_TAG - The commit tag name. Present only when building tags.
Therefore, in the variables section CI_COMMIT_TAG is not defined, hence it equals "".
So if you want to use CI_COMMIT_TAG, use it in a job where tags are defined. See https://docs.gitlab.com/ee/ci/yaml/README.html#tags
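For example, a minimal sketch of a job restricted to tag pipelines, where CI_COMMIT_TAG is defined (the job name and script are illustrative):

release:
  stage: deploy
  only:
    - tags
  script:
    - echo "Releasing version ${CI_COMMIT_TAG}"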
It is possible:
Add your logic to the variables section:
variables:
  VERSION_LOGIC: '(if [ "$${CI_COMMIT_TAG}" == "" ]; then echo "1.3.5.$$CI_PIPELINE_IID"; else echo "$${CI_COMMIT_TAG}"; fi);'
Now you are able to use this logic in the script section of a job:
version:
  stage: versioning
  script:
    - VERSION=$(eval $VERSION_LOGIC)
    - echo "The current version is set to ${VERSION}."

Is there a way to selectively capture variables in a scriptblock's context inside modules?

Suppose you have defined two functions in a module (i.e. a .psm1 file):
function f1{
    param($x1)
    $a1 = 10
    f2 $x1
}
function f2{
    param($x2)
    $a2 = 100
    & $x2
}
Now suppose you run the following:
PS C:\> $a0 = 1
PS C:\> $x0 = {$a0+$a1+$a2}
PS C:\> f1 $x0
1
$x2 keeps the context of the command line despite being invoked inside f2. This also holds if you change & to . .
Replacing $xn with $xn.GetNewClosure() in the module then calling f1 captures the value of 100 but not 10:
PS C:\> f1 $x0
101
PS C:\> f1 $x0.GetNewClosure()
101
This happens because calling .GetNewClosure() inside f2 "overwrites" the value of $a1 captured in f1.
Is there a way to selectively capture variables in scriptblocks? Working from the example, is there a way to capture both $a1 inside f1 and $a2 inside f2?
Further Reading
PowerShell scopes are not simple. Consider the possibilities from this incomplete list of factors:
there can be any combination of global and module scope hierarchies active at any time
. and & invocation affects scope differently,
the sophisticated flow control afforded by the pipeline means that multiple scopes of the begin, process, and end scriptblocks of different or the same scope hierarchies, or even multiple invocations of the same function can be active simultaneously
In other words, a working description of PowerShell scope resists simplicity.
The about_Scopes documentation suggests the matter is far simpler than it, in fact, is. Perhaps analysing and understanding the code from this issue would lead to a more complete understanding.
I was hoping there was a built-in way of achieving this. The closest thing I found was [scriptblock]::InvokeWithContext(). Handling the parameters for InvokeWithContext() manually gets pretty messy. I managed to encapsulate the mess by defining a couple of helper functions in another module:
function ConvertTo-xScriptblockWithContext{
    param([parameter(ValueFromPipeline=$true)]$InputObject)
    process{
        $InputObject | Add-Member -NotePropertyMembers @{variablesToDefine=@()}
        {$InputObject.InvokeWithContext(@{},$InputObject.variablesToDefine)}.GetNewClosure() |
            Add-Member -NotePropertyMembers @{ScriptBlockWithContext=$InputObject} -PassThru
    }
}
function Add-xVariableToContext{
    param(
        [parameter(ValueFromPipeline=$true)]$InputObject,
        [parameter(position=1)]$Name,
        [parameter(position=2)]$Value
    )
    process{
        $exists = $InputObject.ScriptBlockWithContext.variablesToDefine | ? { $_.Name -eq $Name }
        if ($exists) { $exists.Value = $Value }
        else { $InputObject.ScriptBlockWithContext.variablesToDefine += New-Object 'PSVariable' @($Name,$Value) }
    }
}
Then, f1 and f2 add variables to the scriptblock's context using Add-xVariableToContext as it passes through:
function f1{
    param($x1)
    $a1 = 10
    $x1 | Add-xVariableToContext 'a1' $a1
    f2 $x1
}
function f2{
    param($x2)
    $a2 = 100
    $x2 | Add-xVariableToContext 'a2' $a2
    & $x2
}
Notice that $x2 is invoked like any other scriptblock so it can be safely used with the variables added to its context by anything that accepts scriptblocks. Creating new scriptblocks, adding $a0 to their context, and passing them to f1 looks like this:
$a0 = 1
$x0a,$x0b = {$a0+$a1+$a2},{$a0*$a1*$a2} | ConvertTo-xScriptblockWithContext
$x0a,$x0b | Add-xVariableToContext 'a0' $a0
f1 $x0a
f1 $x0b
#111
#1000

How do I pick elements from a block using a string in Rebol?

Given this block
fs: [
    usr [
        local [
            bin []
        ]
        share []
    ]
    bin []
]
I could retrieve an item using a path notation like so:
fs/usr/local
How do I do the same when the path is a string?
path: "/usr/local"
find fs path ;does not work!
find fs to-path path ;does not work!
You need to complete the input string path with the right root, then LOAD it and evaluate it.
>> path: "/usr/local"
>> insert path "fs"
>> do load path
== [
bin []
]
Did you know Rebol has a native path type?
Although this doesn't exactly answer your question, I thought I'd add a reference on how to use paths directly in Rebol. Rebol has a lot of datatypes, and when you can, you should leverage that rich language feature. Especially when you start to use and build dialects, knowing what types exist and how to use them becomes even more potent.
Here is an example of how you can build and run a path directly, without using strings. In order to represent a path within source code, you use the lit-path! datatype.
example:
>> p: 'fs/usr/local
== fs/usr/local
>> do p
== [
bin []
]
you can even append to a path to manipulate it:
>> append p 'bin
== fs/usr/local/bin
>> do p
== []
If it were stored within a block, you use a path! type directly (not a lit-path!):
>> p: [fs/usr/local]
== [fs/usr/local]
>> do first p
== [
bin []
]
Also note that using paths directly has advantages over using strings: the path is composed of a series of words, which you can manipulate more easily than strings. Example:
>> head change next p 'bin
== fs/bin/local
>> p: 'fs/path/issue/is
== fs/path/issue/is
>> head replace p 'is 'was
== fs/path/issue/was
as opposed to using a string:
>> p: "fs/path/issue/is"
== "fs/path/issue/is"
>> head replace p "is" "was"
== "fs/path/wassue/is"
If you want to browse the disk instead of Rebol datasets, simply give 'fs a value of a file! and the rest of the path will browse from there (this is how paths work on file! types):
fs: %/c/
read dirize fs/windows

How to set default values for Tcl variables?

I have some Tcl scripts that are executed by defining variables in the command-line invocation:
$ tclsh84 -cmd <script>.tcl -DEF<var1>=<value1> -DEF<var2>=<value2>
Is there a way to check if var1 and var2 are NOT defined at the command line and then assign them a set of default values?
I tried the keywords global, variable, and set, but they all give me this error when I say if {$<var1> == ""}: "can't read <var1>: no such variable"
I'm not familiar with the -DEF option on tclsh.
However, to check if a variable is set, instead of using catch, you can also use info exists:
if { ![info exists blah] } {
    set blah default_value
}
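Applied to the variables from the question, that pattern would look like this (the default values are illustrative):

if { ![info exists var1] } {
    set var1 "default1"
}
if { ![info exists var2] } {
    set var2 "default2"
}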
Alternatively you can use something like the cmdline package from tcllib. This allows you to set up defaults for binary flags and name/value arguments, and give them descriptions so that a formatted help message can be displayed. For example, if you have a program that requires an input filename, and optionally an output filename and a binary option to compress the output, you might use something like:
package require cmdline

set sUsage "Here you put a description of what your program does"
set sOptions {
    {inputfile.arg "" "Input file name - this is required"}
    {outputfile.arg "out.txt" "Output file name; if not given, out.txt will be used"}
    {compressoutput "0" "Binary flag to indicate whether the output file will be compressed"}
}

array set options [::cmdline::getoptions argv $sOptions $sUsage]
if {$options(inputfile) == ""} {puts "[::cmdline::usage $sOptions $sUsage]"; exit}
The .arg suffix indicates this is a name/value pair argument, if that is not listed, it will assume it is a binary flag.
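With that in place, an invocation might look like this (file names are illustrative); the parsed values are then available as $options(inputfile), $options(outputfile), and $options(compressoutput):

$ tclsh myscript.tcl -inputfile data.txt -compressoutput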
You can catch your command to prevent the error from aborting the script.
if { [catch { set foo $<var1> }] } {
    set <var1> defaultValue
}
(Warning: I didn't check the exact syntax with a Tcl interpreter; the above script is just to give the idea.)