I created a docker container but cannot seem to parse its properties with docker-cli and its format parameter (which uses Go templates).
Any idea greatly appreciated.
Start a docker container, e.g. go to https://birthday.play-with-docker.com/cli-formating/ and click on the docker command in the Prepare the environment section
Choose a property for parsing, e.g. Ports. Printing it with docker container ls --format '{{ .Ports }}' should yield 0.0.0.0:80->80/tcp.
Trying to get the part after the colon, I split the property at ":" (docker container ls --format '{{ split .Ports ":" }}'), which yields the array [0.0.0.0 80->80/tcp].
The return type (docker container ls --format '{{ printf "%T" (split .Ports ":") }}') is
[]string.
The string array has a length of 2 (docker container ls --format '{{ len (split .Ports ":") }}') .
Accessing index value 0 (docker container ls --format '{{ index (split .Ports ":") 0 }}') yields 0.0.0.0 as expected.
Accessing index value 1 (docker container ls --format '{{ index (split .Ports ":") 1 }}') yields failed to execute template: template: :1:2: executing "" at <index (split .Ports ":") 1>: error calling index: reflect: slice index out of range instead of the expected 80->80/tcp.
In order to access string array elements, I found a solution.
As this question focuses on the Go templates part in the --format parameter, I will just write about that.
The {{ index (split .Ports ":") 1 }} yields an error because index is simply the wrong function in this case. To access an array element, use slice:
Get the part of the array starting at the second element (zero-based index 1) with {{ slice (split .Ports ":") 1 }}; this yields [80->80/tcp].
If you need string output, you can convert the resulting slice with join, using "" as separator, which removes the square brackets from the output: {{ join ( slice (split .Ports ":") 1 ) "" }} yields 80->80/tcp.
The complete command is docker container ls --format '{{ join ( slice ( split .Ports ":" ) 1 ) "" }}'. Keep in mind that Go templates use a kind of prefix notation which might not seem that common.
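For illustration, the extracted value can also be combined with other placeholders in the same format string (a small sketch; .Names is another standard column of docker container ls):
docker container ls --format '{{ .Names }}: {{ join (slice (split .Ports ":") 1) "" }}'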
I have a question about DevOps Pipelines.
I have created a pipeline, and during its creation I added a variable with these values: Name: RESIDENCE; Value: ISS. So this value is defined outside of any script.
Pipeline Variable
Inside the .yml file I use this code:
variables:
  - name: shuttle
    value: Columbia
  - name: pipe_var
    value: $(RESIDENCE)
  - name: location
    ${{ if eq(variables.pipe_var, 'ISS') }}:
      value: pretty_cool
    ${{ if eq(variables.pipe_var, 'MIR') }}:
      value: not_possible

steps:
  - script: |
      echo space shuttle is: $(shuttle)
      echo residence is: $(pipe_var)
      echo place to be is: $(location)
But the pipeline output only shows:
space shuttle is: Columbia
residence is: ISS
place to be is:
So as can be seen in the line "residence is: ISS", the value from the outside variable "RESIDENCE" is shown correctly. To show this value I take a detour via the variable "pipe_var". But when I try to compare that variable's value in the "if equal" lines, I get no result.
What am I doing wrong? Is there a special way to compare string values in a pipeline?
It would be nice if someone could give me a hint.
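A hedged note on the likely cause: ${{ ... }} template expressions are evaluated at compile time, before runtime macros such as $(RESIDENCE) are expanded, so at that point variables.pipe_var still contains the literal text $(RESIDENCE) and equals neither 'ISS' nor 'MIR'. A minimal sketch that compares against the pipeline variable directly (assuming RESIDENCE is defined in the pipeline UI as described above):
variables:
  - name: shuttle
    value: Columbia
  - name: location
    ${{ if eq(variables.RESIDENCE, 'ISS') }}:
      value: pretty_cool
    ${{ if eq(variables.RESIDENCE, 'MIR') }}:
      value: not_possible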
I'm trying to use a variable inside a variable declaration in Ansible (2.7.10).
I'm using aws_ssm lookup plugin (https://docs.ansible.com/ansible/latest/plugins/lookup/aws_ssm.html)
Working example (hardcoded values):
var: "{{ lookup('aws_ssm', '/path/server00', region='eu-west-3') }}"
I want to use variables for the server name and the AWS region, but all my attempts have ended in errors.
What I've tried so far:
var: "{{ lookup('aws_ssm', '/path/{{ server }}', region={{ region }}) }}"
var: "{{ lookup('aws_ssm', '/path/{{ server }}', region= + region) }}"
- name: xxx
  debug: msg="{{ lookup('aws_ssm', '/path/{{ server }}', region='{{ region }}' ) }}"
  register: var
Without any success yet. Thanks for your help.
You never nest {{...}} template expressions. If you're already inside a template expression, you can just refer to variables by name. For example:
var: "{{ lookup('aws_ssm', '/path/' + server, region=region) }}"
(This assumes that the variables server and region are defined.)
You can also take advantage of Python string formatting syntax. The following will all give you the same result:
'/path/' + server
'/path/%s' % (server)
'/path/{}'.format(server)
And instead of + you can use the Jinja ~ concatenation operator, which acts sort of like + but forces arguments to be strings. So while this is an error:
'some string' + 1
This will result in the text some string1:
'some string' ~ 1
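Putting that together for the original lookup, a minimal task-level sketch (the path, server and region values are placeholders; it assumes the aws_ssm lookup has boto3 and AWS credentials available):
- name: Read an SSM parameter into a fact (illustrative values)
  set_fact:
    server_param: "{{ lookup('aws_ssm', '/path/' + server, region=region) }}"
  vars:
    server: server00
    region: eu-west-3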
I am struggling with a SQL query in a YAML file. I have tested my SQL query in my database, which works perfectly.
This is my query in my Ansible file:
shell: "{{ scrub_command }} -c \"UPDATE project_record SET meta=jsonb_set(meta, '{"email"}', concat('"', meta->>'email', '.not"')::jsonb) WHERE meta->>'email' IS NOT NULL AND meta->>'email' NOT ILIKE '%#email.somethingelse.com' AND meta->>'email' NOT ILIKE '%#something.com';\""
I can see in the editor that my query is not terminated properly, as the closing \" has a different color from the opening \".
If I take out the part:
concat('"', meta->>'email', '.not"')::jsonb)
the query is closing properly.
I have tried playing with the query and testing it on a YAML lint website, but I can't find a way to make my YAML file accept the query.
The error I get when running my script is:
Syntax Error while loading YAML.\n expected <block end>, but found '<scalar>'
The YAML lint website would give this error:
did not find expected key while parsing a block mapping at line 1 column 1
What am I doing wrong?
Your query is not valid YAML. Can you try with:
shell: "{{ scrub_command }} -c \"UPDATE project_record SET meta=jsonb_set(meta, '{\"email\"}', concat('\"', meta->>'email', '.not\"')::jsonb) WHERE meta->>'email' IS NOT NULL AND meta->>'email' NOT ILIKE '%#email.somethingelse.com' AND meta->>'email' NOT ILIKE '%#something.com';\""
This one is valid according to http://www.yamllint.com/
The best way to put scalars that contain both single and double quotes into YAML is by using block style scalars, i.e. scalars introduced by the | or > character. In the literal block style (|) none of the characters in the scalar are interpreted; even newlines stay newlines:
shell: |-
{{ scrub_command }} -c \"UPDATE project_record SET meta=jsonb_set(meta, '{"email"}', concat('"', meta->>'email', '.not"')::jsonb) WHERE meta->>'email' IS NOT NULL AND meta->>'email' NOT ILIKE '%#email.somethingelse.com' AND meta->>'email' NOT ILIKE '%#something.com';\"
In the folded style (>) single newlines are replaced by a space, so you can make things a bit more readable:
shell: >-
{{ scrub_command }} -c \"UPDATE project_record
SET meta=jsonb_set(meta, '{"email"}', concat('"', meta->>'email', '.not"')::jsonb)
WHERE meta->>'email' IS NOT NULL AND meta->>'email' NOT ILIKE '%#email.somethingelse.com' AND meta->>'email'
NOT ILIKE '%#something.com';\"
In both cases the scalar is passed on exactly as written, including the backslashes, which have no special meaning in these block style YAML scalars but of course do for the shell that executes the string loaded from the scalar. The - after | (resp. >) is necessary to get rid of the trailing newline.
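If you want to double-check what the shell module will actually receive, a quick local parse of the block scalar can help (a sketch assuming Python with PyYAML is installed and test.yml contains just the shell: mapping shown above):
python3 -c "import yaml; print(yaml.safe_load(open('test.yml'))['shell'])"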
Apologies for the lack of clarity; rewriting my question:
I am struggling to get the appropriate start_index value passed on to the third task below, "Echo parameters". If user_defined_index is "", I want the "echo unique UIDs" task to execute and populate the start_index variable. Likewise, if user_defined_index is not "", I want the second task below to execute and populate the start_index variable. I essentially need to pass either A or B to the "Echo parameters" task.
The "Echo parameters" task expects to get some UIDs. The first task autogenerates UIDs based on the shell command you see; the second task lets the user specify UIDs. Whichever when condition is valid, that set of UIDs needs to be used by the third task. Using debug statements I have confirmed that both the "echo unique UIDs" and "Capture user defined UIDs" tasks work fine and the corresponding registered variables contain the right data.
My issue is that the third task only ever picks up values from the first task, whether those are autogenerated values or blank with skipped equal to true.
I need the correct corresponding value in start_index to be fed into the third task.
- name: echo unique UIDs
  shell: echo $(((0x$(hostid) + $(date '+%s'))*100000 + {{ item[0] }}*100000 + {{ start_stress_index }}))
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_index
  when: user_defined_index == ""
  changed_when: False

- name: Capture user defined UIDs
  shell: echo '{{ user_defined_index }}' | tr , '\n'
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_index
  when: user_defined_index != ""
  changed_when: False

- name: Echo parameters
  command: echo --cfg='{{ start_index }}' --si={{ item[1].stdout }}
  with_together:
    - "{{ load_cfg }}"
    - "{{ start_index.results }}"
For the above, regardless of user_defined_index, the output from the "echo unique UIDs" task always gets passed through to the third task. After googling I finally found a potential solution, using the ternary filter:
https://github.com/ansible/ansible/issues/33827
I have modified my code to be:
- name: echo unique UIDs
  shell: echo $(((0x$(hostid) + $(date '+%s'))*100000 + {{ item[0] }}*100000 + {{ start_stress_index }}))
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_auto_index
  when: user_defined_index == ""
  changed_when: False

- name: Capture user defined UIDs
  shell: echo '{{ user_defined_index }}' | tr , '\n'
  with_indexed_items:
    - "{{ load_cfg }}"
  register: start_user_index
  when: user_defined_index != ""
  changed_when: False

- name: Echo parameters
  command: echo --cfg='{{ start_index }}' --si={{ item[1].stdout }}
  with_together:
    - "{{ load_cfg }}"
    - "{{ ((start_auto_index is not skipped)|ternary(start_auto_index,start_user_index))['results'] }}"
However, I still have the same issue as with my first example: when I run the above I again only get output from start_auto_index sent to the third task ("Echo parameters"), no matter what I do with user_defined_index.
I hope this clarifies my question.
Explanation
The problem is that your tasks contain a loop, and in such a case Ansible returns separate statuses for the task as a whole and for each loop iteration.
With start_auto_index is not skipped you check the status of the whole task, but it is the iterations that get the "skipped" status.
Solution
Since your conditional user_defined_index is constant across loop iterations for the tasks containing the loop, all iterations will have the same skipped status, so you can change the condition in the Echo parameters task to check just one of them:
"{{ ((start_auto_index[0] is not skipped)|ternary(start_auto_index,start_user_index))['results'] }}"
Besides, in the Echo parameters task with_together does not seem to serve any purpose, as you don't refer to item[0].
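Putting both points together, a hedged sketch of what the third task could look like with the per-iteration check and a plain loop (the --cfg argument is left out here because start_index is no longer registered in the modified version; untested against the full playbook):
- name: Echo parameters
  command: echo --si={{ item.stdout }}
  loop: "{{ ((start_auto_index.results[0] is not skipped) | ternary(start_auto_index, start_user_index)).results }}"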
In SQLWorkbenchJ, I am trying to load a tab-delimited text file from Amazon S3 into Redshift with this command:
COPY table_property
FROM 's3://...txt'
CREDENTIALS 'aws_access_key_id=…;aws_secret_access_key=…'
IGNOREHEADER 1
DELIMITER '\t';
But it returns the following warning:
Warnings:
Load into table 'table_property' completed, 0 record(s) loaded successfully.
I have checked various Stack Overflow sources and Tutorial: Loading Data from Amazon S3, but none of the solutions work.
My data from the text file looks like this:
BLDGSQFT DESCRIPTION LANDVAL STRUCVAL LOTAREA OWNER_PERCENTAGE
12440 Apartment 15 Units or more 2013005 1342004 1716 100
20247 Apartment 15 Units or more 8649930 5766620 7796.25 100
101
1635 Live/Work Condominium 977685 651790 0 100
Does anyone have the solution to this?
Check the tables STL_LOAD_ERRORS and STL_LOADERROR_DETAIL for the precise error message.
The message you are talking about is not an "Error". Your table still has all of its records; it just says that no additional records were added.
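For example, a quick way to see the most recent load errors for the table (a sketch; the column names come from the standard STL_LOAD_ERRORS system table):
SELECT starttime, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;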
Try putting DELIMITER '\\t' instead of DELIMITER '\t'. That worked in many of my cases working with Redshift from Java, PHP, and Python; sometimes even more '\' signs are needed. It comes down to how IDEs/languages interpret the string queries that are to be executed.
For example, this is my code from an Airflow DAG, something I'm working on right now (it doesn't matter if you are not familiar with Airflow; it's basically Python code):
redshift_load_task = PostgresOperator(
    task_id='s3_to_redshift',
    sql=" \
        COPY " + table_name + " \
        FROM '{{ params.source }}' \
        ACCESS_KEY_ID '{{ params.access_key}}' \
        SECRET_ACCESS_KEY '{{ params.secret_key }}' \
        REGION 'us-west-2' \
        ACCEPTINVCHARS \
        IGNOREHEADER 1 \
        FILLRECORD \
        DELIMITER '\\t' \
        BLANKSASNULL \
        EMPTYASNULL \
        MAXERROR 100 \
        DATEFORMAT 'YYYY-MM-DD' \
        ",
    postgres_conn_id="de_redshift",
    database="my_database",
    params={
        'source': 's3://' + s3_bucket_name + '/' + s3_bucket_key + '/' + filename,
        'access_key': s3.get_credentials().access_key,
        'secret_key': s3.get_credentials().secret_key,
    },
)
Notice how I defined the delimiter DELIMITER '\\t' instead of DELIMITER '\t'.
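The difference is easy to see in plain Python, independent of Airflow (a small illustrative sketch):
# In a normal Python string literal, '\t' is already an actual tab character
# by the time the SQL text is built, while '\\t' keeps a literal backslash
# followed by 't', so what reaches Redshift differs.
s1 = "DELIMITER '\t'"    # contains a real tab between the quotes
s2 = "DELIMITER '\\t'"   # contains backslash + t between the quotes
print(len(s1), len(s2))  # s2 is one character longer because of the extra backslash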
Another example is part of a Hive query, executed via Java code on Spark:
...
AND (ip_address RLIKE \"^\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+$\")"
...
Notice here how there are 4 backslashes to escape the d in the regex, instead of writing just \d. Hope it helps.