Remove subelement from a YAML array in bash/awk when there's no order?

I'm trying to find a better way of removing values from a YAML file. For example, this is my YAML:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp
        - grp2
    - groups:
        - grp
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
My input is a list of usernames, which I can test with a regex OR. Since I cannot install any dependencies, I have to use a tool that is installed on any system, which is why I chose awk.
For each entry, I have to check whether the username matches any of the values in my list, and if it does, remove a specific group from the "groups:" list.
What I was thinking is to identify the start of each YAML key (that represents a user), then add everything to an array while checking whether the username is exactly what we expect; if it is, print the array without the relevant group, otherwise print the entire array.
I've started writing it and it seems complex; is there a better way?
--- examples ---
If I'm specifying "user1" and "grp" as the params, the YAML should look like:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp2
    - groups:
        - grp
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
If I'm specifying "user2" and "grp", it should look like:
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp
        - grp2
    - groups:
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
That's my issue: user2's username is specified AFTER the groups section, so I'm not sure of the correct way to remove the group.

This might be what you're trying to do, but it's not clear from your question:
$ cat tst.awk
BEGIN {
    split(users,tmp)
    for (i in tmp) {
        tgtUsers[tmp[i]]
    }
    split(groups,tmp)
    for (i in tmp) {
        tgtGroups[tmp[i]]
    }
}
match($0,/^[[:space:]]*(-[[:space:]]*)?[^[:space:]]+[[:space:]]*:/) {
    sect = $0
    sub(/^[[:space:]]*(-[[:space:]]*)?/,"",sect)
    sub(/[[:space:]]*:.*/,"",sect)
}
sect == "username" {
    inTgtUsers = ($NF in tgtUsers)
    inGroups = 0
}
sect == "groups" {
    inGroups = 1
}
!(inGroups && inTgtUsers && ($NF in tgtGroups))
$ awk -v users='user1' -v groups='grp' -f tst.awk file
apiVersion: v1
data:
  mapRoles: |-
    - username: user1
      rolearn: arn
      groups:
        - grp2
    - groups:
      rolearn: arn
      username: user2
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"..."}
  uid: 93ad6dc1-2a1f-11ea-b5da-0ec0e91c7076
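For readers more comfortable outside awk, the same single-pass idea (track the current key name, flag when we're inside a target user's groups list, and drop matching items) can be sketched in Python. This is a rough illustration of the technique, not a drop-in replacement, and it shares the script's limitation that the username flag carries over when username appears after groups:

```python
import re

def remove_group(lines, users, groups):
    # Track the current key ('sect' in the awk script), whether the
    # last seen username is a target, and whether we are inside a
    # 'groups:' list; drop '- <group>' lines that match.
    out = []
    sect = None
    in_tgt_users = in_groups = False
    for line in lines:
        m = re.match(r'\s*(?:-\s*)?(\S+?)\s*:', line)
        if m:
            sect = m.group(1)
        fields = line.split()
        last = fields[-1] if fields else ''
        if sect == 'username':
            in_tgt_users = last in users
            in_groups = False
        elif sect == 'groups':
            in_groups = True
        if not (in_groups and in_tgt_users and last in groups):
            out.append(line)
    return out
```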

Following Ed Morton's note, I've taken his script, but had to change the regex, as it didn't catch groups with colons in the name (such as - my:group):
BEGIN {
    inGroups = 0
    split(users,tmp)
    for (i in tmp) {
        tgtUsers[tmp[i]]
    }
    split(groups,tmp)
    for (i in tmp) {
        tgtGroups[tmp[i]]
    }
}
match($0,/^ +(-? (username|rolearn|userarn): [^ ]+|-? groups: *$)/) {
    sect = $0
    sub(/^[[:space:]]*(-[[:space:]]*)?/,"",sect)
    sub(/[[:space:]]*:.*/,"",sect)
    if ($1 == "-") {
        startIndexArr[NR]
        start_index = NR
    }
    inGroups = 0
}
sect == "username" {
    if ($NF in tgtUsers)
        relevantUsersIndexArr[start_index]
}
sect == "groups" {
    inGroups = 1
}
(inGroups == 1 && ($NF in tgtGroups)) {
    foundGroupsArr[NR]
}
{
    yamlArr[NR] = $0
}
END {
    row_num = 1
    for (i in yamlArr) {
        if (row_num in startIndexArr)
            start_entity_index = row_num
        print_row = 1
        if (row_num in foundGroupsArr && start_entity_index in relevantUsersIndexArr)
            print_row = 0
        if (print_row == 1)
            print yamlArr[row_num]
        row_num++
    }
}
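The buffered two-pass approach above can also be sketched in Python: first mark where each "- " entry starts, which entries belong to a target user, and which lines are target group items; then print everything except group items inside a target user's entry. This is a loose sketch under the same assumptions as the script (keys limited to username/rolearn/userarn/groups), and it is what makes the "username after groups" case work:

```python
import re

KEY = re.compile(r'^\s*(-\s*)?(username|rolearn|userarn|groups)\s*:')

def remove_group_two_pass(lines, users, groups):
    entry_start = None       # line index where the current entry began
    starts = []              # owning entry start for every line
    user_entries = set()     # entry starts owned by a target user
    group_lines = set()      # line indices that are target group items
    in_groups = False
    # Pass 1: classify every line.
    for i, line in enumerate(lines):
        m = KEY.match(line)
        fields = line.split()
        if m:
            if m.group(1):                     # '- key:' opens a new entry
                entry_start = i
            in_groups = (m.group(2) == 'groups')
            if m.group(2) == 'username' and fields[-1] in users:
                user_entries.add(entry_start)
        elif in_groups and fields and fields[-1] in groups:
            group_lines.add(i)
        starts.append(entry_start)
    # Pass 2: keep everything except matched group items inside
    # entries that belong to a target user.
    return [l for i, l in enumerate(lines)
            if not (i in group_lines and starts[i] in user_entries)]
```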


How to compare 2 JSON objects containing array using Karate tool [duplicate]

This question already has an answer here: Is there a simple match for objects containing array where the array content order doesn't matter? (1 answer). Closed 1 year ago.
This is for API testing using intuit/karate.
Expected JSON is: {name: hello,
config:[{username: abc, password: xyz},{username: qwe, password: tyu}]}
There are two possibilities for the API response.
First possible actual JSON: {name: hello,
config:[{username: qwe, password: tyu},{username: abc, password: xyz}]}
Second possible actual JSON: {name: hello,
config:[{username: abc, password: xyz},{username: qwe, password: tyu}]}
Because the sequence of array elements differs in the actual response, the following validation approaches throw errors randomly:
And response == < ExpectedResponse >
And response contains < ExpectedResponse >
Sometimes the error thrown is:
Error : { Actual: response.config[0].abc, Expected: response.config[0].qwe }
And sometimes it is:
Error : { Actual: response.config[0].qwe, Expected: response.config[0].abc }
Could you please suggest the exact Karate approach for validating the entire JSON while ignoring the order of elements in any arrays it contains?
Here is the solution:
* def response1 = {name: 'hello', config:[{username: 'qwe', password: 'tyu'},{username: 'abc', password: 'xyz'}]}
* def response2 = {name: 'hello', config:[{username: 'abc', password: 'xyz'},{username: 'qwe', password: 'tyu'}]}
* def config = [{username: 'qwe', password: 'tyu'},{username: 'abc', password: 'xyz'}]
* match response1 == { name: 'hello', config: '#(config)' }
* match response2 == { name: 'hello', config: '#(^^config)' }
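For context: in Karate's fuzzy-matching syntax, `^^` is the "contains only" marker, meaning the array must contain exactly these elements but in any order, which is why `'#(^^config)'` passes for both orderings while plain `==` does not. The underlying idea, comparing lists of objects while ignoring order, can be sketched in Python:

```python
def match_any_order(expected, actual):
    # Same elements required on both sides, order ignored
    # (analogous to Karate's '#(^^config)' / 'contains only').
    # Canonicalize each list by sorting on the dicts' sorted items.
    canon = lambda lst: sorted(lst, key=lambda d: sorted(d.items()))
    return canon(expected) == canon(actual)

expected = [{"username": "abc", "password": "xyz"},
            {"username": "qwe", "password": "tyu"}]
reordered = [{"username": "qwe", "password": "tyu"},
             {"username": "abc", "password": "xyz"}]
```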

Mule Dataweave Fixed Width File with header and footer

I am working on a project where we receive a flat file, but the first and last lines have information that does not fit the fixed-width pattern. Is there a way to DataWeave all of this information correctly and, if possible, put the header and footer into variables and keep just the contents in the payload?
Example File
HDMTFSBEUP00000220170209130400 MT HD07
DT01870977 FSFSS F3749261 CR00469002017020820170225 0000
DT01870978 FSFSS F3749262 CR00062002017020820170125 0000
TRMTFSBEUP00000220170209130400 000000020000002000000000000043330000000000000 0000
I know that for CSV you can skip a line, but I don't see that option for fixed width. Also, the header and footer will both start with the same first 2 letters every time, so maybe they can be filtered by DataWeave?
Please refer to the DataWeave Flatfile Schemas documentation. There are several examples of processing different types of data.
In this case, I tried to simplify your example data and apply a custom schema as follows.
Example data:
HDMTFSBEUP00000220170209130400
DT01870977
DT01870978
TRMTFSBEUP00000220170209130400
Schema/Flat File Definition:
form: FLATFILE
structures:
  - id: 'test'
    name: test
    tagStart: 0
    tagLength: 2
    data:
      - { idRef: 'header' }
      - { idRef: 'data', count: '>1' }
      - { idRef: 'footer' }
segments:
  - id: 'header'
    name: header
    tag: 'HD'
    values:
      - { name: 'header', type: String, length: 39 }
  - id: 'data'
    name: data
    tag: 'DT'
    values:
      - { name: 'code', type: String, length: 17 }
  - id: 'footer'
    name: footer
    tag: 'TR'
    values:
      - { name: 'footer', type: String, length: 30 }
The schema will validate the example data and identify each line based on its tag (the first 2 letters). The output will be grouped accordingly:
{
    "header": {},
    "data": [{}, {}],
    "footer": {}
}
Since the expected result is only the data, then just select it: payload.data.
Use the range selector to skip the header and footer:
payload[1..-2] map {
    field1: $[0..15],
    field2: $[16..31]
    ...,
    ...
}
[1..-2] selects from the 2nd line through the second-to-last line of the payload.
$[0..15] selects column indexes 0 through 15; $[16..31] selects column indexes 16 through 31.
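The same skip-and-slice idea can be sketched outside DataWeave. Here is a rough Python illustration; the field names and widths are made up for the example, not taken from the real record layout. Note that DataWeave's [1..-2] is inclusive of the second-to-last element, which corresponds to Python's lines[1:-1]:

```python
def parse_fixed_width(lines):
    # Drop the first and last lines (header/footer), like
    # payload[1..-2], then slice each data line into fixed-width
    # columns, like $[0..15] and $[16..31].
    return [{"field1": line[0:16].strip(),
             "field2": line[16:32].strip()}
            for line in lines[1:-1]]

records = [
    "HDMTFSBEUP00000220170209130400",
    "DT01870977      FSFSS F3749261",
    "DT01870978      FSFSS F3749262",
    "TRMTFSBEUP00000220170209130400",
]
```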
I was facing the same issue, and the answer sulthony h wrote needs a little tweak. I used these lines instead and it worked for me:
data:
  - { idRef: 'header', count: 1 }
  - { idRef: 'data', count: '>1' }
  - { idRef: 'footer', count: 1 }
"count" was missing from header and footer, and that was throwing an exception. Hope this helps.

PermissionsEx is not working

I am trying to make a kit PvP server, and my players are not able to use signs at all!
groups:
  Initiate:
    options:
      default: 'true'
      prefix: '&0[&3Initiate&0]&f'
    permissions:
      - modifyworld.*
      - essentials.eco
      - essentials.pay
      - essentials.pay.multiple
      - essentials.afk
      - essentials.afk.auto
      - essentials.mail
      - essentials.mail.send
      - essentials.msg
      - essentials.rules
      - essentials.seen
      - essentials.seen.banreason
      - essentials.suicide
      - essentials.spawn
      - essentials.keepxp
      - essentials.warp
      - essentials.warp.list
      - essentials.warp.afireandice
      - essentials.warp.forestlyr
      - essentials.warp.mainplains
      - essentials.warp.spawn
      - essentials.signs.use.balance
      - essentials.signs.use.buy
      - essentials.signs.use.disposal
      - essentials.signs.use.free
      - essentials.signs.use.heal
      - essentials.signs.use.info
      - essentials.signs.use.mail
      - essentials.signs.use.repair
      - essentials.signs.use.sell
      - essentials.signs.use.warp
      - kingkits.command.previewkit
      - kingkits.sign.list.use
      - kingkits.sign.kit.use
      - kingkits.compass
      - kingkits.quicksoup
schema-version: 1
users:
  9bb304e6-2ff2-4acc-b073-d899993e157d:
    group: []
    options:
      name: CraigSwords
  7225aabb-6ae9-4081-add2-00dbdd6d114c:
    group: []
    options:
      name: SocialSavior
  b4c5a860-8e01-4306-99c7-3457e935eed3:
    group: []
    options:
      name: mewtwolvex
  7f1e5c73-3fac-4b5e-b7ed-6661740470a7:
    group: []
    options:
      name: Slick10000
Your Initiate group permissions appear to be correct. Even though this group is configured to be the default group, it is not assigned to your players since you have explicit group definitions for all (i.e. group: []). Removing the empty group definitions from your players will cause PEX to assign them to your default Initiate group.

Set fact with dynamic key name in ansible

I am trying to shrink several chunks of similar code which look like this:
- ... # multiple things going on here
  register: list_register
- name: Generating list
  set_fact: my_list="{{ list_register.results | map(attribute='ansible_facts.list_item') | list }}"
# the same code repeats...
The only difference between them is the list name in place of my_list.
In fact, I want to do this:
set_fact:
"{{ some var }}" : "{{ some value }}"
I came across this post but didn't find any answer here.
Is it possible to do so or is there any workaround?
Take a look at this sample playbook:
---
- hosts: localhost
  vars:
    iter:
      - key: abc
        val: xyz
      - key: efg
        val: uvw
  tasks:
    - set_fact: {"{{ item.key }}":"{{ item.val }}"}
      with_items: "{{iter}}"
    - debug: msg="key={{item.key}}, hostvar={{hostvars['localhost'][item.key]}}"
      with_items: "{{iter}}"
The above does not work for me. What finally works is:
- set_fact:
    example_dict: "{'{{ some var }}':'{{ some other var }}'}"
Which is, in the end, obvious: you construct a string (the outer double quotes) which is then interpreted as a hash. Inside the hash, key and value must be in single quotes (the inner single quotes around the variable replacements). Finally, you place your variable replacements as in any other string.
Stefan
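Stripped of the Jinja quoting, a dynamic fact name boils down to using a variable's value as the key of a mapping. A trivial Python analogue of the concept:

```python
def make_fact(name, value):
    # The key of the resulting dict is the *value* of 'name',
    # which is exactly what set_fact with a templated key achieves.
    return {name: value}
```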
As of 2018, using ansible v2.7.1, the syntax you suggest in your post works perfectly well.
At least in my case, I have this in role "a":
- name: Set fact
  set_fact:
    "{{ variable_name }}": "{{ variable_value }}"
And that in role "b":
- debug:
    msg: "variable_name = {{ variable_name }}"
And execution goes:
TASK [role a : Set fact] *******************************************************
ok: [host_name] => {
    "ansible_facts": {
        "variable_name": "actual value"
    },
    "changed": false
}
...
TASK [role b : debug] **********************************************************
ok: [host_name] => {}
MSG:
variable_name = actual value
- set_fact: '{{ some_var }}={{ some_value }}'
This creates an inline module-parameter string by concatenating the value of some_var (the fact name), the separator =, and the value of some_value (the fact value).
- set_fact:
    var1={"{{variable_name}}":"{{ some value }}"}
This will create a variable "var1" with your dynamic key and value.
Example: I used this for creating dynamic tags in an AWS Auto Scaling group, to create Kubernetes tags for the instances, like this:
- name: Dynamic clustertag
  set_fact:
    clustertag={"kubernetes.io/cluster/{{ clustername }}":"owned"}
- name: Create the auto scale group
  ec2_asg:
    .
    .
    .
    tags:
      - "{{ clustertag }}"
Beware of a change in Ansible 2.9: the behaviour changed, rendering all the answers above invalid. See https://github.com/ansible/ansible/issues/64169

Assign Array to String in mule

How do I assign an array to a String? I want to convert XML to CSV, but my DataWeave returns an array that I need to convert into CSV.
Transform Message Code:
%input payload application/xml
%output application/java
---
(payload.catalog.*category-assignment default []) groupBy $.#product-id pluck {
    product-id: $$,
    cat-id: $.#category-id joinBy ":",
    primary-flag: $.primary-flag,
    field3: $.#category-id when $.primary-flag[0] == "true" otherwise ""
}
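For readers unfamiliar with DataWeave, the groupBy/pluck/joinBy pipeline above can be approximated in Python. This is a loose sketch only; the input field names (product-id, category-id, primary-flag) are assumptions based on the XML attributes referenced in the script:

```python
from collections import defaultdict

def group_assignments(assignments):
    # Group category assignments by product id (groupBy), then build
    # one row per product (pluck), joining category ids with ':' (joinBy).
    by_product = defaultdict(list)
    for a in assignments:
        by_product[a["product-id"]].append(a)
    return [{
        "product-id": pid,
        "cat-id": ":".join(x["category-id"] for x in items),
        "field3": ":".join(x["category-id"] for x in items
                           if x.get("primary-flag") == "true"),
    } for pid, items in by_product.items()]
```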
My payload is like:
[{
    product-id = D198561, cat-id = 1111, primary-flag = null, field3 =
}, {
    product-id = D198563, cat-id = 30033, primary-flag = [true], field3 = [30033]
}, {
    product-id = D198566, cat-id = 0933:2104:7043, primary-flag = null, field3 =
}]
I want CSV output from this payload as:
Field1: product-id, Field2: cat-id, Field3: field3
It's giving an error on field3, as it is an array (e.g. field3=[30033]).
Thanks in advance