How to do double-indirection with jq variables?

How can I construct the name of a variable from data and then access that variable?
For example, something like the following should give the content of the alpha file:
jq '$.var"-file"' --slurpfile alpha-file <(echo 0) --slurpfile beta-file <(echo 1) <<<'{"var": "alpha"}'
It should output:
[
0
]

Named arguments are also available to the jq program as $ARGS.named.
So there is a dictionary from which variables can be looked up by a string name:
jq '$ARGS.named["\(.var)-file"]' --slurpfile alpha-file <(echo 0) --slurpfile beta-file <(echo 1) <<<'{"var": "alpha"}'
outputs
[
0
]
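The same $ARGS.named lookup works for any key built from the input; as a sketch (using temporary files instead of process substitution), switching .var to "beta" selects the other slurped file:

```shell
# Two one-value files stand in for the slurped inputs.
printf '0\n' > /tmp/alpha.json
printf '1\n' > /tmp/beta.json

# Build the dictionary key "beta-file" from the input data.
echo '{"var": "beta"}' |
jq -c --slurpfile alpha-file /tmp/alpha.json \
      --slurpfile beta-file  /tmp/beta.json \
      '$ARGS.named["\(.var)-file"]'
# [1]
```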

You could use getpath like this:
jq --slurpfile alphaFile <(echo 0) --slurpfile betaFile <(echo 1) '
.var as $ab | {alpha: $alphaFile, beta: $betaFile} | getpath([$ab])' <<< '{"var": "alpha"}'
Better yet:
jq --slurpfile alpha <(echo 0) --slurpfile beta <(echo 1) '
.var as $ab | {$alpha, $beta} | getpath([$ab])' <<< '{"var": "alpha"}'


print part of a field with awk

I run:
ss -atp | grep -vi state | awk '{ print $2" "$3" "$4" "$5" "$6 }'
output:
0 0 192.168.1.14:49254 92.222.106.156:http users:(("firefox-esr",pid=696,fd=95))
From the last column, I want to strip everything but firefox-esr (in this case); more precisely, I want to fetch only what's between the double quotes.
I have tried:
ss -atp | grep -vi state | awk '{ sub(/users\:\(\("/,"",$6); print $2" "$3" "$4" "$5" "$6 }'
0 0 192.168.1.14:49254 92.222.106.156:http firefox-esr",pid=696,fd=95))
There is still the last part to strip; the problem is that the pid and fd are not a constant value and keep changing.
You might harness gensub's backreference ability for that. For simplicity, let the content of file.txt be
users:(("firefox-esr",pid=696,fd=95))
then
awk '{print gensub(/.*"(.+)".*/,"\\1",1,$1)}' file.txt
outputs:
firefox-esr
Keep in mind that gensub does not alter the string it gets as its 4th argument but returns a new one, so I print it.
You can use
awk '{ gsub(/^[^\"]*\"|\".*/, "", $6); print $2" "$3" "$4" "$5" "$6 }'
Here, gsub(/^[^\"]*\"|\".*/, "", $6) will take Field 6 as input, and remove all chars from start till the first " including it (see the ^[^\"]*\" part) and then the next " and all text after it (using \".*).
See this online awk demo:
s='0 0 0 192.168.1.14:49254 92.222.106.156:http users:(("firefox-esr",pid=696,fd=95))'
awk '{gsub(/^[^\"]*\"|\".*/, "",$6); print $2" "$3" "$4" "$5" "$6 }' <<< "$s"
# => 0 0 192.168.1.14:49254 92.222.106.156:http firefox-esr
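If gensub (GNU awk only) is unavailable, a portable sketch using POSIX match() and substr() extracts the quoted part as well; the sample line below is the stripped output from the question:

```shell
echo '0 0 192.168.1.14:49254 92.222.106.156:http users:(("firefox-esr",pid=696,fd=95))' |
awk '{
  # match() sets RSTART/RLENGTH for the first double-quoted run in the last field
  if (match($NF, /"[^"]+"/))
    print substr($NF, RSTART + 1, RLENGTH - 2)
}'
# firefox-esr
```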

Can't add Dictionary to Repl

How can I add this to the repl with line returns?
import Dict
fruit = Dict.fromList \
[ \
((0,0), 'Apple') \
,((0,1), ' ') \
]
Error:
> fruit = Dict.fromList \
| [ \
| ((0,0), 'Apple') \
| ,((0,1), ' ') \
-- SYNTAX PROBLEM -------------------------------------------- repl-temp-000.elm
The = operator is reserved for defining variables. Maybe you want == instead? Or
maybe you are defining a variable, but there is whitespace before it?
5| fruit = Dict.fromList
Is it just not possible to do this in the repl with lists where you want to add line returns?
Not a language I know, but I thought I'd take a look. This appears to work:
import Dict
fruit = Dict.fromList \
[ \
((0,0), "Apple") \
,((0,1), " ") \
]
You seem to have some trailing whitespace after ,((0,1), ' ') \
Also, I needed double quotes, which appears to be supported by https://elmprogramming.com/string.html
By way of a minimal test, this behaves like your example if the trailing space is included:
import Dict
fruit = Dict.fromList [ \

Using awk to split a record into multiple fields

I have a file with records which are not separated by any delimiter. A sample is shared below:
XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ
I have been given a DDL for the file such that field 1 lies in positions 1-5, field 2 in positions 6-9, and field 3 in positions 10-12.
How can I use the awk command to print the output below?
field1,field2,field3
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
In GNU awk, using FIELDWIDTHS:
$ awk '
BEGIN {
    FIELDWIDTHS = "5 4 3"              # here you state the field widths
    OFS = ","                          # output field separator
    print "field1","field2","field3"   # print header in BEGIN
}
{
    print $1,$2,$3                     # print the 3 first fields; you could
}                                      # also use {$1=$1; print} or even {$1=$1}1
' file
field1,field2,field3
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
If you don't have GNU awk, use f1=substr($0,1,5);f2=substr($0,6,4)...print f1,f2,f3.
Edit:
$ awk '
BEGIN {
OFS=","
print "field1","field2","field3" }
{
f1=substr($0,1,5)
f2=substr($0,6,4)
f3=substr($0,10,3)
print f1,f2,f3 }
' file
The latter as a one-liner with ;s inserted:
$ awk 'BEGIN {OFS=","; print "field1","field2","field3"}{f1=substr($0,1,5); f2=substr($0,6,4); f3=substr($0,10,3); print f1,f2,f3}' file
The former as a one-liner:
$ awk 'BEGIN{FIELDWIDTHS="5 4 3"; OFS=","; print "field1","field2","field3"}{print $1,$2,$3}' file
This might work for you (GNU sed):
sed -e '1i\field1,field2,field3' -e 's/[^,]/,&/6;s//,&/10' file
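As a sketch of why that works: s/[^,]/,&/6 inserts a comma before the 6th non-comma character, and the empty pattern in s//,&/10 reuses the same regex to insert another before the 10th (GNU sed syntax assumed for the one-line 1i\):

```shell
printf 'XXXXXYYYYZZZ\n' |
sed -e '1i\field1,field2,field3' -e 's/[^,]/,&/6;s//,&/10'
# field1,field2,field3
# XXXXX,YYYY,ZZZ
```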
Without using substr(), here's a not-so-elegant awk way to get around it:
echo 'XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ
XXXXXYYYYZZZ' |
mawk 'BEGIN { OFS = ","
print (__ = "field")(++_), (__)(++_), (__)(++_)
_ = (_ = (_ = ".")_)_
__ = "&,"
} sub("." _,__) + sub("," _,__)^_'
field1,field2,field3
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ
XXXXX,YYYY,ZZZ

Editing an output file to be delimited with a semicolon when the input file is a CSV (KornShell)

My input file is CSV
AED,E ,3.67295,20160105,20:10:00,UAE DIRHAM
ATS,E ,10.9814,20160105,20:10:00,AUSTRIAN SHILLINGS
AUD,A ,0.71525,20160105,20:10:00,AUSTRALIAN DOLLAR
I want to read it in and output it like so:
EUR;1.127650;USD/EUR;EURO;Cash
JPY;124.335000;JPY/USD;JAPANESE YEN;Cash
GBP;1.538050;USD/GBP;BRITISH POUND;Cash
Actual code:
cat $FILE2 | while read a b c d e f
do
echo $a $c $a/USD $f Cash \
| awk -F, 'BEGIN { OFS =";" } {print $1, $2, $3, $4, $5}' >> my_ratesoutput.csv
output:
Cash;;;;95 AED/USD UAE DIRHAM
Cash;;;;14 ATS/USD AUSTRIAN SHILLINGS
Cash;;;;25 AUD/USD AUSTRALIAN DOLLAR
Cash;;;;/USD BARBADOS DOLLAR
export IFS=","
semico=';'
FILE=rates.csv
FILE2=rateswork.csv
echo $FILE
rm my_ratesoutput.csv
cp -p $FILE $FILE2
sed 1d $FILE2 > temp.csv
mv temp.csv $FILE2
echo "Currency;Spot Rate;Terms;Name;Curve" >>my_ratesoutput.csv
cat $FILE2 |while read a b c d e f
do
echo $a$semico$c$semico$a/USD$semico$f$semicoCash >> my_ratesoutput.csv
done
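The read/echo loop above can be collapsed into a single awk pass; a minimal sketch, assuming the first line of rates.csv is a header (as the sed 1d step suggests) and the field layout from the sample ($1 = code, $3 = rate, $6 = name):

```shell
# One pass over the raw CSV: skip the header, re-emit with ";" separators.
awk -F, 'BEGIN { OFS = ";"
                 print "Currency;Spot Rate;Terms;Name;Curve" }
         NR > 1 { print $1, $3, $1 "/USD", $6, "Cash" }' rates.csv > my_ratesoutput.csv
```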

Repeat printf arguments with command line operators

I want to repeat the same argument $i for the instances 03-12; I'm trying to use some NCO operators, but the printf statement is hanging me up.
The outputs of printf are meant to be the input files to a netCDF operator, which goes as: ncea -v T,U inputfiles outputfile. While the printf statements work on their own, their output is not being piped into the ncea command.
#!/bin/csh
set i = 1
while ($i < 2)
ncea -v T,U
foreach j ( {3,4,6,7,8,9,10,11,12} )
`printf O3_BDBP_1979ghg.cam.h0.00%02d-%02d.nc $j $i `
end
O3_BDBP_1979.nc
# i = $i + 1
end
Other printf statements I've tried are
ncea -v T,U `printf O3_BDBP_1979ghg.cam.h0.00{03,04,05,06,07,08,09,10,11,12}-%02d.nc $i` O3_BDBP_1979.nc
ncea -v T,U `printf O3_BDBP_1979ghg.cam.h0.00{03,04,05,06,07,08,09,10,11,12}-%1$02d.nc $i` O3_BDBP_1979.nc
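One way to sketch this in POSIX sh (file names kept from the question; the ncea call itself left commented out): printf reuses its format string for each remaining argument, so the fixed $i can be baked into the format first and the instances 03-12 passed as a list:

```shell
i=1
# First bake the fixed instance number into the format string
# (%% survives as a literal %, so %02d is left for the next printf)...
fmt=$(printf 'O3_BDBP_1979ghg.cam.h0.00%%02d-%02d.nc ' "$i")
# ...then let printf repeat that format once per remaining argument.
files=$(printf "$fmt" 3 4 5 6 7 8 9 10 11 12)
echo $files   # ten names, 0003-01.nc through 0012-01.nc
# ncea -v T,U $files O3_BDBP_1979.nc
```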