Escape awk $ in shell function of bashrc - awk

I have a command that gets the next ID of a table from a pool of SQL files. Now I am trying to put this command in ~/.bashrc as a shell function, but I cannot figure out how to escape $ so that it reaches awk instead of being replaced by bash. Here's the code in .bashrc:
function nextval () {
    grep 'INSERT INTO \""$1"\"' *.sql | \
        awk '{print $6}' | \
        cut -c 2- | \
        awk -F "," '{print $1}' | \
        sort -n | \
        tail -n 1 | \
        awk '{print $0+1}'
}
alias nextval=nextval
Usage: # nextval tablename
Escaping with \$ I get the error: awk: backslash not last character on line.
The $ is not inside double quotes, so why is bash replacing it?

Perhaps the part you really need to change is this:
'INSERT INTO \""$1"\"'
to
"INSERT INTO \"$1\""

@konsolebox answered your question, but you could also write the function without so many tools and pipes, e.g.:
function nextval () {
    awk -v tbl="$1" '
        $0 ~ "INSERT INTO \"" tbl "\"" {
            split( substr($6,2), a, /,/ )
            val = ( ((val == "") || (a[1] > val)) ? a[1] : val )
        }
        END { print val+1 }
    ' *.sql
}
It's hard to tell if the above is 100% correct without any sample input or expected output to test it against but it should be close.
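For example, assuming the dump files put the id as the first number inside the sixth whitespace-separated field of each INSERT line (which is what the original pipeline assumed), usage would look like:

$ nextval users
42

where 42 is one more than the highest id already inserted for the (hypothetical) users table.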

Related

execute mkpasswd inside awk

I am trying to use mkpasswd inside awk to compare a field of a file against an encrypted hash (I use Ubuntu):
Execute:
mkpasswd -m sha-512 word abcdefgh
Output:
$6$abcdefgh$SByAdlFKQWuVuMNFUL.ERj1CxsscDs.v6nR2h2cyIkM.PAEUEqaMudTk3I/yfyFeaJY/da4dJto/1wXxMCaok/
Trying:
awk 'mkpasswd -m sha-512 $7 abcdefgh =="$6$abcdefgh$SByAdlFKQWuVuMNFUL.ERj1CxsscDs.v6nR2h2cyIkM.PAEUEqaMudTk3I/yfyFeaJY/da4dJto/1wXxMCaok/"' FS=: file > file1
File:
6:g:g:g:g:g:word1
7:g:g:g:g:g:word
8:g:g:g:g:g:word2
Expected output:
7:g:g:g:g:g:word
awk -F':' '
{
    cmd = "mkpasswd -m sha-512 \047" $7 "\047 abcdefgh"
    sha = ( ((cmd | getline line) > 0) ? line : "N/A" )
    close(cmd)
}
sha == "$6$abcdefgh$SByAdlFKQWuVuMNFUL.ERj1CxsscDs.v6nR2h2cyIkM.PAEUEqaMudTk3I/yfyFeaJY/da4dJto/1wXxMCaok/"
' file > file1
See http://awk.freeshell.org/AllAboutGetline for if/how to use getline, including reading from a pipe as in this case.
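The essential pattern from that page, as a minimal sketch (some_command and the variable names here are illustrative, not part of the original answer):

awk '{
    cmd = "some_command \047" $1 "\047"      # build the shell command; \047 is a single quote
    if ( (cmd | getline line) > 0 )          # successfully read one line of the command output
        print "got:", line
    else
        print "no output" > "/dev/stderr"
    close(cmd)                               # always close the pipe, or open pipes accumulate
}' file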

How to fix executing awk in Tcl?

I am unable to read fields from an awk command in Tcl; the command works in a terminal but not in the Tcl script.
I tried making syntax changes; it still works in the terminal but not in the script.
set a { A B C D E F G H I J K L M N O P Q R S T U V W X Y Z }
#store only cell var in file
exec grep -in "cell (?*" ./slow.lib | cut -d "(" -f2 | cut -d ")" -f1 > cells.txt
#take alphabets to loop
foreach b $a {
puts "$b\n"
if { [ exec cat cells.txt | awk ' $1 ~ /^$b/ ' ] } {
foreach cell [exec cat ./cells.txt] {
puts "$b \t $cell"
}
}
}
The condition should check the first character of each line in the file and return a boolean.
The error is:
can't read "1": no such variable
while executing "exec cat cells.txt | awk ' $1 ~ /^$b/ ' "
Your problem is that Tcl attaches no special meaning at all to the ' character. It uses {…} (which nest better) for the same purpose. Your command:
exec cat cells.txt | awk ' $1 ~ /^$b/ '
should become:
exec cat cells.txt | awk { $1 ~ /^$b/ }
Except… you also want $b (but not $1) to be substituted in there. The easiest way to do that is with format:
exec cat cells.txt | awk [format { $1 ~ /^%s/ } $b]
It would be better still to omit the use of cat here:
exec awk [format { $1 ~ /^%s/ } $b] <cells.txt
You are aware that your whole script can be written in pure Tcl without any use of exec?
can't read "1": no such variable
The (Tcl) error message is very informative: Tcl tries to substitute the value of a Tcl variable named 1 for $1 (which was meant for awk as part of the awk script). This is due to improper quoting of your awk scriptlet. At the same time, you do want $b to be substituted from within Tcl.
Turn the original awk ' $1 ~ /^$b/ ' into awk [string map [list #b# $b] {{$1 ~ /^#b#/}}]. The curly braces preclude Tcl substitution of $1, and #b# will already have been substituted before awk sees it, thanks to [string map].
exec cat cells.txt | awk [string map [list #b# $b] {{$1 ~ /^#b#/}}]
That written, I fail to see why you are going back and forth between grep, awk etc. and Tcl. All of this could be done in Tcl alone.

While using awk, showing fatal: cannot open pipe (Too many open files) error

I was trying to mask a file with the commands tr and awk, but I am failing with the error fatal: cannot open pipe (Too many open pipes). FILE has approximately 1,000,000 records, quite a huge number.
Below is the code I am trying:
awk - F "|" - v OFS="|" '{ "echo \""$1"\" | tr \" 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\" \" QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq\"" | get line $1}1' FILE.CSV > test.CSV
It is showing error :-
awk: (FILENAME=- FNR=1019) fatal: cannot open pipe `echo ""TTP_123"" | tr "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" "QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq"' (Too many open pipes)
Please let me know what I am doing wrong here
Also note: any number of columns could be used for masking, and they can be at any positions. In this example I have taken column positions 1 and 2, but it could be 3 and 10, or 5, 7, and 25.
Thanks
AJ
First things first, you can't have a space between - and F or v.
I was going to suggest sed, but as you only want to translate the first column, that's not as easy.
Unfortunately, awk doesn't have built-in tr functionality, so you'd have to use the shell like you are and just close the pipe:
awk -F "|" -v OFS="|" '{
command="echo \"\\"$1"\\\" | tr \" 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\" \" QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq\""
command | getline $1
close(command)
}1' FILE.CSV > test.CSV
However, I suggest using perl, which can do field splitting and character translation:
perl -F'\|' -lane '$F[0] =~ tr/0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ/QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq/; print join("|", @F)' FILE.CSV > test.CSV
Or, for a shorter command line, just put the program into a file, drop the e in -lane and use the file name instead of the '...' command.
You can do the mapping in awk instead of making a system call for each line, or perhaps simply:
paste -d'|' <(cut -d'|' -f1 file | tr '0-9' 'a-z') <(cut -d'|' -f2- file)
replace the tr arguments with yours.
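With the character sets from the question, that would be something like (an untested sketch):

paste -d'|' \
    <(cut -d'|' -f1 FILE.CSV | tr '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' \
                                  'QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq') \
    <(cut -d'|' -f2- FILE.CSV) > test.CSV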
This does not answer your question, but you can implement tr as an awk function that would save having to spawn lots of external processes
$ cat tr.awk
function tr(str, from, to,   s,i,c,idx) {
    s = ""
    for (i=1; i<=length(str); i++) {
        c = substr(str, i, 1)
        idx = index(from, c)
        s = s (idx == 0 ? c : substr(to, idx, 1))
    }
    return s
}
{
    print $1, tr($1,
        " 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ",
        " QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq")
}
Example:
$ printf "%s\n" hello wor-ld | awk -f tr.awk
hello KGCCN
wor-ld 3N8-CF
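To tie this back to the original masking task, here is a sketch that masks only column 1 with the question's character sets (adjust the field assignments if other columns need masking too):

awk -F'|' -v OFS='|' '
    function tr(str, from, to,   s,i,c,idx) {
        for (i = 1; i <= length(str); i++) {
            c = substr(str, i, 1)
            idx = index(from, c)
            s = s (idx == 0 ? c : substr(to, idx, 1))
        }
        return s
    }
    {
        $1 = tr($1, "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ",
                    "QWERTYUIOPASDFGHJKLZXCVBNM9876543210mnbvcxzlkjhgfdsapoiuytrewq")
        print
    }' FILE.CSV > test.CSV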

AWK how to count patterns in the first column?

I was trying to get the total number of "??", " M", "A" and "D" from this:
?? this is a sentence
M this is another one
A more text here
D more and more text
I have this sample line of code, but it doesn't work:
awk -v pattern="\?\?" '{$1 == pattern} END{print " "FNR}'
$ awk '{ print $1 }' file | sort | uniq -c
1 ??
1 A
1 D
1 M
If for some reason you want an awk-only solution:
awk '{ ++cnt[$1] } END { for (i in cnt) print cnt[i], i }' file
but I think that's needlessly complicated compared to using the built-in unix tools that already do most of the work.
If you just want to count one particular value:
awk -v value='??' '$1 == value' file | wc -l
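Or, a sketch that keeps the count inside awk and skips the extra wc process:

awk -v value='??' '$1 == value { n++ } END { print n+0 }' file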
If you want to count only a subset of values, you can use a regex:
$ awk -v pattern='A|D|(\\?\\?)' '$1 ~ pattern { print $1 }' file | sort | uniq -c
1 ??
1 A
1 D
Here you do need to send a \ so that the ?s are escaped within the regular expression. And because \ is itself a special character within the string being passed to awk, you need to escape it first (hence the double backslash).
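You can check what awk actually receives by printing the variable:

$ awk -v pattern='A|D|(\\?\\?)' 'BEGIN { print pattern }'
A|D|(\?\?)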

awk search a line within a file

I am trying to find occurrences of a STRING in some other file:
First I extract exactly the STRING I want to search for:
grep STRING test.txt | cut -d"," -f3 | tr -d ' '
Now I proceed to search for it in the other file, so my command is:
grep STRING test.txt | cut -d"," -f3 | tr -d ' ' | awk '/$0/' temp.txt
I am getting 0 rows of output, but comparing manually I do find strings common to both files.
You can't pipe like that. You'd need to use command substitution; something like:
grep "$(grep STRING test.txt | cut -d"," -f3 | tr -d ' ')" temp.txt
Alternatively, use awk to do the whole job in one pass:
awk -F, '
    FNR == NR && /STRING/ {        # first file: save field 3 with spaces removed
        gsub(/ /, "")
        a[$3]
        next
    }
    FNR != NR {                    # second file: print lines containing any saved string
        for (i in a)
            if ($0 ~ i) { print; next }
    }
' test.txt temp.txt