Saving/Adding integer values after string is found in text file - awk

I would like to know how I can sum up the integers of text1+text2+text3 from a text file, but if text2 and/or text3 doesn't exist after text1, then I get the integer from text1 only and look for the next text2 and/or text3 after the next text1.
Example text file:
text1:102
text2:123
text3:1432
text1:12
text1:34324
text3:234234
Desired output:
102+123+1432
12
34324+234234
or
1657
12
268558
I am not too sure how to get this integer value and store it.

$ cat a.awk
function printsum() { print t1+t2+t3; t1=t2=t3=0 }
BEGIN {FS=":"}
$1 == "text1" && NR > 1 { printsum() }
$1 == "text1" { t1 = $2 }
$1 == "text2" { t2 = $2 }
$1 == "text3" { t3 = $2 }
END { printsum() }
$ awk -f a.awk file
1657
12
268558

I was going to wait til you posted your attempt, but since you already have an answer...
$ cat tst.awk
BEGIN { FS=":" }
$1 == key { print tot; tot=0 }    # the first tag came around again: print and reset the sum
{ tot += $2 }                     # add every line's number to the running total
NR==1 { key=$1 }                  # remember the first tag seen as the group marker
END { print tot+0 }               # print the final group's total
$ awk -f tst.awk file
1657
12
268558


Data Conversion Using Sed or Awk - Name to Title

I have data in the below format:
APP_OWNER : hari
APP_AREA : Work:Business Area:AUS
APP_ID : 124080
APP_OWNER : ari
APP_AREA : Work:AUS
APP_ID : 124345
I want the data to be converted to the format below.
APP_OWNER,APP_AREA,APP_ID
hari,Work:Business Area:AUS,124080
ari,Work:AUS,124345
I can convert one line, but how do I do it with 3 lines at the same time?
My attempt with one line:
sed '0,/: /s//\n/' test.txt
Original Question : Convert Rows to Columns Shell Script
Regards
Here is an awk solution without hardcoding any value:
awk -F '[[:blank:]]+:[[:blank:]]+' 'NR == 1 {fh=$1} !($1 in hdr) {tr = (tr == "" ? "" : tr ",") $1; hdr[$1]} $1 == fh && NR > 1 {print (body ? "" : tr ORS) td; body=1; td=""} {td = (td == "" ? "" : td ",") $2} END {print td}' file
APP_OWNER,APP_AREA,APP_ID
hari,Work:Business Area:AUS,124080
ari,Work:AUS,124345
Here is a more readable version:
awk -F '[[:blank:]]+:[[:blank:]]+' '
NR == 1 {
    fh = $1
}
!($1 in hdr) {                      # collect headers only once
    tr = (tr == "" ? "" : tr ",") $1
    hdr[$1]
}
$1 == fh && NR > 1 {                # print header and body
    print (body ? "" : tr ORS) td   # if body=1 then print only body
    body = 1
    td = ""
}
{
    td = (td == "" ? "" : td ",") $2
}
END {
    print td
}' file
Could you please try the following, written and tested with the shown samples.
awk -F'[[:blank:]]+:[[:blank:]]+' -v OFS="," '
/^APP_OWNER/{                       # a new record starts here
  if(heading && !count){            # print the collected header once, before the first data row
    count=1
    print heading
  }
  if(val){                          # print the previous record's values
    print val
  }
  val=""
}
count=="" && !($1 in head){         # while still in the first record, collect header names in order
  head[$1]
  heading=(heading?heading OFS:"")$1
}
{
  val=(val?val OFS:"")$2            # append this line's value to the current record
}
END{
  if(val){                          # print the last record's values
    print val
  }
}
' Input_file
$ cat tst.awk
BEGIN { OFS="," }
{
    tag = val = $0
    sub(/[[:space:]]*:.*/,"",tag)        # tag = text before the first ":"
    sub(/[^:]+:[[:space:]]*/,"",val)     # val = text after the first ":"
    tag2val[tag] = val
}
!seen[tag]++ {
    tags[++numTags] = tag                # remember tags in order of first appearance
}
(NR%3) == 0 {                            # every 3 input lines form one record
    if ( !doneHdr++ ) {
        for (tagNr=1; tagNr<=numTags; tagNr++) {
            tag = tags[tagNr]
            printf "%s%s", tag, (tagNr<numTags ? OFS : ORS)
        }
    }
    for (tagNr=1; tagNr<=numTags; tagNr++) {
        tag = tags[tagNr]
        val = tag2val[tag]
        printf "%s%s", val, (tagNr<numTags ? OFS : ORS)
    }
    delete tag2val
}
$ awk -f tst.awk file
APP_OWNER,APP_AREA,APP_ID
hari,Work:Business Area:AUS,124080
ari,Work:AUS,124345

Is it possible to drop a column containing a specific value using Unix?

I have a very large variant-calling dataset. I cannot pull out the result I want.
Here is an example:
bac1 bac2 bac3 bac4
1 0 0 1
Now I want to drop the columns that contain 0 using the Ubuntu command line. The result would be like this:
bac1 bac4
1 1
I tried this
awk -F "\t" -v "pat=0\t" 'NR == 2 {for (i=1; i <= NF; i++) Take[i] = (pat != $i)}{for (i =1; i <= NF; i++) if (Take [i]) printf $i FS; print ""}'
And the output is this:
NC_045512.2 18876 NC_045512.2_18876_T_C T C . PASS GT 1
Header of this output is:
#CHROM POS ID REF ALT QUAL FILTER FORMAT EPI_ISL_422804
So the final output should be like this:
#CHROM POS ID REF ALT QUAL FILTER FORMAT EPI_ISL_422804
NC_045512.2 18876 NC_045512.2_18876_T_C T C . PASS GT 1
The file is not always 2 lines, but it can be at most 4 lines.
It does not return the header line; that's because I used NR == 2. Is there any way I can get the header line as well?
If your input file always only has 1 data line as in your example then:
$ cat tst.awk
BEGIN { FS=OFS="\t" }
NR == 1 { split($0,hdr); next }
{
    for (i = 1; i <= NF; i++) {
        if ($i != 0) {
            cols[++nf] = i
        }
    }
    for (i = 1; i <= nf; i++) {
        printf "%s%s", hdr[cols[i]], (i<nf ? OFS : ORS)
    }
    for (i = 1; i <= nf; i++) {
        printf "%s%s", $(cols[i]), (i<nf ? OFS : ORS)
    }
}
$ awk -f tst.awk file
bac1 bac4
1 1
Otherwise, if your input can have more than 1 data line, then you need a 2-pass approach:
$ cat tst.awk
BEGIN { FS=OFS="\t" }
NR == FNR {
    if (NR > 1) {
        for (i = 1; i <= NF; i++) {
            if ($i == 0) {
                zeroCols[i]
            }
        }
    }
    next
}
FNR == 1 {
    for (i = 1; i <= NF; i++) {
        if (! (i in zeroCols) ) {
            cols[++nf] = i
        }
    }
}
{
    for (i = 1; i <= nf; i++) {
        printf "%s%s", $(cols[i]), (i<nf ? OFS : ORS)
    }
}
$ awk -f tst.awk file file
bac1 bac4
1 1
Long version with if:
awk 'NR==1{
    split($0,array,FS)
}
NR==2{
    s=0
    for(i=1;i<=NF;i++){
        if($i!=0){
            if(s==0){
                s=1
                printf("%s",array[i])
            }
            else{
                printf("%s%s",OFS,array[i])
            }
        }
    }
    print ""
    s=0
    for(i=1;i<=NF;i++){
        if($i!=0){
            if(s==0){
                s=1
                printf("%s",$i)
            }
            else{
                printf("%s%s",OFS,$i)
            }
        }
    }
    print ""
}' FS='\t' OFS="\t" file
One line:
awk 'NR==1{split($0,array,FS)} NR==2{s=0; for(i=1;i<=NF;i++) {if($i!=0) {if(s==0) {s=1; printf("%s",array[i])} else {printf("%s%s",OFS,array[i])}}} print ""; s=0; for(i=1;i<=NF;i++){if($i!=0){if(s==0){s=1; printf("%s",$i)} else {printf("%s%s",OFS,$i)}}} print ""}' FS='\t' OFS="\t" file
Output:
bac1 bac4
1 1

AWK - Working with two files

I have these two csv files:
File A:
veículo;carro;sust
automóvel;carro;sust
viatura;carro;sust
breve;rápido;adj
excepcional;excelente;adj
maravilhoso;excelente;adj
amistoso;simpático;adj
amigável;simpático;adj
...
File B:
"A001","carro","sust","excelente","adj","ocorrer","adv","bom","adj"
...
In file A, $1 (a word) is a synonym for $2 (a word), and $3 is the part of speech.
In the lines of file B we can skip $1; the remaining columns are pairs of a word and its part of speech.
What I need to do is look up, line by line, each (word, part-of-speech) pair from file B in file A and generate a line for each synonym. It is difficult to explain.
Desired Output:
"A001","carro","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","carro","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","carro","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
Done:
BEGIN {
    FS="[,;]";
    OFS=";";
}
FNR==NR{
    sinonim[$1","$2","$3]++;
    next;
}
{
    s1=split($0,AX,"\n");
    for (i=1;i<=s1;i++)
    {
        s2=split(AX[i],BX,",");
        for (j=2;j<=NF;j+=2)
        {
            lineX=BX[j]","BX[j+1];
            gsub(/\"/,"",lineX);
            for (item in sinonim)
            {
                s3=split(item,CX,",");
                lineS=CX[2]","CX[3];
                if (lineX == lineS)
                {
                    BX[j]=CX[1];
                    lineD=""
                    for (t=1;t<=s2;t++)
                    {
                        lineD=lineD BX[t]",";
                    }
                    lineF=lineF lineD"\n";
                }
            }
        }
    }
    print lineF
}
$ cat tst.awk
BEGIN { FS=";" }
NR==FNR { synonyms[$2,$3][$2]; synonyms[$2,$3][$1]; next }
FNR==1 { FS=OFS="\",\""; $0=$0 }
{
    gsub(/^"|"$/,"")
    for (i=2;i<NF;i+=2) {
        if ( ($i,$(i+1)) in synonyms) {
            for (synonym in synonyms[$i,$(i+1)]) {
                $i = synonym
                for (j=2;j<NF;j+=2) {
                    if ( ($j,$(j+1)) in synonyms) {
                        for (synonym in synonyms[$j,$(j+1)]) {
                            orig = $0
                            $j = synonym
                            if (!seen[$0]++) {
                                print "\"" $0 "\""
                            }
                            $0 = orig
                        }
                    }
                }
            }
        }
    }
}
$ awk -f tst.awk fileA fileB
"A001","carro","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","excelente","adj","ocorrer","adv","bom","adj"
"A001","carro","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","carro","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","veículo","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","automóvel","sust","excepcional","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","maravilhoso","adj","ocorrer","adv","bom","adj"
"A001","viatura","sust","excepcional","adj","ocorrer","adv","bom","adj"
The above uses GNU awk for true multidimensional arrays; with other awks it's a simple tweak to use synonyms[$2,$3] = synonyms[$2,$3] " " $2 or similar and then split() later, instead of synonyms[$2,$3][$2] and in, as sketched below.
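Here is a minimal sketch of that POSIX-awk variant (untested beyond the shown samples; it keeps the structure of the script above and only swaps the subarrays for space-separated strings that are split() at lookup time):
BEGIN { FS=";" }
NR==FNR {
    if ( !(($2,$3) in synonyms) )
        synonyms[$2,$3] = $2                    # keep the original word itself as one option
    synonyms[$2,$3] = synonyms[$2,$3] " " $1    # append this line's synonym to the list
    next
}
FNR==1 { FS=OFS="\",\""; $0=$0 }
{
    gsub(/^"|"$/,"")
    for (i=2;i<NF;i+=2) {
        if ( ($i,$(i+1)) in synonyms ) {
            ni = split(synonyms[$i,$(i+1)], isyns, " ")
            for (si=1; si<=ni; si++) {
                $i = isyns[si]
                for (j=2;j<NF;j+=2) {
                    if ( ($j,$(j+1)) in synonyms ) {
                        nj = split(synonyms[$j,$(j+1)], jsyns, " ")
                        for (sj=1; sj<=nj; sj++) {
                            orig = $0
                            $j = jsyns[sj]
                            if (!seen[$0]++)
                                print "\"" $0 "\""
                            $0 = orig
                        }
                    }
                }
            }
        }
    }
}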
BEGIN { FS="[,;]"; OFS="," }
NR == FNR { key = "\"" $2 "\""; synonym[key] = synonym[key] "," $1; next }
{
    print;
    if ($2 in synonym) {
        count = split(substr(synonym[$2], 2), choices)
        for (i = 1; i <= count; i++) {
            $2 = "\"" choices[i] "\""
            print
        }
    }
}
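Assuming this script is saved as, say, syn.awk (a hypothetical name), it would be run with file A first so the synonym table is loaded before file B is processed:
awk -f syn.awk fileA fileB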

Merge files on the basis of 2 fields

file1
session=1|w,eventbase=4,operation=1,rule=15
session=1|e,eventbase=5,operation=2,rule=14
session=2|t,eventbase=,operation=1,rule=13
file2
field1,field2,field3,session=1,fieldn,operation=1,fieldn
field1,field2,field3,session=1,fieldn,operation=2,fieldn
field1,field2,field3,session=2,fieldn,operation=2,fieldn
field1,field2,field3,session=2,fieldn,operation=1,fieldn
Output
field1,field2,field3,session=1,fieldn,operation=1,fieldn,eventbase=4,rule=15
field1,field2,field3,session=1,fieldn,operation=2,fieldn,eventbase=5,rule=14
field1,field2,field3,session=2,fieldn,operation=2,fieldn,NOMATCH
field1,field2,field3,session=2,fieldn,operation=1,fieldn,eventbase=,rule=13
I have tried:
BEGIN { FS = OFS = "," }
FNR == NR {
    split($1,s,"|")
    session=s[1];
    a[session,$3] = session","$2","$3","$4;
    next
}
{
    split($4,x,"|");
    nsession=x[1];
    if(nsession in a)print $0 a[nsession,$6];
    else print $0",NOMATCH";
}
The issue is that I am not able to find nsession in the 2D array a with if(nsession in a),
matching the 2 files on the combination of session and operation.
Thanks, it helped. Now I am learning. :) Thanks, team.
BEGIN { FS = OFS = "," }
FNR == NR {
    split($1,s,"|")
    session=s[1];
    a[session,$3] = session","$2","$3","$4;
    next
}
{
    split($4,x,"|");
    nsession=x[1];
    key=nsession SUBSEP $6
    if(key in a)print $0 a[nsession,$6];
    else print $0",NOMATCH";
}
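For context (an illustration added here, not part of the original answer): a[session,$3] is not a true 2D array; awk joins the two subscripts with SUBSEP into a single string key, which is why a plain nsession in a never matches. Instead of building the key by hand, the lookup block could equally use awk's multi-subscript membership test:
{
    split($4,x,"|");
    nsession=x[1];
    if((nsession,$6) in a)print $0 a[nsession,$6];
    else print $0",NOMATCH";
}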
You can try
awk -f merge.awk file1 file2
where merge.awk is
# requires GNU awk for the three-argument match(string, regexp, array)
NR==FNR {
    sub(/[[:blank:]]*$/,"")
    getSessionInfo(1)
    ar[ses,op]=",eventbase="evb",rule="rule
    next
}
{
    sub(/[[:blank:]]*$/,"")
    getSessionInfo(0)
    if ((ses,op) in ar)
        print $0 ar[ses,op]
    else
        print $0 ",NOMATCH"
}
function getSessionInfo(f, a) {
    match($0,/session=([^|,]*)[|,]/,a)      # [^|,]* so the value may be any length
    ses=a[1]
    match($0,/operation=([^,]*),/,a)
    op=a[1]
    if (f) {
        match($0,/eventbase=([^,]*),/,a)    # also allows an empty eventbase value
        evb=a[1]
        match($0,/rule=(.*)$/,a)
        rule=a[1]
    }
}
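As a side note (a minimal illustration, not from the original answer), the three-argument match() used in getSessionInfo is a GNU awk extension that stores the capture groups in the array argument:
$ echo 'session=12|w,operation=3,rule=7' | gawk '{ match($0, /session=([^|,]*)[|,]/, m); print "session is", m[1] }'
session is 12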

awk: Print strings in file1 not matching strings in file2

I have two files with comma-separated values. I want to remove all the strings in file1 that match strings in file2.
file1:
soap,cosmetics,june,hello,good
file2:
june,hello
output:
soap,cosmetics,good
I tried this, but it's not working, and I'm not sure where I'm going wrong. Any help appreciated.
BEGIN {
    FS=","
}
NR==FNR {
    a[NR]=$0
    next
}
{
    for (j=1;j<=NF;j++) {
        split($0, d, ",")
        if (d[j] in a == 0) {
            line = (line ? line "," : "") d[j]
        }
    }
    print line
    line = ""
}
Here's one way using awk. Run like:
awk -f script.awk file2 file1
Contents of script.awk:
BEGIN {
    FS=","
}
FNR==NR {
    for(i=1;i<=NF;i++) {
        a[$i]
    }
    next
}
{
    for(j=1;j<=NF;j++) {
        if (!($j in a)) {
            r = (r ? r FS : "") $j
        }
    }
}
END {
    print r
}
Results:
soap,cosmetics,good
Alternatively, here's the one-liner:
awk -F, 'FNR==NR { for(i=1;i<=NF;i++) a[$i]; next } { for(j=1;j<=NF;j++) if (!($j in a)) r = (r ? r FS : "") $j } END { print r }' file2 file1
$ gawk -v RS='[,\n]' 'NR==FNR{a[$0];next} !($0 in a){o=o s $0;s=","} END{print o}' file2 file1
soap,cosmetics,good
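The same logic spelled out as a commented script (a long-form sketch of the one-liner above):
$ cat tst.awk
BEGIN { RS = "[,\n]" }               # GNU awk: treat every comma- or newline-separated word as its own record
NR == FNR { a[$0]; next }            # first file (file2): remember each word that should be removed
!($0 in a) { o = o s $0; s = "," }   # second file (file1): keep words not listed in file2
END { print o }
$ gawk -f tst.awk file2 file1
soap,cosmetics,good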