I have a logfile.txt and I want to refer to field $4, but by character position rather than by field number, because the fields are separated by space characters and field 2 ($2) may contain values that themselves include spaces. I want to count lines, but I don't know how to specify $4 without the spaces inside field 2 shifting the field numbering.
here is my file:
KJKJJ1KLJKJKJ928482711 PIEJHHKIA 87166188177633 AJHHHH77760 00666667 876876800874 2014100898798789979879877770
KJKJJ1KLJKJKJ928482711 HKHG 81882776553868 HGHALJLKA700 00876763 216897879879 2014100898798789979879877770
KJKJJ1KLJKJKJ928482711 UUT UGGT 81762665356426 HGJHGHJG661557008 00778787 268767860704 2014100898798789979879877770
KJKJJ1KLJKJKJ9284827kj ARTH HGG 08276255534867 HGJHGHJG661557008 00876767 212668767684 2014100898798789979879877770
here is the code :
awk '{ k = $4; c[k]++; f[k] = substr($0,137,8) } END { OFS = "\t"; for (k in c) print c[k], k, f[k] }' logfile.txt
I want to count based on field $4, but because of the embedded spaces that field can only be located by character position (substr($0, start, length)).
The output should be:
1 20141008 AJHHHH77760
1 20141008 HGHALJLKA700
2 20141008 HGJHGHJG661557008
If your records are composed of fixed-width fields, you can use cut(1):
% cut -c1-22,23-42,43-62,... --output-delimiter=, file | sed 's/, */,/g' > file.csv
% awk -F, '{your_code}' file.csv
Write a range for each of your fixed-width fields in place of the ... ellipsis; I have written ranges only for the first three, lazy me. One caveat: GNU cut merges adjacent character ranges before producing output, so leave the separator column out of each range (e.g. 1-22,24-42 rather than 1-22,23-42), or no delimiter will be inserted between the merged ranges.
If you don't want to bother with an intermediate file, just use a | pipe.
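Here is a self-contained sketch of the idea, combining the cut step with the counting the question asks for. The file, its column ranges, and the field layout below are invented for the demo, not measured from the OP's logfile:

```shell
# Tiny fixed-width demo file: cols 1-3 id, cols 5-9 name (may contain
# a space), cols 11-13 code; cols 4 and 10 are separator spaces.
printf '%s\n' \
  'K01 AB CD X7A' \
  'K02 EF GH X7A' \
  'K03 IJ KL X9B' > demo.txt

# Cut by character position (so the embedded space in the name field
# cannot shift anything), then count occurrences of the third field.
# The ranges deliberately skip the separator columns, since GNU cut
# merges adjacent ranges and would otherwise emit no delimiter.
cut -c1-3,5-9,11-13 --output-delimiter=, demo.txt |
  awk -F, '{ c[$3]++ } END { for (k in c) print c[k], k }'
```

Note that --output-delimiter is a GNU extension; on other systems you would need a different tool (e.g. awk's substr) to split the columns.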
I have a file with two columns: the first contains strings (IDs) and the second values. The strings contain a variable number of dots, and I would like to split each string at its last dot. I found in the forum how to remove the part after the last dot, but I don't want to remove it; I would like to move the last part of each string into a new column, using a bash command (e.g. awk).
Example of strings:
5_8S_A.3-C_1.A 50
6_FS_B.L.3-O_1.A 20
H.YU-201.D 80
UI-LP.56.2011.A 10
Example of output:
5_8S_A.3-C_1 A 50
6_FS_B.L.3-O_1 A 20
H.YU-201 D 80
UI-LP.56.2011 A 10
I tried the following command, but it only works when there is exactly one dot in the string:
awk -F' ' '{split($1, arr, "."); print arr[1] "\t" arr[2] "\t" $2}' file.txt
You may use this sed:
sed -E 's/^([[:blank:]]*[^[:blank:]]+)\.([^[:blank:]]+)/\1 \2/' file
5_8S_A.3-C_1 A 50
6_FS_B.L.3-O_1 A 20
H.YU-201 D 80
UI-LP.56.2011 A 10
Details:
^: Start
([[:blank:]]*[^[:blank:]]+): Capture group #1: matches 0 or more whitespace characters followed by 1+ non-whitespace characters
\.: Match a dot. Because the preceding [^[:blank:]]+ is greedy, the dot matched here is the last dot in the field
([^[:blank:]]+): Capture group #2: matches 1+ non-whitespace characters
\1 \2: Replacement to place a space between capture value #1 and capture value #2
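To see the greediness in action, here is the same sed command applied to the sample line with the most dots:

```shell
# The greedy [^[:blank:]]+ inside group #1 consumes as much as it can,
# so the literal \. ends up matching the LAST dot of the first field:
echo 'UI-LP.56.2011.A 10' |
  sed -E 's/^([[:blank:]]*[^[:blank:]]+)\.([^[:blank:]]+)/\1 \2/'
# -> UI-LP.56.2011 A 10
```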
Assumptions:
each line consists of two (white) space delimited fields
first field contains at least one period (.)
Sticking with OP's desire (?) to use awk:
awk '
{ n=split($1,arr,".") # split first field on period (".")
pfx=""
for (i=1;i<n;i++) { # print all but the nth array entry
printf "%s%s",pfx,arr[i]
pfx="."}
print "\t" arr[n] "\t" $2} # print last array entry and last field of line
' file.txt
Removing comments and reducing to a one-liner:
awk '{n=split($1,arr,"."); pfx=""; for (i=1;i<n;i++) {printf "%s%s",pfx,arr[i]; pfx="."}; print "\t" arr[n] "\t" $2}' file.txt
This generates:
5_8S_A.3-C_1 A 50
6_FS_B.L.3-O_1 A 20
H.YU-201 D 80
UI-LP.56.2011 A 10
With your shown samples, here is one more variant: a rev + awk solution.
rev Input_file | awk '{sub(/\./,OFS)} 1' | rev
Explanation: rev prints each line in reverse character order (last character first); its output is piped into an awk program that substitutes the first dot (which, per OP's samples, is the last dot of the original line) with OFS and prints every line; that output is then piped through rev again to restore the original character order (undoing the first rev).
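Tracing one sample line through each stage of the pipeline makes the trick easier to see:

```shell
# Stage by stage on a single sample line:
echo 'H.YU-201.D 80' | rev
# -> 08 D.102-UY.H    (line reversed; the last dot is now the first)
echo 'H.YU-201.D 80' | rev | awk '{sub(/\./,OFS)} 1'
# -> 08 D 102-UY.H    (first dot replaced with a space)
echo 'H.YU-201.D 80' | rev | awk '{sub(/\./,OFS)} 1' | rev
# -> H.YU-201 D 80    (reversed back to the original order)
```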
$ sed 's/\.\([^.]*$\)/\t\1/' file
5_8S_A.3-C_1 A 50
6_FS_B.L.3-O_1 A 20
H.YU-201 D 80
UI-LP.56.2011 A 10
I have a text file with multiple rows of either two or four columns. In a two-column row, the 1st column is an id and the 2nd a number; in a four-column row, the 1st and 2nd columns are ids and the 3rd and 4th are numbers, and the 2nd and 4th columns may contain multiple comma-separated entries. I want two-column rows printed as they are; for four-column rows I want to print only the 1st column's id and, in the second column, the sum of all the numbers in that row's 3rd and 4th columns.
Input
CG AT,AA,CA 17 1,1,1
GT 14
TB AC,TC,TA,GG,TT,AR,NN,NM,AB,AT,TT,TC,CA,BB,GT,AT,XT,MT,NA,TT 552 6,1,1,2,2,1,2,1,5,3,4,1,2,1,1,1,3,4,5,4
TT CG,GT,TA,GB 105 3,4,1,3
Expected Output
CG 20
GT 14
TB 602
TT 116
The input shown has leading spaces, so with the field separator below the first field is empty and the ID lands in $2; if there are no leading spaces in the actual file, use $1 instead of $2.
$ awk -F '[ ,]+' '{for(i=1; i<=NF; i++) s+=$i; print $2, s; s=0}' <<EOF
CG AT,AA,CA 17 1,1,1
GT 14
TB AC,TC,TA,GG,TT,AR,NN,NM,AB,AT,TT,TC,CA,BB,GT,AT,XT,MT,NA,TT 552 6,1,1,2,2,1,2,1,5,3,4,1,2,1,1,1,3,4,5,4
TT CG,GT,TA,GB 105 3,4,1,3
EOF
CG 20
GT 14
TB 602
TT 116
-F '[ ,]+' means "fields are delimited by one or more spaces or commas".
There is no condition associated with the {action}, so it will be performed on every line.
NF is the Number of Fields, and $X refers to the Xth field.
Non-numeric strings evaluate to 0 in a numeric context, so we can simply add every field together to get a sum.
After we print the first non-blank field and our sum, we reset the sum for the next line.
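The "strings count as 0" point is easy to verify in isolation (the line below is made up for the demo):

```shell
# In a numeric context awk treats a non-numeric string as 0, which is
# why the ID fields do not disturb the sum:
echo 'TB 5 xyz 7' |
  awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s }'
# -> 12   ("TB" and "xyz" each contribute 0)
```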
Here is a solution coded to follow your instruction as closely as possible (with no field-splitting tricks so that it's easy to reason about):
awk '
NF == 2 {
print $1, $2
next
}
NF == 4 {
N = split($4, f, /,/)
for (i = 1; i <= N; ++i)
$3 += f[i]
print $1, $3
}'
I noticed though that your input section contains leading spaces. If leading spaces are actually present (and are irrelevant), we can add a leading { sub(/^ +/, "") } to the script.
Here is what I am doing.
The text file is comma-separated and has three fields,
and I want to extract every line whose second field
occurs more than three times.
Text file (filename is "text"):
11,keyword1,content1
4,keyword1,content3
5,keyword1,content2
6,keyword2,content5
6,keyword2,content5
7,keyword1,content4
8,keyword1,content2
1,keyword1,content2
My command is below: inside awk it cats the whole text file, greps for the second field of the current line, and counts the matching lines.
If that count is greater than 2, it prints the whole line.
The command:
awk -F "," '{ "cat text | grep "$2 " | wc -l" | getline var; if ( 2 < var ) print $0}' text
However, the command outputs only the first three consecutive lines,
and does not also print the last three lines containing "keyword1", which occurs six times in the text.
Result:
11,keyword1,content1
4,keyword1,content3
5,keyword1,content2
My expected result:
11,keyword1,content1
4,keyword1,content3
5,keyword1,content2
7,keyword1,content4
8,keyword1,content2
1,keyword1,content2
Can somebody tell me what I am doing wrong?
It is relatively straightforward to make just two passes over the file. In the first pass, you count the number of occurrences of each value in column 2. In the second pass, you print the rows where the value in column 2 occurs more than your threshold of 3 times.
awk -F, 'FNR == NR { count[$2]++ }
FNR != NR { if (count[$2] > 3) print }' text text
The first line of code handles the first pass; it counts the occurrences of each different value of the second column.
The second line of code handles the second pass; if the value in column 2 was counted more than 3 times, print the whole line.
This doesn't work if the input is only available on a pipe rather than as a file (so you can't make two passes over the data). Then you have to work much harder.
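One option for the piped case (a sketch, not the two-pass answer above) is a single pass that buffers every line in memory and decides what to print in END; order is preserved, but memory use grows with the input:

```shell
# Single-pass variant for piped input: remember each line and its key,
# count the second field, then print qualifying lines in END.
cat text | awk -F, '
    { line[NR] = $0; key[NR] = $2; count[$2]++ }
    END { for (i = 1; i <= NR; i++) if (count[key[i]] > 3) print line[i] }'
```

On the sample file this prints the six keyword1 lines, matching the expected result.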
echo "45" | awk 'BEGIN{FS=""}{for (i=1;i<=NF;i++)x+=$i}END{print x}'
I want to know how this works: what, specifically, do awk's FS and NF do here?
FS is the field separator. Setting it to "" (the empty string) means that every single character will be a separate field. So in your case you've got two fields: 4, and 5.
NF is the number of fields in a given record. In your case, that's 2. So i ranges from 1 to 2, which means that $i takes the values 4 and 5.
So this AWK script iterates over the characters and prints their sum — in this case 9.
These are built-in variables: FS is the Field Separator (blank means split out each character) and NF is the number of fields produced by FS, so in this case the number of characters, 2. The input is split into its characters ("4", "5"), each character is added to a running total, and the result is printed.
http://www.thegeekstuff.com/2010/01/8-powerful-awk-built-in-variables-fs-ofs-rs-ors-nr-nf-filename-fnr/
FS is the field separator. Normally fields are separated by whitespace, but when you set FS to the null string, each character of the input line is a separate field.
NF is the number of fields in the current input line. Since each character is a field, in this case it's the number of characters.
The for loop then iterates over each character on the line, adding it to x. So this is adding the value of each digit in input; for 45 it adds 4+5 and prints 9.
I have a file that contains a number of fields separated by tabs. I am trying to print all columns except the first one, but all in a single column, with AWK. The format of the file is
col 1 col 2 ... col n
There are at least 2 columns in one row.
Sample
2012029754 901749095
2012028240 901744459 258789
2012024782 901735922
2012026032 901738573 257784
2012027260 901742004
2003062290 901738925 257813 257822
2012026806 901741040
2012024252 901733947 257493
2012024365 901733700
2012030848 901751693 260720 260956 264843 264844
So I want to tell awk to print columns 2 through n (for n greater than 2), all in one column, without printing blank lines when there is no info in column n of a row, like the following:
901749095
901744459
258789
901735922
901738573
257784
901742004
901738925
257813
257822
901741040
901733947
257493
901733700
901751693
260720
260956
264843
264844
This is the first time I am using awk, so bear with me. I wrote this from the command line, and it works:
awk '{i=2;
while ($i ~ /[0-9]+/)
{
printf "%s\n", $i
i++
}
}' bth.data
This is more me seeking approval than asking a question: is this the right way of doing something like this in AWK, or is there a better/shorter way?
Note that the actual input file could be millions of lines.
Thanks
Is this what you want as output?
awk '{for(i=2; i<=NF; i++) print $i}' bth.data
gives
901749095
901744459
258789
901735922
901738573
257784
901742004
901738925
257813
257822
901741040
901733947
257493
901733700
901751693
260720
260956
264843
264844
NF is one of several pre-defined awk variables. It indicates the number of fields on a given input line. For instance, it is useful if you always want to print out the last field of a line: print $NF. It is also useful, of course, if you want to iterate through all or part of the fields on a given line to the end of the line.
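A quick demonstration of NF and $NF on one of the sample rows:

```shell
# NF is the field count for the current line; $NF is the last field.
printf '2012028240 901744459 258789\n' | awk '{ print NF, $NF }'
# -> 3 258789
```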
Seems like awk is the wrong tool. I would do:
cut -f 2- < bth.data | tr -s '\t' '\n'
Note that with -s, this avoids printing blank lines as stated in the original problem.
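Sketching the cut | tr pipeline on two of the sample rows, with the tabs written explicitly via printf:

```shell
# -f 2- drops the first tab-separated field; tr -s '\t' '\n' turns the
# remaining tabs into newlines, squeezing any runs of tabs so that no
# blank lines appear in the output.
printf '2012030848\t901751693\t260720\n2012024365\t901733700\n' |
  cut -f 2- | tr -s '\t' '\n'
# -> 901751693
#    260720
#    901733700
```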