awk: print each column of a file into separate files
I have a file with 100 columns of data. I want to print the first column and the i-th column into 99 separate files. I am trying to use:
for i in {2..99}; do awk '{print $1" " $i }' input.txt > data${i}; done
But I am getting this error:
awk: illegal field $(), name "i"
input record number 1, file input.txt
source line number 1
How do I correctly use $i inside {print}?
The following single awk invocation may help here. It closes each file after every write, so you never hold 98 files open at once; note that it appends with >>, so remove any old output files before re-running (a plain > would truncate the file every time it is reopened after a close(), leaving only the last line):

awk -v start=2 -v end=99 '{for(i=start;i<=end;i++){print $1,$i >> ("file" i); close("file" i)}}' Input_file
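As a quick sanity check, here is a toy run (the two-line Input_file below is made up for illustration; since the command appends, delete any leftover file2/file3 first):

$ printf '1 2 3\n4 5 6\n' > Input_file
$ awk -v start=2 -v end=3 '{for(i=start;i<=end;i++){print $1,$i >> ("file" i); close("file" i)}}' Input_file
$ cat file2
1 2
4 5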
An all-awk solution. First, some test data:
$ cat foo
11 12 13
21 22 23
Then the awk:
$ awk '{for(i=2;i<=NF;i++) print $1,$i > ("data" i)}' foo
and results:
$ ls data*
data2 data3
$ cat data2
11 12
21 22
The for iterates from 2 to the last field. If there are more fields than you desire to process, change the NF to the number you'd like. If, for some reason, a hundred open files would be a problem on your system, you'd need to put the print into a block and add a close call:
$ awk '{for(i=2;i<=NF;i++){f=("data" i); print $1,$i >> f; close(f)}}' foo
If you want to fix your original approach:
for i in {2..99}; do
awk -v x=$i '{print $1" " $x }' input.txt > data${i}
done
Note:
the -v switch of awk passes a shell variable into awk
$x is then the i-th column, as given by your variable x
your original command failed because the shell never expands $i inside single quotes, so awk saw its own uninitialized variable i, and $i became the illegal field $()
Note 2: this is not the fastest solution (a single awk call is faster), but it corrects your logic. Ideally, take the time to understand awk; it's never wasted time.
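As a quick illustration of the -v mechanism (the sample line here is made up):

$ echo 'a b c' | awk -v x=3 '{print $x}'
c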
Related
awk conditional statement based on a value between colon
I was just introduced to awk and I'm trying to retrieve rows from my file based on the value in column 10. I need to filter the data based on the third value, where ":" is used as a separator in column 10 (the last column). Here is example data from column 10:

0/1:1,9:10:15:337,0,15

I was able to extract the third value using this command:

awk '{print $10}' file.txt | awk -F ":" '/1/ {print $3}'

This returns the value 10, but how can I return whole rows (not just the value in column 10) when this third value is less than or greater than a specific number? I tried this:

awk '{if($10 -F ":" "/1/ ($3<10))" print $0;}' file.txt

but it returns a syntax error. Thanks!
Your code:

awk '{print $10}' file.txt | awk -F ":" '/1/ {print $3}'

should be just 1 awk script:

awk '$10 ~ /1/ { split($10,f,/:/); print f[3] }' file.txt

but I'm not sure that code is doing what you think it does. If you want to print the 3rd value of all $10s that contain :s, as it sounds like from your text, that'd be:

awk 'split($10,f,/:/) > 1 { print f[3] }' file.txt

and to print the rows where that value is less than 7 it would be:

awk '(split($10,f,/:/) > 1) && (f[3] < 7)' file.txt
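For instance, with the sample value from the question in column 10 (the first nine fields below are placeholders):

$ echo 'f1 f2 f3 f4 f5 f6 f7 f8 f9 0/1:1,9:10:15:337,0,15' > file.txt
$ awk '$10 ~ /1/ { split($10,f,/:/); print f[3] }' file.txt
10
$ awk '(split($10,f,/:/) > 1) && (f[3] < 7)' file.txt    # prints nothing: f[3] is 10, not < 7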
Sort a file preserving the header as first position with bash
When sorting a file, I am not preserving the header in its position:

file_1.tsv:

Gene Number
a 3
u 7
b 9

sort -k1,1 file_1.tsv

Result:

a 3
b 9
Gene Number
u 7

So I am trying this code:

sed '1d' file_1.tsv | sort -k1,1 > file_1_sorted.tsv
first='head -1 file_1.tsv'
sed '1 "$first"' file_1_sorted.tsv

What I did was remove the header and sort the rest of the file, and then try to add the header back. But I am not able to get this last part working, so I would like to know how I can copy the header of the original file and insert it as the first row of the new file without overwriting its actual first row.
You can do this as well:

{ head -1; sort; } < file_1.tsv

Update

For macOS, whose head may read ahead and leave the shared file offset past more than the first line, replace head with the read builtin, which consumes exactly one line:

{ IFS= read -r header; printf '%s\n' "$header" ; sort; } < file_1.tsv
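A sample run with the file_1.tsv from the question:

$ { IFS= read -r header; printf '%s\n' "$header"; sort; } < file_1.tsv
Gene Number
a 3
b 9
u 7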
A simpler awk:

$ awk 'NR==1{print; next} {print | "sort"}' file
$ head -1 file; tail -n +2 file | sort

Output:

Gene Number
a 3
b 9
u 7
Could you please try the following.

awk '
FNR==1{
  first=$0
  next
}
{
  val=(val?val ORS:"")$0
}
END{
  print first
  print val | "sort"
}
' Input_file

Logical explanation: the FNR==1 condition fires on the first line; its value is saved to a variable and next moves on to the next line. Every other line is appended to a second variable, separated by ORS, until the last line. The END block, which executes once Input_file has been read completely, prints the saved first line and then pipes the accumulated rest through the sort command.
This will work using any awk, sort, and cut in any shell on every UNIX box; it works whether the input is coming from a pipe (when you can't read it twice) or from a file (when you can), and it doesn't involve awk spawning a subshell:

awk -v OFS='\t' '{print (NR>1), $0}' file | sort -k1,1n -k2,2 | cut -f2-

The above uses awk to stick a 0 at the front of the header line and a 1 in front of the rest so you can sort by that number, then by whatever other field(s) you want to sort on, and then remove the added field again with cut. Here it is in stages:

$ awk -v OFS='\t' '{print (NR>1), $0}' file
0	Gene Number
1	a 3
1	u 7
1	b 9

$ awk -v OFS='\t' '{print (NR>1), $0}' file | sort -k1,1n -k2,2
0	Gene Number
1	a 3
1	b 9
1	u 7

$ awk -v OFS='\t' '{print (NR>1), $0}' file | sort -k1,1n -k2,2 | cut -f2-
Gene Number
a 3
b 9
u 7
linux csv file concatenate columns into one column
I've been looking to do this with sed, awk, or cut. I am willing to use any other command-line program that I can pipe data through. I have a large set of data that is comma delimited. The rows have between 14 and 20 columns. I need to repeatedly concatenate column 10 with column 11, per row, such that every row has exactly 14 columns. In other words, this:

a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p

will become:

a,b,c,d,e,f,g,h,i,jkl,m,n,o,p

I can get the first 10 columns. I can get the last N columns. I can concatenate columns. I cannot think of how to do it in one line so I can pass a stream of endless data through it and end up with exactly 14 columns per row. Examples (by request):

How many columns are in the row?

sed 's/[^,]//g' | wc -c

Get the first 10 columns:

cut -d, -f1-10

Get the last 4 columns:

rev | cut -d, -f1-4 | rev

Concatenate columns 10 and 11, showing columns 1-10 after that:

awk -F',' ' NF { print $1","$2","$3","$4","$5","$6","$7","$8","$9","$10$11}'
Awk solution:

awk 'BEGIN{ FS=OFS="," }
{
  diff = NF - 14
  for (i=1; i <= NF; i++)
    printf "%s%s", $i, (diff > 0 && i >= 10 && i < (10+diff) ? "" : (i == NF ? ORS : ","))
}' file

The output:

a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
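A quick check against a 15-column row (diff = 1, the edge case where exactly one pair of fields must be merged; the input line is made up):

$ echo 'a,b,c,d,e,f,g,h,i,j,k,l,m,n,o' | awk 'BEGIN{ FS=OFS="," } { diff = NF - 14; for (i=1; i <= NF; i++) printf "%s%s", $i, (diff > 0 && i >= 10 && i < (10+diff) ? "" : (i == NF ? ORS : ",")) }'
a,b,c,d,e,f,g,h,i,jk,l,m,n,o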
With GNU awk for the 3rd arg to match() and gensub():

$ cat tst.awk
BEGIN{ FS="," }
match($0,"(([^,]+,){9})(([^,]+,){"NF-14"})(.*)",a) {
    $0 = a[1] gensub(/,/,"","g",a[3]) a[5]
}
{ print }

$ awk -f tst.awk file
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
If perl is okay - it can be used just like awk for stream processing:

$ cat ip.txt
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
1,2,3,4,5,6,3,4,2,4,3,4,3,2,5,2,3,4
1,2,3,4,5,6,3,4,2,4,a,s,f,e,3,4,3,2,5,2,3,4

$ awk -F, '{print NF}' ip.txt
16
18
22

$ perl -F, -lane '$n = $#F - 4;
   print join ",", (@F[0..8], join("", @F[9..$n]), @F[$n+1..$#F])
   ' ip.txt
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
1,2,3,4,5,6,3,4,2,43432,5,2,3,4
1,2,3,4,5,6,3,4,2,4asfe3432,5,2,3,4

-F, -lane : split on , with results saved in the @F array
$n = $#F - 4 : magic number to ensure the output ends with 14 columns; $#F gives the index of the last element of the array (won't work if an input line has fewer than 14 columns)
join helps to stitch array elements together with the specified string
@F[0..8] : array slice with the first 9 elements
@F[9..$n] and @F[$n+1..$#F] : the other slices as needed

Borrowing from Ed Morton's regex based solution:

$ perl -F, -lape '$n=$#F-13; s/^([^,]*,){9}\K([^,]*,){$n}/$&=~tr|,||dr/e' ip.txt
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
1,2,3,4,5,6,3,4,2,43432,5,2,3,4
1,2,3,4,5,6,3,4,2,4asfe3432,5,2,3,4

$n=$#F-13 : magic number
^([^,]*,){9}\K : first 9 fields
([^,]*,){$n} : fields to change
$&=~tr|,||dr : use tr to delete the commas
e : this modifier allows use of Perl code in the replacement section

This solution also has the added advantage of working even if an input line has fewer than 14 fields.
You can try this GNU sed:

sed -E '
  s/,/\n/9g
  :A
  s/([^\n]*\n)(.*)(\n)(([^\n]*\n){4})/\1\2\4/
  tA
  s/\n/,/g
' infile
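A quick check of the loop against the question's sample row (GNU sed assumed, since s/,/\n/9g relies on GNU's number-plus-g flag combination):

$ echo 'a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p' | sed -E '
  s/,/\n/9g
  :A
  s/([^\n]*\n)(.*)(\n)(([^\n]*\n){4})/\1\2\4/
  tA
  s/\n/,/g'
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p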
First variant - with awk:

awk -F, '
  {
    for(i = 1; i <= NF; i++) {
      OFS = (i > 9 && i < NF - 4) ? "" : ","
      if(i == NF) OFS = "\n"
      printf "%s%s", $i, OFS
    }
  }' input.txt

Second variant - with sed:

sed -r 's/,/#/10g; :l; s/#(.*)((#[^#]*){4})/\1\2/; tl; s/#/,/g' input.txt

or, more straightforwardly (without a loop) and probably faster:

sed -r 's/,([^,]*),([^,]*),([^,]*),([^,]*)$/#\1#\2#\3#\4/; s/,//10g; s/#/,/g' input.txt

Testing

Input:

a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u

Output:

a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
a,b,c,d,e,f,g,h,i,jklmn,o,p,q,r
a,b,c,d,e,f,g,h,i,jklmnopq,r,s,t,u
Solved a similar problem using csvtool. Source file, copied from one of the other answers:

$ cat input.txt
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
1,2,3,4,5,6,3,4,2,4,3,4,3,2,5,2,3,4
1,2,3,4,5,6,3,4,2,4,a,s,f,e,3,4,3,2,5,2,3,4

Concatenating columns:

$ cat input.txt | csvtool format '%1,%2,%3,%4,%5,%6,%7,%8,%9,%10%11%12,%13,%14,%15,%16,%17,%18,%19,%20,%21,%22\n' -
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p,,,,,,
1,2,3,4,5,6,3,4,2,434,3,2,5,2,3,4,,,,
1,2,3,4,5,6,3,4,2,4as,f,e,3,4,3,2,5,2,3,4
How to do calculations over lines of a file in awk
I've got a file that looks like this:

88.3055
45.1482
37.7202
37.4035
53.777

What I have to do is isolate the value from the first line and divide it by the values of the other lines (it's a speedup calculation). I thought of maybe storing the first line in a variable (using NR) and then iterating over the other lines to obtain the values from the divisions. Desired output is:

1,9559
2,3410
2,3608
1,6420

UPDATE

Sorry Ed, my mistake, the desired decimal point is .

I made some small changes to Ed's answer so that awk prints the division of 88.3055 by itself and outputs it to a file speedup.dat:

awk 'NR==1{n=$0} {print n/$0}' tavg.dat > speedup.dat

Is it possible to combine the contents of speedup.dat and the results from another awk command without using intermediate files and in one single awk command? First command:

awk 'BEGIN { FS = "[ \t]*=[ \t]*" } /Total processes/ { if (! CP) CP = $2 } END {print CP}' cg.B.".n.".log ".(n == 1 ? ">" : ">>")." processes.dat

This first command outputs:

1
2
4
8
16

Paste of the two files:

paste processes.dat speedup.dat > prsp.dat

which gives the now desired output:

1 1
2 1.9559
4 2.34107
8 2.36089
16 1.64207
$ awk 'NR==1{n=$0;next} {print n/$0}' file
1.9559
2.34107
2.36089
1.64207

$ awk 'NR==1{n=$0;next} {printf "%.4f\n", n/$0}' file
1.9559
2.3411
2.3609
1.6421

$ awk 'NR==1{n=$0;next} {printf "%.4f\n", int(n*10000/$0)/10000}' file
1.9559
2.3410
2.3608
1.6420

$ awk 'NR==1{n=$0;next} {x=sprintf("%.4f",int(n*10000/$0)/10000); sub(/\./,",",x); print x}' file
1,9559
2,3410
2,3608
1,6420

Normally you'd just use the correct locale to have . or , as your decimal point, but your input uses . while your output uses , so I don't think that's an option.
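The question's follow-up about combining the two outputs without intermediate files isn't addressed above. A minimal sketch using bash process substitution, assuming the per-run logs are named cg.B.$n.log for n in 1 2 4 8 16 (inferred from the question's Perl-embedded command; adjust to the real names):

paste <(for n in 1 2 4 8 16; do
          # first command from the question, run once per log file
          awk 'BEGIN { FS = "[ \t]*=[ \t]*" } /Total processes/ { if (!CP) CP = $2 } END { print CP }' "cg.B.$n.log"
        done) \
      <(awk 'NR==1{n=$0} {print n/$0}' tavg.dat)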
awk '{if(n=="") n=$1; else print n/$1}' inputFile
How to print last two columns using awk
All I want is the last two columns printed.
You can make use of the variable NF, which is set to the total number of fields in the input record:

awk '{print $(NF-1),"\t",$NF}' file

This assumes that you have at least 2 fields.
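Note that the comma form above puts OFS (a space by default) around the tab. A quick way to see the difference, using cat -A to make tabs visible (GNU coreutils; use cat -et on BSD; the input line is made up):

$ echo 'a b c' | awk '{print $(NF-1), "\t", $NF}' | cat -A
b ^I c$
$ echo 'a b c' | awk '{print $(NF-1) "\t" $NF}' | cat -A
b^Ic$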
awk '{print $NF-1, $NF}' inputfile

Note: this works only if at least two columns exist. On records with one column you will get a spurious "-1 column1".
@jim mcnamara: try using parentheses around NF, i.e. $(NF-1) and $(NF) instead of $NF-1 and $NF (works on Mac OS X 10.6.8 for FreeBSD awk and gawk).

echo '
1 2
2 3
one
one two three
' | gawk '{if (NF >= 2) print $(NF-1), $(NF);}'
# output:
# 1 2
# 2 3
# two three
Using gawk exhibits the problem:

gawk '{ print $NF-1, $NF}' filename
1 2
2 3
-1 one
-1 three

# cat filename
1 2
2 3
one one
two three

I just put gawk on a Solaris 10 M4000, so gawk is the culprit on the $NF-1 vs. $(NF-1) issue. Next question: what does POSIX say? Per http://www.opengroup.org/onlinepubs/009695399/utilities/awk.html there is no direction one way or the other. Not good. gawk implies subtraction; other awks imply field number or subtraction. Hmm.
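A minimal demonstration of the parsing difference in gawk, consistent with the output above: $NF-1 is the numeric value of the last field minus one, while $(NF-1) is the next-to-last field.

$ echo 'one two' | gawk '{print $NF-1, $(NF-1)}'
-1 one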
Please try one of these:

awk '{print $(NF-1)"\t"$NF}' file

or, setting the output field separator instead:

awk 'BEGIN{OFS="\t"} {print $(NF-1), $NF}' file
Try with this:

$ cat /tmp/topfs.txt
/dev/sda2 xfs 32G 10G 22G 32% /

awk print last column:

$ cat /tmp/topfs.txt | awk '{print $NF}'
/

awk print before last column:

$ cat /tmp/topfs.txt | awk '{print $(NF-1)}'
32%

awk print last two columns:

$ cat /tmp/topfs.txt | awk '{print $(NF-1), $NF}'
32% /