Split and add header on the command line - header

What's the easiest way to split a file and add a header to each section?
The unix split command does everything that I need minus being able to add a header.
Any easy way to do it with existing tools before I script it up?

It is probably easiest to do this in either awk or perl. If you aren't processing much data, then using a simple shell script to post-process the output of split is probably fine. However, that approach traverses the input more than once, which can be a bottleneck if you are doing this as part of any sort of online data processing task. Something like the following should work:
bash$ cat input-file | awk '
BEGIN {
    fnum = 1
    print "HEADER" > fnum        # header for the first chunk
}
{
    if ((NR % 10) == 0) {        # every 10 input lines ...
        close(fnum)              # ... close the current chunk,
        fnum++                   # move on to the next file name,
        print "HEADER" > fnum    # and write its header
    }
    print >> fnum                # append the current line
}
'
bash$ wc -l input-file
239 input-file
bash$ ls
1 19 6
10 2 7
11 20 8
12 21 9
13 22 input-file
14 23
15 24
16 3
17 4
18 5
bash$
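If a second pass over the data is acceptable, the split-then-post-process route mentioned above might look like this sketch (assuming split's default xaa, xab, ... output names):
split -l 10 input-file
for f in xa*; do
    { echo "HEADER"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"
done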

How can I pass a predefined variable into an awk column function?

I'd like to pass a predefined variable as the column number for an awk script. I've stripped out the unnecessary bits and below is an example of what I'd like to get done. Further below is a portion of what I've tried so far.
Reason: this is a semi-long script that currently works, though I'd like to define the columns early in the script, as that would make it much easier to update as the columns change.
I'd like the "state" variable to be passed to awk as the column identifier, e.g.:
#!/bin/bash
export state='$6'
cat ~/file | awk -v column="$state" 'state!="FAILED"'
Running the above code still produces rows that have column 6 set to "FAILED", so something must be wrong, while awk '$6!="FAILED"' works as expected.
Different things I've tried so far:
1. Defining state as 6 rather than '$6', and including the $ in the awk != command.
2. awk '{ENVIRON["state"]!="FAILED"}', with the same modifications as in 1.
This should work:
state=6
cat ~/file | awk -v column="$state" '$column != "FAILED"'
$var in awk will get the field specified by the value of variable var.
So, $NF will get the last field. Note that the awk variable here is column, not state.
For example:
% seq 1 20 | paste - - - -
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
17 18 19 20
% seq 1 20 | paste - - - - | awk -v column=3 '{print $column}'
3
7
11
15
19
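The ENVIRON route from the second attempt can also work, as a sketch, provided the exported value is the bare column number rather than '$6':
export state=6
awk '$(ENVIRON["state"]) != "FAILED"' ~/file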

Print every second consecutive field in two columns - awk

Assume the following file
#zvview.exe
#begin Present/3
77191.0000 189.320100 0 0 3 0111110 16 1
-8.072430+6-8.072430+6 77190 0 1 37111110 16 2
37 2 111110 16 3
8.115068+6 0.000000+0 8.500000+6 6.390560-2 9.000000+6 6.803440-1111110 16 4
9.500000+6 1.685009+0 1.000000+7 2.582780+0 1.050000+7 3.260540+0111110 16 5
37 2 111110 16 18
What I would like to do is print, in two columns, the fields of lines 6 and 7. This can be done using NR. The tricky part is the following: every second field should go into the second column, and an E should be added before the exponent sign, so that the output file will look like this:
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
From the expected output you can see that I want to keep only the first 10 characters of $6.
How is it possible to do it in awk?
You can do it all in awk, but it is perhaps easier with the unix toolset: sed picks out lines 6 and 7, cut keeps only the character columns holding the six numbers, tr puts one value per line, and pr pastes them back together two per row.
$ sed -n '6,7p' file | cut -c2-66 | tr ' ' '\n' | pr -2ats' '
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
Here is an awk-only solution for comparison:
$ awk 'NR>=6 && NR<=7{$6=substr($6,1,10);
for(i=1;i<=6;i+=2) {f[++c]=$i;s[c]=$(i+1)}}
END{for(i=1;i<=c;i++) print f[i],s[i]}' file
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
A perhaps shorter version:
$ awk 'NR>=6 && NR<=7{$6=substr($6,1,10);
for(i=1;i<=6;i+=2) print $i FS $(i+1)}' file
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
To convert the format to standard scientific notation, you can pipe the result to sed, or embed something similar in the awk script (using gsub):
... | sed 's/[+-]/E&/g'
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
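Embedding the conversion in the awk script itself might look like this sketch (safe here because the selected lines have no negative mantissas, where a leading - would also get an E):
awk 'NR>=6 && NR<=7 { $6 = substr($6,1,10)
     for (i=1; i<=6; i+=2) {
         u = $i; v = $(i+1)
         gsub(/[+-]/, "E&", u); gsub(/[+-]/, "E&", v)   # +6 -> E+6, -2 -> E-2
         print u, v
     } }' file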
With GNU awk for FIELDWIDTHS:
$ cat tst.awk
BEGIN { FIELDWIDTHS="9 2 9 2 9 2 9 2 9 2 9 2" }
NR>5 && NR<8 {
for (i=1;i<NF;i+=4) {
print $i "E" $(i+1), $(i+2) "E" $(i+3)
}
}
$ awk -f tst.awk file
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
If you really want to get rid of the leading blanks, there are various ways to do it (the simplest being gsub(/ /,"",$<field number>) on the relevant fields), but I left them in because the above lets your output line up properly if/when your numbers start with a -, as they do on line 4 of your sample input.
If you don't have GNU awk, get it as you're missing a LOT of extremely useful functionality.
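For example, stripping the blanks inside the loop of tst.awk might look like this sketch (same BEGIN block as above):
NR>5 && NR<8 {
    for (i=1;i<NF;i+=4) {
        gsub(/ /,"",$i); gsub(/ /,"",$(i+2))   # drop the leading blanks
        print $i "E" $(i+1), $(i+2) "E" $(i+3)
    }
}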
I tried to combine @karafka's answer using substr, and the following does the trick!
awk 'NR>=6 && NR<=7 { $6 = substr($6,1,10)
     for (i=1; i<=6; i+=2)
         print substr($i,1,8) "E" substr($i,9) FS substr($(i+1),1,8) "E" substr($(i+1),9)
}' file
and the output is
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0

Cut column from multiple files with the same name in different directories and paste into one

I have multiple files with the same name (3pGtoA_freq.txt), but all located in different directories.
Each file looks like this:
pos 5pG>A
1 0.162421557770395
2 0.0989643268124281
3 0.0804131316857248
4 0.0616563298066399
5 0.0577551761714493
6 0.0582450832072617
7 0.0393129770992366
8 0.037037037037037
9 0.0301016419077404
10 0.0327510917030568
11 0.0301598837209302
12 0.0309050772626932
13 0.0262089331856774
14 0.0254612546125461
15 0.0226130653266332
16 0.0206971677559913
17 0.0181280059193489
18 0.0243993993993994
19 0.0181347150259067
20 0.0224429727740986
21 0.0175690211545357
22 0.0183916336098089
23 0.0196078431372549
24 0.0187983781791375
25 0.0173192771084337
I want to cut column 2 from each file and paste the columns side by side into one file.
I tried running:
for s in results_Sample_*_hg19/results_MapDamage_Sample_*/results_Sample_*_bwa_LongSeed_sorted_hg19_noPCR/3pGtoA_freq.txt; do awk '{print $2}' $s >> /home/users/istolarek/aDNA/3pGtoA_all; done
but it's not pasting the columns next to each other.
Also, I wanted to name each column by the '*' part, which is the only string that changes in the path.
Any help with that?
for i in $(find your_file_dir -name 3pGtoA_freq.txt); do awk '{print $2 >> "NewFile"}' "$i"; done
I would do this by processing all files in parallel in awk:
awk 'BEGIN{printf "pos ";
           for(i=1;i<ARGC;++i)
               printf "%-19s",gensub("^results_Sample_","",1,gensub("_hg19.*","",1,ARGV[i]));
           printf "\n";
           while(getline<ARGV[1]){
               printf "%-4s%-19s",$1,$2;
               for(i=2;i<ARGC;++i){
                   getline<ARGV[i];
                   printf "%-19s",$2}
               printf "\n"}}{exit}' \
results_Sample_*_hg19/results_MapDamage_Sample_*/results_Sample_*_bwa_LongSeed_sorted_hg19_noPCR/3pGtoA_freq.txt
If your awk doesn't have gensub (I'm using cygwin), you can remove the first four lines (printf-printf); headers won't be printed in that case.
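A paste-based sketch is another route, assuming every file has the same number of rows and that the glob expands in the same order each time (the file names are the ones from the question):
paste results_Sample_*_hg19/results_MapDamage_Sample_*/results_Sample_*_bwa_LongSeed_sorted_hg19_noPCR/3pGtoA_freq.txt |
    awk '{ out = $2                      # value column of the first file
           for (i = 4; i <= NF; i += 2)  # every second column after that
               out = out OFS $i
           print out }' > 3pGtoA_all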

How to print columns containing value

Let's say I have a data file containing the following:
1 2 3 4 5
67 88 12 32 22
9 99 34 59 86
17 0 78 0 77
11 0 0 0 43
I would like to have a code that searches through each column for the number 0. If the number 0 is found, the code will print out that entire column in a separate file.
With this data, the outputted file would look like so:
2 3 4
88 12 32
99 34 59
0 78 0
0 0 0
It'd be great if the code didn't require me knowing the exact number of columns and/or row.
This will do what you want. It does not requiring knowing anything about how many rows or columns are present.
$ awk 'FNR==NR{for (i=1;i<=NF;i++)if ($i==0)a[i]=1;next} {tab="";for (i=1;i<=NF;i++)if (a[i]){printf "%s%s",tab,$i; tab="\t"};print ""}' file file
2 3 4
88 12 32
99 34 59
0 78 0
0 0 0
How it works
Because the file name is specified twice on the command line, the awk script will read the file twice, the first time to look for zeros, the second time to print.
FNR==NR{for (i=1;i<=NF;i++)if ($i==0)a[i]=1;next}
On the first run through the file, a[i] is set to one for any column i that has a zero in it.
This code only applies to the first run through because of the condition FNR==NR. NR is the total number of records (lines) that we have read so far. FNR is the number of records (lines) that we have read so far from the current file. Thus, when FNR==NR, we are still reading the first file. The next at the end of the commands tells awk to skip the remaining commands and start over on the next line.
tab="";for (i=1;i<=NF;i++)if (a[i]){printf "%s%s",tab,$i; tab="\t"};print ""
When we are reading through the file for the second time, we print out each column i for which a[i] is non-zero. I chose tab-separated output but, by simply adjusting the printf statement, any format could be used.
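The FNR==NR trick is the standard awk idiom for two-pass processing of this kind. As a minimal, stand-alone demo (keys.txt and data.txt are hypothetical file names), the first pass collects keys and the second keeps matching lines:
awk 'FNR==NR { seen[$1]; next } $1 in seen' keys.txt data.txt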
sed '#n
# init and load line in buffer (1st line copied, other added)
s/.*/>& /;1!H;1h
# at end of file, load buffer in working area
$ {x
:cycle
# keep column if zero inside
/>[[:blank:]]*0[[:blank:]]/ s/>\(\([[:blank:]]*[0-9]\{1,\}\)[[:blank:]][[:graph:][:blank:]]*\)/\2>\1/g
# remove treated column
s/>[[:blank:]]*[0-9]\{1,\}\([[:blank:]]\{1,\}[[:graph:][:blank:]]*\)/>\1/g
# is there another column to treat?
/>[[:blank:]]*[0-9][[:graph:][:blank:]]/ b cycle
# print result after cleanup
s/>//gp
}' YourFile
Self-commented sed.
POSIX version, so use --posix with GNU sed.
Here is an interesting way of doing it with GNU awk:
parse.awk
# Record number of columns (assuming all columns have the same number of fields)
NR == 1 { n = NF }
# First parse: Remember which columns contain `pat`
FNR == NR {
for(i=1; i<=NF; i++)
if($i == pat) {
h[i] = i
last = i>last ? i : last
}
next
}
# Before second parse: switch to reading one field at a time
ENDFILE {
RS="[ \t\n]+"
}
# Second parse: print field if current-record-number modulo
# number-of-columns is in the `h` hash
{ m = FNR % n; if (m == 0) m = n }   # map 0 to n so the last column can match
m in h {
ORS = (m == last) ? "\n" : OFS # print new-line after last column
print $1
}
Run it like this for example:
awk -f parse.awk pat=0 infile infile
Output:
2 3 4
88 12 32
99 34 59
0 78 0
0 0 0
Or with OFS='\t':
awk -f parse.awk pat=0 OFS='\t' infile infile
Output:
2 3 4
88 12 32
99 34 59
0 78 0
0 0 0

move certain columns to end using awk

I have a large tab-delimited file with 1000 columns. I want to rearrange it so that certain columns are moved to the end.
Could anyone help using awk?
Example input:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Move columns 5,6,7,8 to the end.
Output:
1 2 3 4 9 10 11 12 13 14 15 16 17 18 19 20 5 6 7 8
This prints columns 1 to a, then b to the last, and then columns a+1 to b-1:
$ awk -v a=4 -v b=9 '{for (i=1;i<=NF;i+=i==a?b-a:1) {printf "%s\t",$i};for (i=a+1;i<b;i++) {printf "%s\t",$i};print""}' file
1 2 3 4 9 10 11 12 13 14 15 16 17 18 19 20 5 6 7 8
The columns are moved in this way for every line in the input file, however many lines there are.
How it works
-v a=4 -v b=9
This defines the variables a and b which determine the limits on which columns will be moved.
for (i=1;i<=NF;i+=i==a?b-a:1) {printf "%s\t",$i}
This prints all columns except the ones from a+1 to b-1.
In this loop, i is incremented by one except when i==a in which case it is incremented by b-a so as to skip over the columns to be moved. This is done with awk's ternary statement:
i += i==a ? b-a : 1
+= simply means "add to." i==a ? b-a : 1 is the ternary statement. The value that it returns depends on whether i==a is true or false. If it is true, the value before the colon is returned. If it is false, the value after the colon is returned.
for (i=a+1;i<b;i++) {printf "%s\t",$i}
This prints columns a+1 to b-1.
print""
This prints a newline character to end the line.
Alternative solution that avoids printf
This approach assembles the output into the variable out and then prints with a plain print command, avoiding printf and the need for percent signs:
awk -v a=4 -v b=9 '{out="";for (i=1;i<=NF;i+=i==a?b-a:1) out=out $i"\t";for (i=a+1;i<b;i++) out=out $i "\t";print out}' file
One way to swap two columns ($5 becomes $20 and $20 becomes $5), leaving the rest unchanged:
$ awk '{x=$5; $5=$20; $20=x; print}' file.txt
For 4 columns (swapping $5 with $20 and $9 with $10):
$ awk '{
    x=$5; $5=$20; $20=x;
    y=$9; $9=$10; $10=y;
    print
}' file.txt
My approach:
awk 'BEGIN{ f[5];f[6];f[7];f[8] } \
{ for(i=1;i<=NF;i++) if(!(i in f)) printf "%s\t", $i; \
for(c in f) printf "%s\t", $c; printf "\n"} ' file
It's split into 3 parts:
The BEGIN{} part determines which fields should be moved to the end: the indexes of the array f are the columns to move. In the example they are 5, 6, 7 and 8.
Cycle through every field (it doesn't matter if there are 1000 fields or more) and check whether it is in the array. If not, print it.
Now we need the skipped fields: cycle through the f array and print those values.
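One caveat: for (c in f) visits array indexes in an unspecified order in POSIX awk, so the moved columns are not guaranteed to come out in ascending order. A variant (a sketch) that loops over the indexes explicitly avoids that:
awk 'BEGIN{ f[5]; f[6]; f[7]; f[8] }
     { for(i=1;i<=NF;i++) if(!(i in f)) printf "%s\t", $i
       for(i=1;i<=NF;i++) if(i in f) printf "%s\t", $i
       printf "\n" }' file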
Another way in awk
Swap fields A through B with the same number of fields at the end of the line:
awk -vA=4 -vB=8 '{x=B-A;for(i=A;i<=B;i++){y=$i;$i=$(t=(NF-x--));$t=y}}1' file
Insert the last B-A fields after position A:
awk -vA=3 -vB=8 '{split($0,a," ");x=A++;while(x++<B)$x=a[NF-(B-x)];while(B++<NF)$B=a[A++]}1' file
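Expanded with comments, the first one-liner's swap logic reads like this (same behavior):
awk -v A=4 -v B=8 '{
    x = B - A                     # offset of the matching field from the end
    for (i = A; i <= B; i++) {    # walk fields A..B
        t = NF - x--              # partner field, counted from the end
        y = $i; $i = $t; $t = y   # swap the pair
    }
} 1' file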