Which AWK program can do this manipulation? - awk

Given a file containing a structure arranged like the following (with fields separated by SP or HT)
4 5 6 2 9 8 4 8
m d 6 7 9 5 4 g
t 7 4 2 4 2 5 3
h 5 6 2 5 s 3 4
r 5 7 1 2 2 4 1
4 1 9 0 5 6 d f
x c a 2 3 4 5 9
0 0 3 2 1 4 q w
Which AWK program do I need to get the following output?
4 5
m d
t 7
h 5
r 5
4 1
x c
0 0
6 2
6 7
4 2
6 2
7 1
9 0
a 2
3 2
9 8
9 5
4 2
5 s
2 2
5 6
3 4
1 4
4 8
4 g
5 3
3 4
4 1
d f
5 9
q w
Thanks in advance for any and all help.
Postscript
Please bear in mind,
My input file is much larger than the one depicted in this question.
My computer science skills are seriously limited.
This task has been imposed on me.

awk -v n=4 '
function join(start, end, result, i) {
for (i=start; i<=end; i++)
result = result $i (i==end ? ORS : FS)
return result
}
{
c=0
for (i=1; i<NF; i+=n) {
c++
col[c] = col[c] join(i, i+n-1)
}
}
END {
for (i=1; i<=c; i++)
printf "%s", col[i] # the value already ends with newline
}
' file
The awk info page has a short primer on awk, so read that too.
Benchmarking
create an input file with ~1,000,000 columns (2^20 = 1,048,576) and 8 rows (as specified by the OP)
#!perl
my $cols = 2**20; # 1,048,576
my $rows = 8;
my @alphabet=( 'a'..'z', 0..9 );
my $size = scalar @alphabet;
for ($r=1; $r <= $rows; $r++) {
for ($c = 1; $c <= $cols; $c++) {
my $idx = int rand $size;
printf "%s ", $alphabet[$idx];
}
printf "\n";
}
$ perl createfile.pl > input.file
$ wc input.file
8 8388608 16777224 input.file
time various implementations: I use the fish shell, so the timing output is different from bash's
my awk
$ time awk -f columnize.awk -v n=4 input.file > output.file
________________________________________________________
Executed in 3.62 secs fish external
usr time 3.49 secs 0.24 millis 3.49 secs
sys time 0.11 secs 1.96 millis 0.11 secs
$ wc output.file
2097152 8388608 16777216 output.file
Timur's perl:
$ time perl -lan columnize.pl input.file > output.file
________________________________________________________
Executed in 3.25 secs fish external
usr time 2.97 secs 0.16 millis 2.97 secs
sys time 0.27 secs 2.87 millis 0.27 secs
Ravinder's awk
$ time awk -f columnize.ravinder input.file > output.file
________________________________________________________
Executed in 4.01 secs fish external
usr time 3.84 secs 0.18 millis 3.84 secs
sys time 0.15 secs 3.75 millis 0.14 secs
kvantour's awk, first version
$ time awk -f columnize.kvantour -v n=4 input.file > output.file
________________________________________________________
Executed in 3.84 secs fish external
usr time 3.71 secs 166.00 micros 3.71 secs
sys time 0.11 secs 1326.00 micros 0.11 secs
kvantour's second awk version: Ctrl-C interrupted after a few minutes
$ time awk -f columnize.kvantour2 -v n=4 input.file > output.file
^C
________________________________________________________
Executed in 260.80 secs fish external
usr time 257.39 secs 0.13 millis 257.39 secs
sys time 1.68 secs 2.72 millis 1.67 secs
$ wc output.file
9728 38912 77824 output.file
The $0=a[j] line is pretty expensive, as it has to parse the string into fields each time.
dawg's python
$ timeout 60s fish -c 'time python3 columnize.py input.file 4 > output.file'
[... 60 seconds later ...]
$ wc output.file
2049 8196 16392 output.file
Another interesting data point: using different awk implementations. I'm on a Mac with GNU awk and mawk installed via Homebrew.
with many columns, few rows
$ time gawk -f columnize.awk -v n=4 input.file > output.file
________________________________________________________
Executed in 3.78 secs fish external
usr time 3.62 secs 174.00 micros 3.62 secs
sys time 0.13 secs 1259.00 micros 0.13 secs
$ time /usr/bin/awk -f columnize.awk -v n=4 input.file > output.file
________________________________________________________
Executed in 17.73 secs fish external
usr time 14.95 secs 0.20 millis 14.95 secs
sys time 2.72 secs 3.45 millis 2.71 secs
$ time mawk -f columnize.awk -v n=4 input.file > output.file
________________________________________________________
Executed in 2.01 secs fish external
usr time 1892.31 millis 0.11 millis 1892.21 millis
sys time 95.14 millis 2.17 millis 92.97 millis
with many rows, few columns: this test took over half an hour on a MacBook Pro (6-core Intel CPU, 16 GB RAM)
$ time mawk -f columnize.awk -v n=4 input.file > output.file
________________________________________________________
Executed in 32.30 mins fish external
usr time 23.58 mins 0.15 millis 23.58 mins
sys time 8.63 mins 2.52 millis 8.63 mins

Use this Perl script:
perl -lane '
push @rows, [@F];
END {
my $delim = "\t";
my $cols_per_group = 2;
my $col_start = 0;
while ( 1 ) {
for my $row ( @rows ) {
print join $delim, @{$row}[ $col_start .. ($col_start + $cols_per_group - 1) ];
}
$col_start += $cols_per_group;
last if ($col_start + $cols_per_group - 1) > $#F;
}
}
' in_file > out_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
This script reads the file into memory. This is okay for most modern computers and the file sizes in question.
Each line is split on whitespace (use -F'\t' for TAB as delimiter) into array @F. The references to this array for each line are stored as elements in array @rows. After the file is read, at the end of the script (in the END { ... } block), the contents of the file are printed in groups of columns, with $cols_per_group columns per group. Columns are delimited by $delim.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches

Could you please try the following, written and tested in GNU awk with only the samples shown.
awk '
{
for(i=1;i<=NF;i+=2){
arr[i]=(arr[i]?arr[i] ORS :"")$i OFS $(i+1)
}
}
END{
for(i=1;i<=NF;i+=2){
print arr[i]
}
}' Input_file

Since we all love awk, here is another one:
awk -v n=2 '{for(i=1;i<=NF;++i) { j=int((i-1)/n); a[j] = a[j] $i (i%n==0 || i==NF ?ORS:OFS) }}
END{for(j=0;j<=int(NF/n);j++) printf "%s", a[j]}'
This will output exactly what is requested by the OP.
How does it work?
Awk performs actions per record/line it reads. For each record, it processes all the fields and appends them to a set of strings stored in an array a. It does this in such a way that a[0] contains the first n columns, a[1] the second set of n columns, and so on. The relation between the field number and the group index is given by int((i-1)/n).
When building the strings, we keep track of whether we need to append a field separator (OFS) or a newline (the record separator ORS). We decide this based on the field number modulo n, the number of columns per group. Note that we always use ORS when processing the last field.
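If the mapping is not obvious, here is a small, purely illustrative check (not part of the original answer) of how field numbers fall into groups for n=2:
$ awk -v n=2 'BEGIN { for (i=1; i<=8; i++) printf "field %d -> group %d\n", i, int((i-1)/n) }'
field 1 -> group 0
field 2 -> group 0
field 3 -> group 1
field 4 -> group 1
field 5 -> group 2
field 6 -> group 2
field 7 -> group 3
field 8 -> group 3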
An alternative approach:
Thanks to a comment by Dawg, a flaw in the above code was found: the program scales really badly when moving to large files. The real reason is not 100% known, but I assume it comes from constantly having to rewrite long strings with operations like a[j] = a[j] $i (i%n==0 || i==NF ?ORS:OFS). This can be eliminated by buffering the entire file and doing all operations at the end:
awk -v n=2 '{a[NR]=$0}
END{ for(i=1;i<=NF;i+=n)
for(j=1;j<=NR;++j) {
$0=a[j]
for(k=0;k<n&&(i+k<=NF);++k)
printf "%s%s", $(i+k), ((i+k==NF || ((i+k) % n == 0)) ? ORS : OFS)
}
}' file
Note: the latter only seems efficient for a small number of columns. This is because of the constant re-splitting done with $0=a[j]; the split takes much more time when there is a large number of fields. The complexity of this approach is O(NF^2 * NR).
A final alternative approach: while the first solution is fast for a large number of columns and a small number of rows, the second is fast for a small number of columns and a large number of rows. Below you find a final version that is not as fast, but stable, and which produces the same timing for a file and its transpose.
awk -v n=2 '{ for(i=1;i<=NF;i+=n) {
s=""
for(k=0;k<n&&(i+k<=NF);++k)
s=s $(i+k) ((i+k==NF || ((i+k) % n == 0)) ? ORS : OFS);
a[i,NR]=s
}
}
END{for(i=1;i<=NF;i+=n)for(j=1;j<=NR;++j) printf "%s",a[i,j]}' file

My old answer is below and no longer applicable...
You can use this awk for a file that could be millions of rows or millions of columns. The basic scheme is to suck all the values into a single array then use indexing arithmetic and nested loops at the end to get the correct order:
$ cat col.awk
{
for (i=1; i<=NF; i++) {
vals[++numVals] = $i
}
}
END {
for(col_offset=0; col_offset + cols <= NF; col_offset+=cols) {
for (i=1; i<=numVals; i+=NF) {
for(j=0; j<cols; j++) {
printf "%s%s", vals[i+j+col_offset], (j<cols-1 ? FS : ORS)
}
}
}
}
$ awk -f col.awk -v cols=2 file
4 5
m d
t 7
h 5
...
3 4
4 1
d f
5 9
q w
My old answer is based on the dramatic slowdown seen in most of these awks with a large number of rows.
See this question for more discussion regarding the slowdown.
The comments in the original answer below are no longer applicable.
OLD ANSWER
Only here for consistency...
The awk solutions here are all good for small files. What they all have in common is that the file either needs to fit in RAM, or the OS virtual memory must be an acceptable fallback if it does not. But with a larger file, since the runtime of these awks grows far worse than linearly, you can get a very bad result. With a 12MB version of your file, an in-memory awk becomes unusably slow.
This is the case if there are millions of rows not millions of columns.
The only alternative to an in-memory solution is reading the file multiple times or managing temp files yourself. (Or use a scripting language that manages VM internally, such as Perl or Python... Timur Shatland's Perl is fast even on huge files.)
awk does not have an easy mechanism to loop over a file multiple times until a process is done. You would need to use the shell to do that and invoke awk multiple times.
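As a rough sketch of that shell-driven approach (an illustration only, not benchmarked here, and only practical when the column count is modest, because it re-reads the file once per group of n columns):
n=4
ncols=$(awk '{ print NF; exit }' input.file)
for (( start=1; start<=ncols; start+=n )); do
    awk -v s="$start" -v n="$n" '{
        for (i = s; i < s+n && i <= NF; i++)
            printf "%s%s", $i, (i == s+n-1 || i == NF ? ORS : OFS)
    }' input.file
done > output.file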
Here is a python script that reads a file line-by-line and prints cols number of columns until all the original columns have been printed
$ cat sys.py
import sys
filename = sys.argv[1]
cols = int(sys.argv[2])
offset = 0
delimiter = " "
with open(filename, "r") as f:
    max_cols = len(f.readline().split())    # columns on the first line
while offset < max_cols:                     # one pass per group of `cols` columns
    with open(filename, "r") as f:
        for line in f:
            col_li = line.rstrip().split()
            l = len(col_li)
            max_cols = l if l > max_cols else max_cols
            print(delimiter.join(col_li[offset:offset + cols]))
    offset += cols
It is counterintuitive, but it is often significantly faster and more efficient to read a file multiple times than it is to gulp the entire thing -- if that gulp then results in a bad result with larger data.
So how does this perform compared to one of the awks in this post? Let's time it.
Given your example, the in memory awk will likely be faster:
$ cat file
4 5 6 2 9 8 4 8
m d 6 7 9 5 4 g
t 7 4 2 4 2 5 3
h 5 6 2 5 s 3 4
r 5 7 1 2 2 4 1
4 1 9 0 5 6 d f
x c a 2 3 4 5 9
0 0 3 2 1 4 q w
$ time python pys.py file 2 >file2
real 0m0.027s
user 0m0.009s
sys 0m0.016s
$ time awk -v n=2 '{for(i=1;i<=NF;++i) {
j=int((i-1)/n); a[j] = a[j] $i (i%n==0 || i==NF ?ORS:OFS) }}
END{for(j=0;j<=int(NF/n);j++) printf "%s", a[j]}' file >file3
real 0m0.009s
user 0m0.003s
sys 0m0.003s
And that is true. BUT, let's make the file 1000x bigger with this Python script:
txt='''\
4 5 6 2 9 8 4 8
m d 6 7 9 5 4 g
t 7 4 2 4 2 5 3
h 5 6 2 5 s 3 4
r 5 7 1 2 2 4 1
4 1 9 0 5 6 d f
x c a 2 3 4 5 9
0 0 3 2 1 4 q w
'''
with open('/tmp/file', 'w') as f:
f.write(txt*1000) # change the 1000 to the multiple desired
# file will have 8000 lines and about 125KB
Rerun those timings, same way, and you get:
#python
real 0m0.061s
user 0m0.044s
sys 0m0.015s
# awk
real 0m0.050s
user 0m0.043s
sys 0m0.004s
About the same time... Now make the file BIGGER by multiplying the original by 100,000 to get 800,000 lines and 12MB and run the timing again:
# python
real 0m3.475s
user 0m3.434s
sys 0m0.038s
#awk
real 22m45.118s
user 16m40.221s
sys 6m4.652s
With a 12MB file, the in-memory method becomes essentially unusable, since the VM system on this computer is subject to massive disk swapping to manage that particular pattern of memory allocation. It is likely O(n^2) or worse. This computer is a 2019 Mac Pro with a 16-core Xeon and 192GB of memory, so it is not the hardware...

Related

Transpose columns of data table using awk [duplicate]

I have a huge tab-separated file formatted like this
X column1 column2 column3
row1 0 1 2
row2 3 4 5
row3 6 7 8
row4 9 10 11
I would like to transpose it in an efficient way using only bash commands (I could write a ten-or-so-line Perl script to do that, but it would probably be slower to execute than the native bash functions). So the output should look like
X row1 row2 row3 row4
column1 0 3 6 9
column2 1 4 7 10
column3 2 5 8 11
I thought of a solution like this
cols=`head -n 1 input | wc -w`
for (( i=1; i <= $cols; i++))
do cut -f $i input | tr $'\n' $'\t' | sed -e "s/\t$/\n/g" >> output
done
But it's slow and doesn't seem the most efficient solution. I've seen a solution for vi in this post, but it's still over-slow. Any thoughts/suggestions/brilliant ideas? :-)
awk '
{
for (i=1; i<=NF; i++) {
a[NR,i] = $i
}
}
NF>p { p = NF }
END {
for(j=1; j<=p; j++) {
str=a[1,j]
for(i=2; i<=NR; i++){
str=str" "a[i,j];
}
print str
}
}' file
output
$ more file
0 1 2
3 4 5
6 7 8
9 10 11
$ ./shell.sh
0 3 6 9
1 4 7 10
2 5 8 11
Performance against Perl solution by Jonathan on a 10000 lines file
$ head -5 file
1 0 1 2
2 3 4 5
3 6 7 8
4 9 10 11
1 0 1 2
$ wc -l < file
10000
$ time perl test.pl file >/dev/null
real 0m0.480s
user 0m0.442s
sys 0m0.026s
$ time awk -f test.awk file >/dev/null
real 0m0.382s
user 0m0.367s
sys 0m0.011s
$ time perl test.pl file >/dev/null
real 0m0.481s
user 0m0.431s
sys 0m0.022s
$ time awk -f test.awk file >/dev/null
real 0m0.390s
user 0m0.370s
sys 0m0.010s
EDIT by Ed Morton (@ghostdog74 feel free to delete if you disapprove).
Maybe this version with some more explicit variable names will help answer some of the questions below and generally clarify what the script is doing. It also uses tabs as the separator, which the OP had originally asked for, so it handles empty fields, and it coincidentally pretties up the output a bit for this particular case.
$ cat tst.awk
BEGIN { FS=OFS="\t" }
{
for (rowNr=1;rowNr<=NF;rowNr++) {
cell[rowNr,NR] = $rowNr
}
maxRows = (NF > maxRows ? NF : maxRows)
maxCols = NR
}
END {
for (rowNr=1;rowNr<=maxRows;rowNr++) {
for (colNr=1;colNr<=maxCols;colNr++) {
printf "%s%s", cell[rowNr,colNr], (colNr < maxCols ? OFS : ORS)
}
}
}
$ awk -f tst.awk file
X row1 row2 row3 row4
column1 0 3 6 9
column2 1 4 7 10
column3 2 5 8 11
The above solutions will work in any awk (except old, broken awk of course - there YMMV).
The above solutions do read the whole file into memory though - if the input files are too large for that then you can do this:
$ cat tst.awk
BEGIN { FS=OFS="\t" }
{ printf "%s%s", (FNR>1 ? OFS : ""), $ARGIND }
ENDFILE {
print ""
if (ARGIND < NF) {
ARGV[ARGC] = FILENAME
ARGC++
}
}
$ awk -f tst.awk file
X row1 row2 row3 row4
column1 0 3 6 9
column2 1 4 7 10
column3 2 5 8 11
which uses almost no memory but reads the input file as many times as there are fields on a line, so it will be much slower than the version that reads the whole file into memory. It also assumes the number of fields is the same on each line, and it uses GNU awk for ENDFILE and ARGIND, but any awk can do the same with tests on FNR==1 and END.
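For reference, here is a hedged, untested sketch of how the same multi-pass idea might look in a POSIX awk, keeping the same assumption that every line has the same number of fields:
BEGIN { FS = OFS = "\t" }
FNR == 1 {
    if (NR > 1) print ""                              # close the output row from the previous pass
    pass++
    if (pass < NF) { ARGV[ARGC] = FILENAME; ARGC++ }  # schedule another pass over the same file
}
{ printf "%s%s", (FNR > 1 ? OFS : ""), $pass }
END { print "" }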
awk
Gawk version which uses arrays of arrays:
tp(){ awk '{for(i=1;i<=NF;i++)a[i][NR]=$i}END{for(i in a)for(j in a[i])printf"%s"(j==NR?RS:FS),a[i][j]}' "${1+FS=$1}";}
Plain awk version which uses multidimensional arrays (this was about twice as slow in my benchmark):
tp(){ awk '{for(i=1;i<=NF;i++)a[i,NR]=$i}END{for(i=1;i<=NF;i++)for(j=1;j<=NR;j++)printf"%s"(j==NR?RS:FS),a[i,j]}' "${1+FS=$1}";}
macOS comes with a version of Brian Kernighan's nawk from 2007, which doesn't support arrays of arrays.
To use space as a separator without collapsing sequences of multiple spaces, use FS='[ ]'.
rs
rs is a BSD utility which also comes with macOS, but it should be available from package managers on other platforms. It is named after the reshape function in APL.
Use sequences of spaces and tabs as column separator:
rs -T
Use tab as column separator:
rs -c -C -T
Use comma as column separator:
rs -c, -C, -T
-c changes the input column separator and -C changes the output column separator. A lone -c or -C sets the separator to tab. -T transposes rows and columns.
Do not use -t instead of -T, because it automatically selects the number of output columns so that the output lines fill the width of the display (which is 80 characters by default but which can be changed with -w).
When an output column separator is specified using -C, an extra column separator character is added to the end of each row, but you can remove it with sed:
$ seq 4|paste -d, - -|rs -c, -C, -T
1,3,
2,4,
$ seq 4|paste -d, - -|rs -c, -C, -T|sed s/.\$//
1,3
2,4
rs -T determines the number of columns based on the number of columns on the first row, so it produces the wrong result when the first line ends with one or more empty columns:
$ rs -c, -C, -T<<<$'1,\n3,4'
1,3,4,
R
The t function transposes a matrix or dataframe:
Rscript -e 'write.table(t(read.table("stdin",sep=",",quote="",comment.char="")),sep=",",quote=F,col.names=F,row.names=F)'
If you replace Rscript -e with R -e, then it echoes the code that is being run to STDOUT, and it also results in the error ignoring SIGPIPE signal if the R command is followed by a command like head -n1 which exits before it has read the whole STDIN.
quote="" can be removed if the input doesn't contain double quotes or single quotes, and comment.char="" can be removed if the input doesn't contain lines that start with a hash character.
For a big input file, fread and fwrite from data.table are faster than read.table and write.table:
$ seq 1e6|awk 'ORS=NR%1e3?FS:RS'>a
$ time Rscript --no-init-file -e 'write.table(t(read.table("a")),quote=F,col.names=F,row.names=F)'>/dev/null
real 0m1.061s
user 0m0.983s
sys 0m0.074s
$ time Rscript --no-init-file -e 'write.table(t(data.table::fread("a")),quote=F,col.names=F,row.names=F)'>/dev/null
real 0m0.599s
user 0m0.535s
sys 0m0.048s
$ time Rscript --no-init-file -e 'data.table::fwrite(t(data.table::fread("a")),sep=" ",col.names=F)'>t/b
x being coerced from class: matrix to data.table
real 0m0.375s
user 0m0.296s
sys 0m0.073s
jq
tp(){ jq -R .|jq --arg x "${1-$'\t'}" -sr 'map(./$x)|transpose|map(join($x))[]';}
jq -R . prints each input line as a JSON string literal, -s (--slurp) creates an array for the input lines after parsing each line as JSON, and -r (--raw-output) outputs the contents of strings instead of JSON string literals. The / operator is overloaded to split strings.
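A quick usage check, mirroring the Ruby example further down (this assumes the tp function above has been defined in the current shell):
$ seq 4|paste -d, - -|tp ,
1,3
2,4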
Ruby
ruby -e'STDIN.map{|x|x.chomp.split(",",-1)}.transpose.each{|x|puts x*","}'
The -1 argument to split disables discarding empty fields at the end:
$ ruby -e'p"a,,".split(",")'
["a"]
$ ruby -e'p"a,,".split(",",-1)'
["a", "", ""]
Function form:
$ tp(){ ruby -e's=ARGV[0];STDIN.map{|x|x.chomp.split(s==" "?/ /:s,-1)}.transpose.each{|x|puts x*s}' -- "${1-$'\t'}";}
$ seq 4|paste -d, - -|tp ,
1,3
2,4
The function above uses s==" "?/ /:s because when the argument to the split function is a single space, it enables awk-like special behavior where strings are split based on contiguous runs of spaces and tabs:
$ ruby -e'p" a \tb ".split(" ",-1)'
["a", "b", ""]
$ ruby -e'p" a \tb ".split(/ /,-1)'
["", "a", "", "\tb", ""]
A Python solution:
python -c "import sys; print('\n'.join(' '.join(c) for c in zip(*(l.split() for l in sys.stdin.readlines() if l.strip()))))" < input > output
The above is based on the following:
import sys
for c in zip(*(l.split() for l in sys.stdin.readlines() if l.strip())):
print(' '.join(c))
This code does assume that every line has the same number of columns (no padding is performed).
Have a look at GNU datamash which can be used like datamash transpose.
A future version will also support cross tabulation (pivot tables)
Here is how you would do it with space separated columns:
datamash transpose -t ' ' < file > transposed_file
The transpose project on SourceForge is a coreutils-like C program for exactly that.
gcc transpose.c -o transpose
./transpose -t input > output #works with stdin, too.
Pure BASH, no additional process. A nice exercise:
declare -a array=( ) # we build a 1-D-array
read -a line < "$1" # read the headline
COLS=${#line[@]} # save number of columns
index=0
while read -a line ; do
for (( COUNTER=0; COUNTER<${#line[@]}; COUNTER++ )); do
array[$index]=${line[$COUNTER]}
((index++))
done
done < "$1"
for (( ROW = 0; ROW < COLS; ROW++ )); do
for (( COUNTER = ROW; COUNTER < ${#array[@]}; COUNTER += COLS )); do
printf "%s\t" ${array[$COUNTER]}
done
printf "\n"
done
GNU datamash is perfectly suited for this problem with only one line of code and potentially arbitrarily large filesize!
datamash -W transpose infile > outfile
There is a purpose-built utility for this, the GNU datamash utility:
apt install datamash
datamash transpose < yourfile
Taken from https://www.gnu.org/software/datamash/ and http://www.thelinuxrain.com/articles/transposing-rows-and-columns-3-methods
Here is a moderately solid Perl script to do the job. There are many structural analogies with @ghostdog74's awk solution.
#!/bin/perl -w
#
# SO 1729824
use strict;
my(%data); # main storage
my($maxcol) = 0;
my($rownum) = 0;
while (<>)
{
my(@row) = split /\s+/;
my($colnum) = 0;
foreach my $val (@row)
{
$data{$rownum}{$colnum++} = $val;
}
$rownum++;
$maxcol = $colnum if $colnum > $maxcol;
}
my $maxrow = $rownum;
for (my $col = 0; $col < $maxcol; $col++)
{
for (my $row = 0; $row < $maxrow; $row++)
{
printf "%s%s", ($row == 0) ? "" : "\t",
defined $data{$row}{$col} ? $data{$row}{$col} : "";
}
print "\n";
}
With the sample data size, the performance difference between perl and awk was negligible (1 millisecond out of 7 total). With a larger data set (100x100 matrix, entries 6-8 characters each), perl slightly outperformed awk - 0.026s vs 0.042s. Neither is likely to be a problem.
Representative timings for Perl 5.10.1 (32-bit) vs awk (version 20040207 when given '-V') vs gawk 3.1.7 (32-bit) on MacOS X 10.5.8 on a file containing 10,000 lines with 5 columns per line:
Osiris JL: time gawk -f tr.awk xxx > /dev/null
real 0m0.367s
user 0m0.279s
sys 0m0.085s
Osiris JL: time perl -f transpose.pl xxx > /dev/null
real 0m0.138s
user 0m0.128s
sys 0m0.008s
Osiris JL: time awk -f tr.awk xxx > /dev/null
real 0m1.891s
user 0m0.924s
sys 0m0.961s
Osiris-2 JL:
Note that gawk is vastly faster than awk on this machine, but still slower than perl. Clearly, your mileage will vary.
Assuming all your rows have the same number of fields, this awk program solves the problem:
{for (f=1;f<=NF;f++) col[f] = col[f]":"$f} END {for (f=1;f<=NF;f++) print col[f]}
In words, as you loop over the rows, for every field f grow a ':'-separated string col[f] containing the elements of that field. After you are done with all the rows, print each one of those strings on a separate line. You can then replace ':' with the separator you want (say, a space) by piping the output through tr ':' ' '.
Example:
$ echo "1 2 3\n4 5 6"
1 2 3
4 5 6
$ echo "1 2 3\n4 5 6" | awk '{for (f=1;f<=NF;f++) col[f] = col[f]":"$f} END {for (f=1;f<=NF;f++) print col[f]}' | tr ':' ' '
1 4
2 5
3 6
If you have sc installed, you can do:
psc -r < inputfile | sc -W% - > outputfile
I normally use this little awk snippet for this requirement:
awk '{for (i=1; i<=NF; i++) a[i,NR]=$i
max=(max<NF?NF:max)}
END {for (i=1; i<=max; i++)
{for (j=1; j<=NR; j++)
printf "%s%s", a[i,j], (j==NR?RS:FS)
}
}' file
This just loads all the data into a two-dimensional array a[line,column] and then prints it back as a[column,line], so that it transposes the given input.
It needs to keep track of the maximum number of columns the initial file has, so that it can be used as the number of rows to print back.
A hackish Perl solution can look like this. It's nice because it doesn't load the whole file into memory: it writes intermediate temp files and then uses the all-wonderful paste
#!/usr/bin/perl
use warnings;
use strict;
my $counter;
open INPUT, "<$ARGV[0]" or die ("Unable to open input file!");
while (my $line = <INPUT>) {
chomp $line;
my @array = split ("\t",$line);
open OUTPUT, ">temp$." or die ("unable to open output file!");
print OUTPUT join ("\n",@array);
close OUTPUT;
$counter=$.;
}
close INPUT;
# paste files together
my $execute = "paste ";
foreach (1..$counter) {
$execute.="temp$_ ";
}
$execute.="> $ARGV[1]";
system $execute;
The only improvement I can see to your own example is using awk which will reduce the number of processes that are run and the amount of data that is piped between them:
/bin/rm output 2> /dev/null
cols=`head -n 1 input | wc -w`
for (( i=1; i <= $cols; i++))
do
awk '{printf ("%s%s", tab, $'$i'); tab="\t"} END {print ""}' input
done >> output
Some *nix standard util one-liners, no temp files needed. NB: the OP wanted an efficient fix (i.e. faster), and the top answers are usually faster than this answer. These one-liners are for those who like *nix software tools, for whatever reasons. In rare cases (e.g. scarce IO & memory), these snippets can actually be faster than some of the top answers.
Call the input file foo.
If we know foo has four columns:
for f in 1 2 3 4 ; do cut -d ' ' -f $f foo | xargs echo ; done
If we don't know how many columns foo has:
n=$(head -n 1 foo | wc -w)
for f in $(seq 1 $n) ; do cut -d ' ' -f $f foo | xargs echo ; done
xargs has a size limit and would therefore do incomplete work on a long file. The size limit is system dependent, e.g.:
{ timeout '.01' xargs --show-limits ; } 2>&1 | grep Max
Maximum length of command we could actually use: 2088944
tr & echo:
for f in 1 2 3 4; do cut -d ' ' -f $f foo | tr '\n' ' ' ; echo; done
...or if the # of columns are unknown:
n=$(head -n 1 foo | wc -w)
for f in $(seq 1 $n); do
cut -d ' ' -f $f foo | tr '\n' ' ' ; echo
done
Using set, which like xargs, has similar command line size based limitations:
for f in 1 2 3 4 ; do set - $(cut -d ' ' -f $f foo) ; echo $# ; done
I used fgm's solution (thanks fgm!), but needed to eliminate the tab characters at the end of each row, so modified the script thus:
#!/bin/bash
declare -a array=( ) # we build a 1-D-array
read -a line < "$1" # read the headline
COLS=${#line[@]} # save number of columns
index=0
while read -a line; do
for (( COUNTER=0; COUNTER<${#line[@]}; COUNTER++ )); do
array[$index]=${line[$COUNTER]}
((index++))
done
done < "$1"
for (( ROW = 0; ROW < COLS; ROW++ )); do
for (( COUNTER = ROW; COUNTER < ${#array[@]}; COUNTER += COLS )); do
printf "%s" ${array[$COUNTER]}
if [ $COUNTER -lt $(( ${#array[@]} - $COLS )) ]
then
printf "\t"
fi
done
printf "\n"
done
I was just looking for a similar bash transpose, but with support for padding. Here is the script I wrote based on fgm's solution, which seems to work. If it can be of help...
#!/bin/bash
declare -a array=( ) # we build a 1-D-array
declare -a ncols=( ) # we build a 1-D-array containing number of elements of each row
SEPARATOR="\t";
PADDING="";
MAXROWS=0;
index=0
indexCol=0
while read -a line; do
ncols[$indexCol]=${#line[@]};
((indexCol++))
if [ ${#line[@]} -gt ${MAXROWS} ]
then
MAXROWS=${#line[@]}
fi
for (( COUNTER=0; COUNTER<${#line[@]}; COUNTER++ )); do
array[$index]=${line[$COUNTER]}
((index++))
done
done < "$1"
for (( ROW = 0; ROW < MAXROWS; ROW++ )); do
COUNTER=$ROW;
for (( indexCol=0; indexCol < ${#ncols[@]}; indexCol++ )); do
if [ $ROW -ge ${ncols[indexCol]} ]
then
printf $PADDING
else
printf "%s" ${array[$COUNTER]}
fi
if [ $((indexCol+1)) -lt ${#ncols[@]} ]
then
printf $SEPARATOR
fi
COUNTER=$(( COUNTER + ncols[indexCol] ))
done
printf "\n"
done
I was looking for a solution to transpose any kind of matrix (nxn or mxn) with any kind of data (numbers or text) and got the following solution:
Row2Trans=number1
Col2Trans=number2
for ((i=1; $i <= Row2Trans; i++));do
for ((j=1; $j <= Col2Trans; j++));do
awk -v var1="$i" -v var2="$j" 'BEGIN { FS = "," } ; NR==var1 {print $((var2)) }' $ARCHIVO >> Column_$i
done
done
paste -d',' `ls -mv Column_* | sed 's/,//g'` >> $ARCHIVO
If you only want to grab a single (comma delimited) line $N out of a file and turn it into a column:
head -$N file | tail -1 | tr ',' '\n'
Not very elegant, but this "single-line" command solves the problem quickly:
cols=4; for((i=1;i<=$cols;i++)); do \
awk '{print $'$i'}' input | tr '\n' ' '; echo; \
done
Here cols is the number of columns; you can replace the 4 with the output of head -n 1 input | wc -w.
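For example, the same loop with the column count taken from the first line might look like this (just a sketch of the substitution described above):
cols=$(head -n 1 input | wc -w)
for ((i=1; i<=cols; i++)); do
  awk '{print $'$i'}' input | tr '\n' ' '; echo
done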
Another awk solution, limited only by the amount of memory you have for the input.
awk '{ for (i=1; i<=NF; i++) RtoC[i]= (RtoC[i]? RtoC[i] FS $i: $i) }
END{ for (i in RtoC) print RtoC[i] }' infile
This joins the values at each field position together, and in END prints the result: the first row becomes the first column, the second row becomes the second column, and so on.
Will output:
X row1 row2 row3 row4
column1 0 3 6 9
column2 1 4 7 10
column3 2 5 8 11
#!/bin/bash
aline="$(head -n 1 file.txt)"
set -- $aline
colNum=$#
#set -x
while read line; do
set -- $line
for i in $(seq $colNum); do
eval col$i="\"\$col$i \$$i\""
done
done < file.txt
for i in $(seq $colNum); do
eval echo \${col$i}
done
another version with set eval
Here is a Bash one-liner that is based on simply converting each line to a column and paste-ing them together:
echo '' > tmp1; \
cat m.txt | while read l ; \
do paste tmp1 <(echo $l | tr -s ' ' \\n) > tmp2; \
cp tmp2 tmp1; \
done; \
cat tmp1
m.txt:
0 1 2
4 5 6
7 8 9
10 11 12
creates tmp1 file so it's not empty.
reads each line and transforms it into a column using tr
pastes the new column to the tmp1 file
copies result back into tmp1.
PS: I really wanted to use io-descriptors but couldn't get them to work.
Another bash variant
$ cat file
XXXX col1 col2 col3
row1 0 1 2
row2 3 4 5
row3 6 7 8
row4 9 10 11
Script
#!/bin/bash
I=0
while read line; do
i=0
for item in $line; { printf -v A$I[$i] $item; ((i++)); }
((I++))
done < file
indexes=$(seq 0 $i)
for i in $indexes; {
J=0
while ((J<I)); do
arr="A$J[$i]"
printf "${!arr}\t"
((J++))
done
echo
}
Output
$ ./test
XXXX row1 row2 row3 row4
col1 0 3 6 9
col2 1 4 7 10
col3 2 5 8 11
I'm a little late to the game but how about this:
cat table.tsv | python -c "import pandas as pd, sys; pd.read_csv(sys.stdin, sep='\t').T.to_csv(sys.stdout, sep='\t')"
or zcat if it's gzipped.
This is assuming you have pandas installed in your version of python
Here's a Haskell solution. When compiled with -O2, it runs slightly faster than ghostdog's awk and slightly slower than Stephan's thinly wrapped c python on my machine for repeated "Hello world" input lines. Unfortunately GHC's support for passing command line code is non-existent as far as I can tell, so you will have to write it to a file yourself. It will truncate the rows to the length of the shortest row.
transpose :: [[a]] -> [[a]]
transpose = foldr (zipWith (:)) (repeat [])
main :: IO ()
main = interact $ unlines . map unwords . transpose . map words . lines
An awk solution that stores the whole input in a memory array:
awk '$0!~/^$/{ i++;
split($0,arr,FS);
for (j in arr) {
out[i,j]=arr[j];
if (maxr<j){ maxr=j} # max number of output rows.
}
}
END {
maxc=i # max number of output columns.
for (j=1; j<=maxr; j++) {
for (i=1; i<=maxc; i++) {
printf( "%s:", out[i,j])
}
printf( "%s\n","" )
}
}' infile
But we may "walk" the file as many times as output rows are needed:
#!/bin/bash
maxf="$(awk '{if (mf<NF) mf=NF} END{print mf}' infile)"
rowcount=$maxf
for (( i=1; i<=rowcount; i++ )); do
awk -v i="$i" -F " " '{printf("%s\t ", $i)}' infile
echo
done
which (for a low count of output rows) is faster than the previous code.
A one-liner using R...
cat file | Rscript -e "d <- read.table(file('stdin'), sep=' ', row.names=1, header=T); write.table(t(d), file=stdout(), quote=F, col.names=NA) "
I've used the two scripts below to do similar operations before. The first is in awk, which is a lot faster than the second, which is in "pure" bash. You might be able to adapt them to your own application.
awk '
{
for (i = 1; i <= NF; i++) {
s[i] = s[i]?s[i] FS $i:$i
}
}
END {
for (i in s) {
print s[i]
}
}' file.txt
declare -a arr
while IFS= read -r line
do
i=0
for word in $line
do
[[ ${arr[$i]} ]] && arr[$i]="${arr[$i]} $word" || arr[$i]=$word
((i++))
done
done < file.txt
for ((i=0; i < ${#arr[@]}; i++))
do
echo ${arr[i]}
done
A simple 4-line answer, kept readable:
col="$(head -1 file.txt | wc -w)"
for i in $(seq 1 $col); do
awk '{ print $'$i' }' file.txt | paste -s -d "\t"
done

Print lines between two patterns and sleep for a second then repeat till the EOF

There are many answers to the first part of the question, that is printing lines between two patterns. For example :
awk '/start/{flag=1} flag; /end/{flag=0}' file
This prints out the lines between the patterns "start" and "end", which occur several times within the file. Suppose the file contains:
start a b c d 4 5 3 7 8 end some garbage start 6 d 0 0 1 d g end other garbages start 6 5 ... end some other garbage
The above "awk" command prints out the whole file, eliminating the garbage, which is fine; but what I want is a pause between each such chunk. That is, print
start a b c d 4 5 3 7 8 end
sleep for a second (sleep 1 can be used) then print the next chunk :
start 6 d 0 0 1 d g end
and so on till the end of the file. I searched extensively but could not find the second "pausing/waiting" part hence I believe this is not a duplicate question.
With GNU awk for multi-char RS:
$ cat tst.awk
BEGIN { RS="[[:space:]]+"; ORS=" " }
/start/ { found=1 }
found { print }
/end/ { printf "\n"; found=0; system("sleep 1") }
$ time awk -f tst.awk file
start a b c d 4 5 3 7 8 end
start 6 d 0 0 1 d g end
start 6 5 ... end
real 0m3.214s
user 0m0.015s
sys 0m0.122s
Note that the variable name should be found or a similarly meaningful word, not flag as a flag is what type of variable it is, not what it represents. Naming a flag variable flag is like naming your integer variables integer instead of count or sum or something else meaningful.
An alternative to using system("sleep 1") as suggested by EdMorton would be to pipe awk's output into a bash while loop:
awk ... | while read -r line ; do
echo "${line}"
sleep 1
done
That would avoid spawning an extra shell for each sleep operation.
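Concretely, that could look something like the following sketch, which reuses the GNU awk program from above minus the system() call (tst2.awk is a hypothetical file name):
$ cat tst2.awk
BEGIN { RS="[[:space:]]+"; ORS=" " }
/start/ { found=1 }
found { print }
/end/ { printf "\n"; found=0 }
$ awk -f tst2.awk file | while read -r line ; do
    echo "${line}"
    sleep 1
done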

awk command to print out entire row that contains a certain string

output.txt
Test Results 1 PASSED with 2 minutes to process 0 issues
Test Results 2 PASSED with 10 minutes to process 0 issues
Test Results 3 FAILED ERROR 1 issues
Test Results 4 PASSED with 4 minutes to process 0 issues
Test Results 5 FAILED ERROR 3 issues
Test Results 6 PASSED with 19 minutes to process 0 issues
I need help coming up with an awk command to parse this text. I want to list only the rows that have more than 0 issues.
So in this case
Test Results 3 FAILED ERROR 1 issues
Test Results 5 FAILED ERROR 3 issues
Try this:
$ awk '$(NF-1)' file
Test Results 3 FAILED ERROR 1 issues
Test Results 5 FAILED ERROR 3 issues
Try the following
awk '{if (($0 ~ /[0-9] issues/) && ($(NF-1) != "0")) {print $0}}' output.txt
Here the text is checked for the number of issues.
It could have been done using
awk '{if ($0 ~ /[1-9] issues/) {print $0}}' output.txt
in case you are certain that there will only ever be 1-9 issues and never 10 or more (the regex would miss lines with 10, 20, 100 ... issues).
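If the count can reach 10 or more, one sketch of a more robust variant (assuming, as above, that the issue count is always the second-to-last field) is to force a numeric comparison instead of relying on the regex:
awk '$(NF-1)+0 > 0' output.txt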

awk associative array grows fast

I have a file that assigns numbers to md5sums like follows:
0 0000001732816557DE23435780915F75
1 00000035552C6F8B9E7D70F1E4E8D500
2 00000051D63FACEF571C09D98659DC55
3 0000006D7695939200D57D3FBC30D46C
4 0000006E501F5CBD4DB56CA48634A935
5 00000090B9750D99297911A0496B5134
6 000000B5AEA2C9EA7CC155F6EBCEF97F
7 00000100AD8A7F039E8F48425D9CB389
8 0000011ADE49679AEC057E07A53208C1
Another file contains three md5sums on each line, as follows:
00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
What I want to do is replace the first and third md5sums in the second file with the integers from the first file. Currently I am trying the following awk script:
awk '{OFS="\t"}FNR==NR{map[$2]=$1;next}
{print map[$1],$2,map[$3]}' mapping.txt relation.txt
The problem is that the script needs more than 16 GB of RAM, despite the fact that the first file is only 5.7 GB on disk.
If you don't have enough memory to store the first file, then you need to write something like this to look up the 1st file for each value in the 2nd file:
awk 'BEGIN{OFS="\t"}
{
val1 = val3 = ""
while ( (getline line < "mapping.txt") > 0 ) {
split(line,flds)
if (flds[2] == $1) {
val1 = flds[1]
}
if (flds[2] == $3) {
val3 = flds[1]
}
if ( (val1 != "") && (val3 != "") ) {
break
}
}
close("mapping.txt")
print val1,$2,val3
}' relation.txt
It will be slow. You could add a cache of N getline-d lines to speed it up if you like.
This problem could be solved, as follows (file1.txt is the file with the integers and md5sums while file2.txt is the file with the three columns of md5sums):
#!/bin/sh
# First sort each of file 1 and the first and third columns of file 2 by MD5
awk '{ print $2 "\t" $1}' file1.txt | sort >file1_n.txt
# Before we sort the file 2 columns, we number the rows so we can put them
# back into the original order later
cut -f1 file2.txt | cat -n - | awk '{ print $2 "\t" $1}' | sort >file2_1n.txt
cut -f3 file2.txt | cat -n - | awk '{ print $2 "\t" $1}' | sort >file2_3n.txt
# Now do a join between them, extract the two columns we want, and put them back in order
join -t' ' file2_1n.txt file1_n.txt | awk '{ print $2 "\t" $3}' | sort -n | cut -f2 >file2_1.txt
join -t' ' file2_3n.txt file1_n.txt | awk '{ print $2 "\t" $3}' | sort -n | cut -f2 >file2_3.txt
cut -f2 file2.txt | paste file2_1.txt - file2_3.txt >file2_new1.txt
For a case where file1.txt and file2.txt are each 1 million lines long, this solution and Ed Morton's awk-only solution take about the same length of time on my system. My system would take a very long time to solve the problem for 140 million lines, regardless of the approach used, but I ran a test case for files with 10 million lines.
I had assumed that a solution that relied on sort (which automatically uses temporary files when required) should be faster for large numbers of lines because it would be O(N log N) runtime, whereas a solution that re-reads the mapping file for each line of the input would be O(N^2) if the two files are of similar size.
Timing results
My assumption with respect to the performance relationship of the two candidate solutions turned out to be faulty for the test cases that I've tried. On my system, the sort-based solution and the awk-only solution took similar (within 30%) amounts of time to each other for each of 1 million and 10 million line input files, with the awk-only solution being faster in each case. I don't know if that relationship will hold true when the input file size goes up by another factor of more than 10, of course.
Strangely, the 10 million line problem took about 10 times as long to run with both solutions as the 1 million line problem, which puzzles me as I would have expected a non-linear relationship with file length for both solutions.
If the size of a file causes awk to run out of memory, then either use another tool, or another approach entirely.
The sed command might succeed with much less memory usage. The idea is to read the index file and create a sed script which performs the remapping, and then invoke sed on the generated sedscript.
The bash script below is an implementation of this idea. It includes some STDERR output to help track progress. I like to produce progress-tracking output for problems with large data sets or other kinds of time-consuming processing.
This script has been tested on a small set of data; it may work on your data. Please give it a try.
#!/bin/bash
# md5-indexes.txt
# 0 0000001732816557DE23435780915F75
# 1 00000035552C6F8B9E7D70F1E4E8D500
# 2 00000051D63FACEF571C09D98659DC55
# 3 0000006D7695939200D57D3FBC30D46C
# 4 0000006E501F5CBD4DB56CA48634A935
# 5 00000090B9750D99297911A0496B5134
# 6 000000B5AEA2C9EA7CC155F6EBCEF97F
# 7 00000100AD8A7F039E8F48425D9CB389
# 8 0000011ADE49679AEC057E07A53208C1
# md5-data.txt
# 00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
# 00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
# Goal replace field 1 and field 3 with indexes to md5 checksums from md5-indexes
md5_indexes='md5-indexes.txt'
md5_data='md5-data.txt'
talk() { echo 1>&2 "$*" ; }
talkf() { printf 1>&2 "$@" ; }
track() {
local var="$1" interval="$2"
local val
eval "val=\$$var"
if (( interval == 0 || val % interval == 0 )); then
shift 2
talkf "$@"
fi
eval "(( $var++ ))" # increment the counter
}
# Build a sedscript to translate all occurrences of the 1st & 3rd MD5 sums into their
# corresponding indexes
talk "Building the sedscript from the md5 indexes.."
sedscript=/tmp/$$.sed
linenum=0
lines=`wc -l <$md5_indexes`
interval=$(( lines / 100 ))
while read index md5sum ; do
track linenum $interval "..$linenum"
echo "s/^[[:space:]]*[[:<:]]$md5sum[[:>:]]/$index/" >>$sedscript
echo "s/[[:<:]]$md5sum[[:>:]]\$/$index/" >>$sedscript
done <$md5_indexes
talk ''
sedlength=`wc -l <$sedscript`
talkf "The sedscript is %d lines\n" $sedlength
cmd="sed -E -f $sedscript -i .bak $md5_data"
talk "Invoking: $cmd"
$cmd
changes=`diff -U 0 $md5_data.bak $md5_data | tail +3 | grep -c '^+'`
talkf "%d lines changed in $md5_data\n" $changes
exit
Here are the two files:
cat md5-indexes.txt
0 0000001732816557DE23435780915F75
1 00000035552C6F8B9E7D70F1E4E8D500
2 00000051D63FACEF571C09D98659DC55
3 0000006D7695939200D57D3FBC30D46C
4 0000006E501F5CBD4DB56CA48634A935
5 00000090B9750D99297911A0496B5134
6 000000B5AEA2C9EA7CC155F6EBCEF97F
7 00000100AD8A7F039E8F48425D9CB389
8 0000011ADE49679AEC057E07A53208C1
cat md5-data.txt
00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
Here is the sample run:
$ ./md5-reindex.sh
Building the sedscript from the md5 indexes..
..0..1..2..3..4..5..6..7..8
The sedscript is 18 lines
Invoking: sed -E -f /tmp/83800.sed -i .bak md5-data.txt
2 lines changed in md5-data.txt
Finally, the resulting file:
$ cat md5-data.txt
1 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
1 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857

move certain columns to end using awk

I have a large tab-delimited file with 1000 columns. I want to rearrange it so that certain columns are moved to the end.
Could anyone help using awk?
Example input:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Move columns 5,6,7,8 to the end.
Output:
1 2 3 4 9 10 11 12 13 14 15 16 17 18 19 20 5 6 7 8
This prints columns 1 to a, then b to the last, and then columns a+1 to b-1:
$ awk -v a=4 -v b=9 '{for (i=1;i<=NF;i+=i==a?b-a:1) {printf "%s\t",$i};for (i=a+1;i<b;i++) {printf "%s\t",$i};print""}' file
1 2 3 4 9 10 11 12 13 14 15 16 17 18 19 20 5 6 7 8
The columns are moved in this way for every line in the input file, however many lines there are.
How it works
-v a=4 -v b=9
This defines the variables a and b which determine the limits on which columns will be moved.
for (i=1;i<=NF;i+=i==a?b-a:1) {printf "%s\t",$i}
This prints all columns except the ones from a+1 to b-1.
In this loop, i is incremented by one except when i==a in which case it is incremented by b-a so as to skip over the columns to be moved. This is done with awk's ternary statement:
i += i==a ? b-a : 1
+= simply means "add to." i==a ? b-a : 1 is the ternary statement. The value that it returns depends on whether i==a is true or false. If it is true, the value before the colon is returned. If it is false, the value after the colon is returned.
for (i=a+1;i<b;i++) {printf "%s\t",$i}
This prints columns a+1 to b-1.
print""
This prints a newline character to end the line.
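To see the jump in action, this small, purely illustrative one-liner prints the sequence of i values that the first loop visits for a=4 and b=9 on a 20-field line:
$ awk -v a=4 -v b=9 'BEGIN { for (i=1; i<=20; i += (i==a ? b-a : 1)) printf "%s ", i; print "" }'
1 2 3 4 9 10 11 12 13 14 15 16 17 18 19 20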
Alternative solution that avoids printf
This approach assembles the output into the variable out and then prints with a plain print command, avoiding printf and the need for percent signs:
awk -v a=4 -v b=9 '{out="";for (i=1;i<=NF;i+=i==a?b-a:1) out=out $i"\t";for (i=a+1;i<b;i++) out=out $i "\t";print out}' file
One way to swap 2 columns ($5 becomes $20 and $20 becomes $5) while the rest stay unchanged:
$ awk '{x=$5; $5=$20; $20=x; print}' file.txt
for 4 columns :
$ awk '{
x=$5; $5=$20; $20=x;
y=$9; $9=$10; $10=y;
print
}' file.txt
My approach:
awk 'BEGIN{ f[5];f[6];f[7];f[8] } \
{ for(i=1;i<=NF;i++) if(!(i in f)) printf "%s\t", $i; \
for(c in f) printf "%s\t", $c; printf "\n"} ' file
It's split into 3 parts:
The BEGIN{} part determines which field should be moved to the end. The indexes of the array f are moved. In the example it's 5, 6, 7 and 8.
Cycle through every field (it doesn't matter if there are 1000 fields or more) and check if it is in the array. If not, print it.
Now we need the skipped fields. Cycle through the f array and print those values.
Another way in awk
Switch last A-B with last N fields
awk -vA=4 -vB=8 '{x=B-A;for(i=A;i<=B;i++){y=$i;$i=$(t=(NF-x--));$t=y}}1' file
Put N fields from the end into position A
awk -vA=3 -vB=8 '{split($0,a," ");x=A++;while(x++<B)$x=a[NF-(B-x)];while(B++<NF)$B=a[A++]}1' file