How to remove field separators in awk when printing $0? - awk

e.g., each row of the file looks like:
1, 2, 3, 4,..., 1000
How can I print out
1 2 3 4 ... 1000
?

If you just want to delete the commas, you can use tr:
$ tr -d ',' <file
1 2 3 4 1000
If you need something more general, you can set FS and OFS (read about FS and OFS) in your BEGIN block:
awk 'BEGIN{FS=","; OFS=""} ...' file

You need to set OFS (the output field separator). Unfortunately, this has no effect unless you also assign to a field so that awk rebuilds the record, leading to the rather cryptic:
awk '{$1=$1}1' FS=, OFS=
If you are happy with some additional space being added, you can leave OFS at its default value (a single space) and do:
awk -F, '{$1=$1}1'
and if you don't mind omitting lines whose first field is empty or 0, you can simplify further to:
awk -F, '$1=$1'
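For instance, on a shortened version of the sample line (outputs assume a space follows each comma, as in the example; the third variant prints the same as the second here):
$ echo '1, 2, 3, 4' | awk '{$1=$1}1' FS=, OFS=
1 2 3 4
$ echo '1, 2, 3, 4' | awk -F, '{$1=$1}1'
1  2  3  4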

You could also remove the field separators:
awk -F, '{gsub(FS,"")} 1'
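On the same sample line, this deletes the commas and keeps the spaces that followed them:
$ echo '1, 2, 3, 4' | awk -F, '{gsub(FS,"")} 1'
1 2 3 4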

Set FS to match the input field separators. Assigning to $1 will then rebuild the record using the output field separator, which defaults to a space:
awk -F',\s*' '{$1 = $1; print}'
See the GNU Awk Manual for an explanation of $1 = $1
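Note that \s is a GNU awk extension; with a strictly POSIX awk you could write -F', *' instead. On the sample line, either form gives:
$ echo '1, 2, 3, 4' | awk -F',\s*' '{$1 = $1; print}'
1 2 3 4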

Filtering rows based on column values of csv file

I have a dataset with 1000 rows and 10 columns. Here is a sample of the dataset:
A,B,C,D,E,F,
a,b,c,d,e,f,
g,h,i,j,k,l,
m,n,o,p,q,r,
s,t,u,v,w,x,
From this dataset I want to copy the rows whose column A value is 'a' or 'm' to a new csv file. I also want the header to be copied.
I have tried using awk. It copied the rows but not the header.
awk '{$1~/a//m/ print}' inputfile.csv > outputfile.csv
How can I copy the header also into the new outputfile.csv?
Thanks in advance.
Considering that your header is on the 1st row, you could try the following:
awk 'BEGIN{FS=OFS=","} FNR==1{print;next} $1 ~ /^a$|^m$/' Input_file > outputfile.csv
Or, as per Cyrus's comment:
awk 'BEGIN{FS=OFS=","} FNR==1{print;next} $1 ~ /^(a|m)$/' Input_file > outputfile.csv
Or, as per Ed's comment:
awk -F, 'NR==1 || $1~/^[am]$/' Input_file > outputfile.csv
Corrections to the OP's attempt:
Set FS and OFS to , since the lines are comma-delimited.
Added the FNR==1 condition, which matches the 1st line and simply prints it, since we want the header in the output file; next then skips all further statements for that line.
Used a better regex for the 1st field's condition: $1 ~ /^a$|^m$/
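With the sample data, each of the three commands above should print:
A,B,C,D,E,F,
a,b,c,d,e,f,
m,n,o,p,q,r,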
This might work for you (GNU sed):
sed '1b;/^[am],/!d' oldFile >newFile
Always print the first line, and delete any other line that does not begin with a, or m,.
Alternative:
awk 'NR==1 || /^[am],/' oldFile >newFile
With awk: set the field separator (FS) to , and output the current row if it is the first row or if its first column is a or m.
awk 'NR==1 || $1=="a" || $1=="m"' FS=',' in.csv >out.csv
Output to out.csv:
A,B,C,D,E,F,
a,b,c,d,e,f,
m,n,o,p,q,r,
$ awk -F, 'BEGIN{split("a,m",tmp); for (i in tmp) tgts[tmp[i]]} NR==1 || $1 in tgts' file
A,B,C,D,E,F,
a,b,c,d,e,f,
m,n,o,p,q,r,
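A minimal variation of the same idea, assuming you want to pass the target values in from the shell rather than hard-code them (the variable name vals is only for illustration):
$ awk -F, -v vals='a,m' 'BEGIN{n=split(vals,tmp,","); for (i=1; i<=n; i++) tgts[tmp[i]]} NR==1 || $1 in tgts' file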
awk's default field delimiter is whitespace.
The delimiter can be changed by setting the FS variable:
awk 'BEGIN { FS = "," } { print $2 }'
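For example, on the sample data above this prints the second column:
$ awk 'BEGIN { FS = "," } { print $2 }' inputfile.csv
B
b
h
n
t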

How to remove 0's from the second column

I have a file that looks like this :
k141_173024,001
k141_173071,002
k141_173527,021
k141_173652,034
k141_173724,041
...
How do I remove the 0's from the second field of each line?
The desired result is :
k141_173024,1
k141_173071,2
k141_173527,21
k141_173652,34
k141_173724,41
...
What I've tried was
cut -f 2 -d ',' file | awk '{print $1 + 0}' > file2
cut -f 1 -d ',' file > file1
paste file1 file2 > final_file
This was an inefficient way to do it.
Thank you.
awk 'BEGIN{FS=OFS=","} {print $1 OFS $2+0}' Input.txt
Adding 0 forces $2 to be treated as a number, which drops the leading zeros.
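For example, on the first sample line:
$ echo 'k141_173024,001' | awk 'BEGIN{FS=OFS=","} {print $1 OFS $2+0}'
k141_173024,1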
If it's only the zeros directly after the comma that should go (,001 becomes ,1 but ,010 becomes ,10; that's not quite "remove 0's from the second column", but the example doesn't clearly show the requirement), you could replace the comma and zeros with another comma:
$ awk '{gsub(/,0+/,",")}1' file
k141_173024,1
k141_173071,2
k141_173527,21
k141_173652,34
k141_173724,41
You could try the following.
awk 'BEGIN{FS=OFS=","} {gsub(/0/,"",$2)}1' Input_file
EDIT: To remove only the leading zeros, try the following.
awk 'BEGIN{FS=OFS=","} {sub(/^0+/,"",$2)}1' Input_file
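The difference only shows up when a zero appears inside the number; the sample data has none, but on a hypothetical line with the value 102 the two commands disagree:
$ echo 'k141_999999,102' | awk 'BEGIN{FS=OFS=","} {gsub(/0/,"",$2)}1'
k141_999999,12
$ echo 'k141_999999,102' | awk 'BEGIN{FS=OFS=","} {sub(/^0+/,"",$2)}1'
k141_999999,102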
If the second field is a number, you can do this to remove the leading zeroes:
awk 'BEGIN{FS=OFS=","} {print $1 OFS int($2)}' file
As per @Inian's suggestion, this can be further simplified to:
awk -F, -v OFS=, '{$2=int($2)}1' file
This might work for you (GNU sed):
sed 's/,0\+/,/' file
This removes the leading zeroes from the second column by replacing a comma followed by one or more zeroes with a comma. Without the g flag, only the first match on each line is replaced, which suffices here because the first field contains no comma.
P.S. I guess the OP did not mean to remove zeroes in the middle of the number.

Awk editing with field delimiter

Imagine you have a string like this:
Amazon.com Inc.:181,37:184,22
If you do awk -F':' '{print $1 ":" $2 ":" $3}', it will output the same thing.
But can you adjust $2 in this example so it outputs only 181 and not 181,37?
Thanks in advance!
You can change the field separator so that it contains either : or ,, using a bracket expression:
awk -F'[:,]' '{ print $2 }' file
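With the sample string:
$ echo 'Amazon.com Inc.:181,37:184,22' | awk -F'[:,]' '{ print $2 }'
181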
If you are worried that , may appear in the first field (which will break this approach), you could use split:
awk -F: '{ split($2, a, /,/); print a[1] }' file
This splits the second field on the comma and then prints the first part. Any other fields containing a comma are unaffected.
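Again with the sample string:
$ echo 'Amazon.com Inc.:181,37:184,22' | awk -F: '{ split($2, a, /,/); print a[1] }'
181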

gawk FS to split record into individual characters

If the field separator is the empty string, each character becomes a separate field
$ echo hello | awk -F '' -v OFS=, '{$1 = NF OFS $1} 1'
5,h,e,l,l,o
However, if FS is a regex that can match the empty string, the same behaviour does not occur:
$ echo hello | awk -F ' *' -v OFS=, '{$1 = NF OFS $1} 1'
1,hello
Anyone know why that is? I could not find anything in the gawk manual. Is FS="" just a special case?
I'm most interested in understanding why the 2nd case does not split the record into more fields. It's as if awk is treating FS=" *" like FS=" +"
Interesting question!
I just pulled GNU awk 4.1.0's code; I think the answer can be found in the file field.c.
line 371:
/*
 * re_parse_field --- parse fields using a regexp.
 *
 * This is called both from get_field() and from do_split()
 * via (*parse_field)(). This variation is for when FS is a regular
 * expression -- either user-defined or because RS=="" and FS==" "
 */
static long
re_parse_field(lo...
and also this line (line 425):
if (REEND(rp, scan) == RESTART(rp, scan)) { /* null match */
This is the case of <space>* matching the null string in your question. The implementation doesn't increment nf; that is, it treats the whole line as one single field. Note this function is used in the do_split() function too.
If FS is the null string, gawk separates each character into its own field. gawk's documentation states this clearly, and in the code we can see:
line 613:
/*
 * null_parse_field --- each character is a separate field
 *
 * This is called both from get_field() and from do_split()
 * via (*parse_field)(). This variation is for when FS is the null string.
 */
static long
null_parse_field(long up_to,
If FS is a single character, awk won't treat it as a regex. This is mentioned in the docs too, and again in the code:
line 667:
/*
 * sc_parse_field --- single character field separator
 *
 * This is called both from get_field() and from do_split()
 * via (*parse_field)(). This variation is for when FS is a single character
 * other than space.
 */
static long
sc_parse_field(l
If we read the function, no regex match handling is done there.
The comments of re_parse_field() and sc_parse_field() show that do_split() invokes them too, which explains why the following command prints 1 instead of 3:
kent$ echo "foo"|awk '{split($0,a,/ */);print length(a)}'
1
Note: to avoid making the post too long, I didn't paste the complete code here; it can be found at:
http://git.savannah.gnu.org/cgit/gawk.git/
As was mentioned, an empty field separator generates undefined behavior; the same code will give different results on different platforms / flavors of awk. For example (all Mac OSX 10.8.5):
> echo hello | awk -F '' -v OFS=, '{$1 = NF OFS $1} 1'
awk: field separator FS is empty
1,hello
So awk complains, but keeps going.
Let's look at some other examples:
> echo hello | awk -F '.' -v OFS=, '{$1 = NF OFS $1} 1'
1,hello
A . by itself is not considered a regular expression
> echo hello | awk -F '[.]' -v OFS=, '{$1 = NF OFS $1} 1'
1,hello
Still nothing
> echo hello | awk -F '.?' -v OFS=, '{$1 = NF OFS $1} 1'
6,,,,,,
Now we have something like a regex: .? is "zero or one character". It is expanded to one character (which is consumed), so the output is "a whole lot of nothings"
> echo hello | awk -F '*' -v OFS=, '{$1 = NF OFS $1} 1'
1,hello
Not a regular expression
> echo hello | awk -F '.*' -v OFS=, '{$1 = NF OFS $1} 1'
2,,
A regular expression that consumes the entire thing
> echo hello | awk -F 'l' -v OFS=, '{$1 = NF OFS $1} 1'
3,he,,o
Match the letter l twice - two empty strings
> echo hello | awk -F 'ell' -v OFS=, '{$1 = NF OFS $1} 1'
2,h,o
Match all of ell at once
> echo hello | awk -F '.?|' -v OFS=, '{$1 = NF OFS $1} 1'
awk: illegal primary in regular expression .?| at
input record number 1, file
source line number 1
Attempt to be clever: sometimes an | with empty string on one side will match "anything" but awk's regex engine doesn't like it.
Conclusion - the regular expressions cannot match "empty", and whatever is matched is consumed. Attempts to use (?:.) or even (?=.) generate errors.
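If the goal is simply to get at individual characters without relying on the unspecified FS="" behaviour, one portable sketch (plain POSIX awk, walking the record with substr rather than field splitting) is:
$ echo hello | awk '{ n = length($0); for (i = 1; i <= n; i++) printf "%s%s", substr($0, i, 1), (i < n ? "," : "\n") }'
h,e,l,l,o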
It seems to be a special case in gawk.
Traditionally, the behavior of FS equal to "" was not defined. In this
case, most versions of Unix awk simply treat the entire record as only
having one field. (d.c.) In compatibility mode (see Options), if FS is
the null string, then gawk also behaves this way.
What POSIX has to say about this:
If FS is a null string, the behavior is unspecified.
So the gawk behaviour is implementation-specific and sort of explains why your two examples don't yield the same output.
Another data point: gawk and perl disagree on how to do this:
$ perl -E '$,=","; $s="hello"; $r=qr( *); @s=split($r,$s); say scalar(@s), @s'
5,h,e,l,l,o
$ gawk 'BEGIN {s="hello";r=" *";n=split(s,a,r); print n,a[n]; if (s~r) print "match"}'
1 hello
match
$ gawk 'BEGIN {s="hello";r=""; n=split(s,a,r); print n,a[n]; if (s~r) print "match"}'
5 o
match

awk command to change field seperator from tilde to tab

I want to replace the tilde delimiter with a tab in an awk command. Below is what I expect.
input
~1~2~3~
Output
1 2 3
This doesn't work for me:
awk -F"~" '{ OFS ="\t"; print }' inputfile
It's really a job for tr:
tr '~' '\t'
but in awk you just need to force the record to be recompiled by assigning one of the fields to its own value:
awk -F'~' -v OFS='\t' '{$1=$1}1'
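For example, piping through GNU cat -A to make the tabs visible as ^I (note that the sample input's leading and trailing tildes become leading and trailing tabs):
$ echo '~1~2~3~' | awk -F'~' -v OFS='\t' '{$1=$1}1' | cat -A
^I1^I2^I3^I$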
awk NF=NF FS='~' OFS='\t'
Result
1 2 3
Code for sed:
$ echo '~1~2~3~' | sed 'y/~/\t/'
1 2 3