Add a comma to every column value in a table [unix] - awk

I have a file produced by a program that is filled with values like this:
1 [4 spaces] 2 [4 spaces] 3 [4 spaces] ... N
There are 4 spaces between each pair of values. I want to remove 3 of the spaces and place a comma after each value, to get this final result:
1, 2, 3, ..., N
I found out from other topics that this command can remove the extra spaces:
awk -F' +' -v OFS='\t' '{sub(/ +$/,""); $1=$1}1' file
I then need to add the commas, or maybe there is a way to remove the spaces and add the commas at the same time.

To replace each run of spaces with a comma and a single space, use:
$ awk '{gsub(/ +/,", ")}1' file
1, 2, 3, ..., N
To replace three consecutive spaces with a comma, use:
$ awk '{gsub(/ {3}/,",")}1' file
Or using field splitting with a custom output separator:
$ awk -F" " -v OFS=", " '{$1=$1}1' file
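A quick way to sanity-check two of the variants is to feed them a sample line with four-space separators inlined via printf (the sample data here stands in for the question's file):

```shell
# Sample line with four spaces between values, as described in the question
printf '1    2    3\n' | awk '{gsub(/ +/,", ")}1'
# 1, 2, 3

# The field-splitting variant gives the same result
printf '1    2    3\n' | awk -v OFS=', ' '{$1=$1}1'
# 1, 2, 3
```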

Using GNU sed to modify the file in place (matching the four-space runs from the question):
sed -i -e 's/ \{4\}/, /g' file
And with the brackets added around the line:
sed -i -e 's/ \{4\}/, /g;s/^/[/;s/$/]/' file
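A sketch of what the bracketed version produces, using an inline sample instead of editing a file in place (the \{4\} interval is POSIX BRE, so this also works outside GNU sed):

```shell
# Four spaces become ", ", then the line is wrapped in brackets
printf '1    2    3\n' | sed 's/ \{4\}/, /g;s/^/[/;s/$/]/'
# [1, 2, 3]
```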

how about hands-free driving with awk?
{m,n,g}awk NF=NF RS='\r?\n' OFS=', '
It'll handle both CRLF from Windows and LF from Unix, trim both ends, and place ", " between each pair of fields.
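If your awk lacks regex RS support (plain POSIX awk), a portable sketch is to strip the CR by hand instead; the CRLF sample input is inlined here for the demo:

```shell
# Strip a trailing CR, then rebuild the record with ", " between fields
printf '1    2    3\r\n' | awk -v OFS=', ' '{sub(/\r$/,""); $1=$1} 1'
# 1, 2, 3
```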

Related

Find the second word delimited by space or comma then insert strings before and after

I have a file containing lines like TABLE schema.table and want to put strings around them to make commands like MARK string REJECT.
The file contains many lines:
TABLE SCHEMA.MYTAB, etc. etc....
or
TABLE SCHEMA.MYTAB , etc. etc....
The result is
MARK SCHEMA.MYTAB REJECT
..etc
I have
grep TABLE dirx/myfile.txt | awk -F, '{print $1}' | awk '{print $2}' | sed -e 's/^/MARK /' | sed -e 's/$/ REJECT/'
It works, but can this be tidier? I think I can combine the awk and sed into single commands but not sure how.
Maybe:
awk '/^TABLE/ {gsub(/,.*$/, ""); print "MARK " $2 " REJECT"}' dirx/myfile.txt
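A quick check against the two line shapes from the question (sample lines inlined here; the real input is dirx/myfile.txt):

```shell
printf 'TABLE SCHEMA.MYTAB, etc. etc....\nTABLE SCHEMA.OTHER , etc. etc....\n' \
  | awk '/^TABLE/ {gsub(/,.*$/, ""); print "MARK " $2 " REJECT"}'
# MARK SCHEMA.MYTAB REJECT
# MARK SCHEMA.OTHER REJECT
```

The gsub on $0 removes everything from the first comma on and re-splits the fields, so $2 is clean whether or not a space precedes the comma.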

gawk - Delimit lines with custom character and no similar ending character

Let's say I have a file like so:
test.txt
one
two
three
I'd like to get the following output: one|two|three
And am currently using this command: gawk -v ORS='|' '{ print $0 }' test.txt
Which gives: one|two|three|
How can I print it so that the last | isn't there?
Here's one way to do it:
$ seq 1 | awk -v ORS= 'NR>1{print "|"} 1; END{print "\n"}'
1
$ seq 3 | awk -v ORS= 'NR>1{print "|"} 1; END{print "\n"}'
1|2|3
With paste:
$ seq 1 | paste -sd'|'
1
$ seq 3 | paste -sd'|'
1|2|3
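Applied to the question's actual test.txt contents (inlined here; the trailing - makes BSD paste read stdin too):

```shell
printf 'one\ntwo\nthree\n' | paste -sd'|' -
# one|two|three
```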
Convert one column to one row with field separator:
awk '{$1=$1} 1' FS='\n' OFS='|' RS='' file
Or in another notation:
awk -v FS='\n' -v OFS='|' -v RS='' '{$1=$1} 1' file
Output:
one|two|three
See: 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR
The awk solutions work great. Here is a tr + sed solution (note that | is not special in a POSIX basic regex, so it must not be backslash-escaped in the sed expression):
tr '\n' '|' < file | sed 's/|$//'
one|two|three
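The same pipeline checked end to end on the question's sample lines (inlined with printf; in a POSIX BRE the | needs no escaping):

```shell
# tr leaves a trailing |, which sed then strips
printf 'one\ntwo\nthree\n' | tr '\n' '|' | sed 's/|$//'
# one|two|three
```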
just flatten it :
gawk/mawk 'BEGIN { FS = ORS; RS = "^[\n]*$"; OFS = "|"
} NF && ( $NF ? NF=NF : --NF )'
ascii | = octal \174 = hex 0x7C. The reason for --NF is that, more often than not, the input includes a trailing newline, which makes the field count 1 too many and results in
1|2|3|
Both NF=NF and --NF are similar concepts to $1=$1. Empty inputs, regardless of whether trailing new lines exist or not, would result in nothing printed.
At the OFS spot, you can delimit with any string combo you like instead of being constrained by tr, which has inconsistent behavior. For instance:
gtr '\012' '高' # UTF8 高 = \351\253\230 = xE9 xAB x98
on BSD tr, \n will get replaced by the multi-byte character properly: 1高2高3高. But on GNU tr, only the leading byte of the character is kept, resulting in
1 \351 2 \351 . . .
For Unicode equivalence classes, BSD tr works as expected, while gtr '[=高=]' '\v' results in
gtr: ?\230: equivalence class operand must be a single character
and if you attempt equivalence classes with an arbitrary non-ASCII byte, BSD tr does nothing, while GNU tr will gladly oblige, even if it means slicing straight through UTF-8-compliant characters:
g3bn 77138 | (g)tr '[=\224=]' '\v'
bsd-tr : 77138=Koyote 코요태 KYT✜ 高耀太
gnu-tr : 77138=Koyote ?
?
태 KYT✜ 高耀太
I would do it the following way, using GNU AWK. Let test.txt content be
one
two
three
then
awk '{printf NR==1?"%s":"|%s", $0}' test.txt
output
one|two|three
Explanation: if it is the first line, print the line's content without a trailing newline; otherwise print | followed by the line's content. Note that I assumed test.txt has no trailing newline; if that is not the case, test this solution before applying it.
(tested in gawk 5.0.1)
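The same approach with the ternary parenthesized, which some awks are stricter about (sample input inlined here for the demo):

```shell
# First line prints as-is; every later line is prefixed with |
printf 'one\ntwo\nthree\n' | awk '{printf (NR==1 ? "%s" : "|%s"), $0}'
# one|two|three   (no trailing newline)
```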
Also you can try this with awk:
awk '{ORS = (NR%3 ? "|" : RS)} 1' file
one|two|three
% is the modulo operator and NR%3 ? "|" : RS is a ternary expression.
See Ed Morton's explanation here: https://stackoverflow.com/a/55998710/14259465
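The NR%3 test hardcodes the line count; a sketch that works for any number of lines keeps the separator in a variable instead:

```shell
# sep is empty on the first line and "|" afterwards
printf 'one\ntwo\nthree\n' | awk '{printf "%s%s", sep, $0; sep="|"} END{print ""}'
# one|two|three
```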
With GNU sed, you can pass the -z option so that the whole file is read as one string and line breaks become matchable; then all you need is to replace each newline except the one at the end of the string:
sed -z 's/\n\(.\)/|\1/g' test.txt
perl -0pe 's/\n(?!\z)/|/g' test.txt
perl -pe 's/\n/|/g if !eof' test.txt
Details:
s - substitution command
\n\(.\) - an LF char followed with any one char captured into Group 1 (so \n at the end of string won't get matched)
|\1 - a | char and the captured char
g - all occurrences.
The first perl command matches any LF char (\n) not at the end of string ((?!\z)) after slurping the whole file into a single string input (again, to make \n visible to the regex engine).
The second perl command replaces an LF char at the end of each line except the one at the end of file (eof).
To make the changes in place, add the -i option (mind that the first command is a GNU sed example):
sed -i -z 's/\n\(.\)/|\1/g' test.txt
perl -i -0pe 's/\n(?!\z)/|/g' test.txt
perl -i -pe 's/\n/|/g if !eof' test.txt

Extract the first part of a string before a double backslash in bash

I would like to get the substring before "\". My example is as follows:
from
-3\\0.3748
I would like to extract -3.
Split on the backslash:
$ echo '-3\\0.3748' | awk -F '\\' '{print $1}'
-3
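printf is safer than echo here, since some shells' echo rewrites backslashes; a pure-shell alternative using parameter expansion also works, trimming everything from the first backslash on (the variable s here is just an illustrative name for the question's string):

```shell
s='-3\\0.3748'               # the literal two-backslash string from the question
printf '%s\n' "${s%%\\*}"    # %% removes the longest suffix matching \ followed by anything
# -3
```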

awk or sed etc replace comma with | but where between quotes

I have a delimited file in which I'm trying to replace the commas with a pipe (|), except where a comma (and other text) is between quotes (").
I know that I can replace the commas using sed 's/,/|/g' filename, but I'm not sure how to make the text between quotes an exception to the rule, or whether it is even possible this easily.
As others recommended here, the best and safest approach is to read CSV as CSV with an appropriate module/library.
Anyway, if you want sed, here it is:
sed -i 's/|//g;y/,/|/;:r;s/\("[^"]*\)|\([^"]*"\)/\1,\2/g;tr' file.csv
Procedure:
First, it removes any existing pipes from the CSV, so the result is not corrupted.
Second, it transforms all commas into pipes.
Third, it "recovers" all quoted pipes back into commas, looping via the t r branch until no more substitutions are made.
Test:
$ cat file.csv
aaa,1,"what's up"
bbb,2,"this is pipe | in text"
ccc,3,"here is comma, in text"
ddd,4, ",, here a,r,e multi, commas,, ,,"
"e,e",5,first column
$ cat file.csv | sed 's/|//g;y/,/|/;:r;s/\("[^"]*\)|\([^"]*"\)/\1,\2/g;tr'
aaa|1|"what's up"
bbb|2|"this is pipe in text"
ccc|3|"here is comma, in text"
ddd|4| ",, here a,r,e multi, commas,, ,,"
"e,e"|5|first column
$ cat file.csv | sed 's/|//g;y/,/|/;:r;s/\("[^"]*\)|\([^"]*"\)/\1,\2/g;tr' | awk -F'|' '{ print NF }'
3
3
3
3
3
You can try this sed:
sed ':A;s/\([^"]*"[^"]*"\)\([^"]*\)\(,\)/\1\2|/;tA' infile
Using GNU awk, FPAT, and @Kubator's sample file:
$ awk '
BEGIN {
    FPAT="([^,]+)|( *\"[^\"]+\" *)"  # define the field pattern; notice the space before "
    OFS="|"                          # output field separator
}
{
    $1=$1                            # rebuild the record
}1' file # output
aaa|1|"what's up"
bbb|2|"this is pipe | in text"
ccc|3|"here is comma, in text"
ddd|4| ",, here a,r,e multi, commas,, ,,"
"e,e"|5|first column

Print line numbers starting at zero using awk

Can anyone tell me how to print line numbers starting at zero using awk?
Here is my input file stackfile2.txt
when I run the below awk command I get actual_output.txt
awk '{print NR,$0}' stackfile2.txt | tr " ", "," > actual_output.txt
whereas my expected output is file.txt
How do I print the line numbers starting with zero (0)?
NR starts at 1, so use
awk '{print NR-1 "," $0}'
Using awk: i starts at 0, and i++ increments i but returns the original value i held before being incremented.
awk '{print i++ "," $0}' file
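A minimal check of the post-increment behavior, with sample lines inlined in place of the question's file:

```shell
# i is uninitialized (treated as 0); i++ yields the old value, so numbering starts at 0
printf 'alpha\nbeta\n' | awk '{print i++ "," $0}'
# 0,alpha
# 1,beta
```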
Another option besides awk is nl, which allows -v for setting the starting value and -n <ln,rn,rz> for left-justified, right-justified, and right-justified with leading zeros. You can also include -s to set a field separator, such as -s "," for comma separation between the line numbers and your data.
In a Unix environment, this can be done as
cat <infile> | ...other stuff... | nl -v 0 -n rz
or simply
nl -v 0 -n rz <infile>
Example:
echo "Here
are
some
words" > words.txt
cat words.txt | nl -v 0 -n rz
Out:
000000 Here
000001 are
000002 some
000003 words
If Perl is an option, you can try this:
perl -ne 'printf "%s,$_", $.-1' file
$_ is the current line (including its trailing newline)
$. is the line number