How to extract the first column from a tsv file? - awk

I have a file containing some data and I want to use only the first column as stdin for my script, but I'm having trouble extracting it.
I tried using this
awk -F"\t" '{print $1}' inputs.tsv
but it only shows the first letter of the first column. I tried some other things but it either shows the entire file or just the first letter of the first column.
My file looks something like this:
Harry_Potter 1
Lord_of_the_rings 10
Shameless 23
....

You can use cut which is available on all Unix and Linux systems:
cut -f1 inputs.tsv
You don't need to specify the -d option because tab is the default delimiter. From man cut:
-d delim
Use delim as the field delimiter character instead of the tab character.
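Assuming the file really is tab-separated (the sample above renders the separator as spaces), this should print:
$ cut -f1 inputs.tsv
Harry_Potter
Lord_of_the_rings
Shameless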
As Benjamin has rightly stated, your awk command is indeed correct. The shell passes the literal two characters \t as the argument, and awk interprets that as a tab, while other commands like cut may not.
Not sure why you are getting just the first character as the output.
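As a quick sanity check, feeding an explicitly tab-separated record to the same command does print the whole first column with a standard awk:
$ printf 'Harry_Potter\t1\n' | awk -F"\t" '{print $1}'
Harry_Potter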
You may want to take a look at this post:
Difference between single and double quotes in Bash

Try this (better to rely on a real CSV parser...):
csvcut -c 1 -f $'\t' file
Check csvkit
Output:
Harry_Potter
Lord_of_the_rings
Shameless
Note:
As @RomanPerekhrest said, you should fix your broken sample input (we saw spaces where tabs are expected...)

Related

Parse strings within quotations

I have a log file that includes lines with the pattern as below. I want to extract the two strings within the quotations and write them to another file, each one in a separate column. (Not all lines have this pattern, but these specific lines come sequentially.)
Input
(multiple lines of header)
Of these, 0 are new, while 1723332 are present in the base dataset.
Warning: Variants 'Variant47911' and 'Variant47910' have the same position.
Warning: Variants 'exm2254099' and 'exm12471' have the same position.
Warning: Variants 'newrs140234726' and 'exm15862' have the same position.
Desired output:
Variant47911 Variant47910
exm2254099 exm12471
newrs140234726 exm15862
This retrieves the lines, but I do not know how to specify the strings that need to be printed.
awk '/Warning: Variants '*'/ Input
Using the single quote as a field delimiter should get you most of the way there, and then you have to have a way to uniquely identify the lines you want to match. Below works for the sample you gave, but might have to be tweaked depending on the lines from the file that we're not seeing.
$ awk -v q="'" 'BEGIN {FS=q; OFS="\t"} /Warning: Variants/ && NF==5 {print $2, $4}' file
Variant47911 Variant47910
exm2254099 exm12471
newrs140234726 exm15862
This might work for you (GNU sed):
sed -En "/Variant/{s/[^']*'([^']*)'[^']*/\1\t/g;T;s/.$//p}" file
For all lines that contain Variant, remove everything except the text between single quotes and tab-separate the results.

Extract Regex Pattern For Each Line - Leave Blank Line If No Pattern Exists

I am working with the following input:
"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"
I need to be able to extract both the phone number and email of each line into separate files. However, both values don't always appear in the same field - they will always be prefaced with "phone": or "email":, but they may be in the first, second, third or even twentieth field.
I have tried cobbling together solutions in sed and awk to remove everything up until "phone" and then everything after the following comma, but this does not work as desired. It also means that, if "phone" and/or "email" do not exist, the line is not changed at all.
I need a solution that will give me an output with the phone value of each line in one file, and the email value in another. HOWEVER, if no phone or email value exists, a blank line in the output needs to be in place.
Any ideas?
This might work for you (GNU sed):
sed -Ene 'h;/.*"phone":([^,]*).*/!z;s//\1/;w phoneFile' -e 'g;/.*"email":([^,]*).*/!z;s//\1/;w emailFile' file
Make a copy of line.
If the line does not contain a phone number empty the line, otherwise remove everything but the phone number.
Write the result to the phone number file.
Replace the current pattern space by the copy of the original line.
Repeat as above for an email address.
N.B. My first attempt used s/.*// instead of z to empty the line, which worked but should not have. If the line contained no phone/email, that substitution should have reset the default (last-used) regexp, and the second substitution should then have objected that the regexp did not contain a back reference. However, the second substitution worked in either case.
After fixing your file to be valid json and adding an extra line missing the phone attribute so we can test more of your requirements:
$ cat file
{"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"}
{"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"city":"Marshall","gender":"female","email":"foo.bar#gmail.com"}
you can do whatever you like with the data:
$ jq -r '.email // ""' file
mortina.curabia#gmail.com
foo.bar#gmail.com
$
$ jq -r '.phone // ""' file
549-287-5287
$
As long as it doesn't contain embedded newlines, you can use sed 's/.*/{&}/' file to convert the input in your question to valid json as in my answer:
$ cat file
"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"
"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"city":"Marshall","gender":"female","email":"foo.bar#gmail.com"
$ sed 's/.*/{&}/' file
{"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"}
{"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"city":"Marshall","gender":"female","email":"foo.bar#gmail.com"}
$ sed 's/.*/{&}/' file | jq -r '.email // ""'
mortina.curabia#gmail.com
foo.bar#gmail.com
but I'm betting you started out with valid json and removed the {} by mistake along the way so you probably just need to not do that.
Using grep
Try:
grep -o '"phone":"[0-9-]*"' < Input > phone.txt
grep -o '"email":"[^"]*"' < Input > email.txt
Demo:
$ echo '"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"' | grep -o '"phone":"[0-9-]*"'
"phone":"549-287-5287"
$ echo '"visit_date":{"$date":"2017-11-28T04:43:00.000Z"},"phone":"549-287-5287","city":"Marshall","gender":"female","email":"mortina.curabia#gmail.com"' | grep -o '"email":"[^"]*"'
"email":"mortina.curabia#gmail.com"
$
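Note that grep -o prints nothing for a line with no match, so on its own this does not produce the blank placeholder lines the question asks for. A rough awk sketch (using the same patterns; the substr offsets assume the 9-character "phone":" / "email":" prefixes, and it prints just the value without the key and quotes) that emits an empty line when the field is missing:
awk 'match($0, /"phone":"[0-9-]*"/) {print substr($0, RSTART+9, RLENGTH-10); next} {print ""}' Input > phone.txt
awk 'match($0, /"email":"[^"]*"/) {print substr($0, RSTART+9, RLENGTH-10); next} {print ""}' Input > email.txt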

Replace character except between pattern using grep -o or sed (or others)

In the following file I want to replace all of the ; characters with , except that, when there is a string (delimited by two "), the ; inside it should not be replaced.
Example:
Input
A;B;C;D
5cc0714b9b69581f14f6427f;5cc0714b9b69581f14f6428e;1;"5cc0714b9b69581f14f6427f;16a4fba8d13";xpto;
5cc0723b9b69581f14f64285;5cc0723b9b69581f14f64294;2;"5cc0723b9b69581f14f64285;16a4fbe3855";xpto;
5cc072579b69581f14f6428a;5cc072579b69581f14f64299;3;"5cc072579b69581f14f6428a;16a4fbea632";xpto;
Output
A,B,C,D
5cc0714b9b69581f14f6427f,5cc0714b9b69581f14f6428e,1,"5cc0714b9b69581f14f6427f;16a4fba8d13",xpto,
5cc0723b9b69581f14f64285,5cc0723b9b69581f14f64294,2,"5cc0723b9b69581f14f64285;16a4fbe3855",xpto,
5cc072579b69581f14f6428a,5cc072579b69581f14f64299,3,"5cc072579b69581f14f6428a;16a4fbea632",xpto,
For sed I have: sed 's/;/,/g' input.txt > output.txt but this would replace everything.
The regex for the " delimited string: \".*;.*\" .
(A regex for hexadecimal would be better -- something like: [0-9a-fA-F]+)
My problem is combining it all to make a grep -o / sed that replaces everything except for that pattern.
The file size is on the order of tens of GB (up to 99 GB), so performance is important.
Any ideas are appreciated.
sed is for doing simple s/old/new on individual strings. grep is for doing g/re/p. You're not trying to do either of those tasks so you shouldn't be considering either of those tools. That leaves the other standard UNIX tool for manipulating text - awk.
You have a ;-separated CSV that you want to make ,-separated. That's simply:
$ awk -v FPAT='[^;]*|"[^"]+"' -v OFS=',' '{$1=$1}1' file
A,B,C,D
5cc0714b9b69581f14f6427f,5cc0714b9b69581f14f6428e,1,"5cc0714b9b69581f14f6427f;16a4fba8d13",xpto,
5cc0723b9b69581f14f64285,5cc0723b9b69581f14f64294,2,"5cc0723b9b69581f14f64285;16a4fbe3855",xpto,
5cc072579b69581f14f6428a,5cc072579b69581f14f64299,3,"5cc072579b69581f14f6428a;16a4fbea632",xpto,
The above uses GNU awk for FPAT. See What's the most robust way to efficiently parse CSV using awk? for more details on parsing CSVs with awk.
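If FPAT is new to you, a quick way to see how it tokenizes a line is to print each field it produces (a throwaway example, GNU awk only):
$ echo '1;"a;b";2' | awk -v FPAT='[^;]*|"[^"]+"' '{for (i=1; i<=NF; i++) print i, $i}'
1 1
2 "a;b"
3 2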
If I understand your requirements correctly, one option would be a three-pass approach.
From your comment about hex, I'll assume nothing like # will appear in the input, so you can do (using GNU sed):
sed -E 's/("[^"]+);([^"]+")/\1#\2/g' original > transformed
sed -i 's/;/,/g' transformed
sed -i 's/#/;/g' transformed
The idea is to replace the ; within quotes by something else and write that to a new file, then replace all remaining ; by , and finally put the ; back in place within the same file (-i flag of sed).
The three passes can be combined into a single command:
sed -E 's/("[^"]+);([^"]+")/\1#\2/g;s/;/,/g;s/#/;/g' original > transformed
That said, there are probably plenty of CSV parsers which already handle quoted fields that you could use for the final use case, as I bet this is just an intermediate step for something else later in the chain.
From Ed Morton's comment: if you do it in one pass, you can use \n as the temporary separator, since there can't be a newline in the text when it is processed line by line.
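A sketch of that one-pass variant (GNU sed, which allows \n both in the s/// replacement and in y///):
sed -E 's/("[^"]+);([^"]+")/\1\n\2/g; y/;/,/; y/\n/;/' original > transformed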
This might work for you (GNU sed):
sed -E ':a;s/^([^"]*("[^"]*"[^"]*)*"[^";]*);/\1\n/;ta;y/;/,/;y/\n/;/' file
Replace ;'s inside double quotes with newlines, transpose ;'s to ,'s and then transpose newlines to ;'s.

awk to remove the 5th column from the end of N columns with a fixed delimiter

I have a file with N columns.
I want to remove the 5th column from the end of the N columns.
The delimiter is "|".
I tested with a simple example as shown below:
bash-3.2$ echo "1|2|3|4|5|6|7|8" | nawk -F\| '{print $(NF-4)}'
4
Expected result:
1|2|3|5|6|7|8
How should I change my command to get the desired output?
If I understand you correctly, you want to use something like this:
sed -E 's/\|[^|]*((\|[^|]*){4})$/\1/'
This matches a pipe character \| followed by any number of non-pipe characters [^|]*, then captures 4 more of the same pattern ((\|[^|]*){4}). The $ at the end matches the end of the line. The first part of the match (i.e. the fifth field from the end) is dropped.
Testing it out:
$ sed -E 's/\|[^|]*((\|[^|]*){4})$/\1/' <<<"1|2|3|4|5|6|7"
1|2|4|5|6|7
You could achieve the same thing using GNU awk with gensub but I think that sed is the right tool for the job in this case.
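For the record, a gensub() sketch of the same substitution (GNU awk; same regex as above, with the pipe wrapped in a bracket expression so it is taken literally):
$ echo "1|2|3|4|5|6|7" | gawk '{print gensub(/[|][^|]*(([|][^|]*){4})$/, "\\1", 1)}'
1|2|4|5|6|7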
If your version of sed doesn't support extended regex syntax with -E, you can modify it slightly:
sed 's/|[^|]*\(\(|[^|]*\)\{4\}\)$/\1/'
In basic regex mode, pipes are interpreted literally, but the parentheses for capture groups and the curly braces need to be escaped.
AWK is your friend:
Sample Input
A|B|C|D|E|F|G|H|I
A|B|C|D|E|F|G|H|I|A
A|B|C|D|E|F|G|H|I|F|E|D|O|R|Q|U|I
A|B|C|D|E|F|G|H|I|E|O|Q
A|B|C|D|E|F|G|H|I|X
A|B|C|D|E|F|G|H|I|J|K|L
Script
awk 'BEGIN{FS="|";OFS="|"}
{$(NF-5)="";sub(/\|\|/,"|");print}' file
Sample Output
A|B|C|E|F|G|H|I
A|B|C|D|F|G|H|I|A
A|B|C|D|E|F|G|H|I|F|E|O|R|Q|U|I
A|B|C|D|E|F|H|I|E|O|Q
A|B|C|D|F|G|H|I|X
A|B|C|D|E|F|H|I|J|K|L
What we did here
As you are aware, awk has special variables to store each field in the record, ranging from $1, $2 up to $(NF).
Excluding the 5th column from the last is as simple as:
Emptying the column, i.e. $(NF-5)=""
Removing from the record the consecutive | formed by the above step, i.e. sub(/\|\|/,"|")
Another alternative, using @sjsam's input file:
$ rev file | cut -d'|' --complement -f6 | rev
A|B|C|E|F|G|H|I
A|B|C|D|F|G|H|I|A
A|B|C|D|E|F|G|H|I|F|E|O|R|Q|U|I
A|B|C|D|E|F|H|I|E|O|Q
A|B|C|D|F|G|H|I|X
A|B|C|D|E|F|H|I|J|K|L
Not sure whether you want the 5th from the last or the 6th, but it's easy to adjust.
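If it is the 5th from the last that you want (as in the question's 1|2|3|4|5|6|7|8 example), just change the field number:
$ echo "1|2|3|4|5|6|7|8" | rev | cut -d'|' --complement -f5 | rev
1|2|3|5|6|7|8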
Thanks for the help and guidance.
Below is what I tested:
bash-3.2$ echo "1|2|3|4|5|6|7|8|9" | nawk 'BEGIN{FS="|";OFS="|"} {$(NF-4)="!";print}' | sed 's/|!//'
Output: 1|2|3|4|6|7|8|9
I further tested it on the file that I extracted from the system, and it worked fine.

Replace chars after column X

Let's say my data looks like this:
iqwertyuiop
and I want to replace every letter i after column 3 with a Z, so my output would look like this:
iqwertyuZop
How can I do this with sed or awk?
It's not clear what you mean by "column" but maybe this is what you want using GNU awk for gensub():
$ echo iqwertyuiop | awk '{print substr($0,1,3) gensub(/i/,"Z","g",substr($0,4))}'
iqwertyuZop
Perl is handy for this: you can assign to a substring
$ echo "iiiiii" | perl -pe 'substr($_,3) =~ s/i/Z/g'
iiiZZZ
This would totally be ideal for the tr command, if only you didn't have the requirement that the first 3 characters remain untouched.
However, if you are okay using some bash tricks plus cut and paste, you can split the file into two parts and paste them back together afterwards:
paste -d'\0' <(cut -c-3 foo) <(cut -c4- foo | tr i Z)
The above uses paste to rejoin together the two parts of the file that get split with cut. The second section is piped through tr to translate i's to Z's.
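With the question's input saved in foo, that gives:
$ echo iqwertyuiop > foo
$ paste -d'\0' <(cut -c-3 foo) <(cut -c4- foo | tr i Z)
iqwertyuZop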
(1) Here's a short-and-simple way to accomplish the task using GNU sed:
sed -r -e ':a;s/^(...)([^i]*)i/\1\2Z/g;ta'
This entails looping (t), and so would not be as efficient as non-looping approaches. The above can also be written using escaped parentheses instead of unescaped ones, and so there is no real need for the -r option. Other implementations of sed should (in principle) be up to the task as well, but your mileage may vary.
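For the sample in the question:
$ echo iqwertyuiop | sed -r ':a;s/^(...)([^i]*)i/\1\2Z/g;ta'
iqwertyuZop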
(2) It's easy enough to use "old awk" as well:
awk '{s=substr($0,4);gsub(/i/,"Z",s); print substr($0,1,3) s}'
The most intuitive way would be to use awk:
awk 'BEGIN{FS="";OFS=FS}{for(i=4;i<=NF;i++){if($i=="i"){$i="Z"}}}1' file
FS="" splits the input line into fields of one character each. We iterate through characters/fields 4 to the end and replace i with Z.
The final 1 evaluates to true and make awk print the modified input line.
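Note that splitting on an empty FS (one character per field) is not specified by POSIX, so this relies on GNU awk (and some other awks) supporting it. For the sample:
$ echo iqwertyuiop | awk 'BEGIN{FS="";OFS=FS}{for(i=4;i<=NF;i++){if($i=="i"){$i="Z"}}}1'
iqwertyuZop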
With sed it looks less intuitive, but it is still possible:
sed -r '
h # Backup the current line in hold buffer
s/.{3}// # Remove the first three characters
s/i/Z/g # Replace all i by Z
G # Append the contents of the hold buffer to the pattern buffer (this adds a newline between them)
s/(.*)\n(.{3}).*/\2\1/ # Remove that newline ^^^ and assemble the result
' file