Delete string from line that matches regex with AWK - awk

I have a file that contains a lot of data like this, and I have to delete everything that matches this regex: [-]+\d+(.*)
Input:
zxczxc-6-9hw7w
qweqweqweqweqwe-18-8c5r6
asdasdasasdsad-11-br9ft
Output should be:
zxczxc
qweqweqweqweqwe
asdasdasasdsad
How can I do this with AWK?

sed might be easier...
$ sed -E 's/-+[0-9].*//' file
note that the .* already covers the +(.*) at the end of your regex
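Run against the sample input (assuming it is saved as file), it produces:
$ sed -E 's/-+[0-9].*//' file
zxczxc
qweqweqweqweqwe
asdasdasasdsad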

AFAIK awk doesn't support \d, so you could use [0-9] instead. Your regex is otherwise correct; you just need to put it in the appropriate awk function.
awk '{sub(/-+[0-9].*/,"")} 1' Input_file
You don't need the extra + sign after [0-9], as it is covered by the .*
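A quick check with the sample input (assuming it is saved as Input_file):
$ awk '{sub(/-+[0-9].*/,"")} 1' Input_file
zxczxc
qweqweqweqweqwe
asdasdasasdsad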

Generally, if you want to delete a string that matches a regular expression, all you need to do is substitute it with an empty string. The most straightforward solution is sed, as presented by karafka; the other solution uses awk, as presented by RavinderSingh13.
The overall syntax would look like this:
sed -e 's/ere//g' file
awk '{gsub(/ere/,"")}1' file
where ere is the regular expression. Note that I use g and gsub here to substitute all non-overlapping matches.
Due to the nature of the regular expression in the OP, i.e. it ends with .*, the g can be dropped. It also allows us to write a different awk solution which works with field separators:
awk -F '-+[0-9]' '{print $1}' file
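For example, with the same sample input:
$ awk -F '-+[0-9]' '{print $1}' file
zxczxc
qweqweqweqweqwe
asdasdasasdsad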

Related

awk command works, but not in openwrt's awk

Works here: awk.js.org
but not in openwrt's awk, which returns the error message:
awk: bad regex '^(server=|address=)[': Missing ']'
Hello everyone!
I'm trying to use an awk command I wrote which is:
'!/^(server=|address=)[/][[:alnum:]][[:alnum:]-.]+([/]|[/]#)$|^#|^\s*$/ {count++}; END {print count+0}'
It counts invalid lines in a DNS blocklist (oisd in this case). Input would be, e.g.:
server=/0--foodwarez.da.ru/anyaddress.1.1.1
serverspellerror=/0-000.store/
server=/0-24bpautomentes.hu/
server=/0-29.com/
server=/0-day.us/
server=/0.0.0remote.cryptopool.eu/
server=/0.0mail6.xmrminingpro.com/
server=/0.0xun.cryptopool.space/
Output for this should be "2" since there are two lines that don't match the criteria (correctly formed address, comments, or blank lines).
I've tried formatting the command every which way with [], but can't find anything that works. Does anyone have an idea what format/syntax/option needs adjusting?
Thanks!
To portably include - in a bracket expression it has to be the first or last character, otherwise it means a range, and \s is shorthand for [[:space:]] in only some awks. This will work in any POSIX awk:
$ awk '!/^(server=|address=)[/][[:alnum:]][[:alnum:].-]+([/]|[/]#)$|^#|^[[:space:]]*$/ {count++}; END {print count+0}' file
2
Per @tripleee's comment below, if your awk is broken such that a / inside a bracket expression isn't treated as literal, then you may need this instead:
$ awk '!/^(server=|address=)\/[[:alnum:]][[:alnum:].-]+(\/|\/#)$|^#|^[[:space:]]*$/ {count++}; END {print count+0}' file
2
but get a new awk, e.g. GNU awk, as who knows what other surprises the one you're using may have in store for you!
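As a quick check of the placement rule in any POSIX awk (the - is literal only because it is the last character in the bracket expression):
$ echo 'a-b' | awk '/^[[:alnum:].-]+$/'
a-b
$ echo 'a_b' | awk '/^[[:alnum:].-]+$/'    # no output: _ is not in the set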
'!/^(server=|address=)[/][[:alnum:]][[:alnum:]-.]+([/]|[/]#)$|^#|^\s*$/ {count++}; END {print count+0}'
- has special meaning inside [ and ]: it is used to denote a range, e.g. [A-Z] means an uppercase ASCII letter. Use the \ escape sequence to make it a literal dash. Let file.txt content be
server=/0--foodwarez.da.ru/anyaddress.1.1.1
serverspellerror=/0-000.store/
server=/0-24bpautomentes.hu/
server=/0-29.com/
server=/0-day.us/
server=/0.0.0remote.cryptopool.eu/
server=/0.0mail6.xmrminingpro.com/
server=/0.0xun.cryptopool.space/
then
awk '!/^(server=|address=)[/][[:alnum:]][[:alnum:]\-.]+([/]|[/]#)$|^#|^\s*$/ {count++}; END {print count+0}' file.txt
gives output
2
You might also consider replacing \s with [[:space:]] in order to maintain consistency.
(tested in GNU Awk 5.0.1)

print dir path after matching its name with wildcards

I have been stuck on this little puzzle. Thank you in advance for helping.
I have a directory path and would like to print its path after a match,
like
echo /Users/user/Documents/terraform-shared-infra/services/history_book_test | awk -F "terraform-|tfRepo-" '{print $(NF)}'
echo /Users/user/Documents/tfRepo-shared-infra/services/history_book_test | awk -F "terraform-|tfRepo-" '{print $(NF)}'
output:
shared-infra/services/history_book_test
shared-infra/services/history_book_test
When I try to add a wildcard, as in terraform-*, it doesn't work.
I would like to print the path after a match with terraform-* or tfRepo*.
Like:
services/history_book_test
services/history_book_test/../.. and so on.
with sed:
echo /Users/user/Documents/terraform-shared-infra/services/history_book_test | sed 's|.*terraform.\([^/]*\)/.*|\1|'
shared-infra
I have tried different ways with awk and grep but no luck. Any leads or ideas that I can try, please?
Thank you.
You're confusing regular expressions with globbing patterns. Both have wildcards and look similar but have quite different meanings and uses. Regexps are used by text processing tools like grep, sed, and awk to match text in input strings while globbing patterns are used by shells to match file/directory names. For example, foo* in a regexp means fo followed by zero or more additional os while foo* in a globbing pattern means foo followed by zero or more other characters (which in a regexp would be foo.*). So never just say "wildcard", say "regexp wildcard" or "globbing wildcard" for clarity.
This might be what you're trying to do, using a sed that has a -E arg to enable EREs, e.g. GNU or BSD sed:
$ sed -E 's:.*/(terraform|tfRepo)-[^/]*/::' file
services/history_book_test
services/history_book_test
or using any awk:
$ awk '{sub(".*/(terraform|tfRepo)-[^/]*/","")} 1' file
services/history_book_test
services/history_book_test
Regarding your sed attempt, sed 's|.*terraform.\([^/]*\)/.*|\1|': if you're going to use a char other than / for the delimiters, don't use a char like | that's a regexp or backreference metachar, as at best that obfuscates your code; pick some char that's always literal instead, e.g. :.
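For example, feeding the two paths from the question straight in (echo stands in for reading from a file here):
$ echo /Users/user/Documents/terraform-shared-infra/services/history_book_test |
  sed -E 's:.*/(terraform|tfRepo)-[^/]*/::'
services/history_book_test
$ echo /Users/user/Documents/tfRepo-shared-infra/services/history_book_test |
  awk '{sub(".*/(terraform|tfRepo)-[^/]*/","")} 1'
services/history_book_test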

Replace character except between pattern using grep -o or sed (or others)

In the following file I want to replace all the ; by , with the exception that, when there is a string (delimited with two "), it should not replace the ; inside it.
Example:
Input
A;B;C;D
5cc0714b9b69581f14f6427f;5cc0714b9b69581f14f6428e;1;"5cc0714b9b69581f14f6427f;16a4fba8d13";xpto;
5cc0723b9b69581f14f64285;5cc0723b9b69581f14f64294;2;"5cc0723b9b69581f14f64285;16a4fbe3855";xpto;
5cc072579b69581f14f6428a;5cc072579b69581f14f64299;3;"5cc072579b69581f14f6428a;16a4fbea632";xpto;
output
A,B,C,D
5cc0714b9b69581f14f6427f,5cc0714b9b69581f14f6428e,1,"5cc0714b9b69581f14f6427f;16a4fba8d13",xpto,
5cc0723b9b69581f14f64285,5cc0723b9b69581f14f64294,2,"5cc0723b9b69581f14f64285;16a4fbe3855",xpto,
5cc072579b69581f14f6428a,5cc072579b69581f14f64299,3,"5cc072579b69581f14f6428a;16a4fbea632",xpto,
For sed I have: sed 's/;/,/g' input.txt > output.txt but this would replace everything.
The regex for the "-delimited string would be: \".*;.*\".
(A regex for hexadecimal would be better -- something like: [0-9a-fA-F]+)
My problem is combining it all to make a grep -o / sed that replaces everything except for that pattern.
The file size is on the order of two-digit GB (99 GB max), so performance is important.
Any ideas are appreciated.
sed is for doing simple s/old/new on individual strings. grep is for doing g/re/p. You're not trying to do either of those tasks so you shouldn't be considering either of those tools. That leaves the other standard UNIX tool for manipulating text - awk.
You have a ;-separated CSV that you want to make ,-separated. That's simply:
$ awk -v FPAT='[^;]*|"[^"]+"' -v OFS=',' '{$1=$1}1' file
A,B,C,D
5cc0714b9b69581f14f6427f,5cc0714b9b69581f14f6428e,1,"5cc0714b9b69581f14f6427f;16a4fba8d13",xpto,
5cc0723b9b69581f14f64285,5cc0723b9b69581f14f64294,2,"5cc0723b9b69581f14f64285;16a4fbe3855",xpto,
5cc072579b69581f14f6428a,5cc072579b69581f14f64299,3,"5cc072579b69581f14f6428a;16a4fbea632",xpto,
The above uses GNU awk for FPAT. See What's the most robust way to efficiently parse CSV using awk? for more details on parsing CSVs with awk.
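If your awk doesn't have FPAT, a rough character-by-character sketch works in any POSIX awk (assuming the quotes are balanced on every line, as in the sample; it will be noticeably slower on a file this size):
awk '{
  out = ""; inq = 0
  for (i = 1; i <= length($0); i++) {
    c = substr($0, i, 1)
    if (c == "\"") inq = !inq            # toggle in/out of a quoted field
    else if (c == ";" && !inq) c = ","   # only convert separators outside quotes
    out = out c
  }
  print out
}' file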
If I understand your requirements correctly, one option would be a three-pass approach.
From your comment about hex, I'll assume nothing like # will appear in the input, so you can do (using GNU sed):
sed -E 's/("[^"]+);([^"]+")/\1#\2/g' original > transformed
sed -i 's/;/,/g' transformed
sed -i 's/#/;/g' transformed
The idea is to replace the ; within quotes by something else and write that to a new file, then replace all remaining ; with commas, and finally restore the ; within the same file (-i flag of sed).
The three passes can be combined into a single command:
sed -E 's/("[^"]+);([^"]+")/\1#\2/g;s/;/,/g;s/#/;/g' original > transformed
That said, there are probably plenty of CSV parsers which already handle quoted fields that you could use for the final use case, as I bet this is just an intermediate step for something else later in the chain.
From Ed Morton's comment: if you do it in one pass, you can use \n as the replacement separator, since there can't be a newline in the text when it is processed line by line.
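That would look something like this (a sketch with GNU sed; like the three-pass version it assumes at most one embedded ; per quoted field, as in the sample):
sed -E 's/("[^"]+);([^"]+")/\1\n\2/g; s/;/,/g; s/\n/;/g' original > transformed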
This might work for you (GNU sed):
sed -E ':a;s/^([^"]*("[^"]*"[^"]*)*"[^";]*);/\1\n/;ta;y/;/,/;y/\n/;/' file
Replace ;'s inside double quotes with newlines, transpose ;'s to ,'s and then transpose newlines to ;'s.

Regex match using awk for a line starting with a non conforming string

I have a huge file, and I want to copy from it only the lines starting with
,H|756|F:BRN\
but when I do
awk '$1 ~ /^ ,H|756|F:BRN\/' file_1.txt > file_2.txt
I get:
awk: line 1: runaway regular expression /^ ,H|756|F ...
The metacharacters in the regex need to be properly escaped to achieve what you are trying to do. In the Extended Regular Expressions (ERE) that awk supports by default, | has a special meaning (alternation), so you need to escape it to deprive it of its special meaning and treat it literally; the same applies to the \ character:
awk '/^,H\|756\|F:BRN\\/' file
Also, you don't need the explicit ~ match on $1. For a simple case like this, matching a line that starts with a string, the bare /regex/ approach is easier.
If the file is "huge", you can consider grep or ack or ag, which may bring you better performance.
grep '^,H|756|F:BRN\\' input > output
grep uses BRE by default, so you don't have to escape the pipe |, but you do have to escape the trailing backslash.
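Another way to sidestep the escaping entirely is a literal comparison with awk's index() (a sketch; it assumes the line really starts at column 1 with no leading space, as in the regex above):
awk 'index($0, ",H|756|F:BRN\\") == 1' file_1.txt > file_2.txt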

Need help AWK script

Could you let me know how to print the "user.%" string in the text below with awk?
The value of 'user' is not fixed, and the number of strings in '( )' is not fixed.
start user1.table% NOT (%OLD, %2016%) user.% another strings
UPDATE
It is the basis of SQL processing. $2 means schema.table, but here the user can use '%' and can also exclude with the NOT keyword; that part ends with ')'. The next token is a second schema.table, and that is the one I want to catch.
I think I should parse the string after ')' with a regular expression, but I failed.
Regular expression:
[)]\s+(\S+)
The above expression can be used to catch that string, I guess.
How can I apply this in an awk script (not a one-liner)?
If the structure of the query stays the same, you can use this:
awk -F'[).]' '{print $3".%"}'
I'm using the closing parenthesis or the literal dot as the delimiter. That way, the value of interest is in field 3.
While this is simple, it leaves some whitespace in front of user. We can enhance the field delimiter regex to fix this:
awk -F')[[:space:]]*|[.]' '{print $3".%"}'
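For example (a quick check with the sample line from the question):
$ echo 'start user1.table% NOT (%OLD, %2016%) user.% another strings' |
  awk -F')[[:space:]]*|[.]' '{print $3".%"}'
user.%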
Btw, you may use this sed command alternatively:
sed 's/.*)[[:space:]]*\([^.]*\).*/\1.%/'
or if you have GNU grep, use this:
grep -oP '\)\s*\K[^%]*%'
Try this (GNU awk):
awk '{match($0, /[)] +([^ ]+)/, var);print var[1];}'
You need to call match() first; the three-argument form of match() is a GNU awk extension.
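Since you asked for a script rather than a one-liner, the same logic can go into a file, say extract.awk (a hypothetical name), and be run with awk -f extract.awk input.txt; the three-argument match() still requires GNU awk:
# extract.awk: print the token that follows the closing parenthesis
{
    if (match($0, /[)] +([^ ]+)/, var))   # capture group 1 into var[1]
        print var[1]
}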
Given your posted sample input, all you need is:
awk '{print $6}'
e.g.:
$ echo 'start user1.table% NOT (%OLD, %2016%) user.% another strings' |
awk '{print $6}'
user.%
If that doesn't work for you, then your posted sample input isn't representative enough of your real input, so edit your question to include a few lines of truly representative sample input and the expected output for that input.