Git URL - Pull out substring via Shell (awk & sed)? - awk

I have got the following URL:
https://xcg5847#git.rz.bankenit.de/scm/smat/sma-mes-test.git
I need to pull out sma-mes-test and smat:
git config --local remote.origin.url|sed -n 's#.*/\([^.]*\)\.git#\1#p'
sma-mes-test
This works. But I also need the project name, which is smat
I am not really familiar with complex regex and sed; I was able to find the command above in another post here. Does anyone know how I can extract the smat value here?

With your shown samples, please try the following awk code. A simple explanation: set the field separator(s) to / and .git for all lines, and in the main program print the 3rd-last and 2nd-last fields of the line.
your_git_command | awk -F'/|\\.git' '{print $(NF-2),$(NF-1)}'

Your sed is pretty close. You can just extend it to capture 2 values and print them:
git config --local remote.origin.url |
sed -E 's~.*/([^/]+)/([^.]+)\.git$~\1 \2~'
smat sma-mes-test
If you want to populate shell variables with these 2 values then use this read command in bash:
read v1 v2 < <(git config --local remote.origin.url |
sed -E 's~.*/([^/]+)/([^.]+)\.git$~\1 \2~')
# check variable values
declare -p v1 v2
declare -- v1="smat"
declare -- v2="sma-mes-test"

Using sed
$ sed -E 's#.*/([^/]*)/#\1 #' input_file
smat sma-mes-test.git

I would harness GNU AWK for this task following way, let file.txt content be
https://xcg5847#git.rz.bankenit.de/scm/smat/sma-mes-test.git
then
awk 'BEGIN{FS="/"}{sub(/\.git$/,"",$NF);print $(NF-1),$NF}' file.txt
gives output
smat sma-mes-test
Explanation: I instruct GNU AWK that the field separator is the slash character, then I remove .git (observe that . is escaped to mean a literal dot) anchored to the end ($) of the last field ($NF), then I print the 2nd-from-last field ($(NF-1)) and the last field ($NF), which are separated by a space, the default output field separator. If you wish to use another character for that purpose, set OFS (output field separator) in BEGIN (see the sketch below). If you want to know more about NF then read 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR
(tested in gawk 4.2.1)
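For example, a minimal sketch (using the same file.txt) that sets OFS in BEGIN so the two fields are joined with a slash instead of a space:
awk 'BEGIN{FS="/"; OFS="/"}{sub(/\.git$/,"",$NF); print $(NF-1),$NF}' file.txt
smat/sma-mes-test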

Why not sed 's!.*/\(.*/.*\)!\1!'?

string=$(git config --local remote.origin.url | tail -c 22)
var1=$(echo "${string}" | cut -d'/' -f1)
var2=$(echo "${string}" | cut -d'/' -f2 | sed 's#\.git##')
If you have multiple urls with variable lengths, this will not work, but if you only have the one, it will.
var1=smat
var2=sma-mes-test
If I did have something variable, personally I would replace all of the forward slashes with newlines, throw them into a file, and then extract the last and second-last lines with ed, which would give me the two last segments of the url.
Regular expressions literally give me a migraine headache, but as long as I can get everything on its own line, I can quite easily bypass the need for them entirely.
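As a rough sketch of that slashes-to-newlines idea, using tr and tail instead of ed (assuming the URL still comes from git config as above):
url=$(git config --local remote.origin.url)   # hypothetical variable holding the URL
printf '%s\n' "$url" | tr '/' '\n' | tail -n 2
smat
sma-mes-test.git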

Related

Delete everything before first pattern match with sed/awk

Let's say I have a line looking like this:
/Users/random/354765478/Tests/StoreTests/Base64Tests.swift
In this example, I would like to get the result:
Tests/StoreTests/Base64Tests.swift
How can I remove everything before the first pattern match (either Sources or Tests) using sed or awk?
I am using sed 's/^.*\(Tests.*\).*$/\1/' right now but it's failing:
echo '/Users/random/354765478/Tests/StoreTests/Base64Tests.swift' | sed 's/^.*\(Tests\)/\1/'
Tests.swift
Here's another example using Sources (which seems to work):
echo '/Users/random/741672469/Sources/Store/StoreDataSource.swift' | sed 's/^.*\(Sources\)/\1/'
Sources/Store/StoreDataSource.swift
I would like to remove everything before the first, and not the last, Sources or Tests pattern match.
Any help would be appreciated!
How can I remove everything before the first pattern match (either Sources or Tests)?
It's easier to use grep -o here:
grep -Eo '(Sources|Tests)/.*' file
Tests/StoreTests/Base64Tests.swift
Sources/Store/StoreDataSource.swift
# where input file is
cat file
/Users/random/354765478/Tests/StoreTests/Base64Tests.swift
/Users/random/741672469/Sources/Store/StoreDataSource.swift
Breakdown:
Regex pattern (Sources|Tests)/.* matches any text that starts with Sources/ or Tests/ up to the end of the line.
-E: enables extended regex mode
-o: prints only matched text instead of full line
Alternatively you may use this awk as well:
awk 'match($0, /(Sources|Tests)\/.*/) {
print substr($0, RSTART)
}' file
Tests/StoreTests/Base64Tests.swift
Sources/Store/StoreDataSource.swift
Or this sed:
sed -E 's~.*/((Sources|Tests)/.*)~\1~' file
Tests/StoreTests/Base64Tests.swift
Sources/Store/StoreDataSource.swift
With your shown samples, please try the following GNU grep. This will look for the very first occurrence of Sources/ OR Tests/ and then print from there to the end of the line.
grep -oP '^.*?\/\K(Sources|Tests)\/.*' Input_file
Using sed
$ sed -E 's~([^/]*/)+((Tests|Sources).*)~\2~' input_file
Tests/StoreTests/Base64Tests.swift
would like to remove everything before the first, and not the last,
Sources or Tests pattern match.
The first thing is to understand the reason for that. You are using
sed 's/^.*\(Tests.*\).*$/\1/'
observe that * is greedy, i.e. it matches as much as possible, therefore it will always pick the last Tests. If it were non-greedy it would find the first Tests, but sed does not support non-greedy matching. If you are using Linux there is a good chance that you have the perl command, which does support it. Let file.txt content be
/Users/random/354765478/Tests/StoreTests/Base64Tests.swift
then
perl -p -e 's/^.*?(Tests.*)$/\1/' file.txt
gives output
Tests/StoreTests/Base64Tests.swift
Explanation: -p -e means engage sed-like mode. Alterations made to the regular expression: brackets no longer require escapes, the first .* (greedy) is changed to .*? (non-greedy), and the last .* is deleted as superfluous (observe that the capturing group will always extend to the end of the line).
(tested in perl 5, version 30, subversion 0)

How can I search for a dot and a number in sed or awk and prefix the number with a leading zero?

I am trying to modify the name of a large number of files, all of them with the following structure:
4.A.1 Introduction to foo.txt
2.C.3 Lectures on bar.pdf
3.D.6 Processes on baz.mp4
5.A.8 History of foo.txt
And I want to add a leading zero to the last digit:
4.A.01 Introduction to foo.txt
2.C.03 Lectures on bar.pdf
3.D.06 Processes on baz.mp4
5.A.08 History of foo.txt
At first I am trying to get the new names with sed (FreeBSD implementation):
ls | sed 's/\.[0-9]/0&/'
But I get the zero before the .
Note: replacing the second dot would also work. I am also open to using awk.
While it may have worked for you here, in general slicing and dicing ls output is fragile, whether using sed or awk or anything else. Fortunately one can accomplish this robustly in plain old POSIX sh using globbing and fancy-pants parameter expansions:
for f in [[:digit:]].[[:alpha:]].[[:digit:]]\ ?*; do
# $f = "[[:digit:]].[[:alpha:]].[[:digit:]] ?*" if no files match.
if [ "$f" != '[[:digit:]].[[:alpha:]].[[:digit:]] ?*' ]; then
tail=${f#*.*.} # filename sans "1.A." prefix
head=${f%"$tail"} # the "1.A." prefix
mv "$f" "${head}0${tail}"
fi
done
(EDIT: Filter out filenames that don't match desired format.)
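To see what those parameter expansions do, here is a quick worked example with one of the sample names:
f='4.A.1 Introduction to foo.txt'
tail=${f#*.*.}           # "1 Introduction to foo.txt" (shortest prefix matching *.*. removed)
head=${f%"$tail"}        # "4.A." (the tail stripped from the end)
echo "${head}0${tail}"   # 4.A.01 Introduction to foo.txt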
This pipeline should work for you:
ls | sed 's/\.\([0-9]\)/.0\1/'
The sed command here will capture the digit and replace it with a preceding 0.
Here, \1 references the first (and in this case only) capture group - the parenthesized expression.
I am also open to using awk.
Let file.txt content be:
4.A.1 Introduction to foo.txt
2.C.3 Lectures on bar.pdf
3.D.6 Processes on baz.mp4
5.A.8 History of foo.txt
then
awk 'BEGIN{FS=OFS="."}{$3="0" $3;print}' file.txt
outputs
4.A.01 Introduction to foo.txt
2.C.03 Lectures on bar.pdf
3.D.06 Processes on baz.mp4
5.A.08 History of foo.txt
Explanation: I set the dot (.) as both the field separator and the output field separator, then for every line I add a leading 0 to the third column ($3) by concatenating 0 and said column. Finally I print the altered line.
(tested in GNU Awk 5.0.1)
This might work for you (GNU sed):
sed 's/^\S*\./&0/' file
This appends 0 after the last . in the first string of non-space characters in each line.
In case it helps somebody else, as an alternative to @costaparas's answer:
ls | sed -E -e 's/^([0-9][.][A-Z][.])/\10/' > files
To then create the script that renames the files:
cat files | awk '{printf "mv \"%s\" \"%s\"\n", $0, $0}' | sed 's/\.0/\./' > movefiles.sh

How do I add characters between strings in bash?

How can I add it using the sed command?
[Before revision]
BONDING_OPTS="mode=1 miimon=200 primary=ens192"
[After revision]
BONDING_OPTS="mode=1 miimon=200 primary=ens192 primary=eth0"
This is how far I have got:
sed "/BOND/s/$/primary=eth0/g" /etc/sysconfig/network-scripts/ifcfg-bond0
Output:
BONDING_OPTS="mode=1 miimon=200 primary=ens192"primary=eth0
Thanks for showing your effort in your post. Could you please try the following.
awk '/BOND/{sub(/\"$/," primary=eth0&")} 1' Input_file
In case you want to save output into Input_file itself then try:
awk '/BOND/{sub(/\"$/," primary=eth0&")} 1' Input_file > temp && mv temp Input_file
With sed:
sed '/BOND/s/\"$/ primary=eth0&/' Input_file
Please use either sed -i (to save output into Input_file itself) OR use sed -i.bak (to save output into Input_file itself taking a backup of existing Input_file).
sed '/BOND/s/"$/ primary=eth0"/'
Use single quotes; a string inside double quotes is subject to shell interpretation before it is passed to sed, and that often leads to mistakes.
See also: Difference between single and double quotes in Bash
Since you need to insert before something at the end of the line, you need to put that something before the $ anchor - which is " in this case.
In case you have a non-trivial string or regex before the anchor, you can use & to get back the matched string in the replacement section instead of duplicating the text.
sed '/BOND/s/"$/ primary=eth0&/' ip.txt
For in-place editing, see sed in-place flag that works both on Mac (BSD) and Linux
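As a hedged note, the one form that to my knowledge works on both GNU and BSD/macOS sed is attaching a backup suffix directly to -i, for example:
sed -i.bak '/BOND/s/"$/ primary=eth0"/' /etc/sysconfig/network-scripts/ifcfg-bond0
# edits the file in place, keeping the original as ifcfg-bond0.bak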

Replace chars after column X

Lets say my data looks like this
iqwertyuiop
and I want to replace all the letters i after column 3 with a Z, so my output would look like this:
iqwertyuZop
How can I do this with sed or awk?
It's not clear what you mean by "column" but maybe this is what you want using GNU awk for gensub():
$ echo iqwertyuiop | awk '{print substr($0,1,3) gensub(/i/,"Z","g",substr($0,4))}'
iqwertyuZop
Perl is handy for this: you can assign to a substring
$ echo "iiiiii" | perl -pe 'substr($_,3) =~ s/i/Z/g'
iiiZZZ
This would totally be ideal for the tr command, if only you didn't have the requirement that the first 3 characters remain untouched.
However, if you are okay using some bash tricks plus cut and paste, you can split the file into two parts and paste them back together afterwards:
paste -d'\0' <(cut -c-3 foo) <(cut -c4- foo | tr i Z)
The above uses paste to rejoin together the two parts of the file that get split with cut. The second section is piped through tr to translate i's to Z's.
(1) Here's a short-and-simple way to accomplish the task using GNU sed:
sed -r -e ':a;s/^(...)([^i]*)i/\1\2Z/g;ta'
This entails looping (t), and so would not be as efficient as non-looping approaches. The above can also be written using escaped parentheses instead of unescaped ones, so there is no real need for the -r option (see the sketch below). Other implementations of sed should (in principle) be up to the task as well, but your mileage may vary.
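For instance, the same loop spelled with escaped parentheses (BRE), so -r is not needed:
echo iqwertyuiop | sed -e ':a;s/^\(...\)\([^i]*\)i/\1\2Z/g;ta'
iqwertyuZop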
(2) It's easy enough to use "old awk" as well:
awk '{s=substr($0,4);gsub(/i/,"Z",s); print substr($0,1,3) s}'
The most intuitive way would be to use awk:
awk 'BEGIN{FS="";OFS=FS}{for(i=4;i<=NF;i++){if($i=="i"){$i="Z"}}}1' file
FS="" splits the input string by characters into fields. We iterate trough character/field 4 to end and replace i by Z.
The final 1 evaluates to true and make awk print the modified input line.
With sed it does not look very intuitive, but it is still possible:
sed -r '
h # Backup the current line in hold buffer
s/.{3}// # Remove the first three characters
s/i/Z/g # Replace all i by Z
G # Append the contents of the hold buffer to the pattern buffer (this adds a newline between them)
s/(.*)\n(.{3}).*/\2\1/ # Remove that newline ^^^ and assemble the result
' file

How to quote a shell variable in a TCL-expect string

I'm using the following awk command in an expect script to get the gateway for a particular destination
route | grep $dest | awk '{print $2}'
However the expect script does not like the $2 in the above statement.
Does anyone know of an alternative to awk to perform the same function as above? ie. output 2nd column.
You can use cut:
route | grep $dest | cut -d ' ' -f 2
That uses a space as the field delimiter and pulls out the second field.
To answer your Expect question, single quotes have no special meaning to the Tcl parser. You need to use braces to protect the body of the awk script:
route | grep $dest | awk {{print $2}}
And as awk can do what grep does, you can get away with one less process:
route | awk -v d=$dest {$0 ~ d {print $2}}
Before switching to another utility, check if changing the field separator works. Documentation for field separators in GNU Awk is here.
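For illustration only (this is not the format of route output, just a hypothetical colon-separated input), changing the separator looks like this:
awk -F':' '{print $2}' /etc/passwd   # second colon-separated field of each line
With route output, the default whitespace separator is already what you want, so plain awk '{print $2}' is enough there.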
sed is the best alternative to use. If you don't mind a dependency, Perl is also sufficient for the task.
Depending on the structure of your data, you can use cut, or use sed to do both the filtering and the printing of the second column (a sketch follows).
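A rough sketch of that sed variant (assuming space-separated columns and that $dest contains no regex metacharacters):
route | sed -n "/$dest/s/^[^ ]* *\([^ ]*\).*/\1/p"   # print the 2nd column of lines matching $dest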
Alternatively, you could use Perl:
perl -ne 'if(/foo/) { @_ = split(/:/); print $_[1]; }'
This will print the second token of each line containing foo, with : as the token separator.