Hi all.
I'm using tcsh (I know, it's not ideal). Anyway, I'm defining a variable as
set arqlink = `ls -aFl /usr/lib64/libelf* | awk '/->/{print $NF}' |
grep "libelf-0"`
but it doesn't work, and I don't know why, since
ls -aFl /usr/lib64/libelf* | awk '/->/{print $NF}' | grep "libelf-0"
works just fine.
thanks in advance.
best regards,
nc
By default tcsh has nonomatch unset.
nonomatch
If set, a Filename substitution or Directory stack substitution (q.v.) which
does not match any existing files is left untouched rather than
causing an error.
Set this on and your line will work.
set nonomatch
set arqlink = `ls -aFl /usr/lib64/libelf* | awk '/->/{print $NF}' | grep "libelf-0"`
Alternatively, use stat and you won't need to set nonomatch.
set arqlink = `stat -c %N /usr/lib64/libelf* | awk '/->/{print $NF}' | grep "libelf-0"`
Bottom line is, -F forces ls to append indicator characters like * to the filenames, which tcsh then treats as a failed match. You can also keep your original command and simply get rid of -F.
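To see what -F actually does, here is a throwaway sketch (the filenames are made up): it appends indicator characters to names, such as @ for symlinks and * for executables.

```shell
# Throwaway demo of ls -F indicator characters.
demo=$(mktemp -d)
cd "$demo"
touch target
ln -s target link
ls -F
# the symlink is listed as "link@", the plain file as "target"
```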
Remove the spaces on both sides of = and note the backticks (not single quotes) added on both sides of the command.
set arqlink=`ls -aFl /usr/lib64/libelf* | awk '/->/{print $NF}' |
grep "libelf-0"`
Related
What is the difference on Ubuntu between awk and awk -F? For example, to display the frequency of CPU core 0 we use the command
cat /proc/cpuinfo | grep -i "^cpu MHz" | awk -F ":" '{print $2}' | head -1
But why does it use awk -F? We could use awk without the -F and it would work, of course (already tested).
Because without -F, awk wouldn't know which separator to split the line on, so it couldn't print the right result. -F is the way to specify the field separator for that awk invocation. Without it, awk falls back on its default separator, whitespace: for example, if I type ps | grep xeyes | awk '{print $1}' on the terminal, awk splits on spaces and prints the first field, the PID of the xeyes process. I found this at https://www.shellunix.com/awk.html. Thanks for all.
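To make the difference concrete, here is a minimal sketch using a made-up line in the style of /proc/cpuinfo:

```shell
# With -F ":" the line splits at the colon; $2 is everything after it,
# including the leading space.
echo "cpu MHz : 2400.000" | awk -F ":" '{print $2}'
# prints " 2400.000"

# Without -F, awk splits on whitespace, so the value is field 4.
echo "cpu MHz : 2400.000" | awk '{print $4}'
# prints "2400.000"
```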
I have an output like this
I need to get the id 65a8fa6 as a variable for a new command.
I'm familiar with grep and use it to get the line I need. But how do I pick only the first 7 characters?
This is where I am now:
vagrant global-status | grep DB1
Output
65a8fa6 default vmware_desktop running /Users/USER/Documents/Vagrant/centos7SN/DB1
1st solution: You could simply do this with awk. Look for the string DB1 in each line and, if it's found, print the 1st field of that line. Save the output into the variable val and you can use it later.
val=$(vagrant global-status | awk '/DB1/{print $1}')
OR for matching either db1 OR DB1 try in any awk:
val=$(vagrant global-status | awk '/[dD][bB]1/{print $1}')
2nd solution: If you have GNU awk and you want to use IGNORECASE, then try:
val=$(vagrant global-status | awk -v IGNORECASE="1" '/DB1/{print $1}')
3rd solution: To get first 7 characters try:
But how do I only pick the first 7 characters.
val=$(vagrant global-status | awk '/[dD][bB]1/{print substr($0,1,7)}')
Sed alternative:
val=$(vagrant global-status | sed -rn 's/(^[[:alnum:]]{7})(.*$)/\1/p')
Split the line into two capture groups using sed with extended regular expressions (-r), substitute the whole line with the first group only, and print it.
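Both approaches can be tried against the sample line from the question, no vagrant needed (the -r flag assumes GNU sed; BSD/macOS sed uses -E instead):

```shell
line="65a8fa6 default vmware_desktop running /Users/USER/Documents/Vagrant/centos7SN/DB1"

# awk: match the line, then take the first 7 characters of it.
echo "$line" | awk '/DB1/{print substr($0,1,7)}'
# prints 65a8fa6

# sed: capture the first 7 alphanumerics and print only that group.
echo "$line" | sed -rn 's/(^[[:alnum:]]{7})(.*$)/\1/p'
# prints 65a8fa6
```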
You can also use the cut command to find instances of what you're after, provided that there's some consistent text near what you want to find:
Say you want to find Hello out of the following line:
Hello here's some text blablablabla
You can find it doing something like:
echo "Hello here's some text blablablabla" | grep text | cut -d " " -f 1
This should output Hello.
I am not much of an awk user, but after some Googling I determined it would work best for what I am trying to do. The only problem is, I can't get it to work. I'm trying to print out the contents of sudoers while inserting the server name ($i) and a comma before each sudoers entry, as I'm directing the output to a .csv file.
egrep '^[aA-zZ]|^[%]' //$i/etc/sudoers | awk -v var="$i" '{print "$var," $0}' | tee -a $LOG
This is the output that I get:
$var,unixpvfn ALL = (root)NOPASSWD:/usr/bin/passwd
awk: no program given
Thanks in advance
egrep is superfluous here. Just use awk:
awk -v var="$i" '/^[[:alpha:]%]/{print var","$0}' //"$i"/etc/sudoers | tee -a "$LOG"
Btw, you may also use sed:
sed "/^[[:alpha:]%]/s/^/${i},/" //"$i"/etc/sudoers | tee -a "$LOG"
You can save the grep and let awk do all the work:
awk -v svr="$i" '/^[aA-zZ%]/{print svr "," $0}' //$i/etc/sudoers | tee -a $LOG
If you put things between ".." inside the awk script, they are a literal string, and the variable won't be expanded by awk. Also, don't put $ before a variable name in awk; $ refers to a field (column), not the variable you meant.
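A quick way to see the fix, with a stand-in value for $i and the sample sudoers line from the question:

```shell
i="server01"   # stand-in for the real server name variable
echo "unixpvfn ALL = (root)NOPASSWD:/usr/bin/passwd" |
  awk -v var="$i" '{print var "," $0}'
# prints server01,unixpvfn ALL = (root)NOPASSWD:/usr/bin/passwd
```

Note that var appears bare inside the awk program: it is an awk variable, not a field reference and not a quoted literal.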
Why doesn't this work?
x=5
$ ls -l | awk '{print $(($x))}'
It should print field 5 of the ls -l output, right?
The only ways you should pass the value of a shell variable to awk are the following:
$ x=5
$ ls -l | awk -v x="$x" '{print $x}'
$ ls -l | awk '{print $x}' x="$x"
The main difference between these two methods is that with -v the value of x is already set in the BEGIN block, whilst with the second method it is not yet set there. All other methods involving quoting tricks or escaping should not be used unless you like headaches.
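The BEGIN-block difference is easy to demonstrate by printing x both in BEGIN and in the main rule:

```shell
# -v: x is set before BEGIN runs.
echo data | awk -v x=5 'BEGIN{printf "BEGIN=[%s] ", x} {print "body=[" x "]"}'
# prints: BEGIN=[5] body=[5]

# trailing assignment: applied only when input would be read, after BEGIN.
echo data | awk 'BEGIN{printf "BEGIN=[%s] ", x} {print "body=[" x "]"}' x=5
# prints: BEGIN=[] body=[5]
```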
However, you don't want to be parsing ls at all; the command you really want is:
stat --printf="%s\n" *
Assuming the fifth column of your ls is the same as mine, this will display all the file sizes in the current directory.
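A quick sanity check of the stat approach with a throwaway file (this assumes GNU coreutils stat, as found on Linux; BSD/macOS stat takes different options):

```shell
f=$(mktemp)
printf 'hello' > "$f"        # 5 bytes, no trailing newline
stat --printf="%s\n" "$f"
# prints 5
```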
You could access the shell variable in ways similar to these;
the first way is not recommended!
x=5
ls -l | awk '{print $'$x'}'
or by assigning the value of x to the awk variable shellVar before execution of the program begins:
x=5
ls -l | awk -v shellVar="$x" '{print $shellVar}'
or by using the ENVIRON array, which contains the values of the current environment:
export x=5
ls -l | awk '{print $ENVIRON["x"]}'
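ENVIRON only sees exported variables. A minimal sketch, using a field number and a made-up input line instead of ls output:

```shell
export FIELD=2
printf 'alpha beta gamma\n' | awk '{print $ENVIRON["FIELD"]}'
# prints beta
```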
That's a shell variable, which is not expanded by the shell in single quotes. The reason we put awk scripts in single quotes is precisely to prevent the shell from interpreting things meant for awk's benefit and screwing things up, but sometimes you want the shell to interpret part of it.
For something like this, I prefer to pass the value in as an awk variable:
ls -l | awk -v "x=$x" '{print $x}'
but you could do it any number of other ways. For instance, this:
ls -l | awk '{print $'$x'}'
which should really be this:
ls -l | awk '{print $'"$x"'}'
alternatively, this:
ls -l | awk "{print \$$x}"
Try this :
ls -l | awk '{print $'$x'}'
Is there a way to search for a certain pattern of words with a one-liner awk command when the files I'm searching are deeper in the hierarchy? I wondered whether the find command could apply here, but when I tried it with find it did not work for me; probably my usage is wrong.
My command is below, and it works beautifully if I run it at the same level where the *.abc files reside.
awk '/hdl_file/{printf "%s", $0}/input|output/{printf "%s", $0}' *.abc | sed s/\.aux// | sed s/,//g | sed s/input// | sed s/output// | sed s/=// | sed s/\"//g | sed s/\.ac// | sed s/hdl_file// | awk '{print $NF $0}' | awk '{$NF="";print $0}'
However, I need this command to look for *.abc files 2 levels deeper in the hierarchy. I tried the command below with find, but it gave an error.
find . -mindepth 2 | awk '/hdl_file/{printf "%s", $0}/input|output/{printf "%s", $0}' *.abc | sed s/\.aux// | sed s/,//g | sed s/input// | sed s/output// | sed s/=// | sed s/\"//g | sed s/\.ac// | sed s/hdl_file// | awk '{print $NF $0}' | awk '{$NF="";print $0}'
awk: No match.
Please help. Thanks.
You could also use
awk '....' */*/*.abc
if the files are exactly 2 subdirectories deep. For arbitrary depth, a recent bash can do
shopt -s globstar
awk '....' **/*.abc
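A throwaway demonstration of globstar (bash 4+; the directory layout and filenames here are made up):

```shell
shopt -s globstar
demo=$(mktemp -d)
cd "$demo"
mkdir -p a/b
touch a/one.abc a/b/two.abc
printf '%s\n' **/*.abc
# prints:
# a/b/two.abc
# a/one.abc
```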
Your pipeline of awks and seds can be reduced to
awk '
/hdl_file/ || /input|output/ {
gsub(/[,"]/, "")
sub(/\.aux|\.ac|input|output|hdl_file|=/, "")
last = $NF
$NF = ""
print last, $0
}
' *.abc
I'm not sure if that's your intention, but -mindepth 2 makes find return only entries that are at least 2 levels down. If you want files that are at most 2 levels down, you need to use -maxdepth instead.
If you want to use the output of one command (in this case find) as arguments for another command (awk), you can use xargs, for example:
find . -maxdepth 2 -name '*.abc' | xargs awk '[your awk script goes here]'
Or, you could use -exec to read the files, if you don't care about their names:
find . -maxdepth 2 -name '*.abc' -exec cat '{}' \; | awk '[your awk script goes here]'
# Or possibly:
find . -maxdepth 2 -name '*.abc' | xargs cat | awk '[your awk script goes here]'
Note also that you can avoid spawning multiple instances of sed and piping them together. Here are two ways to do that:
sed -e 's/foo/bar/' -e 's/something/somethingelse/'
sed -f sed_script # sed_script is a text file with commands for sed
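For example, the two substitutions can run in a single sed invocation (the input string here is made up):

```shell
# Two -e expressions applied in order within one sed process.
echo "foo does something" | sed -e 's/foo/bar/' -e 's/something/somethingelse/'
# prints: bar does somethingelse
```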