I am new to programming and shell scripting.
I am trying to write an if condition in a shell script.
I use some computational codes for density functional theory (e.g. Quantum ESPRESSO).
I want to make the workflow automatic via a shell script.
My code produces case.data, which ends with a line containing stop in the second column ($2).
For example, the command below should print stop:
cat case.data | tail -n 1 | awk '{print $2}'
So if I get stop from the command above, the if statement should not produce anything and the rest of the file should be executed. If I do not get stop from the command above, the executable commands in my file should not be executed; instead a text file containing exit should be executed so that it terminates my job.
What I tried is:
#!bin/bash
# Here I have my other commands that will give us case.data and below is my if statement.
STOP=$(cat $case.dayfile | tail -n 1 | awk '{print $2}')
if [$STOP =="stop"]
then
echo "nil"
else
echo "exit" > exit
chmod u+x exit
./exit
fi
# hereafter I have other executables that will be executed depending on the above if statement
Your whole script should be changed to just this assuming you really do want to print "nil" in the success case for some reason:
awk '{val=$2} END{if (val=="stop") print "nil"; else exit 1}' "${case}.data" || exit
and this otherwise:
awk '{val=$2} END{exit (val=="stop" ? 0 : 1)}' "${case}.data" || exit
I used a ternary in that last script instead of just exit (val!="stop") or similar for clarity, given that a true condition is 1 in awk while a successful exit status is 0.
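Put in context, the whole wrapper might look roughly like this (only a sketch; the case variable and whatever commands produce "${case}.data" are assumed from your description):
#!/bin/bash
# ... your commands that produce "${case}.data" go here ...
awk '{val=$2} END{exit (val=="stop" ? 0 : 1)}' "${case}.data" || exit
# ... the rest of the job, only reached when the last line's 2nd column is "stop" ...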
Like this?:
if awk 'END{if($2!="stop")exit 1}' case.data
then
echo execute commands here
fi
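Or, matching your original intent of terminating the job when stop is missing, something along these lines (a sketch; replace the echo with your real commands):
if awk 'END{if ($2!="stop") exit 1}' case.data
then
    echo "run the rest of the job here"
else
    exit 1    # the last line's 2nd column was not "stop", so stop the job
fi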
I'm having trouble finding out how to read in my file into my awk script.
This is what I have so far. Basically, I want to print out the header, and then read in the roster file, which I will then edit into the necessary format. However, my problem is just figuring out how to read in the file.
#!/bin/awk -f
BEGIN {print "Last Name:First Name:Student ID:School – Major:Academic Level:ASURITE:Email" "\n" } {print $1,$2} roster
On running this
awk -f script.awk
Last Name:First Name:Student ID:School – Major:Academic Level:ASURITE:Email
^C
This is what I end up with - the file doesn't read in and I have to CTRL-C my way out since it doesn't close.
The idea is right, but the place where you have mentioned the input file roster is wrong. Move it out of the script. You need to understand that awk's invocation is always of the form
awk <action> <file>
The <action> part can be given directly on the command line or provided from a script using the -f flag. But the <file> argument still needs to be given either way. Moving it inside the script makes awk wait to read from its standard input, which it never gets.
awk -f script.awk roster
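Equivalently, you could redirect the file into awk's standard input instead of naming it as an argument:
awk -f script.awk < roster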
You could also make script.awk directly executable, using /usr/bin/env in the shebang so the shell can locate awk (note that env needs its -S option, available in GNU and BSD env, to pass the -f flag along):
#!/usr/bin/env -S awk -f
BEGIN {
print "Last Name:First Name:Student ID:School – Major:Academic Level:ASURITE:Email" "\n"
}
{
print $1,$2
}
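Then mark it executable and, as before, name the input file on the command line rather than inside the script:
chmod +x script.awk
./script.awk roster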
Attempting to get the last word of the first line from a file. Not sure why the following command:
send "cat moo.txt | grep QUACK * | awk 'NF>1{print $NF}' meow.txt >> bark.txt "
is getting the error message can't read "NF": no such variable.
I can run the awk 'NF>1{print $NF}' meow.txt >> bark.txt snippet just fine on my machine. Yet, when it runs in my expect script, it gives me that error.
Anyone know why expect doesn't recognize the awk built-in variable?
I think your script is trying to expand the variable $NF with its value before shooting that command through send. $NF isn't set in expect/Tcl since it's internal to awk, which hasn't had a chance to even run yet, and so expect is balking.
Try escaping that variable so it is treated as a string literal and awk will be able to use it when it comes time for awk to run:
send "cat moo.txt | grep QUACK * | awk 'NF>1{print \$NF}' meow.txt >> bark.txt "
I'm currently writing a shell script that will be given a directory, then output an ls of that directory with the return code from a C program appended to each line. The C program only needs to be called for regular files.
The problem I'm having is that output from the C program is cluttering up the output from awk, and I can't get stdout to redirect to /dev/null inside of awk. I have no use for the output, I just need the return code. Speed is definitely a factor, so if you have a more efficient solution I'd be happy to hear it. Code follows:
directory=$1
ls -i --full-time $directory | awk '
{
rc = 0
if (substr($2,1,1) == "-") {
dbType=system("cprogram '$directory'/"$10)
}
print $0 " " rc
}
'
awk is not shell, so you can't just use a shell variable inside an awk script, and in shell you should always quote your variables. Try this:
directory="$1"
ls -i --full-time "$directory" | awk -v dir="$directory" '
{
rc = 0
if (substr($2,1,1) == "-") {
rc = system("cprogram \"" dir "/" $10 "\" >/dev/null")
}
print $0, rc
}
'
Oh and, of course, don't actually do this. See http://mywiki.wooledge.org/ParsingLs.
I just spent a minute thinking about what your script is actually doing, and rather than trying to use awk as a shell and parse the output of ls, it looks like the solution you REALLY want is more like:
directory="$1"
find "$directory" -type f -maxdepth 1 -print |
while IFS= read -r dirFile
do
op=$(ls -i --full-time "$dirFile")
cprogram "$dirFile" >/dev/null
rc="$?"
printf "%s %s\n" "$op" "$rc"
done
and you could probably save a step by using the -printf arg for find to get whatever info you're currently using ls for.
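For example, something like this (GNU find; it assumes file names contain no tabs or newlines, and you should adjust the -printf format string to whichever fields you actually need):
directory="$1"
# one line per regular file: path<TAB>inode perms links owner group size mtime
find "$directory" -maxdepth 1 -type f -printf '%p\t%i %M %n %u %g %s %T+\n' |
while IFS=$'\t' read -r dirFile info
do
    cprogram "$dirFile" >/dev/null    # only the exit status is of interest
    printf '%s %s\n' "$info" "$?"
done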
Can we use shell variables in AWK like $VAR instead of $1, $2? For example:
UL=(AKHIL:AKHIL_NEW,SWATHI:SWATHI_NEW)
NUSR=`echo ${UL[*]}|awk -F, '{print NF}'`
echo $NUSR
echo ${UL[*]}|awk -F, '{print $NUSR}'
Actually, I am an Oracle DBA and we get a lot of import requests, which I'm trying to automate with this script. The script finds the users in the dump and prompts for the users into which the dump needs to be loaded.
Suppose the dump has two users, AKHIL and SWATHI (there can be many users in the dump, and I want to be able to import any number of them). I want to import the dump into the new users AKHIL_NEW and SWATHI_NEW, so the input to be read is something like AKHIL:AKHIL_NEW,SWATHI:SWATHI_NEW.
First I need to find the number of users to be created, then I need to get the new users, i.e. AKHIL_NEW and SWATHI_NEW, from the input given, so that I can connect to the database, create the new users and then import. I'm not copying the entire code; I just copied the part from where it accepts the input users.
UL=(AKHIL:AKHIL_NEW,SWATHI:SWATHI_NEW) ## it can be many users like USER1:USER1_NEW,USER2:USER2_NEW,USER3:USER3_NEW..
NUSR=`echo ${UL[*]}|awk -F, '{print NF}'` #finding number of fields or users
y=1
while [ $y -le $NUSR ] ; do
USER=`echo ${UL[*]}|awk -F, -v NUSR=$y '{print $NUSR}' |awk -F: '{print $2}'` #getting Users to created AKHIL_NEW and SWATHI_NEW and passing to SQLPLUS
if [[ $USER = SCPO* ]]; then
TBS=SCPODATA
else
if [[ $USER = WWF* ]]; then
TBS=WWFDATA
else
if [[ $USER = STSC* ]]; then
TBS=SCPODATA
else
if [[ $USER = CSM* ]]; then
TBS=CSMDATA
else
if [[ $USER = TMM* ]]; then
TBS=TMDATA
else
if [[ $USER = IGP* ]]; then
TBS=IGPDATA
fi
fi
fi
fi
fi
fi
sqlplus -s '/ as sysdba' <<EOF # CREATING the USERS in the database
CREATE USER $USER IDENTIFIED BY $USER DEFAULT TABLESPACE $TBS TEMPORARY TABLESPACE TEMP QUOTA 0K on SYSTEM QUOTA UNLIMITED ON $TBS;
GRANT
CONNECT,
CREATE TABLE,
CREATE VIEW,
CREATE SYNONYM,
CREATE SEQUENCE,
CREATE DATABASE LINK,
RESOURCE,
SELECT_CATALOG_ROLE
to $USER;
EOF
y=`expr $y + 1`
done
impdp system/manager DIRECTORY=DATA_PUMP DUMPFILE=imp.dp logfile=impdp.log SCHEMAS=AKHIL,SWATHI REMAP_SCHEMA=${UL[*]}
In the last impdp command I need to get the original users in the dump, i.e. AKHIL,SWATHI, using the variables.
Yes, you can use shell variables inside awk. There are a bunch of ways of doing it, but my favorite is to define a variable with the -v flag:
$ echo | awk -v my_var=4 '{print "My var is " my_var}'
My var is 4
Just pass the shell variable as a parameter to the -v flag. For example, if you have this variable:
$ VAR=3
$ echo $VAR
3
Use it this way:
$ echo | awk -v env_var="$VAR" '{print "The value of VAR is " env_var}'
The value of VAR is 3
Of course, you can give the same name, but the $ will not be necessary:
$ echo | awk -v VAR="$VAR" '{print "The value of VAR is " VAR}'
The value of VAR is 3
A note about the $ in awk: unlike bash, Perl, PHP etc., it is not part of the variable's name but instead an operator.
Awk and Gawk provide the ENVIRON associative array that holds all exported environment variables. So in your awk script you can use ENVIRON["VarName"] to get the value of VarName, provided that VarName has been exported before running awk.
Note ENVIRON is a predefined awk variable NOT a shell environment variable.
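For example (assuming VAR has been exported before awk runs):
$ export VAR=3
$ echo | awk '{print "The value of VAR is " ENVIRON["VAR"]}'
The value of VAR is 3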
Since I don't have enough reputation to comment on the other answers I have to include them here!
The earlier answer showing $ENVIRON is incorrect - that syntax would be expanded by the shell, and probably result in expanding to nothing.
Further, earlier comments about C not being able to access environment variables are wrong. Contrary to what is said above, C (and C++) can access environment variables using the getenv("VarName") function. Many other languages provide similar access (e.g., Java: System.getenv(), Python: os.environ, Haskell: System.Environment, ...). Note that in all cases access to environment variables is read-only; you cannot change an environment variable in a program and get that value back to the calling script.
There are two ways to pass variables to awk: one way is defining the variable in a command line argument:
$ echo ${UL[*]}|awk -F, -v NUSR=$NUSR '{print $NUSR}'
SWATHI:SWATHI_NEW
Another way is converting the shell variable to an environment variable using export, and reading the environment variable from the ENVIRON array:
$ export NUSR
$ echo ${UL[*]}|awk -F, '{print $ENVIRON["NUSR"]}'
SWATHI:SWATHI_NEW
Update 2016: The OP has comma-separated data and wants to extract an item given its index. The index is in the shell variable NUSR. The value of NUSR is passed to awk, and awk's dollar operator extracts the item.
Note that it would be simpler to declare UL as an array of more than one element, and do the extraction in bash, and take awk out of the equation completely. This however uses 0-based indexing.
UL=(AKHIL:AKHIL_NEW SWATHI:SWATHI_NEW)
NUSR=1
echo ${UL[NUSR]} # prints SWATHI:SWATHI_NEW
There is another way, but it could cause immense confusion:
$ VarName="howdy" ; echo | awk '{print "Just saying '$VarName'"}'
Just saying howdy
$
So you are temporarily exiting the single quote environment (which would normally prevent the shell from interpreting '$') to interpret the variable and then going back into it. It has the virtue of being relatively brief.
Not sure if I understand your question.
But let's say we have a variable number=3 and we want to use it instead of $3; in awk we can do that with the following code:
results="100 Mbits/sec 110 Mbits/sec 90 Mbits/sec"
number=3
speed=$(echo $results | awk '{print '"\$${number}"'}')
so the speed variable will get the value 110.
Hope this helps.
No. You can pass the value of a shell variable to an awk script just like you can pass the value of a shell variable to a C program but you cannot access a shell variable in an awk script any more than you could access a shell variable in a C program. Like C, awk is not shell. See question 24 in the comp.unix.shell FAQ at cfajohnson.com/shell/cus-faq-2.html#Q24.
One way to write your code would be:
UL="AKHIL:AKHIL_NEW,SWATHI:SWATHI_NEW"
NUSR=$(awk -F, -v ul="$UL" 'BEGIN{print gsub(FS,"",ul)+1; exit}')
echo "$NUSR"
echo "$UL" | awk -F, -v nusr="$NUSR" '{print $nusr}' # could have just done print $NF
but since your original starting point:
UL=(AKHIL:AKHIL_NEW,SWATHI:SWATHI_NEW)
was declaring UL as an array with just one entry, you might want to rethink whatever it is you're trying to do as you may have completely the wrong approach.
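For example, if each array element held one OLD:NEW pair, you could do all the splitting in the shell (a sketch; the printf just stands in for whatever you do with each pair):
UL=(AKHIL:AKHIL_NEW SWATHI:SWATHI_NEW)
for pair in "${UL[@]}"
do
    old=${pair%%:*}    # part before the first ":" -> original user
    new=${pair#*:}     # part after the first ":"  -> new user
    printf 'create %s, import from %s\n' "$new" "$old"
done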