Can we call SAC (Seismic Analysis Code) from within awk? - awk

I have written a shell script. This is part of my code:
for i in *.SAC
do
SAC <<EOF
LH Delta
q
EOF
p=$(echo "$Delta" | awk '{print $1/20}')
r=$(saclhdr -B "$i")
echo "$p" "$r"
done
Can I write this code entirely within awk?
I want to do that because, for my work, running a shell script takes much more time than awk.
Please help me if you can
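For reference, awk itself can run external commands and read their output with getline, so a loop like this can live entirely inside one awk program. Here is a minimal sketch; the -DELTA flag is an assumption extrapolated from the saclhdr -B call above, so verify it against your saclhdr documentation. Note that each external call still forks a process, so the speedup over the shell version may be modest:
#!/usr/bin/awk -f
# Sketch: loop over *.SAC files from inside awk, querying headers via saclhdr.
# Assumes saclhdr prints a single value per call; -DELTA is an assumed flag.
BEGIN {
    while (("ls *.SAC" | getline f) > 0) {
        cmd = "saclhdr -DELTA " f
        cmd | getline delta          # read the DELTA header value
        close(cmd)
        cmd = "saclhdr -B " f
        cmd | getline b              # read the B (begin time) header value
        close(cmd)
        print delta / 20, b
    }
    close("ls *.SAC")
}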

Running awk command in awk script

I am just looking to run a simple script that runs an awk command inside the awk script.
The sample_enrollment.csv file starts with the header: "EffectiveDate","Status","EmployeeID","ClientID"
Below is Lab4_1.awk:
#!/bin/bash
BEGIN{FS=","}
{
awk 'gsub(/EfectiveDate/, "Effective Date")'
}
I am running the command from the command line like this
awk -f lab4_1.awk sample_enrollment.csv
The error that I am getting seems to indicate that the quotes around the inner awk gsub command are wrong. I have tried many variations of this awk command without any luck. I am only asking about this portion, as I will need to add more to the awk script after I get this working.
Any help would be appreciated. Thank you
I don't think there is any need for two awk commands here; based on your shown effort, it can be done in a single awk call, like this:
awk -F, '{gsub(/EffectiveDate/, "Effective Date")} 1' Input_file
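On the sample header line, that produces:
"Effective Date","Status","EmployeeID","ClientID"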
As I mentioned in the comments, if you have more requirements you can let us know with samples in code tags in your post, and we can help you from there.
EDIT: As the OP mentioned that a script is needed, here is the code in a bash script format too.
cat script
#!/bin/bash
awk '{gsub("EffectiveDate","Effective Date")} 1' Input_file
# ... do my other stuff too here in bash or awk ...
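Alternatively, if the assignment requires the logic to live in the .awk file itself, a minimal awk-only version (no shell shebang, no nested awk) could look like this, run with awk -f Lab4_1.awk sample_enrollment.csv:
# Lab4_1.awk
BEGIN { FS = "," }
{
    gsub(/EffectiveDate/, "Effective Date")   # fix the header field
    print                                     # emit every (possibly modified) line
}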

How to run awk script on multiple files

I need to run a command on hundreds of files and I need help to get a loop to do this:
have a list of input files /path/dir/file1.csv, file2.csv, ..., fileN.csv
need to run a script on all those input files
script is something like: command input=/path/dir/file1.csv output=output1
I have tried things like:
for f in /path/dir/file*.csv; do command ... but how do I read the next input file and write a new output file on every iteration?
Thank you.
Try this, after changing /path/to/data to the correct path (and likewise /path/to/awkscript and the other placeholders, pointing at your test data):
#!/bin/bash
cd /path/to/data
for f in *.csv ; do
echo "awk -f /path/to/awkscript \"$f\" > ${f%.csv}.rpt"
#remove_me awk -f /path/to/awkscript "$f" > ${f%.csv}.rpt
done
make the script "executable" with
chmod 755 myScript.sh
The echo version will help you ensure the script is going to work as expected. You still have to carefully examine that output, or work on a copy of your data, so you don't wreck your baseline data.
You could take the output of the last iteration,
awk -f /path/to/awkscript myFileLast.csv > myFileLast.rpt
and copy/paste it to the command line to confirm it will work.
When you are comfortable that the awk script works as you need, comment out the echo awk ... line and remove the #remove_me marker (and save your bash script).
for f in /path/to/files/*.csv ; do
    bname=$(basename "$f")
    pref=${bname%%.csv}
    awk -f /path/to/awkscript "$f" > "/path/to/store/output/${pref}_new.txt"
done
Hopefully this helps, I am on my blackberry so there may be typos
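For clarity, the ${f%.csv} and ${bname%%.csv} expansions in both answers strip the .csv suffix. A quick illustration:
f=/path/dir/file1.csv
bname=$(basename "$f")    # file1.csv
echo "${bname%%.csv}"     # prints: file1
echo "${f%.csv}.rpt"      # prints: /path/dir/file1.rpt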

Read one line at a time using shell

I'm trying to create a shell script. It will read one line at a time, assign the values to macros, then run a query with those macros. After the query is done, it will read the second line, create the macros, run the query, and so on.
I developed the code below, but it reads all the lines together and then runs the query. Should I use something like do i=1 to n?
#!/bin/sh
$HOME/.profile
file=$1
OutputTable=$2
file=rule_flg.txt
cat $file|(
read flg table_num rule_num
while test "$flg" != ""
do
echo table_num is $table_num and rule_num is $rule_num
echo time is `date`
(here are some parameters of database)... -v flg=$flg -v table_num=$table_num -v rule_num=$rule_num -f query_1.sql &> query_1.log
read flg table_num rule_num
done
)
echo run finished!
exit 0
Instead of what you describe, what about reading the file line by line with the while read; do ... done < file syntax?
This way, every iteration will just contain the data from the current line.
while IFS= read -r flg table_num rule_num
do
echo "table_num is $table_num and rule_num is $rule_num"
echo "time is `date`"
(here are some parameters of database)... -v flg=$flg -v table_num=$table_num -v rule_num=$rule_num -f query_1.sql &> query_1.log
done < "$file"
You can find more details by reading How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?.
Note also that it is good to quote whatever variable you are working with, to prevent word-splitting and globbing problems. So say echo "$var" instead of echo $var, unless you are very sure you don't want the quotes.
So your question seems to be - "the script I wrote will read all the lines together then run the query, but I want it to execute the query once for each line it reads"
Since your script isn't complete or testable, I can't really debug it. :)
It is, however, a bit overly complex. Since you don't need to get variables or other shell data outside of the subshell, you can just do a cat $file | while read vars; do echo $vars; done type of thing.
The while read will read each line of input into a variable (or a set of variables) and execute the body of the loop once for each line.
A hacked up version of your script to show it:
#!/bin/sh
file=rule_flg.txt
echo "1 2 3" > $file
echo "3 4 5" >> $file
echo "4 5 6" >> $file
cat $file | while read flg table_num rule_num
do
echo table_num is $table_num and rule_num is $rule_num
#do your other things here
done
produces
%% sh whileread.sh
table_num is 2 and rule_num is 3
table_num is 4 and rule_num is 5
table_num is 5 and rule_num is 6
Now, there are many other ways to approach this, and there's no handling for the case where you send more than 3 fields to read (the extras get stuffed into $rule_num), but assuming your input data is consistently whitespace/tab delimited into 3 columns, it should be a good start.
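A quick demonstration of that extra-fields behavior:
echo "1 2 3 4" | while read flg table_num rule_num; do
    echo "rule_num is '$rule_num'"    # prints: rule_num is '3 4'
done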
I would take another approach here. As awk is part of POSIX, I will dare to ignore your shell requirement.
General pattern:
awk 'prog' inputfile | sh
So you write the repetitive statements with awk. When you are satisfied with the result, you pipe it to sh.
If you'd like some specifics, please post a sample of your input data and the commands that you want to execute in the end.
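As a hypothetical illustration of that pattern, assuming the same three-column rule_flg.txt and a made-up run_query command standing in for your database client:
# Generate one command line per input row; inspect the output first.
awk '{ printf "run_query -v flg=%s -v table_num=%s -v rule_num=%s -f query_1.sql\n", $1, $2, $3 }' rule_flg.txt
# When the generated commands look right, pipe them to sh:
awk '{ printf "run_query -v flg=%s -v table_num=%s -v rule_num=%s -f query_1.sql\n", $1, $2, $3 }' rule_flg.txt | sh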

Is there a way to create an awk input inactivity timer?

I have a text source (a log file), which gets new lines appended to it by some third party.
I can output the additions to my source file using tail -f source. I can then pipe that through an awk script awk -f parser.awk to parse and format the output.
My question is: while tail -f source | awk -f parser.awk is running, is there a way to call function foo() inside my parser.awk script every time there is more than 5 seconds elapsed without anything coming through the pipe into the standard input of the awk script?
Edit: Currently using GNU Awk 3.1.6. May be able to upgrade to newer version if required.
If your shell's read supports -t and -u, here's an ugly hack:
{ echo hello; sleep 6; echo world; } | awk 'BEGIN{
while( "while read -t 5 -u 3 line; do echo \"$line\"; done" | getline > 0 )
print
}' 3<&0
You can replace the print in the body of the while loop with your script. However, it would probably make more sense to put the read timeout between tail and awk in the pipeline, and even more sense to re-implement tail so that it times out.
Not exactly an answer to your question, but there is a little shell hack that can do practically what you want:
{ tail -f log.file >&2 | { while : ; do sleep 5; echo SECRET_PHRASE ; done ; } ; } 2>&1 | awk -f script.awk
When awk receives SECRET_PHRASE it will run the foo function every 5 seconds. Unfortunately, it will run it every 5 seconds even if there was output from tail during that interval.
PS: You can replace the '{ }' with '( )' and vice versa; in the first case no subshell is created, in the second one it is.
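A quick illustration of that difference:
x=1; { x=2; }; echo "$x"    # prints 2 -- braces run in the current shell
x=1; ( x=3 ); echo "$x"     # prints 1 -- parentheses run in a subshell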
Another way is to append the secret phrase directly to the log file whenever nobody has written to it during the last five seconds, but that does not look like a good idea, since it would spoil the log file.
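On the awk side, script.awk can treat the marker as an idle tick and skip foo() whenever real input arrived since the last tick, which works around the every-5-seconds limitation. A minimal sketch (foo's body is a placeholder):
$0 == "SECRET_PHRASE" { if (!seen) foo(); seen = 0; next }   # idle tick from the sleep loop
{ seen = 1; print }                                          # a real log line: parse it here
function foo() { print "no input for ~5 seconds" > "/dev/stderr" }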

Awk unable to store the value into an array

I am using a script like below
SCRIPT
declare -a GET
i=1
awk -F "=" '$1 {d[$1]++;} {for (c in d) {GET[i]=d[c];i=i+1}}' session
echo ${GET[1]} ${GET[2]}
DESCRIPTION
The problem is that the GET value printed outside awk is not the correct value ...
I understand your question as "how can I use the results of my awk script inside the shell where awk was called". The truth is that it isn't really trivial. You wouldn't expect to be able to use the output from a C program or python script easily inside your shell. Same with awk, which is a scripting language of its own.
There are some workarounds. For a robust solution, write the results from the awk script to a file in a suitably easy format and read them back from the shell. As a hack, you could also try to read the output from awk directly into the shell using $(). Combine that with the set command and you can do:
set -- $(awk <awk script here>)
and then use $1, $2, etc., but you have to be careful with spaces in the output from awk.
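A hypothetical end-to-end example of that hack, assuming the session file holds key=value lines and you want the per-key counts (note that for (k in d) yields the keys in an unspecified order):
# Load awk's per-key counts into the shell's positional parameters.
set -- $(awk -F= '{ d[$1]++ } END { for (k in d) print d[k] }' session)
echo "first count: $1, second count: $2"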