Awk file shows an error - udp

I run a simulation with a UDP connection and a CBR application in ns-2, and I run an awk script to extract some results. I have 10 nodes, and two of them (n0 and n10) send packets to each other. My problem is that I run these two commands to get results for both nodes, but only the first one runs smoothly (n0 => n10):
1) gawk -f stats.awk src=0 dst=10 flow=0 pkt=512 file.tr
2) gawk -f stats.awk src=10 dst=0 flow=0 pkt=512 file.tr
The second command gives a "division by zero attempted" error.
I don't include the stats.awk file here because it works in the first example, so I guess the problem is something else? E.g. the flow argument in the second command being 0? I tried other numbers too, but nothing changed.
Thanks in advance.
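A likely cause (a guess, since stats.awk isn't shown): the script divides by a packet counter that is zero when no packets in the trace match src=10 dst=0, e.g. if the CBR traffic only flows from n0 to n10. A minimal sketch of the kind of guard that avoids the error; the variable names received and total_delay are hypothetical:
END {
    # Guard the division: only compute averages when packets were counted.
    if (received == 0)
        printf "No packets matched src=%d dst=%d; nothing to report\n", src, dst
    else
        printf "Average delay: %f\n", total_delay / received
}
If the reverse direction really carries no traffic, the fix belongs in the simulation script (attach a CBR source to n10), not in the awk.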

phpseclib: "CAT" command works randomly

I have a script that fetches data from a site. Basically it is divided into two sections:
1. execute commands on a remote machine and save the output in a file
2. read the contents of that file
For some reason it only works some of the time. Section 1 always works (I checked the remote machine and found the files). The problem is related to cat. I've added an option to my code to dump the result of the cat command to a file. Sometimes it has info, sometimes it doesn't; however, the file is always created! The nodes I'm querying are the same. Execution of section 1 on the remote server takes 11-12 seconds. Thanks beforehand.
$ssh->exec("rm toolkit/mybatch/$newfileid");
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/mybatch");
$ssh->setTimeout(15);
echo $ssh->exec('cat ' . escapeshellarg("toolkit/mybatch/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/mybatch/traffic.txt');
$traffic = $ssh->exec("cat toolkit/mybatch/traffic.txt");
$traffic = substr($traffic,21,-16);
$ssh->disconnect();
echo $traffic;
I've updated the code above. It worked several times, but after deleting the old files it only creates "traffic.txt", and sometimes it has info in it, sometimes not.
Also, the file "traffic.txtescapeshellarg" is not created anymore, so I was forced to go back to my previous solution and read "traffic.txt".
So, the solution was very simple. All I needed to do was add a timeout.
The final code:
$ssh->exec("rm toolkit/traffic/$newfileid");
$ssh->setTimeout(0);
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/traffic");
sleep(8);
$ssh->exec('cat ' . escapeshellarg("toolkit/traffic/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/traffic/traffic.txt');
$traffic = $ssh->exec("cat toolkit/traffic/traffic.txt");
$traffic = substr($traffic,21,-83);
echo $traffic;

Read all lines at the same time individually - Solaris ksh

I need some help with a script. Solaris 10 and ksh.
I have a file called /temp.list with this content:
192.168.0.1
192.168.0.2
192.168.0.3
So, I have a script which reads this list and executes some commands using each line's value:
FILE_TMP="/temp.list"
while IFS= read line
do
ping $line
done < "$FILE_TMP"
It works, but it executes the command on line 1; when that finishes, it moves on to line 2, and so on until the end. I would like to find a way to execute the ping command on every line of the list at the same time. Is there a way to do it?
Thank you in advance!
Marcus Quintella
As Ari suggested, googling "ksh multithreading" will produce a lot of ideas/solutions.
A simple example:
FILE_TMP="/temp.list"
while IFS= read line
do
ping $line &
done < "$FILE_TMP"
The trailing '&' says to kick the ping command off in the background, allowing loop processing to continue while the ping command is running in the background.
'course, this is just the tip of the proverbial iceberg as you now need to consider:
multiple ping commands are going to be dumping output to stdout (i.e., you're going to get a mish-mash of ping output in your console), so you'll need to give some thought to what to do with the multiple streams of output (e.g., redirect to a common file? redirect to separate files?)
you need to have some idea of how you want to manage and (possibly) terminate commands running in the background [ see jobs, ps, fg, bg, kill ]
if running in a shell script, you'll likely find yourself wanting to suspend the main script until all background jobs have completed [ see wait ]; a minimal sketch follows
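Putting those pieces together, a minimal sketch (the per-host output files under /tmp are illustrative):
FILE_TMP="/temp.list"
while IFS= read line
do
    # one background ping per host, each with its own output file
    ping $line > "/tmp/ping.$line.out" 2>&1 &
done < "$FILE_TMP"
# suspend the script until every background ping has finished
wait
echo "all pings done"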

while [[ condition ]] stalls on loop exit

I have a problem with ksh in that a while loop is failing to obey the "while" condition. I should add now that this is ksh88 on my client's Solaris box. (That's a separate problem that can't be addressed in this forum. ;) I have seen Lance's question and some similar ones, but none that I have found seem to address this. (Disclaimer: no, I haven't looked at every ksh question in this forum.)
Here's a very cut down piece of code that replicates the problem:
#!/usr/bin/ksh
#
go=1
set -x
tail -0f loop-test.txt | while [[ $go -eq 1 ]]
do
    read lbuff
    set $lbuff
    nwords=$#
    printf "Line has %d words <%s>\n" $nwords "${lbuff}"
    if [[ "${lbuff}" = "0" ]]
    then
        printf "Line consists of %s; time to absquatulate\n" $lbuff
        go=0 # Violate the WHILE condition to get out of loop
    fi
done
printf "\nLooks like I've fallen out of the loop\n"
exit 0
The way I test this is:
Run loop-test.sh in background mode
In a different window I run commands like "echo some nonsense >> loop-test.txt" (w/o the quotes, of course)
When I wish to exit, I type "echo 0 >>loop-test.txt"
What happens? It indeed sets go=0 and displays the line:
Line consists of 0; time to absquatulate
but does not exit the loop. To break out I append one more line to the txt file. The loop does NOT process that line and just falls out of the loop, issuing that "fallen out" message before exiting.
What's going on with this? I don't want to use "break" because in the actual script, the loop is monitoring the log of a database engine and the flag is set when it sees messages that the engine is shutting down. The actual script must still process those final lines before exiting.
Open to ideas, anyone?
Thanks much!
-- J.
OK, that flopped pretty quickly. After reading a few other posts, I found an answer given by dogbane that sidesteps my entire pipe-to-while scheme. His is the second answer to a question (from 2013) where I see neeraj using the same scheme I'm using.
What was wrong? The pipe-to-while has always worked for input that ends, like a file or a command whose output has a distinct end. From a tail -f command, however, there is no EOF, so the while loop running in a subshell never knows when to terminate.
Dogbane's solution: Don't use a pipe. Applying his logic to my situation, the basic loop is:
while read line
do
    # put loop body here
done < <(tail -0f ${logfile})
No subshell, no problem.
Caveat about that syntax: there must be a space between the two < operators; otherwise it looks like a here-document with bad syntax.
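For clarity, the two spellings side by side:
done < <(tail -0f ${logfile})   # process substitution; note the space
done <<(tail -0f ${logfile})    # wrong: parsed as the start of a here-document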
Er, one more catch: the syntax did not work in ksh, not even in mksh (under Cygwin), which emulates ksh93. But it did work in bash. So my boss is gonna have a good laugh at me, 'cause he knows I dislike bash.
So thanks MUCH, dogbane.
-- J
After articulating the problem and sleeping on it, the reason for the described behavior came to me: After setting go=0, the control flow of the loop still depends on another line of data coming in from STDIN via that pipe.
And now that I have realized the cause of the weirdness, I can speculate on an alternative way of reading from the stream. For the moment I am thinking of the following solution:
Open the input file as STDIN (need to research the exec syntax for that)
When the condition occurs, close STDIN (again, need to research the syntax for that)
It should then be safe to use the more intuitive "while read lbuff" at the top of the loop.
I'll test this out today and post the result. I hope someone else can benefit from the method (if it works).
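For reference, a minimal sketch of that exec idea (untested on ksh88; it uses a spare descriptor rather than STDIN itself, and the file name comes from the example above):
# open the log on descriptor 3
exec 3< loop-test.txt
go=1
while [[ $go -eq 1 ]] && read -u3 lbuff
do
    set $lbuff
    printf "Line has %d words <%s>\n" $# "${lbuff}"
    [[ "${lbuff}" = "0" ]] && go=0   # flag flips; loop exits cleanly
done
# close the descriptor
exec 3<&-
Note that plain exec redirection reads a finite file; it does not replicate tail -f's follow behavior.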

Tail and grep combo to show text in a multi-line pattern

Not sure if you can do this with tail and grep. Let's say I have a log file that I would like to tail. It spits out quite a bit of information when in debug mode. I want to grep for information pertaining only to my module, and the module name appears in the log like so:
/*** Module Name | 2014.01.29 14:58:01
a multi line
dump of some stacks
or whatever
**/
/*** Some Other Module Name | 2014.01.29 14:58:01
this should show up in the grep
**/
So, as you can imagine, the number of lines pertaining to "Module Name" could be 2, or 50, until the end pattern (**/) appears.
From your description, it seems like this should work:
tail -f log | awk '/^\/\*\*\* Module Name/,/^\*\*\//'
but be wary of buffering issues. (Lines appended to the file may show up only after a noticeable delay.)
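If that latency matters, one common mitigation (assuming GNU awk, whose fflush() function flushes the output after each matched line) is:
tail -f log | awk '/^\/\*\*\* Module Name/,/^\*\*\// { print; fflush() }'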
I think you will have to install pcregrep for this. Then this will work:
tail -f logfile | pcregrep -M "^\/\*\*\* Some Other Module Name \|.*(\n|.)*?^\*\*/$"
This worked in testing, but for some reason I had to append your example text to the log file over 100 times before output started showing up. However, all of the "Some Other Module Name" data written to the log file after this line was invoked was eventually printed to stdout.

alternative to tail -F

I am monitoring a log file with "tail -n 0 -F filename", but this is taking up a lot of CPU, as there are many messages being written to the logfile. Is there a way I can open the file, read the new entries, close it, and repeat at a 5-second interval, so that I don't need to keep following the file? How can I remember the last line read so I can start from the next one on the next run? I am trying to do this in nawk by spawning a tail shell command.
You won't be able to magically use fewer resources to tail a file by writing your own implementation. If tail -f is using resources because the file is growing fast, a custom version won't help if you still want to see all lines as they are written. You are simply limited by your hardware's I/O and/or CPU.
Try using --sleep-interval=S, where "S" is a number of seconds (the default is 1.0; decimals are allowed):
tail -n 0 --sleep-interval=.5 -F filename
If you have so many log entries that tail is bogging down the CPU, how are you able to monitor them?
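If you nevertheless want the poll-and-remember approach from the question, a minimal sketch in plain shell (the log and state-file paths are illustrative; it relies on tail -c +N, which POSIX tail supports, though on Solaris you may need /usr/xpg4/bin/tail):
LOG=/var/log/app.log          # log to watch (illustrative)
STATE=/tmp/app.log.offset     # remembers how many bytes were already read
offset=$(cat "$STATE" 2>/dev/null || echo 0)
while :
do
    size=$(wc -c < "$LOG")
    if [ "$size" -gt "$offset" ]
    then
        # print only the bytes appended since the last pass
        tail -c +$((offset + 1)) "$LOG"
        offset=$size
        echo "$offset" > "$STATE"
    fi
    sleep 5
done
Note this does not handle truncation or rotation; that is exactly what tail -F adds.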