Result not showing when looping - SQL

I'm writing a PowerShell script that runs through a loop to get results from an SQL query with different parameters. The issue is that only the first run of the loop returns a result; the rest are all blank, even though they should also return results.
$Days = @(90,60,30,15)
$Start = 0
$End = 0
$Affiliation = 1
for ($i = 0; $i -lt $Affiliation.Count; $i++) {
    for ($i = 0; $i -lt $Days.Count; $i++) {
        if ($Days[$i] -eq 1) {
            $End = $Days[$i]
            $Start = 0
            Write-Host "Start/End r:" $Start $End
        }
        elseif ($index -eq $Days.Length) {
            $End = $Days[$i]
            $Start = 0
            Write-Host "Start/End m:" $Start $End
        }
        else {
            $End = $Days[$i]
            $Start = $Days[$i+1] + 1
            Write-Host "Start/End s:" $Start $End
        }
        ########## Queries
        $Querydays = " SELECT
            distinct tblcustomer.CustomerName AS Customer_Name
            , tblsma.sorno AS SMA_SOR_Number
            , FUNC_GET_JSON_VALUES('Name', tblsma.products) AS Products
            , DATE_FORMAT(tblsma.expiryDate, '%m/%d/%Y') AS Expiry_Date
            , tblsma.Remarks AS 'Remarks'
            , DATEDIFF( DATE(tblsma.expiryDate), CURDATE()) AS Days_Left_Before_Expiration
            #,GROUP_CONCAT(tblsma.products) AS 'PRODUCT'
        FROM
        where
            tblcustomer.affiliationID = '" + $Affiliation[$i] + "'
            and tblcustomer.affiliationID <> 95
            and tblaccountsgroup.GroupID <> 5
            and datediff(DATE(tblsma.expiryDate), CURDATE()) between " + $Start + " and " + $End + "
        order by Days_Left_Before_Expiration DESC"
        #####################
        $ResultDays = $null
        $ResultDays = MysqlConn -Query $Querydays
        Write-Host $ResultDays
        $EmailBody = $EmailBody + "`r `n" + "<b>List of Valid License that will due in next " + $Days[$i] + " days:</b>"
        foreach ($row in $ResultDays)
        {
            $EmailBody = "`n" + $EmailBody + "`r `n" + "Account Name : " + $row.Item(0) + "`r" +
                "SOR No. : " + $row.Item(1) + "`r" +
                "Product Name : " + $row.Item(2) + "`r" +
                "Expiry Date : " + $row.Item(3) + "`r" +
                "Remarks : " + $row.Item(4) + "`r" +
                "Days Before Expiry : " + $row.Item(5) + "`r" +
                "`n"
        }
        $rep = $Days[$i+1]
    }
}
Start/End s: 61 90
System.Data.DataRow System.Data.DataRow System.Data.DataRow
Start/End s: 31 60
Start/End s: 16 30
Start/End s: 1 15

Because you use the same variable $i in both for loops, it gets overwritten all the time. Change the name of one of them.
Bad:
for ($i = 1; $i -lt 11; $i++)
{
    ('First for loop: ' + $i)
    for ($i = 1; $i -lt 11; $i++)
    {
        ('Second for loop: ' + $i)
    }
}
First for loop: 1
Second for loop: 1
Second for loop: 2
Second for loop: 3
Second for loop: 4
Second for loop: 5
Second for loop: 6
Second for loop: 7
Second for loop: 8
Second for loop: 9
Second for loop: 10
Good:
for ($i = 1; $i -lt 11; $i++)
{
    ('First for loop: ' + $i)
    for ($x = 1; $x -lt 11; $x++)
    {
        ('Second for loop: ' + $x)
    }
}
First for loop: 1
Second for loop: 1
Second for loop: 2
Second for loop: 3
Second for loop: 4
Second for loop: 5
Second for loop: 6
Second for loop: 7
Second for loop: 8
Second for loop: 9
Second for loop: 10
First for loop: 2
Second for loop: 1
Second for loop: 2
...
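Applied to the script in the question, a minimal sketch of the fix is to give the outer loop its own counter and keep $i for the days; only the loop variables change, the query and email code stay as they are:
for ($a = 0; $a -lt $Affiliation.Count; $a++) {
    for ($i = 0; $i -lt $Days.Count; $i++) {
        # use $Affiliation[$a] in the query and $Days[$i] / $Days[$i+1] for the Start/End window
        Write-Host "Affiliation:" $Affiliation[$a] "Days:" $Days[$i]
    }
}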

AWK new line sorting

I have a script that sorts numbers into ranges:
{
if ($1 <= 9) xd++
else if ($1 > 9 && $1 <= 19) xd1++
else if ($1 > 19 && $1 <= 29) xd2++
else if ($1 > 29 && $1 <= 39) xd3++
else if ($1 > 39 && $1 <= 49) xd4++
else if ($1 > 49 && $1 <= 59) xd5++
else if ($1 > 59 && $1 <= 69) xd6++
else if ($1 > 69 && $1 <= 79) xd7++
else if ($1 > 79 && $1 <= 89) xd8++
else if ($1 > 89 && $1 <= 99) xd9++
else if ($1 == 100) xd10++
} END {
print "0-9 : "xd, "10-19 : " xd1, "20-29 : " xd2, "30-39 : " xd3, "40-49 : " xd4, "50-59 : " xd5, "60-69 : " xd6, "70-79 : " xd7, "80-89 : " xd8, "90-99 : " xd9, "100 : " xd10
}
output:
$ cat xd1 | awk -f script.awk
0-9 : 16 10-19 : 4 20-29 : 30-39 : 2 40-49 : 1 50-59 : 1 60-69 : 1 70-79 : 1 80-89 : 1 90-99 : 1 100 : 2
How can I make each range print on a new line?
like this:
0-9 : 16
10-19 : 4
20-29 :
30-39 : 2
Printing with \n doesn't work.
Additionally: the first range has 16 numbers; how can I show that count using "+" signs,
like this:
0-9 : 16 ++++++++++++++++
10-19 : 4 ++++
20-29 :
30-39 : 2 ++
Thank you in advance.
If we rewrite the current code to use an array to keep track of counts, we can then use a simple for loop to print the results on individual lines, eg:
{ if ($1 <= 9) xd[0]++
else if ($1 <= 19) xd[1]++
else if ($1 <= 29) xd[2]++
else if ($1 <= 39) xd[3]++
else if ($1 <= 49) xd[4]++
else if ($1 <= 59) xd[5]++
else if ($1 <= 69) xd[6]++
else if ($1 <= 79) xd[7]++
else if ($1 <= 89) xd[8]++
else if ($1 <= 99) xd[9]++
else xd[10]++
}
END { for (i=0;i<=9;i++)
print (i*10) "-" (i*10)+9, ":", xd[i]
print "100 :", xd[10]
}
At this point we could also replace the 1st part of the script with a comparable for loop, eg:
{ for (i=0;i<=9;i++)
if ($1 <= (i*10)+9) {
xd[i]++
next
}
xd[10]++
}
END { for (i=0;i<=9;i++)
print (i*10) "-" (i*10)+9, ":", xd[i]
print "100 :", xd[10]
}
As for the additional requirement to print a variable number of + at the end of each line, we can add a function (prt()) that generates them:
function prt(n ,x) {
x=""
if (n) {
x=sprintf("%*s",n," ")
gsub(/ /,"+",x)
}
return x
}
{ for (i=0;i<=9;i++)
if ($1 <= (i*10)+9) {
xd[i]++
next
}
xd[10]++
}
END { for (i=0;i<=9;i++)
print (i*10) "-" (i*10)+9, ":", xd[i], prt(xd[i])
print "100 :", xd[10], prt(xd[10])
}
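Run it the same way as the original script; with the counts from the sample run in the question the output would look like this (a sketch of the expected result):
$ cat xd1 | awk -f script.awk
0-9 : 16 ++++++++++++++++
10-19 : 4 ++++
20-29 :
30-39 : 2 ++
40-49 : 1 +
50-59 : 1 +
60-69 : 1 +
70-79 : 1 +
80-89 : 1 +
90-99 : 1 +
100 : 2 ++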
How can I make each range print on a new line?
Tell GNU AWK that you want OFS (the output field separator) to be a newline. Consider the following simple example:
awk 'BEGIN{x=1;y=2;z=3}END{print "x is " x, "y is " y, "z is " z}' emptyfile
gives output
x is 1 y is 2 z is 3
whilst
awk 'BEGIN{OFS="\n";x=1;y=2;z=3}END{print "x is " x, "y is " y, "z is " z}' emptyfile
gives output
x is 1
y is 2
z is 3
Explanation: the OFS value (a single space by default) is used to join the arguments of print. If you want to know more about OFS, read 8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR.
(tested in gawk 4.2.1)
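Applied to the original script, the only change needed is a BEGIN block that sets OFS; the comma-separated arguments of the final print are then joined with newlines instead of spaces (a sketch, the counting block and the END print stay exactly as they are):
BEGIN { OFS = "\n" }   # every comma between print arguments now produces a line break
# ... rest of the original script unchanged ...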
You don't need to hard-code ten buckets like that:
jot -r 300 1 169 | mawk '
BEGIN { _+=(_+=_^=_<_)*_*_ } { ++___[_<(__=int(($!!_)/_))?_:__] }
END {
____ = sprintf("%*s", NR, _)
gsub(".","+",____)
for(__=_-_;__<=_;__++) {
printf(" [%3.f %-6s] : %5.f %.*s\n",__*_,+__==+_?"+ "\
: " , " __*_--+_++, ___[__], ___[__], ____) } }'
[ 0 , 9 ] : 16 ++++++++++++++++
[ 10 , 19 ] : 17 +++++++++++++++++
[ 20 , 29 ] : 16 ++++++++++++++++
[ 30 , 39 ] : 19 +++++++++++++++++++
[ 40 , 49 ] : 14 ++++++++++++++
[ 50 , 59 ] : 18 ++++++++++++++++++
[ 60 , 69 ] : 18 ++++++++++++++++++
[ 70 , 79 ] : 16 ++++++++++++++++
[ 80 , 89 ] : 20 ++++++++++++++++++++
[ 90 , 99 ] : 19 +++++++++++++++++++
[100 + ] : 127 ++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++

Why do Perl 6 state variables behave differently for different files?

I have 2 test files. In one file, I want to extract the middle section using a state variable as a switch, and in the other file, I want to use a state variable to hold the sum of numbers seen.
File one:
section 0; state 0; not needed
= start section 1 =
state 1; needed
= end section 1 =
section 2; state 2; not needed
File two:
1
2
3
4
5
Code to process file one:
cat file1 | perl6 -ne 'state $x = 0; say " x is ", $x; if $_ ~~ m/ start / { $x = 1; }; .say if $x == 1; if $_ ~~ m/ end / { $x = 2; }'
and the result, which contains errors, is:
x is (Any)
Use of uninitialized value of type Any in numeric context
in block at -e line 1
x is (Any)
= start section 1 =
x is 1
state 1; needed
x is 1
= end section 1 =
x is 2
x is 2
And the code to process file two is
cat file2 | perl6 -ne 'state $x=0; if $_ ~~ m/ \d+ / { $x += $/.Str; } ; say $x; '
and the results are as expected:
1
3
6
10
15
What makes the state variable fail to initialize in the first code, but work correctly in the second?
I found that in the first code, if I make the state variable do something, such as an addition, then it works. Why is that?
cat file1 | perl6 -ne 'state $x += 0; say " x is ", $x; if $_ ~~ m/ start / { $x = 1; }; .say if $x == 1; if $_ ~~ m/ end / { $x = 2; }'
# here, $x += 0 instead of $x = 0; and the results have no errors:
x is 0
x is 0
= start section 1 =
x is 1
state 1; needed
x is 1
= end section 1 =
x is 2
x is 2
Thanks for any help.
This was answered in smls's comment:
Looks like a Rakudo bug. Simpler test-case:
echo Hello | perl6 -ne 'state $x = 42; dd $x'.
It seems that top-level state variables are
not initialized when the -n or -p switch is used. As a work-around, you can manually initialize the variable in a separate statement, using the //= (assign if undefined) operator:
state $x; $x //= 42;
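Applied to the first one-liner from the question, the work-around looks like this (a sketch; everything after the initialization is unchanged):
cat file1 | perl6 -ne 'state $x; $x //= 0; say " x is ", $x; if $_ ~~ m/ start / { $x = 1; }; .say if $x == 1; if $_ ~~ m/ end / { $x = 2; }'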

How to print lines of text with date older than two days

I have the following text file that I am working with, and I must be able to extract only the objectname value when the creationdatetime is older than two days.
objectname ...........................: \Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
creationdatetime .....................: 01-Sep-2012 02:17:43
objectname ...........................: \Path\to\file\hpVSS-LUN-22May12 22.24.11\hpVSS-LUN-28Aug12 22.16.19
creationdatetime .....................: 03-Sep-2012 10:18:09
objectname ...........................: \Path\to\file\hpVSS-LUN-22May-12 22.24.11\hpVSS-LUN-27Aug12 22.01.52
creationdatetime .....................: 03-Sep-2012 10:18:33
The expected output of the command for the above would be:
\Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
Any help will be greatly appreciated.
Prem
Date parsing in awk is a bit tricky, but it can be done using mktime. To convert the month name to a number, an associative translation array is defined.
The path names have spaces in them, so the best choice for the field separator is probably ": " (colon followed by space). Here's a working awk script:
older_than_two_days.awk
BEGIN {
    months2num["Jan"] = 1; months2num["Feb"] = 2; months2num["Mar"] = 3; months2num["Apr"] = 4;
    months2num["May"] = 5; months2num["Jun"] = 6; months2num["Jul"] = 7; months2num["Aug"] = 8;
    months2num["Sep"] = 9; months2num["Oct"] = 10; months2num["Nov"] = 11; months2num["Dec"] = 12;
    now = systime()
    two_days = 2 * 24 * 3600
    FS = ": "
}
$1 ~ /objectname/ {
    path = $2
}
$1 ~ /creationdatetime/ {
    split($2, ds, " ")
    split(ds[1], d, "-")
    split(ds[2], t, ":")
    date = d[3] " " months2num[d[2]] " " d[1] " " t[1] " " t[2] " " t[3]
    age_in_seconds = mktime(date)
    if (now - age_in_seconds > two_days)
        print path
}
All the splitting in the last block is to pick out the date bits and convert them into a format that mktime accepts.
Run it like this:
awk -f older_than_two_days.awk infile
Output:
\Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
I would do it in 2 phases:
1) Reformat your input file:
awk '/objectname/{$1=$2="";file=$0;getline;$1=$2="";print $0" |"file}' inputfile > inputfile2
This way you would be dealing with:
01-Sep-2012 02:17:43 | \Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
03-Sep-2012 10:18:09 | \Path\to\file\hpVSS-LUN-22May12 22.24.11\hpVSS-LUN-28Aug12 22.16.19
03-Sep-2012 10:18:33 | \Path\to\file\hpVSS-LUN-22May-12 22.24.11\hpVSS-LUN-27Aug12 22.01.52
2) Filter on dates:
COMPARDATE=$(($(date +%s)-2*24*3600)) # 2 days off
IFS='|'
while read d f
do
    [[ $(date -d "$d" +%s) -lt $COMPARDATE ]] && printf "%s\n" "$f"
done < inputfile2
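The two phases can also be chained into a single pipeline so that no intermediate file is needed (a sketch built from the same two commands, again using a numeric comparison):
COMPARDATE=$(($(date +%s)-2*24*3600)) # 2 days off
awk '/objectname/{$1=$2="";file=$0;getline;$1=$2="";print $0" |"file}' inputfile |
while IFS='|' read -r d f
do
    [[ $(date -d "$d" +%s) -lt $COMPARDATE ]] && printf "%s\n" "$f"
done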

How to subtract milliseconds with AWK - script

I'm trying to create an awk script that subtracts the milliseconds between two records joined into a pair. For example, on the command line I might do this:
Input:
06:20:00.120
06:20:00.361
06:20:15.205
06:20:15.431
06:20:35.073
06:20:36.190
06:20:59.604
06:21:00.514
06:21:25.145
06:21:26.125
Command:
awk '{ if ( ( NR % 2 ) == 0 ) { printf("%s\n",$0) } else { printf("%s ",$0) } }' input
I'll obtain this:
06:20:00.120 06:20:00.361
06:20:15.205 06:20:15.431
06:20:35.073 06:20:36.190
06:20:59.604 06:21:00.514
06:21:25.145 06:21:26.125
To subtract the milliseconds properly:
awk '{ if ( ( NR % 2 ) == 0 ) { printf("%s\n",$0) } else { printf("%s ",$0) } }' input| awk -F':| ' '{print $3, $6}'
And to avoid negative numbers:
awk '{if ($2<$1) sub(/00/, "60",$2); print $0}'
awk '{$3=($2-$1); print $3}'
The goal is to get this:
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.91 ms
Call 5 0.98 ms
And finally, an average.
I can do all of this command by command, but I don't know how to put it into a single script.
Please help.
Using awk:
awk '
BEGIN { cmd = "date +%s.%N -d " }
NR%2 {
    cmd $0 | getline var1;
    next
}
{
    cmd $0 | getline var2;
    var3 = var2 - var1;
    print "Call " ++i, var3 " ms"
}
' file
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.91 ms
Call 5 0.98 ms
One way using awk:
Content of script.awk:
## For every input line.
{
## Convert formatted dates to time in milliseconds.
t1 = to_ms( $0 )
getline
t2 = to_ms( $0 )
## Calculate difference between both dates in milliseconds.
tr = (t1 >= t2) ? t1 - t2 : t2 - t1
## Print to output with time converted to a readable format.
printf "Call %d %s ms\n", ++cont, to_time( tr )
}
## Convert a date in format hh:mm:ss.mmm to milliseconds.
function to_ms(time, time_ms, time_arr)
{
split( time, time_arr, /:|\./ )
time_ms = ( time_arr[1] * 3600 + time_arr[2] * 60 + time_arr[3] ) * 1000 + time_arr[4]
return time_ms
}
## Convert a time in milliseconds to format hh:mm:ss.mmm. In case of 'hours' or 'minutes'
## with a value of 0, don't print them.
function to_time(i_ms, time)
{
ms = int( i_ms % 1000 )
s = int( i_ms / 1000 )
h = int( s / 3600 )
s = s % 3600
m = int( s / 60 )
s = s % 60
# time = (h != 0 ? h ":" : "") (m != 0 ? m ":" : "") s "." ms
time = (h != 0 ? h ":" : "") (m != 0 ? m ":" : "") s "." sprintf( "%03d", ms )
return time
}
Run the script:
awk -f script.awk infile
Result:
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.910 ms
Call 5 0.980 ms
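The question also asks for an overall average. A minimal way to add one to script.awk above (a sketch; total is a new accumulator, everything else stays the same) is to sum the differences in the main block and print the mean in an END block:
## In the main block, right after the printf:
total += tr

## Appended at the end of script.awk:
END {
    if (cont > 0)
        printf "Average %s ms\n", to_time( total / cont )
}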
If you're not tied to awk:
to_epoch() { date -d "$1" "+%s.%N"; }
count=0
paste - - < input |
while read t1 t2; do
    ((count++))
    diff=$(printf "%s-%s\n" $(to_epoch "$t2") $(to_epoch "$t1") | bc -l)
    printf "Call %d %5.3f ms\n" $count $diff
done

Union "tables" with awk

I have multiple "tables" in a file, such as:
col1, col2, col3, col4
1, 2, 3, 4
5, 6, 7, 8
col2, col3, col5
10, 11, 12
13, 14, 15
And I would like to collapse these 2 tables to:
col1, col2, col3, col4, col5
1 , 2 , 3 , 4 ,
5 , 6 , 7 , 8 ,
, 10 , 11 , , 12
, 13 , 14 , , 15
(Note: extra whitespace left just to make things easier to understand)
This would seem to require at least 2 passes, one to collect the full list of columns, and another one to create the output table. Is it possible to do this with awk? If not, what other tool would you recommend?
give this a try:
Code:
$ cat s.awk
NR==FNR{
    if (match($1, /^col/))
        maxIndex=(substr($NF,4,1)>maxIndex)?substr($NF,4,1):maxIndex
    next
}
FNR==1{
    for (i=1;i<=maxIndex;i++)
        header=(i==maxIndex)?header "col"i:header "col" i ", "
    print header
}
/^col[1-9]/{
    for (i in places)
        delete places[i]
    for (i=1;i<=NF;i++){
        n=substr($i,4,1)
        places[n]=i
    }
}
/^[0-9]/{
    s=""
    for (i=1;i<=maxIndex;i++)
        s=(i in places)? s $places[i] " " : s ", "
    print s
}
Call with:
awk -f s.awk file file | column -t
Output:
col1, col2, col3, col4, col5
1, 2, 3, 4 ,
5, 6, 7, 8 ,
, 10, 11, , 12
, 13, 14, , 15
HTH Chris
The code assumes that the tables are separated by empty lines:
awk -F', *' 'END {
for (i = 0; ++i <= c;)
printf "%s", (cols[i] (i < c ? OFS : RS))
for (i = 0; ++i <= n;)
for (j = 0; ++j <= c;)
printf "%s", (vals[i, cols[j]] (j < c ? OFS : RS))
}
!NF {
fnr = NR + 1; next
}
NR == 1 || NR == fnr {
for (i = 0; ++i <= NF;) {
_[$i]++ || cols[++c] = $i
idx[i] = $i
}
next
}
{
++n; for (i = 0; ++i <= NF;)
vals[n, idx[i]] = $i
}' OFS=', ' tables
If you have the tables in separate files:
awk -F', *' 'END {
for (i = 0; ++i <= c;)
printf "%s", (cols[i] (i < c ? OFS : RS))
for (i = 0; ++i <= n;)
for (j = 0; ++j <= c;)
printf "%s", (vals[i, cols[j]] (j < c ? OFS : RS))
}
FNR == 1 {
for (i = 0; ++i <= NF;) {
_[$i]++ || cols[++c] = $i
idx[i] = $i
}
next
}
{
++n; for (i = 0; ++i <= NF;)
vals[n, idx[i]] = $i
}' OFS=', ' file1 file2 [.. filen]
Here's a one-pass perl solution. It assumes there is at least one blank line between each table in the file.
perl -00 -ne '
BEGIN {
    %column2idx = ();
    @idx2column = ();
    $lineno = 0;
    @lines = ();
}
chomp;
@rows = split /\n/;
@field_map = ();
@F = split /, /, $rows[0];
for ($i=0; $i < @F; $i++) {
    if (not exists $column2idx{$F[$i]}) {
        $idx = @idx2column;
        $column2idx{$F[$i]} = $idx;
        $idx2column[$idx] = $F[$i];
    }
    $field_map[$i] = $column2idx{$F[$i]};
}
for ($i=1; $i < @rows; $i++) {
    @{$lines[$lineno]} = ();
    @F = split /, /, $rows[$i];
    for ($j=0; $j < @F; $j++) {
        $lines[$lineno][$field_map[$j]] = $F[$j];
    }
    $lineno++;
}
END {
    $ncols = @idx2column;
    print join(", ", @idx2column), "\n";
    foreach $row (@lines) {
        @row = ();
        for ($i=0; $i < $ncols; $i++) {
            push @row, $row->[$i];
        }
        print join(", ", @row), "\n";
    }
}
' tables | column -t
Output:
col1, col2, col3, col4, col5
1, 2, 3, 4,
5, 6, 7, 8,
, 10, 11, , 12
, 13, 14, , 15