I have the following text file to work with, and I need to parse out only the objectname value when the creationdatetime is more than two days old.
objectname ...........................: \Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
creationdatetime .....................: 01-Sep-2012 02:17:43
objectname ...........................: \Path\to\file\hpVSS-LUN-22May12 22.24.11\hpVSS-LUN-28Aug12 22.16.19
creationdatetime .....................: 03-Sep-2012 10:18:09
objectname ...........................: \Path\to\file\hpVSS-LUN-22May-12 22.24.11\hpVSS-LUN-27Aug12 22.01.52
creationdatetime .....................: 03-Sep-2012 10:18:33
The output of the command for the above would be:
\Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
Any help will be greatly appreciated.
Prem
Date parsing in awk is a bit tricky, but it can be done with GNU awk's mktime. To convert the month name to a number, an associative translation array is defined.
The path names have spaces in them, so the best choice for field separator is probably ": " (colon followed by space). Here's a working awk script:
older_than_two_days.awk
BEGIN {
    months2num["Jan"] = 1; months2num["Feb"] = 2; months2num["Mar"] = 3; months2num["Apr"] = 4;
    months2num["May"] = 5; months2num["Jun"] = 6; months2num["Jul"] = 7; months2num["Aug"] = 8;
    months2num["Sep"] = 9; months2num["Oct"] = 10; months2num["Nov"] = 11; months2num["Dec"] = 12;
    now = systime()
    two_days = 2 * 24 * 3600
    FS = ": "
}
$1 ~ /objectname/ {
    path = $2
}
$1 ~ /creationdatetime/ {
    split($2, ds, " ")      # "01-Sep-2012" and "02:17:43"
    split(ds[1], d, "-")    # day, month name, year
    split(ds[2], t, ":")    # hours, minutes, seconds
    date = d[3] " " months2num[d[2]] " " d[1] " " t[1] " " t[2] " " t[3]
    created = mktime(date)  # creation date as epoch seconds
    if (now - created > two_days)
        print path
}
All the splitting in the last block is to pick out the date bits and convert them into a format that mktime accepts.
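mktime expects a datespec of the form "YYYY MM DD HH MM SS" (mktime and systime are GNU awk extensions), which you can check in isolation:
gawk 'BEGIN { print mktime("2012 9 01 02 17 43") }'   # epoch seconds for 01-Sep-2012 02:17:43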
Run it like this:
awk -f older_than_two_days.awk infile
Output:
\Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
I would do it in 2 phases:
1) reformat your input file
awk '/objectname/{$1=$2="";file=$0;getline;$1=$2="";print $0" |"file}' inputfile > inputfile2
This way you would deal with:
01-Sep-2012 02:17:43 | \Path\to\file\hpvss-LUN-22May12 22.24.11\hpVSS-LUN-29Aug12 22.39.15
03-Sep-2012 10:18:09 | \Path\to\file\hpVSS-LUN-22May12 22.24.11\hpVSS-LUN-28Aug12 22.16.19
03-Sep-2012 10:18:33 | \Path\to\file\hpVSS-LUN-22May-12 22.24.11\hpVSS-LUN-27Aug12 22.01.52
2) filter on dates:
COMPARDATE=$(($(date +%s)-2*24*3600)) # now minus 2 days, in epoch seconds
IFS='|'
while read d f
do
    [[ $(date -d "$d" +%s) -lt $COMPARDATE ]] && printf "%s\n" "$f"
done < inputfile2
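The two phases can also be combined into a single pipeline (a sketch, still assuming GNU date):
awk '/objectname/{$1=$2="";file=$0;getline;$1=$2="";print $0" |"file}' inputfile |
while IFS='|' read -r d f
do
    [[ $(date -d "$d" +%s) -lt $(date -d '2 days ago' +%s) ]] && printf "%s\n" "$f"
done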
Related
I have some CSV files where a certain column is actually supposed to be an array, but all fields are separated by commas. I need to convert the file so that every value is quoted and the array column becomes a single quoted, comma-delimited list. I do know the array column's index for each file.
I wrote the script below to handle this. However, each line is printed as hoped for, but followed by the raw input line.
desired output:
A,B,C,D
"1","","a,b,c","2"
"3","4","","5"
"","5","d,e","6"
"7","8","f","9"
(base) balter@winmac:~/winhome/CancerGraph$ cat testfile
A,B,C,D
1,,a,b,c,2
3,4,,5
,5,d,e,6
7,8,f,9
(base) balter@winmac:~/winhome/CancerGraph$ ./fix_array_cols.awk FS="," array_col=3 testfile
A,B,C,D
"1","","a,b,c","2"
1,,a,b,c,2
"3","4","","5"
3,4,,5
"","5","d,e","6"
,5,d,e,6
"7","8","f","9"
7,8,f,9
(base) balter@winmac:~/winhome/CancerGraph$ cat fix_array_cols.awk
#!/bin/awk -f
BEGIN {
    getline;
    print $0;
    num_cols = NF;
    #printf("num_cols: %s, array_col: %s\n\n", num_cols, array_col);
}
NR>1 {
    total_fields = NF;
    # fields_before_array = (array_col - 1)
    # fields_before_array + array_length + fields_after_array = NF
    # fields_before_array + fields_after_array + 1 = num_cols
    # array_length - 1 = total_fields - num_cols
    # array_length = total_fields - num_cols + 1
    # fields_after_array = total_fields - array_length - fields_before_array
    #                    = total_fields - (total_fields - num_cols + 1) - (array_col - 1)
    #                    = num_cols - array_col
    fields_before_array = (array_col - 1);
    array_length = total_fields - num_cols + 1;
    fields_after_array = num_cols - array_col;
    first_array_position = array_col;
    last_array_position = array_col + array_length - 1;
    #printf("array_col: %s, fields_before_array: %s, array_length: %s, fields_after_array: %s, total_fields: %s, num_cols: %s", array_col, fields_before_array, array_length, fields_after_array, total_fields, num_cols)
    ### loop through fields before array column,
    ### remove whitespace, and print surrounded with ""
    for (i = 1; i < array_col; i++)
    {
        gsub(/ /, "", $i);
        printf("\"%s\",", $i);
    }
    ### collect array surrounded by ""
    array_data = "";
    ### loop through array
    for (i = array_col; i < array_col + array_length - 1; i++)
    {
        gsub(/ /, "", $i);
        array_data = array_data $i ",";
    }
    ### collect last array element with no trailing ,
    array_data = array_data $i
    ### print array surrounded by quotes
    printf("\"%s\",", array_data);
    ### loop through remaining fields, remove whitespace, surround with ""
    for (i = last_array_position + 1; i < total_fields; i++)
    {
        gsub(/ /, "", $i);
        printf("\"%s\",", $i);
    }
    ### finish line with \n
    printf("\"%s\"\n", $total_fields);
} FILENAME
Remove FILENAME from your script. A bare expression like that is a pattern; FILENAME is a non-empty string, so the pattern is always true, and a pattern with no action triggers the default action { print $0 }, which is what echoes each raw line after your formatted one.
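You can see the same behaviour in isolation; any always-true pattern with no action prints each input record verbatim:
awk 'FILENAME' testfile   # FILENAME is non-empty while reading testfile, so every line prints
awk '1' testfile          # same effect, the classic always-true pattern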
I have a csv file:
number1;number2;min_length;max_length
"40";"1801";8;8
"40";"182";8;8
"42";"32";6;8
"42";"4";6;6
"43";"691";9;9
I want the output to be:
4018010000;4018019999
4018200000;4018299999
42320000;42329999
423200000;423299999
4232000000;4232999999
42400000;42499999
43691000000;43691999999
So the new file will consist of:
column_1 = a concatenation of old_column_1 + old_column_2 + as many "0"s as (old_column_3 - length of old_column_2)
column_2 = a concatenation of old_column_1 + old_column_2 + as many "9"s as (old_column_3 - length of old_column_2)
That covers the case min_length = max_length. When min_length is not equal to max_length, I need to take into account all the possible lengths: for the line "42";"32";6;8, the lengths are 6, 7 and 8.
Also, I need to delete the quotation marks everywhere.
I tried with paste and cut, like this:
paste -d ";" <(cut -f1,2 -d ";" < file1) > file2
for the concatenation of the first two columns, but I think awk makes this easier. However, I can't figure out how to do it. Any help is appreciated. Thanks!
Edit: I've added the fourth column (max_length) to the input.
You may use this awk:
awk 'function padstr(ch, len,    s) {
    s = sprintf("%*s", len, "")   # a string of len spaces
    gsub(/ /, ch, s)              # turn every space into ch
    return s
}
BEGIN {
    FS = OFS = ";"
}
{
    gsub(/"/, "");                      # drop the double quotes
    for (i = 0; i <= ($4 - $3); i++) {  # one output line per allowed length
        d = $3 - length($2) + i         # number of padding digits
        print $1 $2 padstr("0", d), $1 $2 padstr("9", d)
    }
}' file
4018010000;4018019999
4018200000;4018299999
42320000;42329999
423200000;423299999
4232000000;4232999999
42400000;42499999
43691000000;43691999999
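If it helps to see padstr in isolation: sprintf("%*s", len, "") builds a string of len spaces, and gsub then swaps every space for ch. A quick check:
awk 'function padstr(ch, len,    s) {
    s = sprintf("%*s", len, ""); gsub(/ /, ch, s); return s
}
BEGIN { print padstr("0", 4), padstr("9", 6) }'   # prints: 0000 999999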
With awk:
awk '
BEGIN{FS = OFS = ";"} # set field separator and output field separator to be ";"
{
    $0 = gensub("\"", "", "g");    # Drop double quotes (gensub is GNU awk)
    s = $1$2;                      # The range header number
    for (n = $3; n <= $4; n++) {   # One range per allowed length, min_length..max_length
        l = 10^(n - length($2));   # 10 raised to the number of padded digits
        print s*l, (s+1)*l-1;      # Appending k zeros is multiplication by 10^k;
                                   # appending k nines gives (s+1)*10^k - 1
    }
}' input.txt
Explanation inline as comments.
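To see the trick on one concrete row: "40";"1801";8;8 gives s = 401801 with 4 padding digits, so the printed range is 401801 * 10^4 = 4018010000 through (401801 + 1) * 10^4 - 1 = 4018019999, matching the first line of the expected output.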
I have data which looks like this
1 3
1 2
1 9
5 4
4 6
5 6
5 8
5 9
4 2
I would like the output to be
1 3,2,9
5 4,6,8,9
4 6,2
This is just sample data but my original one has lots more values.
So this worked. It basically creates a hash table, using the first column as the key and appending the second column of each line to the value:
awk '{line="";for (i = 2; i <= NF; i++) line = line $i ", "; table[$1]=table[$1] line;} END {for (key in table) print key " => " table[key];}' trial.txt
OUTPUT
4 => 6, 2
5 => 4, 6, 8, 9
1 => 3, 2, 9
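Note that the one-liner as written actually leaves a trailing ", " after the last value on each line. For the two-column input shown, a variant that prepends the separator only when the key has been seen avoids that:
awk '{table[$1] = ($1 in table ? table[$1] ", " : "") $2}
END {for (key in table) print key " => " table[key]}' trial.txt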
I'd write
awk -v OFS=, '
{
    key = $1
    $1 = ""                          # reassigning $1 rebuilds $0 with OFS, leaving ",<value>"
    values[key] = values[key] $0
}
END {
    for (key in values) {
        sub(/^,/, "", values[key])   # strip the leading comma
        print key " " values[key]
    }
}
' file
If you want only the unique values for each key (requires GNU awk for multi-dimensional arrays)
gawk -v OFS=, '
{ for (i=2; i<=NF; i++) values[$1][$i] = i }
END {
    for (key in values) {
        printf "%s ", key
        sep = ""
        for (val in values[key]) {
            printf "%s%s", sep, val
            sep = ","
        }
        print ""
    }
}
' file
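Note that for (val in values[key]) visits the values in an unspecified order. If you need deterministic output, GNU awk 4.0+ lets you pin the traversal order; adding this line at the top of the script would, for instance, list each key's values in ascending numeric order:
BEGIN { PROCINFO["sorted_in"] = "@ind_num_asc" }   # gawk-only: for-in now iterates indices numerically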
or perl
perl -lane '
    $key = shift @F;                # first field is the key
    $values{$key}{$_} = 1 for @F;   # remaining fields become hash keys, so duplicates collapse
} END {                             # the stray } closes the implicit -n read loop
    $, = " ";
    print $_, join(",", keys %{$values{$_}}) for keys %values;
' file
If you're not concerned with the order of the keys, I think this is the idiomatic awk solution:
$ awk '{a[$1]=($1 in a?a[$1]",":"") $2}
END{for(k in a) print k,a[k]}' file |
column -t
4 6,2
5 4,6,8,9
1 3,2,9
awk '{for (i = 1; i <= NF; i++) {gsub(/[^[:alnum:]]/, " "); print tolower($i)": "NR | "sort -V | uniq";}}' input.txt
With the above code, I get output as:
line1: 2
line1: 3
line1: 5
line2: 1
line2: 2
line3: 10
I want it like below:
line1: 2, 3, 5
line2: 1, 2
line3: 10
How can I achieve it?
Use gawk's array features. I'll provide actual code once I hack it up.
awk '{
    for (i = 1; i <= NF; i++) {
        gsub(/[^[:alnum:]]/, " ");
        arr[tolower($i)] = arr[tolower($i)] NR ", "
    }
}
END {
    for (x in arr) {
        print x": "substr(arr[x], 1, length(arr[x])-2);
    }
}' input.txt | sort
Note that this includes duplicate line numbers if a word appears multiple times on the same lines.
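If you want each line number at most once per word, a small variant (a sketch) can remember which (word, line) pairs it has already recorded:
awk '{
    gsub(/[^[:alnum:]]/, " ")        # normalize punctuation to spaces up front
    for (i = 1; i <= NF; i++) {
        w = tolower($i)
        if (!((w, NR) in seen)) {    # record each (word, line) pair only once
            seen[w, NR] = 1
            arr[w] = arr[w] NR ", "
        }
    }
}
END {
    for (x in arr)
        print x ": " substr(arr[x], 1, length(arr[x]) - 2)
}' input.txt | sort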
using perl...
#!/usr/bin/perl
while(<>){
    if( /(\w+):\s*(\d+)/ ){   # extract the word and the line number
        $arr{lc($1)}{$2}++    # count them
    };
}
for my $k (sort keys %arr){   # print keys sorted alphabetically
    print "$k: ";
    $lines = $arr{$k};
    print join(", ", (sort {$a<=>$b} keys %$lines)), "\n";   # line numbers sorted numerically
}
This solution removes duplicates and sorts the numbers. Is this what you needed?
I'm trying to create an awk script to subtract the milliseconds between pairs of records joined onto one line. For example, from the command line I might do this:
Input:
06:20:00.120
06:20:00.361
06:20:15.205
06:20:15.431
06:20:35.073
06:20:36.190
06:20:59.604
06:21:00.514
06:21:25.145
06:21:26.125
Command:
awk '{ if ( ( NR % 2 ) == 0 ) { printf("%s\n",$0) } else { printf("%s ",$0) } }' input
I'll obtain this:
06:20:00.120 06:20:00.361
06:20:15.205 06:20:15.431
06:20:35.073 06:20:36.190
06:20:59.604 06:21:00.514
06:21:25.145 06:21:26.125
To subtract the milliseconds properly:
awk '{ if ( ( NR % 2 ) == 0 ) { printf("%s\n",$0) } else { printf("%s ",$0) } }' input| awk -F':| ' '{print $3, $6}'
And to avoid negative numbers:
awk '{if ($2<$1) sub(/00/, "60",$2); print $0}'
awk '{$3=($2-$1); print $3}'
The goal is to get this:
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.91 ms
Call 5 0.98 ms
And finally an average.
I can do all of this, but only command by command. I don't know how to put it together into a single script.
Please help.
Using awk:
awk '
BEGIN { cmd = "date +%s.%N -d " }
NR%2 {             # odd lines: remember the first timestamp of the pair
    cmd $0 | getline var1;
    next
}
{                  # even lines: read the second timestamp and print the difference
    cmd $0 | getline var2;
    var3 = var2 - var1;
    print "Call " ++i, var3 " ms"
}
' file
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.91 ms
Call 5 0.98 ms
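One caveat: every cmd $0 | getline spawns a date process and awk keeps its pipe open, so on a large input you would want to close each command after reading from it:
cmd $0 | getline var1
close(cmd $0)    # release the pipe to the spawned date process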
One way using awk:
Content of script.awk:
## For every input line.
{
    ## Convert formatted dates to time in milliseconds.
    t1 = to_ms( $0 )
    getline
    t2 = to_ms( $0 )

    ## Calculate the difference between both dates in milliseconds.
    tr = (t1 >= t2) ? t1 - t2 : t2 - t1

    ## Print to output with time converted to a readable format.
    printf "Call %d %s ms\n", ++cont, to_time( tr )
}

## Convert a date in format hh:mm:ss.mmm to milliseconds.
function to_ms(time,    time_ms, time_arr)
{
    split( time, time_arr, /:|\./ )
    time_ms = ( time_arr[1] * 3600 + time_arr[2] * 60 + time_arr[3] ) * 1000 + time_arr[4]
    return time_ms
}

## Convert a time in milliseconds to format hh:mm:ss.mmm. In case of 'hours' or 'minutes'
## with a value of 0, don't print them.
function to_time(i_ms,    time)
{
    ms = int( i_ms % 1000 )
    s = int( i_ms / 1000 )
    h = int( s / 3600 )
    s = s % 3600
    m = int( s / 60 )
    s = s % 60
    time = (h != 0 ? h ":" : "") (m != 0 ? m ":" : "") s "." sprintf( "%03d", ms )
    return time
}
Run the script:
awk -f script.awk infile
Result:
Call 1 0.241 ms
Call 2 0.226 ms
Call 3 1.117 ms
Call 4 0.910 ms
Call 5 0.980 ms
If you're not tied to awk:
to_epoch() { date -d "$1" "+%s.%N"; }

count=0
paste - - < input |
while read t1 t2; do
    ((count++))
    diff=$(printf "%s-%s\n" $(to_epoch "$t2") $(to_epoch "$t1") | bc -l)
    printf "Call %d %5.3f ms\n" $count $diff
done
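The question also asks for an average at the end, which none of the snippets above prints. A minimal all-awk sketch that handles the pairing, the differences, and the average in one pass over the original input:
awk -F'[:. ]' '
NR % 2 { h=$1; m=$2; s=$3; ms=$4; next }   # odd lines: remember the first timestamp
{
    t1 = (h*3600  + m*60  + s)  * 1000 + ms
    t2 = ($1*3600 + $2*60 + $3) * 1000 + $4
    d = (t2 >= t1) ? t2 - t1 : t1 - t2     # absolute difference in milliseconds
    sum += d
    printf "Call %d %.3f ms\n", ++n, d / 1000
}
END { if (n) printf "Average %.3f ms\n", sum / n / 1000 }' input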