I have two times: actual_arr and sched_arr. Both are stored as char in the format YYYYMMDDHH24MISS.
Now I have to calculate the punctuality of each movement. The rows are stored as below:
MovtName  Actual_Arr      Sched_Arr
mvt1      20140206215900  20140206210000
Now my definition of punctuality (as a percentage) is: (actual time - sched time) / sched time * 100.
I know how to calculate the difference for each movement. The code snippet I used is:
Trunc((To_Date(actual_arr,'YYYYMMDDHH24MISS')-To_Date(sched_arr,'YYYYMMDDHH24MISS'))*24*60,2)
This gives the delay in minutes.
Now what do I divide this value by? This is what I cannot wrap my head around. How do I convert sched_arr into minutes? Or, in other words, what is a valid denominator for the punctuality equation?
If anybody has a more correct definition of punctuality and how to calculate it, I'm all ears.
Thanks in advance.
For a single meeting, just use the lateness in minutes as a measure of punctuality.
When you have multiple meetings you can start to define punctuality in non-dimensional terms, such as the percentage of meetings which started more than five minutes late.
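As a sketch in Oracle SQL, assuming the rows live in a table named movements (a hypothetical name) and reusing the question's TO_DATE conversion, that metric could be computed like this:
-- Percentage of movements that arrived more than 5 minutes after schedule.
-- movements is a placeholder table name; the columns come from the question.
SELECT ROUND(
         100 * SUM(CASE
                     WHEN (TO_DATE(actual_arr, 'YYYYMMDDHH24MISS')
                           - TO_DATE(sched_arr, 'YYYYMMDDHH24MISS')) * 24 * 60 > 5
                     THEN 1 ELSE 0
                   END) / COUNT(*), 2) AS pct_late
FROM movements;
The 5-minute threshold is arbitrary; pick whatever tolerance fits your operation.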
I am applying the VRP example of OptaPlanner with time windows, and I get feasible solutions whenever I define time windows within a 24-hour range (00:00 to 23:59). But I need to:
Manage long trips, where I know that the travel time from the depot to the first visit, or between visits, will be more than 24 hours. Currently it does not give me workable solutions, because the TW format is a 24-hour format: when the scoring rule "arrivalAfterDueTime" is applied, the "arrivalTime" is always higher than the "dueTime", because the "dueTime" is in the range 00:00 to 23:59 while the "arrivalTime" falls on the next day.
I have thought that I should take each Customer's TW and add more TWs to it, one for each day that is planned.
For example, if I am planning a trip of 3 days, then I would have 3 time windows per Customer. Something like this: if Customer 1 is available from [08:00-10:00], then say it will also be available from [32:00-34:00] and [56:00-58:00], which are the equivalents of the same TW on the following days.
Likewise, I handle the times as long values, converted to milliseconds.
I don't know if this is the right way; my question is more about ideas for approaching this constraint. Maybe you have faced a similar problem, and any idea would be very appreciated.
Sorry for the wording, I am a Spanish speaker. Thank you.
Without having checked the example, handling multiple days shouldn't be complicated. It all depends on how you model your time variable.
For example, you could:
model the timestamps as a long value denoting seconds since the epoch. This is how most of the examples are modelled, if I remember correctly. Note that this is not very human-readable, but it is the fastest to compute with
use a time data type, e.g. LocalTime; this is a human-readable time format, but it only works within a 24-hour range and will be slower than a primitive data type
use a date-time data type, e.g. LocalDateTime; this is also human-readable, works in any time range, and will likewise be slower than a primitive data type
I would strongly encourage you not to simply map the current day or current hour to a zero value and start counting from there. In your example you denote the times as [32:00-34:00], which makes it appear that you are using midnight of the current day as the 0th hour and counting from there. While you can do this, it will hurt the debuggability and maintainability of your code. That is just my general advice; you don't have to follow it.
What I would advise is to have your own domain models and map them to OptaPlanner models, where you use a long value for any timestamp, denoted as seconds since the epoch.
I need to add decorators that will represent the range from 6 days ago until now.
How should I do it?
Let's say the date is relative 604800000 millis from now, and its absolute value is 1427061600000:
#-604800000
#1427061600000
#now in millis - 1427061600000
#1427061600000 - now in millis
Is there a difference between using relative and absolute times?
Thanks
#-518400000--1
This will give you data for the last 6 days (or the last 144 hours: 518400000 ms = 6 * 24 * 60 * 60 * 1000).
I think all you need is to read this.
Basically, you have the choice of #time, which is time since the epoch (your #1427061600000). You can also express it as a negative number, which the system will interpret as NOW - time (your #-604800000). These both work, but they don't give the result you want: instead of returning everything that was added in that time range, they return a snapshot of your table from 6 days ago...
Although you COULD use that snapshot, eliminate all duplicates between it and your current table, and then take THOSE results as what was added during your 6 days, you're better off with:
Using time ranges directly, which you cover with your 3rd and 4th lines. I don't know if the order makes a difference, but I've always used #time1-time2 with time1 < time2 (in your case, #1427061600000 - now in millis).
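If this is BigQuery's legacy-SQL table decorators (an assumption on my part; there the decorator prefix is @ rather than #), a range query along these lines would do it:
-- Rows added between 6 days ago and now (both endpoints relative to NOW).
-- mydataset.mytable is a placeholder; substitute your own table.
SELECT COUNT(*)
FROM [mydataset.mytable@-518400000--1]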
Assume a computer operating at 1 GHz, i.e. it executes 10^9 instructions per second. For each of the following time complexities, what is the largest input size n that could be completely processed in 1 week?
a) n²
b) n³
c) 2^n
This is homework. I don't need the answer; I just don't know how to start the problem. Can someone please show me how to solve the first one? I could then figure out the rest. Thank you!
The way I see it is to take 10^9 and subtract 10^2 to get the maximum input, but that seems too easy.
60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, 7 days in a week. That's 604800 seconds.
If you can execute 10^9 instructions per second, you can execute 604800*10^9 instructions per week - that's 6.048*10^14.
The square root of 6.048*10^14, rounded down, is 24,592,681; i.e. we can execute 24,592,681^2 instructions in a week, so we can process an input of size 24,592,681 if it is n^2 time complexity.
The rest are pretty similar.
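In symbols, each part just inverts the complexity function at the weekly budget B = 6.048*10^14 instructions, i.e. it asks for the largest integer n with f(n) <= B:
n^2 <= B  =>  n = floor(sqrt(B))  = 24,592,681
n^3 <= B  =>  n = floor(B^(1/3))  = 84,567
2^n <= B  =>  n = floor(log2(B))  = 49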
I'm looking for recommendations on a best practice here.
I have a requirement where, on a given day, I must have an arbitrary number of intervals (think buckets of time composed of transactions), with at most N intervals per day. These intervals are time-like but can have arbitrary lengths, i.e. some last seconds, others minutes.
How the intervals are formed is driven by my source data. On any given day we always start with interval 1, and the total number of intervals we will have by EOD is unknown; each interval is defined by a fixed number of transactions. For every interval I will also need to know the end time.
What is the best approach here? Should I be bucketing my fact table and connecting it to a standard hour/minute/second dimension, or should I build a dimension from my transactional data to accommodate the intervals?
I appreciate your feedback.
If the buckets are based on time, you probably have to do it on one of your dimensions. There is a property on the attributes, called bucket, that can do that for you.
I have a time stored as a decimal(9,2) column in a SQL Server 2005 database.
The time is represented like this:
Time      timeInDecimal
1H 20Min  1.33
1H 30Min  1.50
and so on.
I'm looking for an easy way to check whether the number of minutes, excluding whole hours, is not evenly divisible by 5.
The values I'm hoping to find are ones like 1H 23Min, but not 1H 25Min.
I just want to compare the minute part of the time.
The way I do it now is:
RIGHT(CONVERT(varchar(5),DATEADD(minute,ROUND(timeInDecimal * 60,0),0),108),1) not in ('0','5')
But it hardly seems the ideal way to deal with this.
Feels like I can use the modulo operator for this, but how?
Or is there an even better way?
Hoping for a quick answer.
Kind Regards
Andreas
Using the modulus operator, twice:
ROUND((timeInDecimal % 1) * 60, 0) % 5 <> 0
That will:
Get the fractional part and convert it to minutes.
Round it to the nearest minute (.33 hours -> 20 minutes, not 19.80).
Check whether that's divisible by 5.
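As a usage sketch against a hypothetical table named worklog (the column name is the question's timeInDecimal):
-- Return rows whose minute part is not a multiple of 5.
-- 1.33 -> 20 min (excluded), 1.38 -> 23 min (returned), 1.50 -> 30 min (excluded)
SELECT *
FROM worklog
WHERE ROUND((timeInDecimal % 1) * 60, 0) % 5 <> 0;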