I am trying to calculate resolution times in Excel based on a data pull from HP ALM QC.
We have separate statuses for time spent with QA, Dev, and Business. I want to produce a report showing the resolution time for each item, by Defect ID and by department.
Resolution time recording hours (24 hours a day):
Start: 10:00 PM PST Sunday
End: 5:00 PM PST Friday
Statuses:
New (Starting point)
Open - Dev
Open - QA
Open - Bus
Open - Admin
Fixed (Stops Clock)
Failed (Starts Clock against Dev)
Closed (Stops Clock)
Cancelled (Stops Clock)
These are the columns from the data pull:
ID#
TIMESTAMP
STATUS
VERSION
LEAD
SEVERITY
CAUSE
PARENT
FAILS
COMPANY
I would also like to conditionally format the report to highlight any resolution times in red that exceed these goals:
Resolution Time SLAs:
A: 8 hours
B: 24 hours
C: 32 hours
D: 48 hours
E: 72 hours
F: 72 hours
I don't know where to start. We used to calculate resolution time with VBA, but the error rate was over 20%. I'm very new to this, so I apologize if I am leaving anything out.
This was much easier to figure out than I thought.
This can be done with a formula and a pivot table.
The data must be sorted so that each defect's rows are grouped by ID # and run from oldest timestamp to newest (sort by timestamp first, then by item ID #).
For each row, subtract the previous timestamp from the current one, and wrap the difference in MAX(0, ...) to prevent negative numbers from appearing when the rows switch to a new defect.
Then pivot the data for totals.
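A minimal sketch of the per-row helper formula, assuming ID# is in column A, TIMESTAMP is in column B, and the data starts on row 2 (the IF guard against mixing two defects is an addition of mine; adjust the columns to your layout). Enter it in the second data row (row 3) and fill down:

=MAX(0, IF(A3=A2, (B3-B2)*24, 0))

The subtraction gives a difference in days, so multiplying by 24 converts it to hours. The pivot table then sums this helper column by ID # and STATUS, and a conditional-formatting rule comparing each total against its SLA threshold handles the red highlighting.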
I'm a data analyst in the insurance industry, and we currently have a program in SAS EG that tracks catastrophe development week by week since the start of the event, for all of the catastrophic events that are reported.
(I.e., week 1 is the catastrophe start date + 7 days, week 2 is the end of week 1 + 7 days, and so on.) All transaction amounts (dollars) for a specific catastrophe are then grouped into the respective weeks based on the date each transaction was made.
The problem we're faced with is that we are moving away from SAS EG to GCP BigQuery, and the current process of calculating those weeks relies on a manually read-in list, which isn't very efficient and isn't easily translated to BigQuery.
Curious if anybody has an idea that would allow me to calculate each week number in periods of 7 days since the start of an event in SQL, or an idea specific to BigQuery? There would be different start dates for each event.
It is complex, I know, and I'm willing to give more explanation as needed. Open to any ideas for this, as I haven't been able to find anything.
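One way to approach it in BigQuery standard SQL is to integer-divide the day gap between the transaction date and the event's start date by 7. A minimal sketch, assuming placeholder tables events(event_id, start_date) and transactions(event_id, txn_date, amount) rather than your real schema:

-- Week 1 covers the start date through start + 6 days, week 2 the next 7 days, and so on.
SELECT
  t.event_id,
  DIV(DATE_DIFF(t.txn_date, e.start_date, DAY), 7) + 1 AS week_number,
  SUM(t.amount) AS weekly_amount
FROM transactions AS t
JOIN events AS e
  ON e.event_id = t.event_id
GROUP BY t.event_id, week_number

Because the week number is derived from each event's own start date, different start dates per event are handled without any manually maintained list.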
We're having trouble understanding the numbers in the daily summary emails we get from Fabric.
A search on SO only shows one somewhat-related question/answer.
Here are the emails from 2 consecutive days:
Our questions are:
Does “Monthly Active” mean over the last 30 days? If so, how can there be a 36% drop in 1 day if the counts went from 101 to 93 (an 8% drop)?
Why does “Daily Active” show a 75% drop if the current day is 1 and the previous day was 0?
Why does “Total Sessions” show a 94% drop if the current day is 1 and the previous day was 0?
Does the “Time in App per User” mean the average for the month or for the prior day? If it's for the month, why would 1 extra session cause the value to change so much? If it's for the day, why does it show “11:33m” even though the Total Sessions was 0?
Sometimes the “Time in App per User” ends in an “m” and sometimes it ends in an “s”. For example, “11:33m” and “0:44s”. Does that mean that “11:33m” is “11 hours and 33 minutes” and “0:44s” is “0 minutes and 44 seconds”? Or does the “11:33m” still mean “11 minutes and 33 seconds” and I should ignore the suffix?
Thanks for reaching out. Todd from Fabric here. The % change is actually the % difference vs. what we expected based on the previous behavior of your app. This compensates for day of week, etc.
The long session time despite zero reported sessions suggests that the session was still live, or had not yet been reported to us, at UTC midnight. The session gets created at session start, and the duration gets set at the end.
Thanks!
For years I've been using webpage requests like the following to retrieve 20 days at a time of minutewise stock data from Google:
http://www.google.com/finance/getprices?q=.INX&i=60&p=20d&f=d,c,h,l,o,v
= Retrieve, for .INX (the S&P 500 index), 60-second interval data for the last 20 days, with format Datetime (in Unix format), Close, High, Low, Open, Volume.
The Datetime is in Unix format (seconds since 1/1/1970, prefixed with an "A") for the first entry of each day, and subsequent entries show the intervals that have passed (so 1 = 60 seconds after the opening of the market that day).
That worked up until 9/10/2017, but today (9/17) it only returns day-end data (it even reports the "interval" between samples as 86400). Pooey! I can get that anywhere, in bulk.
But if I ask for fewer days, or broader intervals, it seems to return data - but weird data. Asking for data every 120 seconds returns exactly that - but only for every other market day. Weird!
Has anyone got a clue what might have happened?
Whoa! I think I figured it out.
Google still returns minutewise data within roughly the same limits (up to 20 calendar days), but instead of d=10 returning all the market data for the last 10 calendar days, it returns the data for the last 10 market days. Previously, to get the last 10 market days you would ask for d=14 (M-F x 2, plus two weekends). Now Google interprets the d variable as market days, and asking for d=20 exceeds the limits on what they will deliver.
It now appears that d=15 is the limit (three weeks of market days). No clue on why I got the very weird every-other-day data for a while... but maybe if you exceed their d-limits the intervals get screwy. Dunno. Don't care. Easy fix.
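So under the new interpretation, a request for the last 10 market days of minutewise data would ask for p=10d directly, in the same format as above:

http://www.google.com/finance/getprices?q=.INX&i=60&p=10d&f=d,c,h,l,o,v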
I need to create a T-SQL script (procedure/function) for SQL Server 2008 Express that takes entries from a "Tasks" table and returns only the tasks planned to occur in the next 5 hours.
In the "Tasks" table I should have tasks with job like properties :
Occurrence : One time or Recursive
Frequency : every x Days, Weeks, Months
Depending on the Frequency :
For Daily : every x hours,minutes
For Weekly -Days of the week when the task should occur
For Monthly - The first,second,last -day,week day,weekend -
day of week OR The 1st, 2nd, 3rd..31st of the month.
When I run the script I should get only the tasks that will occur in the next 5 hours or less in the case if task occurs every x minutes.
So what this script should actually do is use all the options from a SQL Server Job for a task.
Example 1: I create a task "Check email" that occurs every day between 14:00 and 15:00. When I run the script at 09:00 I would get Task: "Check email", Time: 5 hours left until task. Because it occurs every day, I should be able to get this result every day when I run the script, but only within that 5-hours-or-less range.
Example 2: A task "Send hours report" that occurs monthly, on the first weekday (not a weekend day), at 01:00 PM. Running the script on the first weekday of the month at 08:00 AM, I should get Task: "Send hours report", Time: 5 hours until task.
I know it's quite a big request, but hopefully someone will find easy what seems pretty hard to me.
You should also store somewhere the date/time of the last execution of each job (at least for jobs that are configured to run every x hours/minutes). The idea is to calculate the next execution date/time for each job and return it if it's less than 5 hours away. If the job is configured for every x hours/minutes, use DATEADD on the last execution time and check whether the result falls within the configured timeframe. If the job is configured for daily execution at a specified time, just check whether that time is less than 5 hours away (but still in the future). I have built a similar feature, and it's not an easy thing to cover every case (it took me a few days of coding).
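A rough sketch of the "next execution" check for the two simplest cases, in T-SQL. The Tasks columns used here (FrequencyType, RepeatMinutes, ScheduledTime, LastRunTime) are assumptions rather than your actual schema, and the weekly/monthly rules would still need their own branches:

DECLARE @now     DATETIME = GETDATE();
DECLARE @horizon DATETIME = DATEADD(HOUR, 5, @now);

SELECT t.TaskId,
       t.TaskName,
       x.NextRun,
       DATEDIFF(MINUTE, @now, x.NextRun) AS MinutesUntilTask
FROM   Tasks AS t
CROSS APPLY
(
    SELECT NextRun =
        CASE t.FrequencyType
            -- every x minutes: next occurrence = last execution + interval
            WHEN 'Minutes' THEN DATEADD(MINUTE, t.RepeatMinutes, t.LastRunTime)
            -- daily at a fixed time: today at ScheduledTime, or tomorrow if that moment has passed
            WHEN 'Daily' THEN
                CASE WHEN DATEADD(DAY, DATEDIFF(DAY, 0, @now), CAST(t.ScheduledTime AS DATETIME)) >= @now
                     THEN DATEADD(DAY, DATEDIFF(DAY, 0, @now), CAST(t.ScheduledTime AS DATETIME))
                     ELSE DATEADD(DAY, DATEDIFF(DAY, 0, @now) + 1, CAST(t.ScheduledTime AS DATETIME))
                END
        END
) AS x
WHERE x.NextRun BETWEEN @now AND @horizon;

Wrapping this in a stored procedure and adding the weekly and monthly branches (plus a window check like the 14:00-15:00 range in Example 1) follows the same pattern: compute the next run, then compare it against the 5-hour horizon.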
I've started thinking about an employee shift management application to handle the shifts (who works when, trading, etc.) at my current workplace, which uses pen and paper and gives us employees no way to communicate about changes without going through the boss and being on site.
Currently the shifts are modeled loosely as:
There is a recurring 4 week period (from Monday week 1 to Sunday week 4)
There is a template for placing employees in this 4 week period
Every 4 months (i.e. 3 times a year) the 4-week template is projected over the next 4-month period
The shifts have been the same for a long time, and it seems many employees would prefer to have them changed (I can tell by the requests for change that come in every time a new 4-month period is set).
What I'm aiming at are the models:
Shift_group_tpl (the 4 week period above)
Shift_tpl (a single shift in the 4 week period, including info on who defaults to work this shift)
Shift_group (a set period of time with actual shifts)
Shift (a set shift with a real time period and an employee, and the possibility to be changed in start_time, end_time, and employee)
I've thought of a way to do this with recurring iCalendar events: creating RRULEs (without an end time) and then calculating (using temporary start and end times) whether a specific Shift_group_tpl could be used within a real Shift_group. (The problem with this approach is that I can't figure out how to trim the Shift_group_tpl's to fit the start or end of a Shift_group.)
What I'm looking for are some other perspectives or ways of doing it or even just a pat on the shoulder letting me know that I'm on the right track (and then giving advice on the trimming problem).
/iole1
What I'm aiming at are the models:
Shift_group_tpl (the 4 week period above)
Shift_tpl (a single shift in the 4 week period, including info on who defaults to work this shift)
Shift_group (a set period of time with actual shifts)
Shift (a set shift with a real time period and an employee, and the possibility to be changed in start_time, end_time, and employee)
You have "sql" as a tag for this post? So im guessing you want these as SQL tables?
By the sounds, the problem is that your considering the data you have, rather than the abstract concepts you need to store that data. Which is what you'd need to do to create an application. (Most likely a "Shifts" table, rather than the four tables above).
There is little information here to help, Consider refining your thoughts and ask another question.
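For example, a single table along these lines could hold both the template slots and the concrete shifts. This is purely illustrative; every name and type here is an assumption rather than part of the original design:

CREATE TABLE Shifts (
    shift_id     INTEGER PRIMARY KEY,
    employee_id  INTEGER NULL,       -- NULL while nobody is assigned to the slot
    start_time   TIMESTAMP NOT NULL,
    end_time     TIMESTAMP NOT NULL,
    is_template  BOOLEAN NOT NULL,   -- true = slot in the 4-week template, false = a real shift
    template_id  INTEGER NULL        -- a real shift points back at the template slot it was generated from
);

Projecting the 4-week template over the next 4-month period would then be a single insert that copies the template rows with shifted dates, and a trade or a time change is just an update to one concrete row.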