Report to see finish date of a plan - RTC

Assume we have a plan in which all work items are properly estimated with story points, and a known team velocity.
Using that information, I want to get an overview of when all the work items will be finished, either by which sprint or by date. The goal would be to use that report to plan the project and to see how changes (removing/adding work items, removing/adding people) will affect the finish date.
It feels like something like this should exist, but I haven't found it yet.

I don't see this feature (a computed end date) for just any plan.
I only know about the Proposed End date provided for Formal plans via a plan snapshot:
A plan snapshot captures the current state of the plan, the work items in the plan and their schedules, the team area that owns the plan, and the iteration that the plan is created for. You can manage snapshots on the Snapshots tab of the Plan view. Three types of snapshots are available:
...
Proposed: This type of snapshot is available for traditional projects, such as projects created by using the Formal Project Management process template.
Typically, a Proposed snapshot is created before resources are allocated to the project.
The start and end dates captured in this type of snapshot populate the Proposed Start and Proposed End dates, which are displayed by default in the Schedule Variance view of the plan. Only one Proposed snapshot can exist at a time.
But that might not be what you are looking for, since you already have resources allocated, with a known team (i.e., group of resources) velocity.
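In the meantime, the projection itself is simple arithmetic. A minimal sketch (plain Python, outside RTC; all the numbers are made-up placeholders) of turning remaining story points and velocity into a projected finish sprint and date:

import math
from datetime import date, timedelta

# All inputs are placeholders; substitute your plan's numbers.
remaining_points = 120        # story points over all open work items
velocity = 18                 # points the team finishes per sprint
sprint_length_days = 14
next_sprint_start = date(2024, 1, 8)

# Round up: a partly used final sprint still runs to its end.
sprints_needed = math.ceil(remaining_points / velocity)
finish_date = next_sprint_start + timedelta(days=sprints_needed * sprint_length_days)

print(f"Done in {sprints_needed} sprints, around {finish_date}")

# Re-running with work items added/removed (remaining_points) or people
# added/removed (velocity) shows how the finish date shifts.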

Related

Is there a way to track delta changes for each plan per tab within a team in MS Teams?

I am trying to get the delta changes of a plan per tab within Teams by making an API call.
Get Delta for planner
URL GET : https://graph.microsoft.com/beta/users/{id}/planner/all/delta
I tried passing a plan ID in place of {id} in the URL. This request returns a delta link to track changes.
After making some changes in a plan, I am able to track those changes using the delta URL.
But the problem is that the number of changes we can track from a delta URL is limited.
When I tried this, I was able to track only the 32 most recent changes for that plan.
Earlier changes cannot be tracked with this delta URL.
I also tried passing a team ID and a user ID in place of {id} in the URL. There I found the same limit on tracked changes, and the response returns changes across all plans;
we don't get the details of which plan a particular change was made to.
And if I make changes to a task, we don't get the bucket or plan that the task is associated with. These are some limitations I found with the Planner delta query.
Can someone please suggest a fix or workaround for this issue?
Is there a way to track delta changes for each plan per tab within a team in MS Teams, or is there any other way you would suggest I try?
Tracking changes within the scope of a plan is a feature we're currently working on, but it isn't available yet.
The delta mechanic requires the client to keep a cache of items, so that the client can know the associations and the final state of the changed item. Currently there are no plans to include info about buckets and plans with the delta.
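To illustrate the cache the answer describes, here is a minimal sketch (Python with the requests library; the token acquisition is a placeholder, and this is a sketch rather than a definitive implementation). The client keeps a local dictionary of items keyed by ID, applies every page of changes to it, and saves the @odata.deltaLink for the next poll:

import requests

ACCESS_TOKEN = "..."   # placeholder: acquire via your OAuth flow
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

cache = {}             # item id -> latest known state, kept by the client
delta_link = None

url = "https://graph.microsoft.com/beta/me/planner/all/delta"
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    page = resp.json()
    for item in page.get("value", []):
        # Upsert: the client, not the service, is responsible for knowing
        # each item's final state and its plan/bucket associations.
        cache[item["id"]] = item
    url = page.get("@odata.nextLink")              # more pages this round?
    delta_link = page.get("@odata.deltaLink", delta_link)

# Later, GET delta_link with the same headers to receive only new changes.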

Can we track Microsoft delta changes on Users, Group, ChatMessage, or Planner objects based on time? "/me/planner/all/delta"

Basically, does anyone know how to ask for delta changes that happened after a certain time? I am saving all the changes that users make to Planner objects to a database, but I know the delta changes for hundreds of plans will eventually grow insanely large.
GET /me/planner/all/delta
GET /users/{id}/planner/all/delta
Does anyone know how to filter the delta response by a given time? My plan is to query the delta after a certain time.
It could be on any object that delta works with. Right now I can retrieve all the delta changes, but I do not see how to ask only for changes that happened after a certain time.
Delta only works with the tokens presented in the links; it is not time-based (we do not store it by time internally). It is also best-effort, which means that at some point the delta changes will be cleared and clients will be forced to read the objects again to get back in sync. So even if there were a time-based query, there would be no guarantee that you could access older data.
What is your scenario? Some kind of history tracking or auditing?
As far as I know, no. I have to cycle over all Planner plans and the tasks in them to get the details. I am currently saving the Planner task details to SharePoint, and instead of updating them I just delete all the old records and recreate them.
That makes sense. I was saving the deltas so that in the future I could tell which user modified which Planner objects, since Microsoft has not yet implemented an audit trail for Planner objects. Storing the deltaLink was just for possible future rollback processes.
I realized the deltaLink does not expire; it just uses the delta token to find future changes from the point at which the delta was queried. Basically, I am asking Microsoft Teams for some kind of audit trail of Planner object changes (at least who changed what, and when) so we can query those activities and hold specific individuals responsible for unwanted changes they made, for instance changing the due date of a Planner task.
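Since the service is not time-based, one workaround consistent with this thread is to poll the saved deltaLink on a schedule and stamp each change with the client's own receive time. A rough sketch (Python with requests; the token, state file, and store_change helper are illustrative placeholders, and paging via @odata.nextLink is omitted for brevity):

import json
import time
import requests

TOKEN = "..."                      # placeholder access token
STATE_FILE = "delta_link.json"     # where the last deltaLink is kept

def store_change(change, received_at):
    # Stub: replace with your database/SharePoint persistence.
    print(received_at, change.get("id"))

with open(STATE_FILE) as f:
    delta_link = json.load(f)["deltaLink"]   # saved from the previous poll

resp = requests.get(delta_link, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
page = resp.json()

received_at = time.time()
for change in page.get("value", []):
    # The service records no change time, so stamp the poll time ourselves;
    # polling more often tightens the "who changed what, when" bound.
    store_change(change, received_at)

with open(STATE_FILE, "w") as f:
    json.dump({"deltaLink": page["@odata.deltaLink"]}, f)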

How to get executed SQL or transaction history on a table (AS/400) DB2 for IBM i

I have an issue in our database (AS/400 DB2): in one of our tables, all the rows were deleted. I do not know whether it was a program or SQL that a user executed. All I know is that it happened at about 3 AM. I did check for any scheduled jobs at that time.
We managed to get the data back from backups, but I want to investigate what deleted the records, or which user did it.
Are there any logs on the AS/400 for physical files that show what SQL was executed against a specified table, and when? This would help me determine what caused this.
I tried checking System i Navigator but could not find any logs... Is there a way of getting transactional data on a table using System i Navigator or the green screen? And can I get the SQL that was executed in that time frame?
Any help would be appreciated.
There was no mention of how the time was inferred/determined, but given the lack of journaling, I would suggest immediately gathering information about the file and member: DSPOBJD for both *SERVICE and *FULL, DSPFD for *ALL, DMPOBJ, and perhaps even a copy of the row for the TABLE from the catalog [to include LAST_ALTERED_TIMESTAMP, the ALTEREDTS column of SYSTABLES, or the based-on field DBXATS from QADBXREF]. Gathering those is worthwhile almost only if done before any other activity [especially before any recovery activity]; it can help establish the time of the event and perhaps hint at what the event was. Most timestamps reflect only the most recent activity against the object [rather than acting as a historical log], so any recovery activity is likely to wipe out any timestamps that reflected the prior event/activity.
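For the catalog row, a sketch of capturing it before any recovery activity overwrites its timestamps (Python with pyodbc over the IBM i Access ODBC driver; the DSN, credentials, and library/table names are placeholders):

import pyodbc

# Placeholders throughout: DSN, credentials, and library/table names.
conn = pyodbc.connect("DSN=MYAS400;UID=AUDITUSR;PWD=secret")
cur = conn.cursor()

# Copy the whole catalog row before recovery work disturbs it;
# LAST_ALTERED_TIMESTAMP is the long name of the ALTEREDTS column.
cur.execute(
    "SELECT * FROM QSYS2.SYSTABLES WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?",
    ("MYLIB", "MYTABLE"),
)
row = cur.fetchone()
print(dict(zip([col[0] for col in cur.description], row)))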
Even if there was no journal for the file and nothing in the plan cache, there may have been [albeit unlikely] an active SQL monitor. An active monitor should be visible somewhere in the iNav GUI as well. I am not sure about the visibility of a monitor that may have been active in a prior time frame.
Similarly, despite the lack of journaling, there may be some system-level object or user auditing in effect under which the event was tracked, either as a command string or as an action on the file/member; combined with the inferred timing, all audit records spanning from just before until just after the event can be reviewed.
Although there may have been nothing in the scheduled jobs, the history log (DSPLOG) may show jobs that ended since that time, or jobs that started [perhaps shortly] before it, which are more likely to have been responsible. In my experience, the name of the job is often indicative; for example, a job named after the file, perhaps only because the request was submitted from PDM. Any spooled [or otherwise still available] job logs could be reviewed for possible references to the file and/or member name; perhaps a completion message for a CLRPFM request.
If the action may have come from a program, the file may be recorded as a referenced object, such that output from DSPPGMREF may reveal programs with the reference, and any [service] program that is an SQL program could have its embedded SQL statements revealed with PRTSQLINF; the last-used dates for those programs could be reviewed for possible matches. Note: module and program sources can also be searched, but there is no way to know into what name they were compiled, or into what they may have been bound if created only temporarily for the purpose of binding.
Using System i Navigator, expand Databases. Right-click on your system database. Select SQL Plan Cache -> Show Statements. From here, you can filter based on a variety of criteria.
This is not sure-fire, but it often saves me some time. Using System i Navigator, right-click on the table and choose Index Advisor. If you're lucky, one or more indexes are advised. If so, sort by date last advised, right-click on the index with the newest date, and select Show Statements... In that dialog box, either sort by date to help narrow things down or just scroll through the statements to find the one you're interested in. Right-click it and select Work with SQL Statement, and there you go.

How to store complex records for referencing historical revisions?

I have a table in my database that outlines complex processes in a work breakdown structure (similar to what's used to create Gantt charts). There are multiple rows for a particular process, each row outlining one hierarchical step of that process.
I then have a table of product types, each linked to a particular process. When an order for a particular product is placed, it is to be manufactured with the associated process.
In my situation, the processes can be dynamic (steps added or removed, for example).
I'm curious what the best way is to capture current and historical revisions of each process, such that even though a process may have evolved over time, I can go back to a particular order and determine what the process looked like at that time.
I'm sure there are multiple ways to go about this, using logging or triggers with a new history table, but I've had no experience doing something like this and I'd like to know what has worked well for others.
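To make the history-table idea above concrete, one common pattern is to make each process revision immutable and have every order pin the revision it was manufactured under. A minimal sketch with Python's built-in SQLite (all table and column names are illustrative, not from the original schema):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE process_revision (
    revision_id INTEGER PRIMARY KEY,
    process_id  INTEGER NOT NULL,
    created_at  TEXT    NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Steps belong to a revision and are never edited in place: a change to
-- the process creates a new revision plus a fresh copy of its steps.
CREATE TABLE process_step (
    revision_id INTEGER NOT NULL REFERENCES process_revision,
    step_no     INTEGER NOT NULL,
    parent_step INTEGER,             -- hierarchy for the WBS
    description TEXT    NOT NULL,
    PRIMARY KEY (revision_id, step_no)
);
-- An order records the exact revision in force when it was placed.
CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,
    product_id  INTEGER NOT NULL,
    revision_id INTEGER NOT NULL REFERENCES process_revision
);
""")

# Revision 1 of process 7, then a later revision 2 that adds a step.
conn.execute("INSERT INTO process_revision (revision_id, process_id) VALUES (1, 7)")
conn.executemany(
    "INSERT INTO process_step VALUES (?, ?, ?, ?)",
    [(1, 1, None, "Cut"), (1, 2, 1, "Deburr")],
)
conn.execute("INSERT INTO process_revision (revision_id, process_id) VALUES (2, 7)")
conn.executemany(
    "INSERT INTO process_step VALUES (?, ?, ?, ?)",
    [(2, 1, None, "Cut"), (2, 2, 1, "Deburr"), (2, 3, 1, "Polish")],
)
conn.execute('INSERT INTO "order" VALUES (100, 42, 1)')

# Historical lookup: the process exactly as it was when order 100 was placed.
rows = conn.execute("""
    SELECT s.step_no, s.description
    FROM "order" o JOIN process_step s ON s.revision_id = o.revision_id
    WHERE o.order_id = 100 ORDER BY s.step_no
""").fetchall()
print(rows)   # -> [(1, 'Cut'), (2, 'Deburr')]

Updating a process then means inserting a new revision with a fresh copy of its steps; old orders keep pointing at the rows they were placed against, so nothing historical is ever rewritten.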

Automating problem query identification in Oracle 11g

In our test bed, a number of test suites will be run in a row (unattended), producing reports for later consumption. I want to include in those reports queries which are candidates for further investigation, along with the data that justifies their inclusion in that list. We should be able to associate any query identified this way with the test suite that exposed it as a concern.
When we use SQL Server, this is relatively straightforward: a call to DBCC FREEPROCCACHE clears all of the counters before a suite begins; then at test end we run a query against sys.dm_exec_query_stats, which gives us access to the execution counts and min/max/total time(s) of each cached query plan, with hooks available to retrieve the parameterized SQL statement (we use FORCED parameterization in our MSSQL instances) and the query plan.
Ref: http://msdn.microsoft.com/en-us/library/ms189741%28SQL.90%29.aspx
My question: how do I implement an approximation for this when my target app has been connected to Oracle 11g? My reading thus far suggests that everything I'm after is available via the AWR, and that it should be possible to access the supporting views directly, but I haven't been able to close the circle on my own.
Why do you need to access the supporting views directly? It would seem to me that the simplest solution would be:
Each test suite starts and ends by explicitly generating an AWR snapshot so it knows the starting and ending snapshot ID and so that you can generate AWR reports for each suite individually.
You run AWR reports for each test suite
You review the AWR reports looking in particular at the various Top SQL sections
It's absolutely possible to get all the information from the underlying views directly, but if you don't need to do so, that's obviously easier.
Just for sanity, I should point out that I am assuming you are licensed to use AWR. Technically, even querying the AWR views requires that you have licensed the Performance and Tuning Pack. If you want to hit the views directly rather than generating full AWR reports because of licensing concerns, you're not saving yourself any license headaches by hitting the views.
The Oracle equivalent of DBCC FREEPROCCACHE is
SQL> alter system flush shared_pool;
The closest to the SQL Server counters are V$SYSSTAT and V$SYSTEM_EVENT.
However, Oracle actually tracks these at the session level too, in V$SESSION_WAIT, V$SESSION_WAIT_CLASS, and V$SESSION_EVENT, so you don't need to reset them at the system level.
And you don't need the Diagnostic/Tuning pack licenses to access them.
They don't go down to the SQL level; that is available in V$SQL, though it would not be specific to the session. You can use session-level tracing to track down individual SQL statements that may be causing problems.
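To illustrate the session-level approach, a sketch with the python-oracledb driver (connection details and the workload stub are placeholders, and querying V$SESSION_EVENT assumes SELECT access to the view): snapshot the view for the current session before and after a suite, then diff the two.

import oracledb

# Placeholders: credentials/DSN; assumes SELECT access to V$SESSION_EVENT.
conn = oracledb.connect(user="perf", password="secret", dsn="dbhost/orclpdb")
cur = conn.cursor()

def session_events(cur):
    # Wait events accumulated by this session only; nothing system-wide
    # has to be flushed or reset between suites.
    cur.execute("""
        SELECT event, total_waits, time_waited_micro
        FROM v$session_event
        WHERE sid = SYS_CONTEXT('USERENV', 'SID')""")
    return {event: (waits, micros) for event, waits, micros in cur}

def run_test_suite(conn):
    # Stub standing in for the suite's workload on this session.
    conn.cursor().execute("SELECT COUNT(*) FROM all_objects").fetchone()

before = session_events(cur)
run_test_suite(conn)
after = session_events(cur)

for event, (waits, micros) in after.items():
    waits0, micros0 = before.get(event, (0, 0))
    if micros > micros0:
        print(f"{event}: {waits - waits0} waits, {(micros - micros0) / 1e6:.3f}s")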
Justin's answer had the correct outline, but I needed more details about the implementation.
Each test suite starts and ends by explicitly generating an AWR snapshot so it knows the starting and ending snapshot ID and so that you can generate AWR reports for each suite individually.
You run AWR reports for each test suite
You review the AWR reports looking in particular at the various Top SQL sections
I explicitly generate the snapshots by calling dbms_workload_repository.create_snapshot; the result gets saved off for later.
select dbms_workload_repository.create_snapshot() as snap_id from dual
In order to get the report, I need the database id and the instance number. Those are easily obtained from v$database and v$instance.
select d.DBID, i.instance_number as inst_num from v$database d, v$instance i
The report is available as text (dbms_workload_repository.awr_report_text) or HTML (dbms_workload_repository.awr_report_html). The arguments are the same in both cases, including an options flag that can include information from the Automatic Database Diagnostic Monitor (ADDM). It wasn't immediately obvious to me that I could leverage the ADDM results, so I turn that off. The return value is a column of varchar, so the function call gets wrapped:
select output from table(dbms_workload_repository.awr_report_html(1043611354,1,5539,5544,0))
This result is easily written to a file, which is assembled with the other artifacts of the test.
Documentation of these methods is available online.
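Putting the pieces together, the per-suite flow might look like this (a sketch with the python-oracledb driver; connection details and the workload stub are placeholders, it requires EXECUTE on DBMS_WORKLOAD_REPOSITORY, and the AWR licensing caveat above applies):

import oracledb

# Placeholders: credentials/DSN and the workload stub.
conn = oracledb.connect(user="perf", password="secret", dsn="dbhost/orclpdb")
cur = conn.cursor()

def snapshot(cur):
    cur.execute("SELECT dbms_workload_repository.create_snapshot() FROM dual")
    return cur.fetchone()[0]

def run_test_suite(conn):
    # Stub standing in for one unattended suite.
    conn.cursor().execute("SELECT COUNT(*) FROM all_objects").fetchone()

begin_snap = snapshot(cur)     # bracket the suite with snapshots
run_test_suite(conn)
end_snap = snapshot(cur)

# Identify the database and instance for the report call.
cur.execute("SELECT d.dbid, i.instance_number FROM v$database d, v$instance i")
dbid, inst_num = cur.fetchone()

# Final argument 0 turns off the ADDM content, as in the walkthrough above.
cur.execute(
    """SELECT output FROM table(
           dbms_workload_repository.awr_report_html(:dbid, :inst, :b, :e, 0))""",
    dbid=dbid, inst=inst_num, b=begin_snap, e=end_snap)

with open(f"awr_{begin_snap}_{end_snap}.html", "w") as f:
    for (line,) in cur:
        f.write((line or "") + "\n")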