Time spent on HierarchicalRequirement in Rally REST API

I have a report that hooks into the Rally web service API. It lists the user stories and defects for presentation to an external client.
The developers are filling in the time they spend on their tasks in the timesheet, but when I try to get the actual time spent using the 'TaskActualTotal' value, it always comes back as 0.
The values are definitely recorded, as my internal reports on the timesheet produce these values.
Do I have to retrieve the time spent using a different method?
Thanks

Do your developers enter time in the Time Tracker module? There is no connection between Actuals and the Time Tracker module; the Actuals field also predates Time Tracker.
The Actuals field is designed to be used during retrospectives to provide insight on root causes for missed commitments, while the Time Tracker module is designed to help report on development costs.
We generally recommend using Actuals only for teams new to Scrum or Agile who are still working on producing good estimates. Comparing Estimates to Actuals can be valuable during retrospectives to help identify where the larger gaps in estimating might be occurring.
For more established teams, we recommend keeping the Actuals field hidden, because these values tend to draw focus to the amount of time or resources spent on particular functionality rather than to whether commitments were made by the team as a whole.
Of course, all teams are very different in the processes they use and the development cycle followed.
The intent of the timesheet values was more for capturing and reporting on development cost for billing and capitalization than for assisting with completion or estimation charting. Actuals, however, were designed to assist in this regard; they live on Tasks just as Estimate and ToDo values do, and they roll up to the story level for easy comparison.
You may still query on Actuals in the WS API. For example, I have a story with two tasks, each with Estimate set to 2 and Actuals set to 3. If I query on user stories by the specific iteration this story is scheduled for, I get TaskEstimateTotal and TaskActualTotal as long as I fetch them. Here is my query:
https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement?workspace=https://rally1.rallydev.com/slm/webservice/v2.0/workspace/1111&query=(Iteration.Name = i5)&start=1&pagesize=20&fetch=TaskEstimateTotal,TaskActualTotal
And here is the relevant part of the return:
{
"_rallyAPIMajor": "2",
"_rallyAPIMinor": "0",
"_ref": "https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement/22222",
"_objectVersion": "9",
"_refObjectName": "my story",
"TaskActualTotal": 6,
"TaskEstimateTotal": 4,
"_type": "HierarchicalRequirement"
}
However, this query will only return results if Estimate and Actuals values were entered on the Details page of the tasks, not in the Time Tracker.
There are two objects in our WS API that are relevant to Time Tracker:
TimeEntryItem and TimeEntryValue.
Here is an example of a query on TimeEntryItem based on a WorkProduct.Name (a sketch that pulls the actual hours follows the example result below):
https://rally1.rallydev.com/slm/webservice/v2.0/timeentryitem?workspace=https://rally1.rallydev.com/slm/webservice/v2.0/workspace/11111&query=(WorkProduct.Name = us1)&start=1&pagesize=20&fetch=WorkProductDisplayString,TaskDisplayString,Values
and the relevant part of a result:
{
"_rallyAPIMajor": "2",
"_rallyAPIMinor": "0",
"_ref": "https://rally1.rallydev.com/slm/webservice/v2.0/timeentryitem/77777",
"_objectVersion": "3",
"TaskDisplayString": "TA1: ta1",
"Values": {
"_rallyAPIMajor": "2",
"_rallyAPIMinor": "0",
"_ref": "https://rally1.rallydev.com/slm/webservice/v2.0/TimeEntryItem/77777/Values",
"_type": "TimeEntryValue",
"Count": 2
},
"WorkProductDisplayString": "US1: us1",
"_type": "TimeEntryItem"
},
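If you need the actual hours rather than just a count of time entries, you can follow each TimeEntryItem's Values collection and sum the Hours on the returned TimeEntryValue objects. Here is a rough sketch in Python using the requests library; the credentials, workspace OID and story name are placeholders, and the response parsing assumes the standard QueryResult wrapper:

import requests

BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"
AUTH = ("user@example.com", "secret")   # placeholder credentials; an API key header also works
WORKSPACE = BASE + "/workspace/11111"   # placeholder workspace OID

# 1. Find the TimeEntryItems for the story, fetching the ref of each Values collection.
params = {
    "workspace": WORKSPACE,
    "query": "(WorkProduct.Name = us1)",
    "fetch": "TaskDisplayString,Values",
    "pagesize": 200,
}
items = requests.get(BASE + "/timeentryitem", params=params, auth=AUTH).json()
items = items["QueryResult"]["Results"]

# 2. Follow each Values collection and sum the Hours of its TimeEntryValue objects.
total_hours = 0.0
for item in items:
    values = requests.get(item["Values"]["_ref"],
                          params={"fetch": "Hours,DateVal"},
                          auth=AUTH).json()
    total_hours += sum(v.get("Hours") or 0 for v in values["QueryResult"]["Results"])

print("Total hours logged in Time Tracker:", total_hours)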

Related

SQL Database for Magic Cardgame

For school I am creating a deck-builder website based on Magic: The Gathering. It's the project that decides if I get my degree or not. Through the Deckbrew website I have been able to get data like the following:
[
{
"name": "About Face",
"id": "about-face",
"url": "https://api.deckbrew.com/mtg/cards/about-face",
"store_url": "http://store.tcgplayer.com/magic/urzas-legacy/about-face",
"types": [
"instant"
],
"colors": [
"red"
],
"cmc": 1,
"cost": "{R}",
"text": "Switch target creature's power and toughness until end of turn.",
"formats": {
"commander": "legal",
"legacy": "legal",
"vintage": "legal"
},
"editions": [
{
"set": "Urza's Legacy",
"rarity": "common",
"artist": "Melissa A. Benson",
"multiverse_id": 12414,
"flavor": "The overconfident are the most vulnerable.",
"number": "73",
"layout": "normal",
"price": {
"low": 0,
"average": 0,
"high": 0
},
"url": "https://api.deckbrew.com/mtg/cards?multiverseid=12414",
"image_url": "http://mtgimage.com/multiverseid/12414.jpg",
"set_url": "https://api.deckbrew.com/mtg/sets/ULG",
"store_url": "http://store.tcgplayer.com/magic/urzas-legacy/about-face"
}
]
}
]
It's obviously in JSON format. I have found a way to turn this into objects, and the structure of the project is a 4-layer MVC application with Entity Framework and C#, which is (kind of) working. The problem is the database: I have been working on it for two months now and I am not getting any further. I have not seen much on how to create databases, and that's where it goes wrong; I don't get how to build the database. The creation itself would work if I figured out how to include certain things...
1) Formats: if the card is legal in a format, Formats is filled with: "legacy": "legal", "commander":"legal", ... so only the legal formats are included.
2) Types and colors are just plain arrays of words, but since I'm very bad with databases I don't even know how to figure this one out.
3) Editions is something completely different. It's an array of Edition objects, which I believe need a table of their own. The problem here is that I thought I needed to use a foreign key, but since it's an array of Editions I don't really know how to start doing that either.
4) And then there's Price: it always has three values (low, average and high), which can be 0 if no price is known.
So here you have it. To me this database is very complex, or maybe I am making it too complex. Is there anybody who can help me get this database organized so I can get on with my project? I'm so lost at the moment that I feel I am not going to get this ready by the end of next month, and that would be awful.
1: No, you should include all.
2: A color table, with a standard m:n link table in between mapping the card table to the color table (see the schema sketch below). Not knowing how to make an m:n relationship makes me think you skipped all the classes... this is fundamental and basic.
3: It seems like "cardedition" is actually the main table, and everything before it is a master-type table. Not sure - I don't play Magic at all, so I lack what is called domain knowledge. Are cards changed so that multiple editions exist? Why is that an array in the JSON?
4: Just three numeric values; 0 is a magic value meaning no known price. What is the question?
To me this database is very complex
I suggest you start from scratch (making things easier) and just have maybe 10 or so tables. Go step by step. Follow what you learned, go to 3rd or 4th normal form, and go relational.
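For illustration only, here is one way such a schema could look, expressed as SQLite DDL run from Python; all table and column names are just suggestions, not anything prescribed by the Deckbrew data:

import sqlite3

# One possible schema; table and column names are only suggestions.
schema = """
CREATE TABLE card (
    card_id    TEXT PRIMARY KEY,          -- e.g. 'about-face'
    name       TEXT NOT NULL,
    cmc        INTEGER,
    cost       TEXT,
    rules_text TEXT
);

-- 2) Plain word lists (colors, types) become master tables plus m:n link tables.
CREATE TABLE color (color_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE card_color (
    card_id  TEXT    REFERENCES card(card_id),
    color_id INTEGER REFERENCES color(color_id),
    PRIMARY KEY (card_id, color_id)
);
-- A type / card_type pair follows exactly the same pattern as color / card_color.

-- 1) Formats: one row per card/format pair; store the legality string so you can
--    keep all formats, not just the legal ones.
CREATE TABLE format (format_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE card_format (
    card_id   TEXT    REFERENCES card(card_id),
    format_id INTEGER REFERENCES format(format_id),
    legality  TEXT,
    PRIMARY KEY (card_id, format_id)
);

-- 3) + 4) Editions: one row per edition, a foreign key back to the card,
--    and the three price values flattened into columns (0 = unknown).
CREATE TABLE edition (
    multiverse_id INTEGER PRIMARY KEY,
    card_id       TEXT REFERENCES card(card_id),
    set_name      TEXT,
    rarity        TEXT,
    artist        TEXT,
    flavor        TEXT,
    card_number   TEXT,
    price_low     INTEGER,
    price_average INTEGER,
    price_high    INTEGER
);
"""

conn = sqlite3.connect("mtg.db")
conn.executescript(schema)
conn.commit()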

Asana integration with Slack

I am looking to implement a solution where, when I create a project in Asana, it will create a room in Slack with all the same members. I was planning on writing a script to run every couple of minutes to look for either new projects or changes in membership of current projects, and then call out to Slack to make the changes. This, however, would be a lot of chatter, so I was hoping someone might know of another way that makes these changes on an as-needed basis.
It sounds like you have the best solution outlined for this use case.
To get a list of new projects in a workspace, you should query the projects endpoint and check for newly created projects based on the created_at field, using the opt_fields field selector to have that field returned in your query. I strongly suggest that you scope this query to a single workspace and use pagination.
GET 'https://api.asana.com/api/1.0/workspaces/5233820891524/projects?opt_fields=name,created_at&limit=2' | j
{
"data": [
{
"id": 23154287843671,
"created_at": "2014-12-31T18:35:49.695Z",
"name": "Ninja Things"
},
{
"id": 23154287843675,
"created_at": "2014-12-31T18:35:59.174Z",
"name": "Unicorns"
}
],
"next_page": {
"offset": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA",
"uri": "https://api.asana.com/api/1.0/workspaces/5233820891524/projects?limit=2&opt_output=json&opt_fields=name%2Ccreated_at&offset=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA",
"path": "/workspaces/5233820891524/projects?limit=2&opt_output=json&opt_fields=name%2Ccreated_at&offset=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA"
}
}
For new members of current projects you would need to query individual projects and check the memberships property.
I would have suggested using the Events API to check for new members, but I tested it and determined that new members are not considered an event on the project, something that we will consider changing.
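For what it's worth, a minimal polling sketch in Python might look like the following; the token, workspace ID and the Slack call are placeholders, the field names simply follow the examples above, and the requests library is assumed:

import time
import requests

ASANA_TOKEN = "..."                       # placeholder personal access token
WORKSPACE_ID = "5233820891524"
BASE = "https://api.asana.com/api/1.0"
HEADERS = {"Authorization": "Bearer " + ASANA_TOKEN}

def fetch_projects():
    """Return {project id: project} for the workspace, following next_page pagination."""
    projects = {}
    url = f"{BASE}/workspaces/{WORKSPACE_ID}/projects"
    params = {"opt_fields": "name,created_at", "limit": 100}
    while url:
        page = requests.get(url, headers=HEADERS, params=params).json()
        for p in page["data"]:
            projects[p["id"]] = p
        next_page = page.get("next_page")
        url, params = (next_page["uri"], None) if next_page else (None, None)
    return projects

def fetch_member_ids(project_id):
    """Return the set of user ids found in the project's memberships property."""
    resp = requests.get(f"{BASE}/projects/{project_id}",
                        headers=HEADERS,
                        params={"opt_fields": "memberships.user.id"}).json()
    return {m["user"]["id"] for m in resp["data"]["memberships"]}

def sync_to_slack(project, member_ids):
    # Placeholder: create the Slack channel / invite users here; intentionally a stub.
    print("sync", project["name"], member_ids)

known = {}
while True:
    for pid, project in fetch_projects().items():
        members = fetch_member_ids(pid)
        if known.get(pid) != members:     # new project or membership change
            sync_to_slack(project, members)
            known[pid] = members
    time.sleep(300)                       # poll every few minutes, as described above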

RESTful API for large "mash up" data pulls?

I have recently been tasked with a small project of setting up a periodic (somewhere between daily & weekly) data dump from an internal database to a 3rd party product. This project dovetails nicely with my company's desire (one which I share) to start standing up a formal service layer/API over the top of our data.
My personal preference is that those APIs should take on the form of RESTful endpoints - however, now I have what I think is a big design question - let me explain...
Looking at the data pull in question, it's hardly complicated. If I were just going to construct a one-off query, it would conceptually look a little something like:
select o.order_num, o.order_date, p.product_description, sr.sales_rep_name
from order o
join line_item li on li.order_num = o.order_num
join product p on p.product_id = li.product_id
join sales_rep sr on sr.sales_rep_id = o.sales_rep_id
where o.order_date >= [some arbitrary date]
Flipping my brain into "Resource Mode", I can think about how to convert this basic data model into URIs/payloads without too much trouble:
GET /orders/123
{
"order_num": 59324,
"order_date": "2014-07-07",
"sales_rep_uri": "/salesRep/34",
"line_items_uri": "/order/123/lineItems"
}
Getting more information about the sales rep:
GET /salesRep/34
{
"sales_rep_name": "Jane Doe",
"open_orders_uri": "/salesRep/34/orders"
}
Getting more information about line items:
GET /orders/123/lineItems
{
"line_items": [
{"order_uri": "/order/123", "product_uri": "/products/68"},
{"order_uri": "/order/123", "product_uri": "/products/99"}
]
}
And so on. I'm not saying it's a perfect API; I'm just trying to demonstrate that it's not exactly rocket science to think about how you might express the data model in a nicely normalized, resource-oriented way via RESTful URIs. But that is exactly where the design question comes into play...
On one hand, I can crank out a query to solve the problem very easily, but the very nature of queries requires the various domain concepts to be tightly coupled (in other words, utilizing joins to bring all of the normalized data together into one nice, custom-purpose de-normalized structure).
On the other hand, going through the mental process of thinking out a RESTful API leads me right back down the road of keeping things nicely compartmentalized - e.g. asking for "Order 123" shouldn't send me back a huge graph where I can see the full product description, the sales rep's phone number, etc. The concept of a full-blown, HATEOAS-level RESTful API dictates that consumers should make subsequent GETs to drill down for that kind of detail only as needed.
My question boils down to this: solving this use case seems really easy with a direct query and really difficult against a nice and tidy RESTful API (I'm picturing the literally thousands of individual GETs it would take to assemble a week's worth of data versus the few seconds it would take for the query to run). Is there some elegant subtlety of good RESTful design that I don't understand that would prevent me from seeing a good solution, or am I trying to fit a round peg into a square hole (i.e. REST is not good at pulling big data batches across multiple resources)?
I'm just going to throw this out there as a potential solution:
Conceptually, I would treat the results of this query as a resource unto itself - say, an "orderReport".
Treating this as its own resource, the API could behave something like:
GET /orderReport/[some arbitrary date]
You could then send back a 201 Created (if the query is relatively quick-running) with a location header like Location: /orderReport/[GUID]. Alternatively, if the query takes a while to run (I honestly don't know off the top of my head whether it does), you could send back a 202 Accepted with a location header of Location: /orderReport/[GUID]/status.
You could then do follow-up GETs against those URLs to get either the report status (200 OK if it's still processing without error, 201 Created with a location header pointing to the report URL once it's done) or the report itself; a client-side sketch of this flow follows the example payload below.
There's nothing to say the report data couldn't also incorporate HATEOAS in addition to the data strictly needed to fulfill the use case requirements, like:
[
{
"order_num": 123,
"order_uri": "/orders/123",
"order_date": "2014-07-03",
"product_description": "widget",
"sales_rep_name": "Jane Doe",
"sales_rep_uri": "/salesRep/34"
},
{
"order_num": 456,
"order_uri": "/orders/456",
"order_date": "2014-07-04",
"product_description": "gadget",
"sales_rep_name": "Frank Smith",
"sales_rep_uri": "/salesRep/53"
}
]
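Client-side, consuming a report resource like this could look roughly like the sketch below (Python with the requests library; the host and date are placeholders, and the status-code flow simply follows the description above):

import time
from urllib.parse import urljoin
import requests

BASE = "https://api.example.com/"                       # placeholder host
resp = requests.get(urljoin(BASE, "/orderReport/2014-07-01"))

if resp.status_code == 201:
    # Quick-running report: Location already points at the finished report.
    report_url = urljoin(BASE, resp.headers["Location"])
elif resp.status_code == 202:
    # Long-running report: poll the status resource until it reports 201 Created.
    status_url = urljoin(BASE, resp.headers["Location"])
    while True:
        status = requests.get(status_url)
        if status.status_code == 201:
            report_url = urljoin(BASE, status.headers["Location"])
            break
        time.sleep(5)                                   # 200 OK means still processing

for row in requests.get(report_url).json():
    print(row["order_num"], row["order_date"], row["sales_rep_name"])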

Google Search API Results Completely Different from Google.com Results

Below is one JSON item returned from this query:
https://www.googleapis.com/customsearch/v1?key={key}&cx={key}&q=Action+Motivation%2c+Inc.&alt=json
The "dc.type" in the Json is "Patent" and this is obviously patent data BUT I didn't specify that search engine. I've googled this to death but can't find anything re why patent data would be returned from a simple query like this. If Google "Action Motivation, Inc." on the regular google.com page, I get completely different (normal) results. Has anyone had this problem?
"items": [
{
"kind": "customsearch#result",
"title": "Patent US5622527 - Independent action stepper - Google Patents",
"htmlTitle": "Patent US5622527 - Independent \u003cb\u003eaction\u003c/b\u003e stepper - Google Patents",
"link": "https://www.google.com/patents/US5622527",
"displayLink": "www.google.com",
"snippet": "Apr 22, 1997 ... Original Assignee, Icon Health & Fitness, Inc., Proform Fitness ....",
"htmlSnippet": "Apr 22, 1997 \u003cb\u003e...\u003c/b\u003e Original Assignee, Icon Health & Fitness..."
"formattedUrl": "https://www.google.com/patents/US5622527",
"htmlFormattedUrl": "https://www.google.com/patents/US5622527",
"pagemap": {
"book": [
{
"description": "A motivational exercise stepping machine has a pair of independently operable pivoting treadles for operation..."
"url": "https://www.google.com/patents/US5622527?utm_source=gb-gplus-share",
"name": "Patent US5622527 - Independent action stepper",
"image": "https://www.google.com/patents?id=&printsec=frontcover&img=1&zoom=1"
}
],
"metatags": [
{
***"dc.type": "Patent"***,
"dc.title": "Independent action stepper",
"dc.contributor": "William T. Dalebout",
"dc.date": "1994-3-23",
"dc.description": "A motivational exercise stepping machine has a pair of independently operable pivoting treadles for operation by a user's feet. Each treadle..."
"dc.relation": "JP:S5110842"
}
]
}
},
{
When using their API, you can issue around 40 requests per hour. The results you see from the API are not what a real user sees; you are limited to what they give you, so it's not really useful if you want to track ranking positions or what a real user would see. That's something you are not allowed to gather.
If you want a higher number of API requests you need to pay.
60 requests per hour cost 2,000 USD per year; more queries require a custom deal.

Creating a custom defect trend chart using App SDK 2.0

I've been tasked with plotting some defect charts using Rally historical data. Right now I'm using a simple REST client to pull data at certain points in time and plot the count on a spreadsheet. What I'm doing right now is:
{
find : {
"_ProjectHierarchy": <projectId>,
"_TypeHierarchy": -51006,
"FoundInBuild" : {$regex: "3\\.3\\."},
"State" : {$in : ["Submitted","Open"] },
$or: [
{"Severity" : { $in : ["Catastrophic","Severe"] }},
{"Priority" : "showstopper"}
],
"__At" : "<date>"
},
pagesize : 1,false
}
I just run this once for every date I need the data for. That's a lot of queries! What I'm looking for is a way to run a single query using _ValidFrom and _ValidTo to enclose a time range, then pass it on to a SnapshotStore, then plot that on a chart. I'm certain there's a way to do it, but I can't figure it out from the docs. Any help much appreciated.
Unfortunately, the example space for AppSDK2 and the Lookback API is presently a bit thin. There are some cool apps out there; for instance, you may wish to check out David Thomas' Hackathon app:
Defect Re-work Trend
as a starting point. It queries LBAPI for Defects and stores the resulting data in a SnapshotStore. The app itself measures Defect "thrash", or the trend in how many times a Defect is re-opened during a particular development cycle.
In reviewing Hackathon apps, just be aware that certain methods and syntax for the SnapshotStore may change slightly in a future release of AppSDK2.
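In the meantime, staying with the plain REST client approach from the question, a single Lookback API query can cover the whole time range by filtering on _ValidFrom and _ValidTo instead of issuing one __At query per date. The sketch below is only illustrative: it is written in Python with the requests library, the workspace/project OIDs, credentials and dates are placeholders, and the analytics endpoint URL and response shape are assumptions based on the Lookback API's REST interface:

import json
from datetime import datetime, timedelta
import requests

# Placeholders: workspace and project OIDs, credentials, and date window.
WORKSPACE_OID = 12345
LBAPI = (f"https://rally1.rallydev.com/analytics/v2.0/service/rally/"
         f"workspace/{WORKSPACE_OID}/artifact/snapshot/query.js")
AUTH = ("user@example.com", "secret")
WINDOW_START, WINDOW_END = "2014-06-01T00:00:00Z", "2014-09-01T00:00:00Z"

find = {
    "_ProjectHierarchy": 67890,
    "_TypeHierarchy": -51006,
    "FoundInBuild": {"$regex": "3\\.3\\."},
    "State": {"$in": ["Submitted", "Open"]},
    "$or": [
        {"Severity": {"$in": ["Catastrophic", "Severe"]}},
        {"Priority": "showstopper"},
    ],
    # One query for the whole window: every snapshot whose validity interval
    # overlaps [WINDOW_START, WINDOW_END) instead of one __At query per date.
    "_ValidFrom": {"$lt": WINDOW_END},
    "_ValidTo": {"$gte": WINDOW_START},
}

params = {
    "find": json.dumps(find),
    "fields": json.dumps(["ObjectID", "_ValidFrom", "_ValidTo"]),
    "pagesize": 20000,
    "start": 0,
}
snapshots = requests.get(LBAPI, params=params, auth=AUTH).json().get("Results", [])

# Bucket the snapshots by day: a defect counts on a given day if that day falls
# inside its [_ValidFrom, _ValidTo) validity interval (ISO strings compare lexically).
day = datetime(2014, 6, 1)
while day < datetime(2014, 9, 1):
    iso = day.strftime("%Y-%m-%dT00:00:00Z")
    count = sum(1 for s in snapshots if s["_ValidFrom"] <= iso < s["_ValidTo"])
    print(day.date(), count)
    day += timedelta(days=1)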