Cumulative Flow Data for Unscheduled Items? - rally

I understand how to get cumulative flow data on releases with the ReleaseCumulativeFlowData object - however this requires a ReleaseObjectID. I am looking for a way to get the same data for all the items that are not scheduled in a release, and it does not appear that I can query for where the ReleaseObjectID is null.
Is there any way, using cumulative flow data, to get the number of story points for unscheduled stories on a given day, or is my best bet to either parse the revision history logs using the 1.x API or use the Lookback API?
Basically, I am trying to represent how the total scope of a project has changed over time, including items that are scheduled as well as items that are estimated in the backlog but not yet scheduled. As far as I can tell, there is no out-of-the-box way to get this information (without revision logs or diving into learning the Lookback API right now), but I am crossing my fingers that I am wrong.

I recommend learning the Lookback API, as this is exactly the sort of question it was designed to answer.
You can find the docs here: https://rally1.rallydev.com/analytics/doc/
For example, if you say:
find: {
    _ProjectHierarchy: 279050021,
    _TypeHierarchy: "HierarchicalRequirement",
    Children: null,
    ScheduleState: {$lt: "In-Progress"},
    __At: "current"
},
fields: ["ObjectID", "Name", "PlanEstimate"]
You're looking for snapshots of items under the project with OID 279050021 that are stories (HierarchicalRequirements), have no children (so are leaf stories), and are in a schedule state earlier than "In-Progress". The __At clause asks for snapshots that are valid right now ("current"), but you could put any ISO 8601 date in there as a string. The fields parameter then specifies which fields of the snapshots to return. While you're learning what's there, it can be helpful to use fields=true and use this Chrome plugin for pretty-printing the JSON response: https://chrome.google.com/webstore/detail/jsonview-and-jsonlint-for/mfjgkleajnieiaonjglfmanlmibchpam However, you should specify the exact list of fields you want when going to production, since fields=true is limited to 200 results.
As a full URL this looks like:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/41529001/artifact/snapshot/query.js?find={_ProjectHierarchy:279050021, _TypeHierarchy: "HierarchicalRequirement", Children: null, ScheduleState:{$lt:"In-Progress"}, __At:"current"}&fields=["ObjectID", "Name", "PlanEstimate"]
But make sure to swap in your own workspace OID (for 41529001) and project OID (for 279050021) or the above URL won't work for you.
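If what you ultimately want is the total number of unscheduled points rather than the raw snapshots, you can sum PlanEstimate over the results on the client. Here's a rough sketch in plain browser JavaScript, not a definitive implementation: it assumes the usual LBAPI JSON envelope (a Results array plus TotalResultCount and PageSize), that your browser session is already authenticated to Rally, and it reuses the placeholder workspace and project OIDs from the URL above. You might also add Release: null to the find clause if your snapshots carry the Release field and you want to restrict to items not yet scheduled in a release.
// Minimal sketch: fetch current backlog snapshots and sum their PlanEstimate.
// Assumptions: standard LBAPI JSON envelope, already-authenticated browser session,
// placeholder workspace/project OIDs from the example URL above.
var lbapiUrl = 'https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/41529001/artifact/snapshot/query.js' +
    '?find=' + encodeURIComponent(JSON.stringify({
        _ProjectHierarchy: 279050021,
        _TypeHierarchy: 'HierarchicalRequirement',
        Children: null,
        ScheduleState: {$lt: 'In-Progress'},
        __At: 'current'
    })) +
    '&fields=' + encodeURIComponent(JSON.stringify(['ObjectID', 'Name', 'PlanEstimate']));

fetch(lbapiUrl, {credentials: 'include'})
    .then(function (response) { return response.json(); })
    .then(function (data) {
        // Sum PlanEstimate across the returned snapshots; missing estimates count as 0.
        // If TotalResultCount exceeds PageSize you would need to page through the rest.
        var totalPoints = (data.Results || []).reduce(function (sum, snapshot) {
            return sum + (snapshot.PlanEstimate || 0);
        }, 0);
        console.log('Unscheduled points:', totalPoints);
    });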

Related

VersionOne API - get sprint (timebox) wise counts of issues, stories, defects, backlog items

My goal is to fetch just the counts of stories, issues, backlog items, and defects associated with a particular timebox and project.
I know I can fetch an asset's details using this (for defects):
<Server Base URI>/rest-1.v1/Defect/?sel=Number,Name,CreateDate&where=Timebox.Name=<sprint name>;Scope.Name=<project>
But doing it this way I'd have to run a separate query for each asset type. Is it possible to get this sprint-wise info with a single query?
Any help would be greatly appreciated.
Answering my own question.
Using the query.v1 read-only API, I could combine multiple queries in a single request. Also, the grammars repository contains documentation of the syntax for tokens which can be used within queries.
{
    "from": "Timebox",
    "where": {
        "Schedule.ScheduledScopes": "<SCOPE_ID>"
    },
    "select": [
        "Name",
        "BeginDate",
        "EndDate",
        "Workitems:<ASSET_TYPE>[AssetState!='Closed';Scope='<SCOPE_ID>'].#Count"
        ...
    ]
}
Replace SCOPE_ID (the Project) and ASSET_TYPE (Defect, Story, etc.) with your own values to get the counts.
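For completeness, here is a rough sketch of sending such a query from JavaScript. It assumes query.v1 accepts the query document as a POSTed JSON body (which is how the read-only API is typically used) and that basic auth works on your instance; the server URI, credentials, and the Scope:1234 OID token are placeholders, not values from the question.
// Minimal sketch (assumptions: query.v1 accepts a POSTed JSON body; placeholders
// for <Server Base URI>, USERNAME, PASSWORD, and the Scope:1234 project OID token).
var timeboxQuery = {
    from: 'Timebox',
    where: {'Schedule.ScheduledScopes': 'Scope:1234'},
    select: [
        'Name',
        'BeginDate',
        'EndDate',
        "Workitems:Defect[AssetState!='Closed';Scope='Scope:1234'].#Count",
        "Workitems:Story[AssetState!='Closed';Scope='Scope:1234'].#Count"
    ]
};

fetch('<Server Base URI>/query.v1', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Basic ' + btoa('USERNAME:PASSWORD')
    },
    body: JSON.stringify(timeboxQuery)
})
    .then(function (response) { return response.json(); })
    .then(function (results) { console.log(results); });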

Problems loading a series of snapshots by date

I have been running into a consistent problem using the LBAPI which I feel is probably a common use case given its purpose. I am generating a chart which uses LBAPI snapshots of a group of Portfolio Items to calculate the chart series. I know the minimum and maximum snapshot dates, and need to query once a day in between these two dates. There are two main ways I have found to accomplish this, both of which are not ideal:
Use the _ValidFrom and _ValidTo filter properties to limit the results to snapshots within the selected timeframe. This is bad because it will also load snapshots which I don't particularly care about. For instance if a PI is revised several times throughout the day, I'm really only concerned with the last valid snapshot of that day. Because some of the PIs I'm looking for have been revised several thousand times, this method requires pulling mostly data I'm not interested in, which results in unnecessarily long load times.
Use the __At filter property and send a separate request for each query date. This method is not ideal because some charts would require several hundred requests, with many requests returning redundant results. For example if a PI wasn't modified for several days, each request within that time frame would return a separate instance of the same snapshot.
My workaround for this was to simulate the effect of __At, but with several filters per request. To do this, I added this filter to my request:
Rally.data.lookback.QueryFilter.or(_.map(queryDates, function(queryDate) {
    return Rally.data.lookback.QueryFilter.and([{
        property: '_ValidFrom',
        operator: '<=',
        value: queryDate
    }, {
        property: '_ValidTo',
        operator: '>=',
        value: queryDate
    }]);
}))
But of course, a new problem arises... Adding this filter results in much too large a request to be sent via the LBAPI, unless I'm querying for fewer than ~20 dates. Is there a way I can send larger filters to the LBAPI? Or will I need to break this up into several requests, which would only make this solution slightly better than the second option above?
Any help would be much appreciated. Thanks!
Conner, my recommendation is to download all of the snapshots, even the ones you don't want, and marshal them on the client side. There is functionality in the Lumenize library that's bundled with the App SDK that makes this relatively easy, and the TimeSeriesCalculator will also accomplish this for you, with even more features like aggregating the data into series.
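If you'd rather see what that client-side marshaling looks like by hand (Lumenize and TimeSeriesCalculator remain the more capable option), here is a minimal sketch. It assumes each snapshot carries ObjectID, _ValidFrom, and _ValidTo as ISO 8601 strings, which compare correctly as plain strings; queryDates and allSnapshots are your own variables.
// Minimal sketch: collapse a bulk snapshot download into "state as of each query date".
// Assumes ObjectID, _ValidFrom, _ValidTo are present and _ValidFrom/_ValidTo are ISO 8601 strings.
function snapshotsAsOf(snapshots, queryDate) {
    var byObject = {};
    snapshots.forEach(function (snapshot) {
        if (snapshot._ValidFrom <= queryDate && snapshot._ValidTo > queryDate) {
            // At most one snapshot per object is valid at any instant,
            // so the match per ObjectID is the one we keep.
            byObject[snapshot.ObjectID] = snapshot;
        }
    });
    return Object.keys(byObject).map(function (oid) { return byObject[oid]; });
}

// Usage: one series point per query date from a single bulk download.
var series = queryDates.map(function (queryDate) {
    return {date: queryDate, items: snapshotsAsOf(allSnapshots, queryDate)};
});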

Rally: getting the total story points, task hours, etc

I am utilizing the Rally 2.0p4 API and attempting to aggregate the data to get a list of iterations with sums of the story points per iteration. The only way I have found so far is to query the HierarchicalRequirement model, loop over all the data, and populate an array. This seems less than ideal; is there not a way to just get back totals from the server?
If you want this data summarized by Iteration and/or Release, check out the:
IterationCumulativeFlowData
ReleaseCumulativeFlowData
Objects in the Webservices API documentation:
https://rally1.rallydev.com/slm/doc/webservice/
These objects will provide a daily summary of:
CardCount (# Stories/Defects)
TaskEstimateTotal
CardEstimateTotal
CardToDoTotal
By State, within each Iteration or Release as specified by OID.
In case anyone else comes looking, this is one way that api call can appear:
https://rally1.rallydev.com/slm/webservice/1.30/iterationcumulativeflowdata.js?query=(%20IterationObjectID%20=%20%2211203475854%22%20)&fetch=CardCount,CardToDoTotal,CardEstimateTotal,IterationObjectID
That call syntax is delicate (requires a space before the closing paren on 'query', for example).
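Since those objects return one row per schedule state per day, you still have to sum the rows yourself to get a single total per day. A rough sketch in browser JavaScript, assuming the 1.x WSAPI JSON envelope (QueryResult.Results), an already-authenticated session, and that each row carries a CreationDate field for the snapshot day:
// Minimal sketch: total CardEstimateTotal per day for one iteration.
// Assumptions: QueryResult.Results envelope, CreationDate on each row, authenticated session.
var cfdQuery = '( IterationObjectID = "11203475854" )';
var cfdUrl = 'https://rally1.rallydev.com/slm/webservice/1.30/iterationcumulativeflowdata.js' +
    '?query=' + encodeURIComponent(cfdQuery) +
    '&fetch=CreationDate,CardEstimateTotal,CardCount&pagesize=200';

fetch(cfdUrl, {credentials: 'include'})
    .then(function (response) { return response.json(); })
    .then(function (data) {
        var pointsByDay = {};
        (data.QueryResult.Results || []).forEach(function (row) {
            // One row per schedule state per day; summing them gives the day's total scope.
            var day = row.CreationDate.slice(0, 10);
            pointsByDay[day] = (pointsByDay[day] || 0) + (row.CardEstimateTotal || 0);
        });
        console.log(pointsByDay);
    });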

CategoryId in venues search not working correctly

In foursquare Api documentation for "Search venues" https://developer.foursquare.com/docs/venues/search it states
"categoryId - A comma separated list of categories to limit results to. This is an experimental feature and subject to change or may be unavailable. If you specify categoryId you may also specify a radius. If specifying a top-level category, all sub-categories will also match the query."
I realise it's supposed to be experimental, but when I provide the Food category, i.e. 4d4b7105d754a06374d81259, it only returns a few local results; the rest are miles away. However, if I execute the same search on the website using the Food category, it correctly returns lots of results. I'm assuming the last bit, "If specifying a top-level category, all sub-categories will also match the query", is not working, i.e. it's not searching sub-categories?
Any fix or workaround for this?
Thanks,
Neil Pepper
You're making a /venues/search request with its default intent of intent=checkin. This returns nearby results heavily biased by distance, since it's trying to guess where the user might be checking in.
Foursquare Explore uses the /venues/explore endpoint and attempts to return recommended results for a query. If you want to get the sorts of results you get in that tool, call /venues/explore?section=food
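For example, a request along these lines (the ll coordinates, client credentials, and version date below are placeholders to swap for your own):
https://api.foursquare.com/v2/venues/explore?section=food&ll=40.7,-74.0&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&v=20130101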

Flickr Geo queries not returning any data

I cannot get the Flickr API to return any data for lat/lon queries.
view-source:http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90
This should return something, anything. It doesn't work if I use lat/lng either. I can get some photos returned if I look up a place_id first and then use that in the query, except then all the photos returned are from anywhere and not from that place_id.
Eg,
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&placeId=8iTLPoGcB5yNDA19yw
I deleted out my key obviously, replace with yours to test.
Any help appreciated, I am going mad over this.
I believe that the Flickr API won't return any results if you don't put additional search terms in your query. If I recall from the documentation, this is treated as an unbounded search. Here is a quote from the documentation:
Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component.
A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
My app uses the same kind of geo searching so what I do is put in an additional search term of the minimum date taken, like so:
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90&min_taken_date=2005-01-01 00:00:00
Oh, and don't forget to sign your request and fill in the api_sig field. My experience is that the geo-based searches don't behave consistently unless you attach your api_key and sign your search. For example, I would sometimes get search results and then later, with the same search, get no images when I didn't sign my query.
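For reference, here is a minimal sketch of computing api_sig as the Flickr signing scheme documents it: sort the parameters by name, concatenate each name and value, prepend your shared secret, and take the MD5 hex digest. This example uses Node's built-in crypto module; the key, secret, and parameter values are placeholders.
// Minimal sketch of Flickr request signing (Node.js).
// api_sig = md5(secret + name1 + value1 + name2 + value2 + ...), with names sorted.
var crypto = require('crypto');

function signParams(params, secret) {
    var base = Object.keys(params).sort().reduce(function (acc, key) {
        return acc + key + params[key];
    }, secret);
    return crypto.createHash('md5').update(base).digest('hex');
}

// Placeholder values; substitute your real key, secret, and search parameters.
var params = {
    method: 'flickr.photos.search',
    api_key: 'KEY_HERE',
    has_geo: 1,
    bbox: '0,0,180,90',
    min_taken_date: '2005-01-01 00:00:00'
};
params.api_sig = signParams(params, 'SECRET_HERE');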