Rally Lookback, Snapshot with empty custom field - rally

I am trying to get a snapshot of a deleted user story to get the value of a custom field (c_Dep). I get the snapshot, but the custom field is empty, although it had a value in it. Does Lookback not save values for customer-created custom fields?
findConfig: {
    _TypeHierarchy: 'HierarchicalRequirement',
    "ObjectID": 12345,
    "_ValidFrom": {
        "$lte": "2017-01-25T19:00:57.475Z"
    }
}

Sarita, it is hard to tell precisely what is going on from the information you have given. However, I can give you some pointers.
The Lookback API does store changes to the values of custom fields. The selection you have shown is valid from 24th Jan to 25th Jan. Was the custom field set during this period? Probably not, because the array is only one element long and I think it is showing the creation event.
Was the custom field updated to contain something after this time period?
The reason for asking is that a common misunderstanding is that the records stored in the Lookback database hold the current value of fields - they don't; they hold the changes to fields. If c_Dependencies didn't change during that time period, you may not see an entry for it in the returned array. The next entry in the database might be the record where the c_Dependencies field was set (changed from null to something), and that might be 'after' your time period filter.

It looks like your query is requesting snapshots earlier than 2017/1/25 ($lte). Since there's only one, it's probably the creation snapshot. If you get all snapshots for the ObjectID by removing the _ValidFrom parameter, you should see the changes made to c_Dep after artifact creation.

As I am not allowed to comment, I have to post a new answer.
I think William Scott meant remove the ValidTo filter. The one you have is the creation change. The update will be afterwards.
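As a rough sketch, the widened query the answers above describe would look something like this (only the find clause from the question is reused; the fields list is an assumption, and the exact parameter names depend on whether you call the Lookback REST endpoint directly or go through a SnapshotStore):

findConfig: {
    // No _ValidFrom upper bound, so every snapshot for the story comes back,
    // including the later one where c_Dep changed from null to a value.
    "_TypeHierarchy": "HierarchicalRequirement",
    "ObjectID": 12345
}
// When querying the REST endpoint directly, also request the custom field explicitly:
fields: ["ObjectID", "Name", "c_Dep", "_ValidFrom", "_ValidTo"]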


React Admin - Make input for filter based on other resource

I am using React Admin to build a dashboard and I have a Lead resource with a status field that is computed from another resource, Call, and I want to make a filter component for the Lead list. The way it works is that for each lead, I query the last call (sorted by a date field) associated with that lead and get its status. The lead status is the status of that last call.
{ filter: { lead }, sort: { date: -1 }, limit: 1 }
the lead status query
I use this query to make a field (that appears in the list, in the row of a single lead), and I want to know how I can make an input component to use as a filter in the list. I know this pattern is weird, but it's hard to change it in the backend because of how it's structured. I am open to suggestions on how to change this messy computed-field situation, but as I said, I would be satisfied with knowing how I can create the input component.
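On the input side, a minimal sketch of a select filter in react-admin could look like the following (the choices are invented, and it assumes the backend can filter leads by status once that field is exposed, which is what the answer below deals with):

import * as React from 'react';
import { Filter, List, Datagrid, TextField, SelectInput } from 'react-admin';

// Invented status values; use whatever the Call resource actually returns.
const statusChoices = [
    { id: 'pending', name: 'Pending' },
    { id: 'done', name: 'Done' },
];

const LeadFilter = (props: any) => (
    <Filter {...props}>
        <SelectInput source="status" choices={statusChoices} alwaysOn />
    </Filter>
);

export const LeadList = (props: any) => (
    <List {...props} filters={<LeadFilter />}>
        <Datagrid>
            <TextField source="name" />
            <TextField source="status" />
        </Datagrid>
    </List>
);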
The solution I'm going with is a computed field. In my case, as I use MongoDB, it will be done through an aggregation pipeline. As I'm using REST instead of GraphQL, I cannot use a resolver that would only be called when the status field is needed, so this sometimes results in an unneeded aggregation (getting the last Call for a given Lead). However, it won't incur an additional round trip - it only consumes more processing time in the DB - which would otherwise be needed for react-admin to compute this field through a reference. And status is an important field that will usually be needed anyway.
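For reference, a sketch of what that aggregation pipeline could look like (collection and field names are assumptions):

// Attach the status of the most recent Call to each Lead.
const leadWithStatusPipeline = [
    {
        $lookup: {
            from: 'calls',                 // assumed collection name
            let: { leadId: '$_id' },
            pipeline: [
                { $match: { $expr: { $eq: ['$lead', '$$leadId'] } } },
                { $sort: { date: -1 } },
                { $limit: 1 },
                { $project: { _id: 0, status: 1 } },
            ],
            as: 'lastCall',
        },
    },
    // Flatten the one-element array into a plain status field that the list
    // and the filter input can both use.
    { $addFields: { status: { $arrayElemAt: ['$lastCall.status', 0] } } },
    { $project: { lastCall: 0 } },
];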

Heat map visualization issue in MicroStrategy VI

I am creating a dashboard using a heat map as the visualization. Everything was OK until I changed the parameters of my metric; the chart disappeared and I got this message: 'Filter excludes all data'
The only modification I've made is to set Include Distinct Elements to true within the Count Parameter option of the metric.
What could be happening? Do I need to set another parameter to get the count of distinct elements that I need?
Regards.
The metric is most likely levelled at some attribute that is not inside the visualization; if the filter has a date, for example, include that attribute in the visualization.
"Filter excludes all data" is a default warning message that you will get in MicroStrategy when a report/visual/dashboard does not return any data.
https://community.microstrategy.com/s/article/KB47557-How-to-Properly-Suppress-the-Message-Filter-excludes-all?r=1&Component.reportDeprecationUsages=1&Headline.getInitData=1&ArticleView.getArticleHeaderDetail=1&Quarterback.validateRoute=1&RecordGvp.getRecord=1&ArticleRichContent.getArticleAuthor=1&ArticleTopicList.getTopics=1&ArticleRichContent.hasArticleAccess=1&ForceCommunityFeed.getModel=1&ArticleRichContent.getTopicsAssigned=1
There are any number of reasons why a report might not return data; please check the following steps to debug the issue:
1) Your first image shows a date range as a filter. After changing to "Include only distinct", this date range might be affected, so put the objects in a grid, apply the date filter, and check whether it returns the correct data.
2) If it does, check whether all the metrics return a value.
3) Check whether the candidateID, date and people attributes/metrics are properly related.
These steps will show where the problem is. If you still cannot figure it out, export the dashboard and share it with the MicroStrategy Tech Support team for debugging.
Hope it helps.

Keeping dValIds For auto-generated dimensions consistent

I am working with Endeca 6.4.1 and have many auto-generated dimensions present in my pipeline (mapped using Dev Studio); the application's indexing is CAS-less, so only FCM is creating dimensions and assigning dValIds. I am using Endeca SEO, so the dValId is reflected directly in my URL, and if an auto-generated dimension value's id changes, a link to that navigation state is lost.
I have a flat file as the dimension's source, for example
product.feature|neon finish
What I want is that, if the value some day changes to "Neon-finish" or "Neon color", the dValId that was assigned to "neon finish" should be transferred to the new value. I can keep a custom mapping of the change to track that "neon finish" has been changed to a new value.
Is there any way to achieve this, maybe by using some manipulators?
Please share your thoughts.
There are two basic ways to do this:
1) Update the state files when you change a dimension value (APPDIR/data/state/autogen_dimensions.xml). This would most likely be a manual process.
2) A more robust but complex solution is to change the dimension values to be some ID number and use a synonym for the display name. Then the display name can change without a change to the id number. This may require some serious changes to your pipeline.
Good luck

Data structure for attributes changing based on datetime?

I need to write a data structure that can return a different Price for an Item, based on the day of the week and the hour of the day. Is there a clever way to store this? I can't think of where to start for something like this, other than maybe using a dictionary with some kind of structured key format to indicate pricing for different time periods.
Item object has a number of Price objects.
When the Item is asked for a price, it loops through its Price objects and asks each one whether it is currently valid and can give a price value back.
Each Price object checks against the current day/time to see whether its value is valid.
Hopefully you will only get one value back but if there are multiple or no matches you need some logic to go from there.
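A small sketch of that shape (the day/hour granularity, the tie-break rule and the fallback price are all assumptions):

// A Price is valid on certain days of the week, within an hour range.
interface Price {
    daysOfWeek: number[];   // 0 = Sunday ... 6 = Saturday
    startHour: number;      // inclusive, 0-23
    endHour: number;        // exclusive, 1-24
    value: number;
}

class Item {
    constructor(private prices: Price[], private defaultPrice: number) {}

    // Ask each Price whether it is valid at the given moment; fall back if none match.
    priceAt(when: Date = new Date()): number {
        const matches = this.prices.filter(
            p =>
                p.daysOfWeek.includes(when.getDay()) &&
                when.getHours() >= p.startHour &&
                when.getHours() < p.endHour
        );
        // With multiple matches you need a tie-break rule; here the first one wins.
        return matches.length > 0 ? matches[0].value : this.defaultPrice;
    }
}

// Usage: weekday happy hour from 17:00 to 19:00, otherwise the default price.
const beer = new Item(
    [{ daysOfWeek: [1, 2, 3, 4, 5], startHour: 17, endHour: 19, value: 3.5 }],
    5.0
);
console.log(beer.priceAt());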

What do I gain by adding a timestamp column called recordversion to a table in ms-sql?

You can use that column to make sure your users don't overwrite data from another user.
Let's say user A pulls up record 1 and at the same time user B pulls up record 1. User A edits the record and saves it. Five minutes later, user B edits the record - but doesn't know about user A's changes. When he saves his changes, you use the recordversion column in your UPDATE's WHERE clause, which prevents user B from overwriting what user A did. You can detect this condition and throw some kind of "data out of date" error.
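A sketch of the pattern (table and column names are made up; rowversion is the current name for the old timestamp type):

// The client keeps the RecordVersion value it originally read and sends it back
// with the update; if somebody saved in the meantime, zero rows are affected.
const updateSql = `
    UPDATE Orders
    SET Status = @newStatus
    WHERE OrderId = @orderId
      AND RecordVersion = @originalVersion;
`;
// Check the affected-row count your driver reports: 0 rows means another user
// updated the record since it was read, so raise a "data out of date" error
// instead of silently overwriting their change.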
Nothing that I'm aware of, or that Google seems to find quickly.
You don't get anything inherent by using that name for a column. Sure, you can create a column and do the record versioning as described in another answer, but there's nothing special about the column name. You could call the column anything you want and do versioning, and you could call any column RecordVersion and nothing special would happen.
Timestamp is mainly used for replication. I have also used it successfully to determine whether the data has been updated since the last feed to the client (when I needed to send a delta feed) and thus pick out only the records which have changed since then. This does require having another table that stores the value of the timestamp (in a varbinary field) at the time you run the report, so you can compare against it on the next run.
If you think that timestamp records the date or time of the last update, it does not do that; you would need datetime fields with default constraints (to get the original datetime) and triggers (to keep it updated) to store that information.
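A sketch of that delta-feed pattern (all names here are illustrative):

// FeedWatermark remembers the rowversion high-water mark from the previous run.
const deltaFeedSql = `
    DECLARE @lastSent varbinary(8), @current varbinary(8);

    SELECT @lastSent = LastRowVersion FROM FeedWatermark WHERE FeedName = 'client-x';
    SET @current = @@DBTS;  -- last rowversion value used in this database

    -- Only rows that changed since the previous feed.
    SELECT *
    FROM Orders
    WHERE RecordVersion > @lastSent AND RecordVersion <= @current;

    -- Move the watermark forward for the next run.
    UPDATE FeedWatermark SET LastRowVersion = @current WHERE FeedName = 'client-x';
`;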
Also, keep in mind if you want to keep track of your data, it's a good idea to add these four columns to every table:
CreatedBy(varchar) | CreatedOn(date) | ModifiedBy(varchar) | ModifiedOn(date)
While it doesn't give you full history, it lets you know who and when created an entry, and who and when last modified it. Those 4 columns create pretty powerful tracking abilities without any serious overhead to your DB.
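If it helps, a sketch of adding those columns with defaults so the "created" pair fills itself in (table name and types are illustrative; the "modified" pair still has to be maintained by the application or an update trigger):

const addAuditColumnsSql = `
    ALTER TABLE Orders ADD
        CreatedBy  varchar(128) NOT NULL CONSTRAINT DF_Orders_CreatedBy DEFAULT SUSER_SNAME(),
        CreatedOn  datetime2    NOT NULL CONSTRAINT DF_Orders_CreatedOn DEFAULT SYSUTCDATETIME(),
        ModifiedBy varchar(128) NULL,
        ModifiedOn datetime2    NULL;
`;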
Obviously, you could create a full-blown logging system that tracks every change and gives you full-blown history, but that's not the solution for the issue I think you are proposing.