Azure Storage Explorer - Filtering on Last Modified

Apparently we cannot sort blobs easily if there are more than 1000: you have to keep clicking "load more" until the complete list of blobs appears, and only then can you sort them as you wish, which is really not ideal.
So I want to apply a filter on the 'Last Modified' tag key, but I can't seem to make it work.
I want to get all the blobs from the 1st of June and later.
Here I have (at least) 2 blobs from the 4th of June 2021 (the display is set to the French format, but the timestamp shouldn't be affected):
But whatever filter I try, I get nothing:
I tried different formats, but none of them work:
Last Modified >= 2021-06-01
Last Modified >= 01-06-2021
Last Modified >= 01/06/2021
Any ideas on what's going on here?

I believe the issue is that you're trying to filter by the blob's LastModified system property. Blobs can also have user-defined tags, and the screenshot you shared is for filtering blobs based on a tag name/value set on the blob.
To make the filtering work, you will need to define tags on the blobs and then apply the filter to those tag names/values.
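As a minimal sketch with the azure-storage-blob Python SDK (v12+); the tag name "ProcessedDate" is a made-up example, since the LastModified system property itself is not available to the tag filter:

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

# 1) Set an index tag on each blob; tags are separate from system
#    properties and metadata and must be set explicitly
blob = service.get_blob_client(container="mycontainer", blob="myblob.csv")
blob.set_blob_tags({"ProcessedDate": "2021-06-04"})

# 2) Filter blobs by tag; comparisons are string comparisons,
#    so use a sortable date format like yyyy-MM-dd
for match in service.find_blobs_by_tags("\"ProcessedDate\" >= '2021-06-01'"):
    print(match.name)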


How to manage additional processed data in MarkLogic

MarkLogic 9.0.8.2
We have around 20M records in MarkLogic.
For one business requirement, we need to generate additional data for each XML document, and users will then need to search this data.
As we can't change the original documents, we need input on the best way to manage the additional data. These are the options we have thought of:
Option 1: Create a separate collection and store the additional data in a separate XML document with the same unique number as the original XML. When a user searches, we search this collection, then retrieve the original documents and send the response back.
Option 2: Store the additional data in the original document's properties.
We also need to create an element range index to make sure this works when the end user provides data with range operators.
<abc>
    <xyz>
        <quan>qty1</quan>
        <value1>1.01325E+05</value1>
        <unit>Pa</unit>
    </xyz>
    <xyz>
        <quan>qty2</quan>
        <value1>9.73E+02</value1>
        <value2>1.373E+03</value2>
        <unit>K</unit>
    </xyz>
    <xyz>
        <quan>qty3</quan>
        <value1>1.8E+03</value1>
        <unit>s</unit>
    </xyz>
    <xyz>
        <quan>qty4</quan>
        <value1>3.6E+03</value1>
        <unit>s</unit>
    </xyz>
</abc>
We need to process the data from the value1 elements. Users will then search for something like:
qty1 >= minvalue AND qty1 <= maxvalue
qty2 >= minvalue AND qty2 <= maxvalue
qty3 >= minvalue AND qty3 <= maxvalue
So when a user searches on qty1, it should only match values from the xyz element whose quan is qty1, and so on.
So we would like to know:
What is the best approach to storing data like this?
What kind of index should I create to implement this?
I would recommend wrapping the original data in an envelope, which allows adding extra data in a header. It also lets you create a canonical view of the relevant pieces of the data: either store that view as the instance and the original as an 'attachment' (a sub-property, not an attached binary), or keep the instance as-is and put the canonical values for indexing in the header.
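As a rough sketch of the second variant (the element and attribute names here are illustrative, not a fixed schema), the envelope keeps the original <abc> document intact and adds normalized values alongside it:

<envelope>
    <header>
        <!-- canonical, index-friendly copies of the value1 data -->
        <quantity name="qty1" unit="Pa">101325</quantity>
        <quantity name="qty2" unit="K">973</quantity>
    </header>
    <instance>
        <!-- the original <abc>...</abc> document, stored unchanged -->
    </instance>
</envelope>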
There is a lengthy blog article on the topic that discusses the pros and cons in detail: https://www.marklogic.com/blog/envelope-design-pattern/
HTH!
Grtjn's answer is the recommended solution, as it is more performant to keep all the information inside the document itself versus having to query across both the document and its properties, but it does require changing the document.
Options 1 and 2 could both work.
Properties documents already exist, so that option doesn't add fragments, but the properties must conform to the properties schema.
Creating sidecar documents provides more flexibility, but because you are creating new documents, it will increase the number of fragments.
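Whichever option you choose, the range part of the search can be served by a decimal element range index on value1 (or on whatever canonical element you add). A rough XQuery sketch; the database name "Documents" is an assumption, not something from the question:

xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

(: add a decimal range index on the value1 element :)
let $config := admin:get-configuration()
let $index  := admin:database-range-element-index(
                   "decimal", "", "value1", "", fn:false())
return admin:save-configuration(
    admin:database-add-range-element-index(
        $config, xdmp:database("Documents"), $index))

A query can then scope the quan/value1 pair to the same xyz element; note this scoping is only verified exactly by filtered search (or with positions enabled on the indexes):

cts:search(fn:collection(),
    cts:element-query(xs:QName("xyz"),
        cts:and-query((
            cts:element-value-query(xs:QName("quan"), "qty1"),
            cts:element-range-query(xs:QName("value1"), ">=", 100000.0),
            cts:element-range-query(xs:QName("value1"), "<=", 110000.0)))))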

Is there another way in Qlik Sense to show the latest week's data by default as well as a filter to change the week you're looking at?

I am looking to add a weekly filter to my Qlik dashboard to allow me to change which week's data is displayed. My original idea works well: I wanted to display the latest week's data and compare it to the previous week's, and this gives me exactly what I want.
To enhance this and give the dashboard a bit more flexibility, in case someone wants to look at a different week, I thought it would be a good idea to add a weekly filter, but the way I have built the dashboard won't allow me to do this. The following is an example of what I have:
In my database table I have a rank column (latest_week_rank) where the latest week ending (i.e. Mon 13th to Sun 19th Jan) has a value of 1, the 2nd latest week has a value of 2, and so on. I have then written the following code in my data tab:
latest_week = 1;
previous_week = 2;
I have then written the following, which is called within a Multi KPI:
vOrdersWTD =Sum({<latest_week_rank = {$(latest_week)}>} total_orders)
This is obviously where the problem lies: the weekly filter makes no difference, since no other weeks show up, but I am not sure how to change my code to make this all work.
I would really appreciate it if somebody could advise me on how to change this around.
Instead of assigning your predetermined variable in set analysis, use the actual field. Something like:
$(=
'Sum({<latest_week_rank = {'
& max(latest_week_rank)+1
& '}>} total_orders)'
)
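With rank 1 selected, that expands to Sum({<latest_week_rank = {2}>} total_orders), i.e. the previous week's total. If you also want a KPI that follows whichever week is currently selected (rank 1 being the latest), a min() variant along the same lines should work (a sketch, using the same field names as above):

$(=
'Sum({<latest_week_rank = {'
& min(latest_week_rank)
& '}>} total_orders)'
)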

Rally Lookback, Snapshot with empty custom field

I am trying to get a snapshot of a deleted user story to retrieve the value of a custom field (c_Dep). I get the snapshot, but the custom field is empty, even though it previously had a value in it. Does Lookback not save values for customer-created custom fields?
findConfig: {
    _TypeHierarchy: 'HierarchicalRequirement',
    "ObjectID": 12345,
    "_ValidFrom": {
        "$lte": "2017-01-25T19:00:57.475Z"
    }
}
Sarita, it is hard to tell precisely what is going on from the information you have given. However, I can give you some pointers.
The Lookback API stores changes in the values of custom fields. The selection you have shown is valid from 24th Jan to 25th Jan. Was the custom field set during this period? Probably not, because the array is only one entry long, and I think it is showing the creation event.
Was the custom field updated to contain something after this time period?
The reason for asking is that a common misunderstanding is that the records stored in the Lookback database hold the current values of fields - they don't. They hold the changes in fields. If c_Dependencies didn't change during that time period, you may not see an entry returned in the array. The next entry in the database might be the record where the c_Dependencies field was set (changed from null to something), and that might be 'after' your time period filter.
It looks like your query is requesting snapshots earlier than 2017/1/25 ($lte). Since there's only one, it's probably the creation snapshot. If you get all snapshots for the ObjectID by removing the _ValidFrom parameter, you should see the changes made to c_Dep after artifact creation.
As I am not allowed to comment, I have to post a new answer.
I think William Scott meant removing the _ValidFrom filter: the one snapshot you are getting is the creation change, and the update will come afterwards.
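Putting that together, a find along these lines (a sketch mirroring the findConfig style from the question; the fetch list is an assumption) should return every snapshot for the story, so you can see exactly when c_Dep changed:

findConfig: {
    _TypeHierarchy: 'HierarchicalRequirement',
    ObjectID: 12345
},
fetch: ['c_Dep', '_ValidFrom', '_ValidTo'],
hydrate: []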

Splunk Search does not return all event data on a field

I'm facing a very strange issue in my Splunk search. I have a data input coming from a REST API that returns a multi-level (nested) JSON response:
The entity node has several child nodes; each node represents one access point, and each access point contains a field called ipAddress.
This API is called every 5 minutes and the response is stored in Splunk. When I search to get the list of IP addresses from one event, I don't get all of them. For some reason, it's as if Splunk is reading only the first seven nodes inside entity, because when I do:
source="rest://AccessPointDetailsAPI" | head 1
Splunk shows only the following values for the field (7 values, although there are around 27):
I'm using a demo license, if that matters. Why can't I see all the values? If I change my search to look for a specific ipAddress that is present in the response but not in the extracted list, it returns no records.
I think I understand the problem now. The event is one big JSON document, and Splunk is not properly parsing all of the fields in it.
We need to tell Splunk to parse the specific field we need with spath, specifying the path to the field:
yoursearch | spath output=myIpAddress path=queryResponse.entity{}.accessPointDetailsDTO.ipAddress | table myIpAddress
http://docs.splunk.com/Documentation/Splunk/5.0.4/SearchReference/Spath
But I also think it is important to consider whether the data input should be divided into multiple events rather than stored as a single huge event.
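If the single huge event has to stay, one more thing worth checking (a hedged pointer, not from the thread itself): Splunk's automatic search-time JSON field extraction only scans the first few kilobytes of an event by default, which would match the "only the first seven nodes" symptom. The relevant limits.conf setting looks like this; verify the stanza against the documentation for your Splunk version:

# limits.conf
[spath]
# number of bytes of an event scanned for automatic spath/JSON
# field extraction (the default is 5000); fields beyond this
# point in a large nested event are simply not extracted
extraction_cutoff = 50000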

Restriction on retrieving file information from a folder

I am trying to get information about files in a folder using https://apis.live.net/v5.0/folderid/files?
This particular folder of mine has around 5200 files, so I am getting a read timeout when I make the above request. Is there any restriction on the number of files I can request?
Note: I am able to successfully retrieve the file information from the folder if I restrict the file count, e.g. to 500: https://apis.live.net/v5.0/folderid/files?limit=500
In general it's good to page queries that could potentially return a large number of results. You could try using the limit query parameter in combination with the offset query parameter to read sets of the children a page at a time and see if that works better for you.
I'll quote the relevant information from the documentation for ease of reference:
Specify the first item to get by setting the offset parameter in the preceding code to the index of the first item that you want to get. For example, to get two items starting with the third item, use FOLDER_ID/files?limit=2&offset=3.
Note In the JavaScript Object Notation (JSON)-formatted object that's returned, you can look in the paging object for the previous and next structures, if they apply, to get the offset and limit parameter values of the previous and next entries, if they exist.
You may also want to consider swapping to the new API, which has its own paging model (using next links).
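As a sketch of the limit/offset approach in Python (using the requests library; FOLDER_ID and ACCESS_TOKEN are placeholders, and the data array in the response is per the documentation quoted above):

import requests

FOLDER_ID = "folderid"    # placeholder folder id from the question
ACCESS_TOKEN = "..."      # a valid OAuth access token
PAGE_SIZE = 500           # the size the poster reported working

def list_folder_files():
    files, offset = [], 0
    while True:
        resp = requests.get(
            "https://apis.live.net/v5.0/%s/files" % FOLDER_ID,
            params={"limit": PAGE_SIZE, "offset": offset,
                    "access_token": ACCESS_TOKEN},
            timeout=30)
        resp.raise_for_status()
        page = resp.json().get("data", [])
        files.extend(page)
        if len(page) < PAGE_SIZE:  # a short page means we've reached the end
            return files
        offset += PAGE_SIZE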