Is it possible to hard delete a Directus item through the API or app?

Deleting an item from a collection that has a status field, whether through the UI or the API, results in the item being soft deleted. However, in this one particular instance, we want to remove it from the database entirely.
If that's not possible, can it safely be done directly in the database by deleting the row from the table carrying the name of the collection? Are there any side effects when doing it this way?

Found the answer on this page https://v8.docs.directus.io/guides/status.html#soft-delete:
When deleting an item, the API does the following:
Check if the collection has a status field
Check if the delta data has the status field (meaning the status was changed)
Check if the new status value (from delta data) has soft_delete = true
If yes, it sets the action to SOFT_DELETE
If no, it hard deletes the item (permanently removed from the database)
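For illustration, here is a hedged C# sketch of what a hard delete over the v8 REST API could look like. The base URL, the "_" default project segment, the "products" collection, the item ID, and the token are all placeholders rather than values from the original question; the point is that a plain DELETE request carries no status delta, so by the logic above the item is removed permanently.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DirectusHardDeleteDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("https://directus.example.com/") })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<api-token>"); // placeholder token

            // No status change is sent, so the item is hard deleted.
            HttpResponseMessage response = await client.DeleteAsync("_/items/products/1");
            response.EnsureSuccessStatusCode(); // expects an empty 2xx response
        }
    }
}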

Related

Best practice for pagination based on item updated time

Let's say I have 30 items in my DB, and clientA makes an API call to get the first 10 records ordered by item updated time. Now consider a use case where clientB updates the 11th item by making some changes to it. When clientA then makes an API call for the next set of items, page 2 of the pagination (items 11 to 20), the pagination breaks: because clientB updated the 11th item, that item moves to position 1 in the ordering, item 1 becomes 2, item 2 becomes 3, and so on until item 10 becomes 11. There is a chance that clientA will receive duplicate data.
Is there a better approach for this kind of problem?
Any help would be appreciated.
I think you could retrieve all elements each time, using no pagination at all, to prevent this kind of "false information" in your table.
If visualizing the actual values of each record is mandatory, you could always add a new function to your API that works as a trigger. Each time a user modifies any record, this function would push a message to all active sessions, notifying users that some data has changed. As an example, think of something like Twitter's live feed: when a new batch of tweets is created, Twitter notifies all users to reload the page if they want to see real-time information. A sketch of this idea follows.
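Here is a minimal sketch of that trigger idea, using ASP.NET Core SignalR as one possible transport (my choice; the answer doesn't name one). ItemsHub, ItemService, and the "ItemChanged" message name are made up for illustration.
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

public class ItemsHub : Hub { }

public class ItemService
{
    private readonly IHubContext<ItemsHub> _hub;
    public ItemService(IHubContext<ItemsHub> hub) => _hub = hub;

    public async Task UpdateItemAsync(int itemId)
    {
        // ... persist the update, bumping the item's updated-time column ...

        // Tell every connected client that its paged view may be stale,
        // like Twitter's "new tweets" banner; clients can then reload.
        await _hub.Clients.All.SendAsync("ItemChanged", itemId);
    }
}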

Rally Lookback, Snapshot with empty custom field

I am trying to get a snapshot of a deleted user story to read the value of a custom field (c_Dep). I get the snapshot, but the custom field is empty, even though it had a value in it. Does Lookback not save values for customer-created custom fields?
findConfig: {
    _TypeHierarchy: 'HierarchicalRequirement',
    ObjectID: 12345,
    _ValidFrom: {
        '$lte': '2017-01-25T19:00:57.475Z'
    }
}
Sarita, it is hard to tell from the information you have given what is going on precisely. However, I can give you some pointers.
The Lookback API will store changes in values for custom fields. The snapshot you have shown is valid from 24th Jan to 25th Jan. During this period, was the custom field set? Probably not, because the array is only one element long and I think it is showing the creation event.
Was the custom field updated to contain something after this time period?
The reason for asking is that a common misunderstanding is that the records stored in the Lookback database hold the current value of fields - they don't. They hold the changes in fields. If c_Dependencies didn't change during that time period, you may not see an entry returned in the array. The next entry in the database might be the record where the c_Dependencies field was set (changed from null to something), and that might be 'after' your time period filter.
It looks like your query is requesting snapshots earlier than 2017/1/25 ($lte). Since there's only one, it's probably the creation snapshot. If you get all snapshots for the ObjectID by removing the _ValidFrom parameter, you should see the changes made to c_Dep after artifact creation, as in the sketch below.
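For example, the revised query could look like the following (the fetch list is my assumption; include whichever fields you need):
findConfig: {
    _TypeHierarchy: 'HierarchicalRequirement',
    ObjectID: 12345
    // no _ValidFrom filter: return every snapshot for this story
},
fetch: ['c_Dep', '_ValidFrom', '_ValidTo']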
As I am not allowed to comment, I have to post a new answer.
I think William Scott meant remove the _ValidFrom filter. The snapshot you have is the creation change; the update will come afterwards.

Sitecore: Programmatically trigger reindexing of related content

In my Sitecore instance, I have content based on two templates, Product and Product Category. The Products have a multilist that links to Product Categories as lookups. The Products also have an indexing computed field set up that precomputes some data based on the selected Product Categories. So when a user changes a Product, Sitecore's indexing strategy indexes the Product with the computed field.
My issue is that when a user changes the data in a Product Category, I want Sitecore to reindex all of the related Products, and I'm not sure how to do this. I do not see any hook where I could detect that a Product Category is being indexed, so that I could programmatically trigger an index update for the Products.
You could achieve this using the indexing.getDependencies pipeline. Add a processor to it: your custom class should derive from Sitecore.ContentSearch.Pipelines.GetDependencies.BaseProcessor and override Process(GetDependenciesArgs context).
In this method you can inspect the IndexedItem and, based on that information, add other items to the Dependencies collection. These dependencies will be indexed as well. The benefit of this way of working is that the dependent items are indexed in the same job, instead of new jobs being spawned to update them.
Just be aware of the performance hit this could cause if badly written, as this code is called for all indexes. Get out of the method as soon as you can if it is not applicable; see the sketch below.
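A minimal sketch of such a processor, assuming the templates are literally named "Product" and "Product Category" and that Products reference the categories through the link database (all assumptions; adjust to your solution):
using Sitecore;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.Pipelines.GetDependencies;
using Sitecore.Data.Items;
using Sitecore.Links;

public class GetProductCategoryDependencies : BaseProcessor
{
    public override void Process(GetDependenciesArgs context)
    {
        // Bail out fast: this runs for every item on every index.
        Item item = context.IndexedItem as SitecoreIndexableItem;
        if (item == null || item.TemplateName != "Product Category")
            return;

        // Every Product whose multilist references this category is
        // added as a dependency and reindexed in the same job.
        foreach (ItemLink link in Globals.LinkDatabase.GetReferrers(item))
        {
            Item referrer = link.GetSourceItem();
            if (referrer != null && referrer.TemplateName == "Product")
                context.Dependencies.Add(new SitecoreItemUniqueId(referrer.Uri));
        }
    }
}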
Some known issues about this pipeline can be found on the kb.
One way would be to add a custom OnItemSave handler which will check if the changed item has the Product Category template, and will programmatically trigger the index update.
To only reindex the changed items, you can pick up the related Products and register them in the HistoryEngine by using the HistoryEngine.RegisterItemSaved method:
Sitecore.Context.Database.Engines.HistoryEngine.RegisterItemSaved(myItem, new ItemChanges(myItem));
Some useful instructions on how to create an OnItemSave handler can be found here: https://naveedahmad.co.uk/2011/11/02/sitecore-extending-onitemsave-handler/
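A minimal sketch of such a handler, reusing the hypothetical template names from above; it would be wired up under the item:saved event in configuration:
using Sitecore;
using Sitecore.Data.Items;
using Sitecore.Events;
using Sitecore.Links;
using System;

public class ProductCategorySavedHandler
{
    public void OnItemSaved(object sender, EventArgs args)
    {
        Item item = Event.ExtractParameter(args, 0) as Item;
        if (item == null || item.TemplateName != "Product Category")
            return;

        // Register each related Product so the incremental update
        // strategy picks it up on the next index run.
        foreach (ItemLink link in Globals.LinkDatabase.GetReferrers(item))
        {
            Item product = link.GetSourceItem();
            if (product != null && product.TemplateName == "Product")
                product.Database.Engines.HistoryEngine.RegisterItemSaved(
                    product, new ItemChanges(product));
        }
    }
}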
You could add a new strategy or change one of the existing index update strategies (the configuration\sitecore\contentSearch\indexConfigurations\indexUpdateStrategies configuration node).
As an example you could take
Sitecore.ContentSearch.Maintenance.Strategies.SynchronousStrategy
The one thing you need to change is the
public void Run(EventArgs args, bool rebuildDescendants)
method. args contains the changed item reference; all you need to do is trigger an index update for the related items.
Once you have the custom update strategy, you should add it to your index's strategies configuration node, as sketched below.
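For reference, a hedged sketch of that configuration, assuming a custom strategy class MyAssembly.CustomSynchronousStrategy (a made-up name):
<!-- Register the custom strategy under indexUpdateStrategies (names are placeholders) -->
<indexUpdateStrategies>
  <customSynchronous type="MyAssembly.CustomSynchronousStrategy, MyAssembly" />
</indexUpdateStrategies>

<!-- Reference it from the index definition -->
<strategies hint="list:AddStrategy">
  <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/customSynchronous" />
</strategies>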

Optimal way to add / update EF entities if added items may or may not already exist

I need some guidance on adding / updating SQL records using EF. Let's say I am writing an application that stores info about the files on a hard disk in an EF4 database. When you press a button, it will scan all the files in a specified path (maybe the whole drive) and store information in the database such as the file size, change date, etc. Sometimes a file will already be recorded from a previous run, so its properties should be updated; sometimes a batch of files will be detected for the first time and will need to be added.
I am using EF4 and am seeking the most efficient way of adding new file information and updating existing records. As I understand it, when I press the search button and files are detected, I will have to check for the presence of a file entity, retrieve its ID field, and use that to add or update related information; if it does not already exist, I will need to create a tree that represents it and its related objects (e.g. its folder path) and add that. I will also have to handle merging the folder path object.
It occurs to me that if there are many millions of files, as there might be on a server, loading the whole database into the context is neither ideal nor practical. So for every file, I might conceivably have to make a round trip to the database on disk to detect whether the entry already exists, retrieve its ID if it does, and then make another trip to update it. Is there a more efficient way to insert/update multiple file object trees in one trip to the DB? If there were an Entity context method like 'Insert If It Doesn't Exist And Update If It Does', for example, then I could wrap up multiple operations in a transaction.
I imagine this is a fairly common requirement; how is it best done in EF? Any thoughts would be appreciated. (Oh, my DB is SQLite, if that makes a difference.)
You can check if the record already exists in the DB. If not, create and add it. You can then set the fields that are common to both insert and update, as in the sample code below.
// Look up the record by name; null means it doesn't exist yet.
var strategy_property_in_db = _dbContext.ParameterValues
    .Where(r => r.Name == strategy_property.Name)
    .FirstOrDefault();
if (strategy_property_in_db == null)
{
    // Not found: create the entity and register it with the EF4 ObjectContext.
    strategy_property_in_db = new ParameterValue() { Name = strategy_property.Name };
    _dbContext.AddObject("ParameterValues", strategy_property_in_db);
}
// Common to both the insert and the update path.
strategy_property_in_db.Value = strategy_property.Value;
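If one round trip per file becomes the bottleneck, as the question anticipates, a common refinement is to look up a whole batch of files in a single query and then apply the same add-or-update logic in memory. A hedged sketch follows; scannedFiles, FileRecord with its Path/Size/ChangeDate properties, and _dbContext.FileRecords are hypothetical names, not from the original post.
// One query fetches every already-known file in the batch (translates to SQL IN).
var paths = scannedFiles.Select(f => f.Path).ToList();
var existing = _dbContext.FileRecords
    .Where(r => paths.Contains(r.Path))
    .ToDictionary(r => r.Path);

foreach (var file in scannedFiles)
{
    FileRecord record;
    if (!existing.TryGetValue(file.Path, out record))
    {
        // First time we've seen this file: add it.
        record = new FileRecord { Path = file.Path };
        _dbContext.AddObject("FileRecords", record);
    }
    // Shared by the insert and update paths.
    record.Size = file.Size;
    record.ChangeDate = file.ChangeDate;
}

_dbContext.SaveChanges(); // a single transaction for the whole batch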

NHibernate: If two calls are made to CreateCriteria, which list is Get<T> going to retrieve the object from?

Under one UnitOfWork (session) I may call CreateCriteria twice. My first call is to populate a grid for data editing. Let's say the data has been edited and flushed (saved) to the database while the grid is still open. After the data is edited, I may call CreateCriteria a second time to retrieve a list of objects that were validated and found to be in error.
Let's say ObjectA was retrieved by both calls to session.CreateCriteria. It was edited in the grid but found to be in error within the second list.
The first question would be: considering the first-level cache, is the ObjectA retrieved from the second call to CreateCriteria the same as the one retrieved from the first call? Or, better yet, did NHibernate "detect and reuse" ObjectA from the first call, assuming the keys did not change?
To my final point in question: I want to edit ObjectA, which was found to be in error, and let's say it was brought up in a ListBox. I want to highlight that object, call session.Get<T>(key) to retrieve it from the cache, and then bring up a change form to edit ObjectA's properties. Which object am I changing? The one from the first call to CreateCriteria or the second? Are they the same?
Thank you in advance.
Second level cache
Take a look at http://ayende.com/Blog/archive/2006/07/24/DeepDivingIntoNHibernateTheSecondLevelCache.aspx and http://www.javalobby.org/java/forums/t48846.html
From the former:
The second level cache does not hold entities, but collections of values
So, with caching set up properly, NHibernate will be able to recreate your object without having to get the actual values from the database. In other words, the object will be created the same as when it wasn't in the cache, except that since the values are cached, NHibernate won't actually query the database, since it already knows what's in there.
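For completeness, a hedged sketch of turning the second-level cache on programmatically, using NHibernate's built-in HashtableCacheProvider (a demo-only provider; production setups would use a real provider and per-class cache mappings):
using NHibernate.Cfg;

var cfg = new Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
    typeof(NHibernate.Cache.HashtableCacheProvider).AssemblyQualifiedName);
// Each cached class also needs a <cache usage="read-write"/> element in its mapping.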
I'm not quite sure what you mean by "validation" and "found in error". Are you validating before insert? Typically my entities are validated before the insert/update and won't actually be inserted/updated if invalid.
Validation aside, what I think you're asking is that if you:
save something
do a flush
retrieve an item (from a new session) with the same key as the one saved in step 1
will you be retrieving the same reference to the object you saved in step 1? The answer is no, since NHibernate does not cache the OBJECT but rather the values, so it can create a new entity populated with the cached values (instead of actually performing a DB query).
However, does that really matter? If you override Equals such that the equality of two entities is based on their ID, then finding the same (not reference-equal, but same) item in a grid (or a hash of any kind) should be a snap; see the sketch below.
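A hedged sketch of that Equals override (the Entity base class and the int ID are assumptions); members are virtual so NHibernate proxies still work:
public abstract class Entity
{
    public virtual int Id { get; protected set; }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(this, obj))
            return true;
        var other = obj as Entity;
        // Two persisted entities are "the same" when their IDs match;
        // transient entities (Id == 0) only equal themselves.
        return other != null && GetType() == other.GetType()
            && Id != 0 && Id == other.Id;
    }

    public override int GetHashCode()
    {
        // Assumes the ID is assigned before the entity is used in hashed collections.
        return Id.GetHashCode();
    }
}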
First level cache
I didn't realize you were talking about the first-level cache. The first-level cache works as an identity map and does cache the instance of the object. Therefore, if you do two selects from the DB based on the same ID, you will retrieve the same instance of the object, as the sketch below demonstrates.
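A minimal sketch demonstrating this, assuming a mapped entity Product with an int Id (both names made up):
using NHibernate;
using NHibernate.Criterion;

public static class FirstLevelCacheDemo
{
    public static void Run(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            // First lookup hits the database.
            var fromCriteria = session.CreateCriteria<Product>()
                .Add(Restrictions.IdEq(42))
                .UniqueResult<Product>();

            // Second lookup by the same key is served from the session's identity map.
            var fromGet = session.Get<Product>(42);

            // Same instance, not just an equal one.
            bool same = ReferenceEquals(fromCriteria, fromGet); // true
        }
    }
}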