I found some information about NHibernate listeners online, like this documentation:
http://docs.jboss.org/hibernate/orm/3.5/javadocs/org/hibernate/event/package-summary.html, but it's definitely not enough to understand in detail how they work. These are the questions about, say, PostUpdateEventListener that I was unable to find answers to:
1) Does it really not run when we call Session.Save(entity) with no changes made to the entity, i.e. when no SQL UPDATE statement runs against the database? If it does run, is there any NHibernate event/listener that tracks an actual database update, firing only when an UPDATE statement hits the database? Our entity is configured with DynamicUpdate(), if that makes any difference.
2) What if we have a separate nested entity such as [Name] (not a component, stored in a separate table), or a list of entities, that our listened entity, say [Person], references? Now we update the person's name without making any changes to [Person]'s own properties. Will PostUpdateEventListener be invoked for [Person] if we run Session.Save(person), or does it run just once for [Name]?
Maybe somebody could give me a link to clear, well-written online documentation explaining listeners in good detail and answering questions like these. Thanks
I figured it out in practice, and just for the sake of posterity here are the answers:
1) PostUpdateEventListener does not run as long as no actual changes are made to the database, even if you call Flush explicitly.
2) PostUpdateEventListener runs for [Name] only, provided [Name] is an entity. If [Name] is a component, PostUpdateEventListener runs with #event.Entity being [Person], obviously.
I use orphanRemoval to remove entities that are orphaned, but I could not find a way to remove entities that never had a parent.
For example, I build a house that has rooms, I delete the house, and orphanRemoval will delete the room entities; all works well.
Second scenario: I build a room first and save it to use when building the house later. I never use it, I never build the house, and orphanRemoval will not delete this entity for me.
What can I do to remove this kind of entity on a regular basis?
I searched the internet for solutions but could not find any. I'm new to programming, so I was thinking that maybe I'm not searching for the right thing. Any directions or tips would be amazing. Thank you
You could simply build a script that you execute every day using a cron job on your server. If you use Symfony, use a console command as the script to execute.
This script would retrieve the orphans from the database and delete them.
Locally, just execute the command.
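The core of such a script is a single query. As a rough sketch, assuming a hypothetical room table whose house_id column stays NULL for rooms that were never attached to a house (table and column names are placeholders):

    -- Hypothetical schema: room.house_id references house.id and stays NULL
    -- for rooms that were never attached to a house (MySQL syntax assumed).
    DELETE FROM room
    WHERE house_id IS NULL
      AND created_at < NOW() - INTERVAL 1 DAY; -- optional grace period for rooms created moments ago

The grace period just avoids deleting rooms that were created a moment ago and are about to be attached to a house.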
I have a Cloudant database with a lot of deleted docs. Since they can't be destroyed, I would like to make a filtered copy of the non-deleted items to a temporary database, destroy the original one, and then copy the temporary database to a fresh database with the same name as before.
The problem is that when I destroy the database, the API keys generated for it are also destroyed...
So the front-end app calling the new database can't access it!
I would like to manually create a user/password, so that I can recreate the same user each time I destroy the database.
I don't know how to do that.
Or is there another way to achieve my goal?
To answer your actual question, you can't add "users" to a Cloudant account, only databases. You can, however, create API keys that span multiple databases, which sounds like it could be what you want:
https://dx13.co.uk/articles/2016/04/11/using-a-cloudant-api-key-with-multiple-cloudant-databases-and-accounts/
But as was noted by bessbd above, if your data model relies on document deletion, you're working against the grain of Cloudant, and sooner or later you'll end up with problems.
And finally -- the doc links appear to work just fine.
Maybe some useful stuff here: https://blog.cloudant.com/2019/11/21/Best-and-Worst-Practices.html
[disclaimer, I wrote that]
Can you please expand a little further on your use case? Why do you want to get rid of the deleted docs? Is there a way to avoid deleting the docs? Also, have you already read https://cloud.ibm.com/docs/services/Cloudant?topic=cloudant-documents#tombstone-documents ?
How can I make sure that specific data in the database isn't altered anymore?
We are working with T-SQL. Inside the database we store contract revisions. These have a status: draft / active. When the status has become active, the revision may never be altered again. A revision can have 8 active modules (each with its own table), each with its own settings and sub-tables. This creates a whole tree of tables with records that may never change once the contract revision has been set to active.
Ideally I would simply mark those records as read-only, but no such thing exists today. The next thing that comes to mind is triggers, but then I would have to add those triggers to a lot of tables, all of which are related to the contract revision.
Now maybe there are other approaches, like a database used only for archiving, on which the user only has insert rights. When a contract revision becomes active, it is moved from one DB to the archive DB (INSERT is allowed) and can never be altered again (DENY UPDATE|DELETE). A sketch of that permissions setup is below.
But maybe there are other, more ingenious options I haven't thought of, and you have. Maybe involving the CLR or what not.
So how can I make a tree structure of records inside our T-SQL database effectively read-only, in a way that is the most maintenance-free, easy to understand, quick to set up, and can be applied in the most generic way?
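To make the permissions part of the archive idea concrete, this is roughly what I have in mind (the archive schema and role names are made up for illustration):

    -- Hypothetical archive schema and application role; all names are placeholders.
    CREATE SCHEMA archive;
    GO
    CREATE ROLE app_user;
    GO
    -- The application may insert and read finalized revisions, but never change or remove them.
    GRANT INSERT, SELECT ON SCHEMA::archive TO app_user;
    DENY UPDATE, DELETE ON SCHEMA::archive TO app_user;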
Whatever you do (triggers, granted rights...) might be overcome by a user with higher rights; this you know for sure...
Is this just to archive this data?
One idea that comes to mind is to create a nested XML document with all the data in one big structure and put it into a side table. Create an INSTEAD OF UPDATE, DELETE trigger in which you simply do nothing. Let these tables be 1:1-related.
You can still work with this data, but not quite as fast as when reading from physical tables.
If you want, you can even convert the XML to a string and calculate a hash code, which you store in a different place to check for manipulation.
The whole process could be done in a single stored procedure call.
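A minimal sketch of the do-nothing trigger and the hash, with made-up table and column names:

    -- Hypothetical side table holding the frozen revision tree as XML.
    CREATE TABLE dbo.ContractRevisionArchive
    (
        RevisionId   INT PRIMARY KEY,
        RevisionXml  XML NOT NULL,
        RevisionHash VARBINARY(32) NOT NULL  -- SHA2_256 of the XML; keep a copy elsewhere to detect tampering
    );
    GO
    -- Any UPDATE or DELETE against the archive is silently swallowed.
    CREATE TRIGGER dbo.trg_ContractRevisionArchive_ReadOnly
    ON dbo.ContractRevisionArchive
    INSTEAD OF UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- intentionally empty
    END;
    GO
    -- When archiving a revision, the hash can be computed with something like:
    -- HASHBYTES('SHA2_256', CONVERT(NVARCHAR(MAX), @revisionXml))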
What is the database concept equivalent to "when deleting the user, delete all his posts"?
And is this a good thing?
Another example: if your website is a programming forum, you need to find and delete the comments related to a topic before deleting the topic itself.
Should this be handled automatically in the database layer?
cascading deletes
I would hesitate to recommend real deletion - instead, use a soft deletion that marks a record as deleted. In that case, you might use cascading updates (or not, since the original topic has already been marked as deleted).
Cascading updates are usually used in conjunction with foreign key references. Different DBMSs offer varying levels of support.
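As an illustration (table and column names are invented), a foreign key with cascade rules looks like this in standard SQL:

    -- Hypothetical schema: deleting a user removes their posts, and a changed
    -- key value propagates to the referencing rows.
    CREATE TABLE users (
        user_id INT PRIMARY KEY
    );
    CREATE TABLE posts (
        post_id INT PRIMARY KEY,
        user_id INT NOT NULL,
        FOREIGN KEY (user_id) REFERENCES users (user_id)
            ON DELETE CASCADE
            ON UPDATE CASCADE
    );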
In the specific case of a forum or similar web site, I'd suggest using "soft" deletion - flag the rows in the database as deleted, which will prevent them from being viewed or returned in lists or search results, but don't remove them completely. This facilitates undeletion, etc., to counter shoddy or biased moderation.
In addition, I'd suggest that automatically deleting a user's posts when you delete their user account may not be the best behaviour in all cases - certainly, when dealing with troll/spam accounts, you may want to remove junk posts, but you don't necessarily want to blast away all the information in other cases, particularly as it introduces issues with broken references (e.g. external references, cross-linking from other posts, etc.)
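A sketch of what soft deletion looks like in SQL (table and column names are invented for the example):

    -- Soft delete: flag the row instead of removing it; names are illustrative only.
    UPDATE posts
    SET is_deleted = 1
    WHERE post_id = 42;

    -- Lists and search results simply exclude flagged rows.
    SELECT post_id, body FROM posts WHERE is_deleted = 0;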
The answer to your question is cascading deletes. For the record, I hate user deletion as a forum feature. If people want to leave, great ... I want to see the history of what they did while they were there.
Not sure if this is what you wanted to find out, but in MySQL the kind of thing (I think) you're asking about is called a trigger. It's basically an SQL statement that you associate with a table and an action on that table; for example, you can set a statement that will execute whenever a user's record is deleted and that will delete all comments/posts/whatever associated with that user.
See http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html and the links therein (that's for MySQL, of course... other DBs may differ).
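A rough sketch of such a trigger (table and column names are invented):

    -- MySQL: when a user row is deleted, remove that user's comments as well.
    -- Table and column names here are illustrative only.
    DELIMITER //
    CREATE TRIGGER delete_user_comments
    BEFORE DELETE ON users
    FOR EACH ROW
    BEGIN
        DELETE FROM comments WHERE comments.user_id = OLD.user_id;
    END//
    DELIMITER ;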
I feel like I have a very basic/stupid question, yet I never saw/read/heard anything in this direction.
Say I have a table users(userId, name) and a table preferences(id, userId, language). The example is trivial but could be extended to a situation with multi-level relations and many more tables.
When my UI requests to delete a user, I first want to show a warning stating that its preferences will also be deleted. If at some point the database gets extended with more tables and relationships, but the software isn't adapted accordingly (the client didn't update), a generic message should be shown.
How can I implement this? The UI cannot know about the whole data structure and should not be bothered with walking down all the relations to manually delete all the dependent records.
I would think this would be done with constraints.
The constraint would be NO ACTION at first, so that it throws an error that can be caught by the UI. After the UI receives a confirmation, the constraint would become CASCADE.
Somehow I'm feeling like I'm getting this all wrong..
What I would do is this:
The constraint is CASCADE
The application checks if preferences exist.
If they do, show the warning.
If no preferences exist, or the warning is accepted, delete the client.
Changing database relationships on the fly is not going to be a good idea!!
Cheers,
RB.
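A rough sketch of both pieces, using the table names from the question (SQL Server flavoured syntax; adapt to your DBMS):

    -- The relationship itself always cascades.
    ALTER TABLE preferences
        ADD CONSTRAINT FK_preferences_users
        FOREIGN KEY (userId) REFERENCES users (userId)
        ON DELETE CASCADE;

    -- Before deleting, the application checks for dependent rows and shows
    -- the warning if the count is non-zero (@userId is the user to delete).
    SELECT COUNT(*) AS preference_count
    FROM preferences
    WHERE userId = @userId;

    -- If the user confirms, a single DELETE removes the user and, via the
    -- cascade, the preferences.
    DELETE FROM users WHERE userId = @userId;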
If you are worried about the user not realising the full impact of their delete, you might want to consider not actually deleting the data - instead you could simply set a flag in a column called, say, "marked_for_deletion". (The entries could then be deleted a safe time later.)
The downside is that you need to remember to filter out the marked rows in other queries. This can be mitigated by creating a view on the table with the marked rows filtered out, and then always using the view in your queries.
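For example (the flag column is assumed for illustration), the view could look like:

    -- Users table from the question plus a hypothetical soft-delete flag;
    -- the view hides rows that are marked for deletion.
    CREATE VIEW active_users AS
    SELECT userId, name
    FROM users
    WHERE marked_for_deletion = 0;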