I have a simple question. I was reading up on Redis and couldn't find an answer to it.
Consider that you have already created a key that holds a list. After some time, I need to replace the contents of that list with new content.
I can set a timeout on the list key and later create a key with the same name for the modified list, or I can reuse the originally created key.
So which approach is better? Creating new keys or updating the existing keys?
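For context, a minimal sketch of the key-reuse approach using redis-py (the key name and values are placeholders); deleting and repopulating inside a transactional pipeline swaps the list contents in one atomic step:

import redis

r = redis.Redis()

new_items = ["item1", "item2", "item3"]  # placeholder content

# DEL + RPUSH inside MULTI/EXEC: clients never see a half-built list.
pipe = r.pipeline(transaction=True)
pipe.delete("mylist")
pipe.rpush("mylist", *new_items)
pipe.execute()

An equivalent pattern is to build the new list under a temporary key and RENAME it over the old one, since RENAME overwrites the destination key.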
Can I update a table in Keystone when I add data to another table?
For example, I have a table named Property where I add the details of a property. As soon as I enter data into this Property table, another table, named NewTable, should automatically be populated with the same contents.
Is there a way to achieve this?
There are two ways I can see to approach this:
The afterOperation hook, which lets you configure an async function that runs after the main operation has finished
A database trigger that runs on UPDATE and INSERT
afterOperation Hook
See the docs here. There's also a hooks guide with some context on how the hooks system works.
In your case, you'll be adding a function to your Property list config.
The operation argument will tell you what type of operation just occurred ('create', 'update', or 'delete') which may be handy if you also want to reflect changes to Property items or clean up records in NewTable when a Property item is deleted.
Depending on the type of operation, the data you're interested in will be available in either the originalItem, item or resolvedData arguments:
For create operations, resolvedData will contain the values supplied, but you'll probably want to reference item instead: it also contains the generated and defaulted values that were applied, such as the new item's id. In this case originalItem will be null.
For update operations, resolvedData will contain just the data that changed, which should be everything you need to keep the copy in sync. If you want a more complete picture, originalItem and item will be the entire item before and after the update is applied.
For delete operations originalItem will be the last version of the item before it was removed from the DB. resolvedData and item will both be null.
The context argument is a reference to the Keystone context object, which includes all the APIs you'll need to write to your NewTable list. You probably want the Query API, e.g. context.query.NewTable.createOne(), context.query.NewTable.updateOne(), etc.
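Putting that together, here's a minimal sketch of what the hook could look like on the Property list (the fields, the propertyId field on NewTable, and the Keystone 6 imports are assumptions for illustration, not your actual schema):

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const Property = list({
  access: allowAll,
  fields: {
    // Hypothetical fields; replace with your real Property fields.
    name: text({ validation: { isRequired: true } }),
    address: text(),
  },
  hooks: {
    afterOperation: async ({ operation, item, originalItem, context }) => {
      if (operation === 'create') {
        // Mirror the new Property item into NewTable.
        await context.query.NewTable.createOne({
          data: { propertyId: item.id, name: item.name, address: item.address },
        });
      } else if (operation === 'update') {
        // Keep the copy in sync, assuming NewTable stores the source id in propertyId.
        const [copy] = await context.query.NewTable.findMany({
          where: { propertyId: { equals: item.id } },
          query: 'id',
        });
        if (copy) {
          await context.query.NewTable.updateOne({
            where: { id: copy.id },
            data: { name: item.name, address: item.address },
          });
        }
      } else if (operation === 'delete') {
        // Clean up the copy; only originalItem is available here.
        const [copy] = await context.query.NewTable.findMany({
          where: { propertyId: { equals: originalItem.id } },
          query: 'id',
        });
        if (copy) {
          await context.query.NewTable.deleteOne({ where: { id: copy.id } });
        }
      }
    },
  },
});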
The benefits to using a Keystone hook are:
The logic is handled within the Keystone app code which may make it easier to maintain if your devs are mostly focused on JavaScript and TypeScript (and maybe not so comfortable with database functionality).
It's database-independent. That is, the code will be the same regardless of which database platform your project uses.
Database Triggers
Alternatively, I'm pretty sure it's possible to solve this problem at the database level using UPDATE and INSERT triggers.
This solution is, in a sense, "outside" of Keystone and is database specific. The exact syntax you'll need depends on the DB platform (and version) your project is built on:
PostgreSQL
MySQL
SQLite
You'll need to manually add a migration that creates the relevant database structure and add it to your Keystone migrations dir. Once created, Prisma (the DB tooling Keystone uses internally) will ignore the trigger when it's performing its schema comparisons, allowing you to continue using the automatic migrations functionality.
Note that, due to how Prisma works, the table with the copy of the data (NewTable in your example) will need to either be:
Defined as another Keystone list, so Prisma can create and maintain the table, or...
Manually created in a different database schema, so Prisma ignores it. (I believe this isn't possible on SQLite, as it lacks the concept of multiple schemas within a single DB.)
If you try to manually create and manage a table within the default database schema, Prisma will get confused (producing a "Drift detected: Your database schema is not in sync with your migration history" error) and prompt you to reset your DB.
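For reference, a rough sketch of what such a trigger could look like on PostgreSQL (the table and column names are assumptions; the real names depend on the tables Prisma generates for your lists):

-- Hypothetical table/column names; assumes NewTable has a unique property_id column.
CREATE OR REPLACE FUNCTION copy_property_to_newtable() RETURNS trigger AS $$
BEGIN
  INSERT INTO "NewTable" (property_id, name, address)
  VALUES (NEW.id, NEW.name, NEW.address)
  ON CONFLICT (property_id)
  DO UPDATE SET name = EXCLUDED.name, address = EXCLUDED.address;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER property_copy_trigger
AFTER INSERT OR UPDATE ON "Property"
FOR EACH ROW EXECUTE FUNCTION copy_property_to_newtable();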
I have a productive database with clean primary key / foreign key structures. It is pretty well designed and I'd like to keep it that way.
I now need to integrate data from another database. This database belongs to a different application and is a total mess: no conventions, no clear foreign key structure. However, the data I need is managed by that other application, and I have to refresh it at least daily.
The easiest solution I found is to create my own table and have an import script run every day, truncating my table before each import.
That does not work, however, because truncating would destroy all my foreign key references.
Another solution I have in mind is like the following:
Do a one time import and then update / delete / insert (new) data everyday.
This might get really tricky: because of the missing primary key, I might lose track of records and insert them as new rows instead of updating the existing ones.
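(To illustrate, a rough PostgreSQL-flavored sketch of that daily update/insert, assuming some natural key, here called source_code, can be derived from the messy source data:)

-- 1. Stage today's export (only the columns actually needed from the source).
CREATE TABLE IF NOT EXISTS staging_property (
  source_code text PRIMARY KEY,  -- the derived natural key
  name        text,
  price       numeric
);
TRUNCATE staging_property;
-- ... bulk-load today's export into staging_property here ...

-- 2. Upsert into the real table so its own primary/foreign keys stay intact
--    (assumes my_property.source_code has a UNIQUE constraint).
INSERT INTO my_property (source_code, name, price)
SELECT source_code, name, price FROM staging_property
ON CONFLICT (source_code)
DO UPDATE SET name = EXCLUDED.name, price = EXCLUDED.price;

-- 3. Remove rows that disappeared from the source.
DELETE FROM my_property p
WHERE NOT EXISTS (
  SELECT 1 FROM staging_property s WHERE s.source_code = p.source_code
);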
I guess I'm not the first one with a problem of this kind, but unfortunately I was not able to find any good advice by googling. That's why I wanted to make this post.
Is there any other useful approach which I currently don't see? Is there any other advice?
We use Lucene as a search engine. Our Lucene index is created by a master server, which is then deployed to slave instances.
This deployment is currently done by a script that deletes the old files and copies the new ones over.
We'd like to know whether there is a good practice for doing a "hot deployment" of a Lucene index. Do we need to stop or suspend Lucene? Do we need to inform Lucene that the index has changed?
Thanks
The first step is to open the index in append mode for writing. You can achieve this by constructing the IndexWriter with the open mode IndexWriterConfig.OpenMode.CREATE_OR_APPEND.
Once this is done, you are ready to both update existing documents and add new documents. To update a document, you need some kind of unique identifier for it (the URL, or anything else that is guaranteed to be unique). If you want to update the document with id "Doc001", simply call Lucene's updateDocument method, passing a Term for "Doc001" as the very first argument.
This way you can update an existing index without deleting it.
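A minimal sketch of that flow with a recent Lucene version (the index path and field names are placeholders):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.FSDirectory;

public class IndexUpdater {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        // Open the existing index for appending instead of recreating it.
        config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);

        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("/path/to/index")), config)) {
            Document doc = new Document();
            doc.add(new StringField("id", "Doc001", Field.Store.YES));
            doc.add(new TextField("body", "updated content", Field.Store.YES));

            // Deletes any document whose "id" term matches, then adds the new version.
            writer.updateDocument(new Term("id", "Doc001"), doc);
            writer.commit();
        }
    }
}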
I want to set up the Salt pillar to make key/value pairs available to a particular instance in a dynamic way. It appears that ext_pillar (which can be used to generate key/value pairs dynamically) cannot restrict access to particular minions. In my scenario, minions can be destroyed and new ones can join automatically.
In this situation, one solution for my problem (which seems quite tedious and inefficient) is:
When a new minion is accepted on the Salt Master, via a script, generate the private data for that minion, and create a YAML file with this information as key-value pairs in the salt pillar directory.
Use a script to automatically edit the pillar top file to allow this minion access to the private data generated in the previous step.
Refresh the pillar data on that minion
Access the private data on the minion.
I am hoping there is a better way to do the same thing. Any ideas?
ext_pillar allows you to return any data that you want. It is provided the minion id as well as the minion's grains and other info, which lets you decide what to return from the ext_pillar for that particular minion. So you can, indeed, restrict access to data to particular minions.
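A minimal sketch of such a module, assuming one YAML file of secrets per minion on the master (the paths and module name are made up for illustration):

# /srv/salt/extmods/pillar/per_minion_secrets.py  (hypothetical path and name)
import os

import yaml


def ext_pillar(minion_id, pillar, *args, **kwargs):
    '''Called by the master with the requesting minion's id and its compiled
    pillar; whatever dict is returned is merged into that minion's pillar only.'''
    path = os.path.join('/srv/secrets', '{0}.yaml'.format(minion_id))
    if not os.path.isfile(path):
        # Unknown or freshly accepted minion: return nothing rather than everything.
        return {}
    with open(path) as handle:
        return yaml.safe_load(handle) or {}

You would point the master at this module via extension_modules (or ext_pillar_dirs) and list it under ext_pillar in the master config; since the return value is computed from minion_id, each minion only ever sees its own data.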
I have a question about correctly handling a recreation of a database.
In my dev environment I often recreate the database by using
_schemaExport.Drop(true, true);
_schemaExport.Create(createResult, true);
(I should note that I use the hilo generator.) Right after recreating the database, saving a new entity sometimes fails with a "Cannot insert duplicate key..." exception.
My question:
Would I have to reinitialize the session factory (and maybe even the session) to get back in sync with the new hilo-using database, or should it work just as is?
Any hint is appreciated!
Best regards,
warappa
I'd say you definitely have to create a new session after recreating the database. Another option is to clear the existing one before recreating the DB.
The ID generator starts from scratch after you've recreated the DB, so a newly generated ID can be the same as the ID of an object already tracked in the previously existing session. That's why you're getting duplicate key errors.
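A minimal sketch of the order of operations (the _configuration, _sessionFactory and _session names are placeholders from your own setup):

// Drop and recreate the schema, as in the question.
var schemaExport = new NHibernate.Tool.hbm2ddl.SchemaExport(_configuration);
schemaExport.Drop(true, true);
schemaExport.Create(true, true);

// Throw away the session that was opened against the old database
// (or at least call _session.Clear() on it) so no stale objects are tracked...
_session.Dispose();

// ...and continue with a fresh session against the recreated database.
_session = _sessionFactory.OpenSession();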