How do I generate Model & CRUD automatically - yii-extensions

I want to generate a Model and CRUD automatically when a new table is created.
I create a new table (xyz_uid) dynamically after a user registers successfully, and I insert some data related to that particular user. This part is working fine for me.
I create a separate table for each user because the table attributes may differ for every user. I tried keeping one table instead of making a new table for every user, but it broke everything in my project.
I want to generate the Model and CRUD for that user's table so I can perform future transactions against it.
I know how to run Gii manually from ?r=gii, but here I want to generate the files automatically from the back end.
I searched for this in the Yii forum and on Google, but I didn't find anything.
Is there an extension or anything else that will generate these automatically?
Would it be a good idea to create a model and CRUD for every table, or should I talk to the tables directly using CDbCommand?

You may use giix-core.
When you install giix-core, it creates both a model and a base model, so whenever something changes in the DB you can regenerate just the base model, not the model.
This makes it easy to work with, and you can use method overriding.
Note: please make sure you do not write any code in the BaseModel.
For more, please refer to the following link:
http://www.yiiframework.com/forum/index.php/topic/13154-giix-%E2%80%94-gii-extended/

Related

Add data automatically to table B when you add data to table A

Can I update a table in Keystone when I add data to another table?
For example: I have a table named Property where I add details of the property. As soon as I enter the data into this Property table, another table, named NewTable, should automatically get populated with the contents.
Is there a way to achieve this?
There are two ways I can see to approach this:
The afterOperation hook, which lets you configure an async function that runs after the main operation has finished
A database trigger that runs on UPDATE and INSERT
afterOperation Hook
See the docs here. There's also a hooks guide with some context on how the hooks system works.
In your case, you'll be adding a function to your Property list config.
The operation argument will tell you what type of operation just occurred ('create', 'update', or 'delete') which may be handy if you also want to reflect changes to Property items or clean up records in NewTable when a Property item is deleted.
Depending on the type of operation, the data you're interested in will be available in either the originalItem, item or resolvedData arguments:
For create operations, resolvedData will contain the values supplied, but you'll probably want to reference item instead, since it will also contain the generated and defaulted values that were applied, such as the new item's id. In this case originalItem will be null.
For update operations, resolvedData will be just the data that changed, which should have everything you need to keep the copy in sync. If you want a more complete picture, originalItem and item will be the entire item before and after the update is applied.
For delete operations, originalItem will be the last version of the item before it was removed from the DB. resolvedData and item will both be null.
The context argument is a reference to the Keystone context object which includes all the APIs you'll need to write to your NewTable list. You probably want the Query API, eg. context.query.NewTable.createOne(), context.query.NewTable.updateOne(), etc.
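Putting that together, here's a minimal sketch of the hook (assuming Keystone 6; the Property fields and the NewTable field names are placeholders, not taken from your schema):

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const Property = list({
  access: allowAll,
  fields: {
    name: text(),
    address: text(),
  },
  hooks: {
    // Runs after the main operation has finished
    afterOperation: async ({ operation, item, originalItem, context }) => {
      if (operation === 'create') {
        // item includes generated/defaulted values, e.g. the new item's id
        await context.query.NewTable.createOne({
          data: { propertyId: String(item.id), name: item.name as string },
        });
      } else if (operation === 'delete') {
        // originalItem holds the last version of the deleted Property;
        // clean up the corresponding NewTable record here if you want
      }
    },
  },
});

The update case would look much the same, calling context.query.NewTable.updateOne() with the changed fields from resolvedData.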
The benefits to using a Keystone hook are:
The logic is handled within the Keystone app code which may make it easier to maintain if your devs are mostly focused on JavaScript and TypeScript (and maybe not so comfortable with database functionality).
It's database-independent. That is, the code will be the same regardless of which database platform your project uses.
Database Triggers
Alternatively, I'm pretty sure it's possible to solve this problem at the database level using UPDATE and INSERT triggers.
This solution is, in a sense, "outside" of Keystone and is database specific. The exact syntax you'll need depends on the DB platform (and version) your project is built on:
PostgreSQL
MySQL
SQLite
You'll need to manually add a migration that creates the relevant database structure and add it to your Keystone migrations dir. Once created, Prisma (the DB tooling Keystone uses internally) will ignore the trigger when it's performing its schema comparisons, allowing you to continue using the automatic migrations functionality.
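For illustration only, here's roughly what the PostgreSQL flavour of such a trigger could look like. The SQL is what would go in that hand-written migration; it's shown here wrapped in a hypothetical one-off script using Prisma's raw-query API, and the table and column names are assumptions about what Prisma generated for the two lists:

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function main() {
  // Copy every newly inserted Property row into NewTable
  // (PostgreSQL 11+; older versions use EXECUTE PROCEDURE)
  await prisma.$executeRawUnsafe(`
    CREATE OR REPLACE FUNCTION copy_property_to_newtable() RETURNS trigger AS $$
    BEGIN
      INSERT INTO "NewTable" ("propertyId", "name") VALUES (NEW."id", NEW."name");
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
  `);
  await prisma.$executeRawUnsafe(`
    CREATE TRIGGER property_copy
    AFTER INSERT ON "Property"
    FOR EACH ROW EXECUTE FUNCTION copy_property_to_newtable();
  `);
}

main().finally(() => prisma.$disconnect());

An UPDATE trigger would follow the same pattern, with AFTER UPDATE and an UPDATE ... SET statement in the function body.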
Note that, due to how Prisma works, the table with the copy of the data (NewTable in your example) will need to either be:
Defined as another Keystone list so Prisma can create and maintain the table (a sketch of this is shown below), or...
Manually created in a different database schema, so Prisma ignores it. (I believe this isn't possible on SQLite, as it lacks the concept of multiple schemas within a single DB.)
If you try to manually create and manage a table within the default database schema, Prisma will get confused (producing a "Drift detected: Your database schema is not in sync with your migration history" error) and prompt you to reset your DB.
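For the first of those options, NewTable is simply another list in your Keystone schema, e.g. (a minimal sketch with placeholder fields):

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { text } from '@keystone-6/core/fields';

export const NewTable = list({
  access: allowAll, // placeholder; restrict this in a real app
  fields: {
    propertyId: text(),
    name: text(),
  },
});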

How to manually add a user in IBM Cloudant?

I have a Cloudant database with a lot of deleted docs. Since they can't be destroyed, I would like to make a filtered copy of the non-deleted items into a temporary database, destroy the original one, and then copy the temporary database into a fresh database with the same name as before.
The problem is that when I destroy the database, the API keys generated for it are also destroyed...
So the front-end app calling the new database can't access it!
I would like to manually create a user/password, so I can recreate the same user each time I destroy the database.
I don't know how to do that.
Or is there another way to achieve my goal?
To answer your actual question, you can't add "users" to a Cloudant account, only databases. You can, however, make API-keys that span multiple databases, which sounds like it could be what you want:
https://dx13.co.uk/articles/2016/04/11/using-a-cloudant-api-key-with-multiple-cloudant-databases-and-accounts/
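As a rough sketch of that approach (the account name and admin credentials are placeholders; the endpoints are the legacy Cloudant API ones described in that article), you generate one account-level API key and then grant it to each database:

// Node 18+ for the built-in fetch
const account = 'myaccount';
const auth = 'Basic ' + Buffer.from('admin:password').toString('base64');

async function grantKeyToDatabases(databases: string[]): Promise<string> {
  // 1. Generate an account-level API key (the response contains key and password)
  const res = await fetch(`https://${account}.cloudant.com/_api/v2/api_keys`, {
    method: 'POST',
    headers: { Authorization: auth },
  });
  const { key } = await res.json();

  // 2. Grant the key _reader/_writer on each database. Note that PUT
  //    replaces the whole _security document for that database.
  for (const db of databases) {
    await fetch(`https://${account}.cloudant.com/_api/v2/db/${db}/_security`, {
      method: 'PUT',
      headers: { Authorization: auth, 'Content-Type': 'application/json' },
      body: JSON.stringify({ cloudant: { [key]: ['_reader', '_writer'] } }),
    });
  }
  return key; // store the key/password pair in the front-end app's config
}

Because the key itself lives at the account level, destroying and recreating a database only loses the grant, and you can re-apply the grant to the new database with the same PUT.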
But as was noted by bessbd above, if your data model relies on document deletion, you're working against the grain of Cloudant, and sooner or later you'll end up with problems.
And finally -- the doc links appear to work just fine.
Maybe some useful stuff here: https://blog.cloudant.com/2019/11/21/Best-and-Worst-Practices.html
[disclaimer, I wrote that]
Can you please expand a little further on your use case? Why do you want to get rid of the deleted docs? Is there a way to avoid deleting the docs? Also, have you already read https://cloud.ibm.com/docs/services/Cloudant?topic=cloudant-documents#tombstone-documents ?

Recommended way to handle removing/renaming a realm model

I am using realm with the react-native app I am currently working on. The problem I am facing right now is that I need to rename or delete an old model and migrate the data to a new model.
Everything works nicely after I create the new models, but when I look at the new data in the Realm Browser, I can still see data for the old, removed model.
I tried deleteAll with the realm before inserting new data, and it doesn't seem to remove the data with the old model.
E.g. the app used to have a model named Car, but now I want to be more specific with Truck, Sedan, etc. So I create the new models and remove the old Car model, but I can still see Car data after launch.
I am wondering if there is a way to delete the stale data. I tried doing a migration, but since the schema no longer has the old model, realm cannot refer to the stale data to delete it.
Realm.compact() removes the space left behind by deletions. Compacting works if there are no open Realm instances.
This will be added to the API in the version that comes after 1.10.3.
The realm-js folks have exposed a method called deleteModel on Realm to delete a model during a migration. This should solve the problem of stale data lingering after a migration that removes or renames a model.
See https://github.com/realm/realm-js/issues/1268 for more detail
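A minimal sketch of such a migration (realm-js; the schemas here are placeholders, and depending on your version the config key is migration or onMigration):

import Realm from 'realm';

const TruckSchema = { name: 'Truck', properties: { make: 'string' } };
const SedanSchema = { name: 'Sedan', properties: { make: 'string' } };

const realm = new Realm({
  schema: [TruckSchema, SedanSchema], // Car is no longer part of the schema
  schemaVersion: 2,
  migration: (_oldRealm: Realm, newRealm: Realm) => {
    // Copy old Car objects into Truck/Sedan here if needed, then
    // delete the stale Car model and all of its remaining objects
    newRealm.deleteModel('Car');
  },
});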

Do I use Snapshot file, migration file or data annotations in my EF Core to update database?

I'm trying to understand the different types of migration paths we can choose when developing an ASP.NET Core 1.0 application with EF Core. When I created my first Core application I noticed it generated a ApplicationDbContextModelSnapshot class that uses a ModelBuilder to build the model.
Then I read that if I need to add a table to the database, I need to create the new model and run the command line to generate the migration file and update the database. Ok, I get it up to this point.
But when I do that, I notice that the ApplicationDbContextModelSnapshot class gets updated too.
1) Does that mean I cannot modify this ApplicationDbContextModelSnapshot class since it looks like it gets regenerated each time?
2) Should I use Data Annotations to build my model or should I use Fluent API which tells me to build my model in the ApplicationDbContext class? Huh? another file that builds the model?
I'm seeing three different ways of working with the database here: the snapshot class, data annotations, and the fluent API. I'm confused because today I made a mistake in my last migration file, so I deleted the file, dropped the database, and reran the database update.
But by doing that I got errors similar to:
The index 'IX_Transaction_GiftCardId' is dependent on column 'GiftCardId'.
ALTER TABLE ALTER COLUMN GiftCardId failed because one or more objects access this column.
So naturally I was wondering if I had to modify the ApplicationDbContextModelSnapshot class.
What is the path I should be taking when it comes to migrations or database updates because these three paths are confusing me.
I have run into this issue before when I create migrations, make model changes, create new migrations, and try to update the database. The root cause is keys being changed while the dependent relationships are not dropped and re-added, or do not exist.
You have two options:
Easy Method
The easiest way is also the most destructive way and only possible in a dev environment.
Delete all migrations, drop the database, create new migrations and run 'update-database'.
Hard/Safest Method
This is the most time-consuming method. I recommend doing this in a local integration branch first, pushing it to a remote integration branch, and then to production.
Open the migration file, e.g. 20160914173357_MyNewMigration.cs.
Drop all indexes in order
Drop/Add/Edit table schemas
Add all indexes back.
For either method, just be sure to test and test again.
Do not modify ApplicationDbContextModelSnapshot. It is a design-time artifact, and should only be modified in the case of a merge conflict.
To update the model, always use data annotations or the fluent API.
For more information on the EF Migrations workflow, see Code First Migrations. It's for EF6, but most of the information is still relevant.

Can I change a model from embedded to referenced without losing data?

I made a bad decision as I was designing a MongoDB database to embed a model rather than reference it in an associated model. Now I need to make the embedded model a referenced model, but there is already a healthy amount of data in the database (or document?).
I'm using Mongoid, so I reasoned I could just change embedded_in to referenced_in. Before I start, I figured I'd ask people who know better than I do: how can I transition the embedded data already in the database to the document for the associated model?
class Building
  embeds_many :landlords
  ...
end

class Landlord
  embedded_in :building
  ...
end
Short answer - Incrementally.
Create a copy of Landlord, name it Landlord2.
Make it referenced in Building.
Copy all data from Landlord to Landlord2.
Delete Landlord.
Rename Landlord2 to Landlord.
Users should not be able to CRUD Landlord during steps 3-5 (ideally). You could still get away with locking CRUD only during steps 4-5; just make sure you apply any updates that happened during the copying before removing Landlord.
Just changing the model like you have above will not work; the old data will still be in a different structure in the db.
Very similar to the previous answer: one of the things I have done for this migration before is to do it dynamically, while the system is running and being used by the users.
I had the data layer separated from the logic, which let me add some preprocessors and inject code to do the following.
Let's say we start with the old data model, then release new code that does the following:
On every access to the document, check whether the embedded property exists. If it does, create a new entry associated as a reference, save it to the database, and delete the embedded property from the document. Once this had been running for a couple of days, a lot of my data had been migrated, and then I just had to run a similar script for everything that had not been touched. This made the job of migrating the data much easier and simpler, and I did not have to run long-running scripts or take the system offline to perform the conversion.
You may not have that requirement, so pick accordingly.