We're improving an old Java project that heavily uses SharedPreferences. For some new functions, the need to save small pieces of data has arisen again, so I thought about creating a DataStore instance to handle those new needs.
Will that create a conflict with the existing SharedPreferences instances?
This question isn't about migration; it's about whether both tools use the same files and thus whether using DataStore for the first time, without the migration option, will replace the existing SharedPreferences.
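For context, this is roughly what I have in mind (just a sketch assuming the androidx.datastore preferences RxJava2 artifact; the file names and the key are placeholders):

```java
import android.content.Context;
import android.content.SharedPreferences;

import androidx.datastore.preferences.core.MutablePreferences;
import androidx.datastore.preferences.core.Preferences;
import androidx.datastore.preferences.core.PreferencesKeys;
import androidx.datastore.preferences.rxjava2.RxPreferenceDataStoreBuilder;
import androidx.datastore.rxjava2.RxDataStore;

import io.reactivex.Single;

public class Settings {
    // Existing data would stay where it is, in shared_prefs/legacy_prefs.xml.
    private final SharedPreferences legacyPrefs;
    // New data would go into its own file under files/datastore/.
    private final RxDataStore<Preferences> dataStore;

    private static final Preferences.Key<String> TOKEN = PreferencesKeys.stringKey("token");

    public Settings(Context context) {
        legacyPrefs = context.getSharedPreferences("legacy_prefs", Context.MODE_PRIVATE);
        dataStore = new RxPreferenceDataStoreBuilder(context, "new_settings").build();
    }

    public Single<Preferences> saveToken(String token) {
        // The returned Single completes with the updated preferences once the write is done.
        return dataStore.updateDataAsync(prefs -> {
            MutablePreferences mutable = prefs.toMutablePreferences();
            mutable.set(TOKEN, token);
            return Single.just(mutable);
        });
    }
}
```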
Thank you
We are currently doing an architectural change that requires the local PouchDB to be renamed. As some of the databases are quite big and the indices take quite long to build, it would be great to instead just rename or copy the data as well as the indices from an existing local PouchDB.
PouchDB creates one main IndexedDB (_pouch_<db name>) that holds all data and then it creates further IndexedDBs (_pouch_<db name>-mrview-<some hash>) that hold the created indices. The names of these DBs can be found in two local docs: _local/_pouch_dependentDbs and _local/mrviews.
My ideas so far were:
Use this IndexedDB backup script, which does a JSON backup of the whole IndexedDB and also allows writing a JSON string into an IndexedDB. This script did not work, however, maybe due to the complex keyPath values. I am not an expert with IndexedDBs, so I can't say for sure.
Sync the old PouchDB to a new PouchDB with the new name and destroy the old one afterwards. This works fine but leaves the problem that the indices have to be re-created (which I want to avoid).
Fetch the _local/_pouch_dependentDbs and _local/mrviews objects and use this information to also sync the DBs used for the map-reduce views (_pouch_<dbname>-mrview-<some hash>) to new DBs with the new name, and then also update the _local/... docs and write them to the new database (see the sketch after this list). The process works, but the indices still have to be built afterwards. This is probably because the contents of the by-sequence and local-store stores are not synchronized, but I could also not find more information on the contents of these stores.
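For reference, this is roughly what ideas 2 and 3 look like in code (a sketch against the PouchDB promise API; I'm assuming the view DB names sit under a dependentDbs map in that local doc, and error handling is omitted):

```javascript
const PouchDB = require('pouchdb');

// Copy a database (and the view DBs it depends on) to a new name, then remove the old one.
async function renameDb(oldName, newName) {
  const oldDb = new PouchDB(oldName);
  const newDb = new PouchDB(newName);

  // Idea 2: replicate all documents; this does not carry over the -mrview index DBs.
  await oldDb.replicate.to(newDb);

  // Idea 3: also replicate the map/reduce view DBs listed in the local doc
  // (assuming they are stored under a `dependentDbs` map in that doc).
  try {
    const deps = await oldDb.get('_local/_pouch_dependentDbs');
    for (const depName of Object.keys(deps.dependentDbs || {})) {
      await new PouchDB(depName).replicate.to(new PouchDB(depName.replace(oldName, newName)));
    }
  } catch (err) {
    // no dependent view DBs
  }

  // Destroy the old DB only after verifying the copy.
  await oldDb.destroy();
}
```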
I was wondering if anyone here knows a good solution that makes this whole process easier (as it's just about renaming a DB).
I'm trying to understand the different types of migration paths we can choose when developing an ASP.NET Core 1.0 application with EF Core. When I created my first Core application I noticed it generated an ApplicationDbContextModelSnapshot class that uses a ModelBuilder to build the model.
Then I read that if I need to add a table to the database, I need to create the new model and run the command line to generate the migration file and update the database. Ok, I get it up to this point.
But when I do that, I notice that the ApplicationDbContextModelSnapshot class gets updated too.
1) Does that mean I cannot modify this ApplicationDbContextModelSnapshot class since it looks like it gets regenerated each time?
2) Should I use Data Annotations to build my model, or should I use the Fluent API, which tells me to build my model in the ApplicationDbContext class? Huh? Another file that builds the model?
I'm seeing three different ways of working with the database here: the snapshot class, data annotations, and the fluent API. I'm confused because today I made a mistake in my last migration file, so I deleted the file, dropped the database, and reran the database update.
But by doing that I got errors similar to:
The index 'IX_Transaction_GiftCardId' is dependent on column 'GiftCardId'.
ALTER TABLE ALTER COLUMN GiftCardId failed because one or more objects access this column.
So naturally I was wondering if I had to modify the ApplicationDbContextModelSnapshot class.
What is the path I should be taking when it comes to migrations or database updates? These three paths are confusing me.
I have run into this issue before when I create migrations, make model changes, create new migrations, and try to update the database. The root cause is that keys get changed while the dependent relationships and indexes are not dropped and added back, or do not exist.
You have two options
Easy Method
The easiest way is also the most destructive way and only possible in a dev environment.
Delete all migrations, drop the database, create new migrations and run 'update-database'.
Hard/Safest Method
This is the most time-consuming method. I recommend doing this in a local integration branch first, then pushing it to a remote integration branch, and then to production.
Open the migration file, e.g. 20160914173357_MyNewMigration.cs.
Drop all indexes in order
Drop/Add/Edit table schemas
Add all indexes back (a sketch of such an edited migration follows).
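Illustrative only: reusing the index and column names from the error message above, the edited Up method might look roughly like this (the actual column change will differ in your case):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class MyNewMigration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // 1. Drop the indexes that depend on the column you are changing.
        migrationBuilder.DropIndex(
            name: "IX_Transaction_GiftCardId",
            table: "Transaction");

        // 2. Drop/add/edit the table schema, e.g. alter the column.
        migrationBuilder.AlterColumn<int>(
            name: "GiftCardId",
            table: "Transaction",
            nullable: false);

        // 3. Re-create the indexes.
        migrationBuilder.CreateIndex(
            name: "IX_Transaction_GiftCardId",
            table: "Transaction",
            column: "GiftCardId");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Mirror the steps above in reverse so rollbacks keep working.
    }
}
```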
For either method, just be sure to test and test again.
Do not modify ApplicationDbContextModelSnapshot. It is a design-time artifact, and should only be modified in the case of a merge conflict.
To update the model, always use data annotations or the fluent API.
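For illustration, here is what the two options look like side by side (a minimal sketch; the GiftCard entity and the max-length rule are invented examples):

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class GiftCard
{
    public int Id { get; set; }

    [MaxLength(50)]                         // data annotation on the entity
    public string Code { get; set; }
}

public class ApplicationDbContext : DbContext
{
    public DbSet<GiftCard> GiftCards { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        base.OnModelCreating(builder);

        builder.Entity<GiftCard>()          // the same rule expressed via the fluent API
               .Property(g => g.Code)
               .HasMaxLength(50);
    }
}
```

In practice you pick one of the two for a given rule; either way, the next Add-Migration regenerates the snapshot for you.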
For more information on the EF Migrations workflow, see Code First Migrations. It's for EF6, but most of the information is still relevant.
I want to have a single FakeApplication for all my tests.
My final goal is to set up a database and use it in all tests. They should access a single database and share data in it. I cannot use H2, because I use some MySQL features (full-text search, for example). But if there is no started application, I can't call "DB.withTransaction", because no application has been started yet. And it should start only once, because it drops all tables and creates new ones.
How can I do it?
I am using Scala and JUnit. I solved my problem in the following way: I created a singleton for my fake application, which is retrieved as an implicit val. So all the work of creating and cleaning the database is done on the first fetch.
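Roughly like this (a minimal sketch for Play 2.x; the object name, the test database config, and the schema-setup step are placeholders):

```scala
import play.api.Play
import play.api.db.DB
import play.api.test.FakeApplication

// Singleton holding the one FakeApplication shared by every test.
object SharedTestApp {
  // Started lazily on first access; all DB setup/cleanup happens exactly once here.
  implicit lazy val app: FakeApplication = {
    val application = FakeApplication(additionalConfiguration = Map(
      "db.default.driver" -> "com.mysql.jdbc.Driver",          // assumed test database
      "db.default.url"    -> "jdbc:mysql://localhost/myapp_test"
    ))
    Play.start(application)
    DB.withTransaction { implicit conn =>
      // drop and re-create the tables needed by the tests
    }(application)
    application
  }
}

// In a test: `import SharedTestApp.app` and the implicit application is available.
```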
In Play 1.0, when we changed a variable type or, for example, changed from @OneToMany to @ManyToMany in a model, Play handled the change automatically, but with Play 2.0 the evolution script drops the database. Is there any way to make Play 2.0 apply the change without dropping the DB?
Yes, there is a way. You need to disable the automatic re-creation of the 1.sql file and start writing your own evolutions containing ALTERs instead of CREATEs, numbering them 2.sql, 3.sql, etc.
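A hand-written evolution is just an SQL script with Ups and Downs sections; for example, 2.sql could look like this (table and column names are made up):

```sql
# --- !Ups

ALTER TABLE post ADD COLUMN author_id BIGINT;

# --- !Downs

ALTER TABLE post DROP COLUMN author_id;
```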
In practice that means that if you're working with a single database, you can also just... manage the database's tables and columns using your favorite DB GUI. The evolutions are useful only when you can't use a GUI (the host doesn't allow external connections and doesn't offer any GUI) or when you are planning to run many instances of the app on separate databases. Otherwise, writing the statements by hand will probably be more complicated than using a GUI.
Tip: sometimes, if I'm not sure whether I added all required relations and constraints to my manual evolutions, I delete them (in a git-controlled folder!) and run the app with the Ebean plugin enabled, saving the proposed 1.sql but NOT applying the changes. Later I revert my evolutions using git, compare them with the saved auto-generated file, and convert the CREATE statements to ALTERs. There's no better option for managing changes without using third-party software.
I wonder what the best way is to implement a global data version for a database. I want any modification made to the database to increase the version in a "global version table" by one. I need this so that when I talk to application users, I know which version of the data we are talking about.
Should I store this information in table?
Should I use triggers for this?
This version number can be stored in a configuration table or in a dedicated table (with one field).
This parameter should not be automatically updated because you are the owner of the schema and you are responsible for knowing when you need to update it. Basically, you need to update this number every time you deploy a new application package (regardless of the reason for the package: code or database change).
Each and every deployment package should take care of updating the schema version number and the database schema (if necessary).
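As an illustration, the dedicated table and the deployment-time update could be as simple as this (names and MySQL-flavoured syntax are just an example):

```sql
-- One-row table holding the current schema/data version.
CREATE TABLE schema_version (
    version    INT      NOT NULL,
    applied_at DATETIME NOT NULL
);
INSERT INTO schema_version (version, applied_at) VALUES (1, NOW());

-- Every deployment package ends by bumping the number, e.g. for release 2:
UPDATE schema_version SET version = 2, applied_at = NOW();
```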
I tend to have a globals or settings table with various pseudo-static values stored.
- Just one row
- Many fields
This can include version numbers.
In terms of maintaining the version number you refer to, would this change when the data content changes? If so, a trigger would be useful (see the sketch below). If you mean for the version number to relate to table structures, etc., I'd be more inclined to manage this by hand. (Some changes may be irrelevant as far as the applications are concerned, or there may be several changes wrapped up in a single version upgrade.)
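Such a trigger could look roughly like this (MySQL syntax; the globals table and the audited orders table are made-up names, and you would need one trigger per table and operation you want to track):

```sql
-- One-row globals table, as described above.
CREATE TABLE globals (data_version INT NOT NULL);
INSERT INTO globals (data_version) VALUES (1);

-- Bump the data version whenever rows are inserted into the audited table.
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
    UPDATE globals SET data_version = data_version + 1;
```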
The best way to implement a "global data version for database" is via your source control system and build process. When all the changes have been submitted and have passed testing, your build process will increment your schema version number.
The version number could be implemented in a stored procedure. The result of the call to the stored proc could be added to a screen in your app so you can avoid users directly accessing a table.
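For example (SQL Server syntax; the procedure name is made up), the proc can simply return the number from wherever it is stored, such as a one-row version table like the one shown earlier:

```sql
CREATE PROCEDURE dbo.GetDataVersion
AS
BEGIN
    -- Read the current version from the one-row version table
    -- (or return a constant that each deployment script bumps).
    SELECT version FROM dbo.schema_version;
END
```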
To complete the previous answers, I came across the concept of "Migrations" (from the Ruby on Rails world apparently) today, and there was already a question on SO that covered existing frameworks in .Net.
The concept is still to store DB versioning information as data in a table somewhere, but for that versioning information to be managed automatically by a framework, rather than manually by your custom deployment processes:
previous SO question with overview of options: https://stackoverflow.com/questions/313/net-migrations-engine