I'm using Fluent NHibernate code-first for a C# desktop app. Is there a way to update the database schema without removing the existing data?
In my case I simply need to add a new column with no constraints: it allows nulls and is neither a foreign nor a primary key. But I need to keep all the existing data in the database.
The database is PostgreSQL 9.2, if it matters.
fluentConfiguration.ExposeConfiguration(config => new SchemaUpdate(config).Execute(false, true)) updates the database schema automatically and doesn't touch existing data. It can only add tables or columns; it never drops anything.
Renames and deletes can be done with FluentMigrator, but then you have to write the data migrations by hand if you need to keep your data.
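For reference, here is a minimal sketch of where the SchemaUpdate call sits in a Fluent NHibernate bootstrap (the connection string and the MyEntityMap mapping class are placeholders, not taken from the question):
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using NHibernate.Tool.hbm2ddl;

ISessionFactory sessionFactory = Fluently.Configure()
    .Database(PostgreSQLConfiguration.Standard.ConnectionString(
        "Server=localhost;Port=5432;Database=mydb;User Id=me;Password=secret"))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntityMap>())
    // SchemaUpdate compares the mappings against the live schema and issues only
    // additive DDL (new tables and columns); existing rows are left untouched.
    .ExposeConfiguration(config => new SchemaUpdate(config).Execute(false, true))
    .BuildSessionFactory();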
My application installs an .sdf file, and I need to add columns to that database. How do I update the database without losing my data?
You can execute standard data definition language (DDL) commands in code at application startup.
For example:
myCommand.CommandText = "ALTER TABLE MyTable ADD NewColumn1 INT NULL";
myCommand.ExecuteNonQuery();
We have done this for years on devices ranging from Pocket PCs to tablets.
We used to check whether a table or column existed before modifying the DB structure, but we found it much easier to record the current schema version in a table, check that version at startup, and then apply only the changes needed to bring the database from its stored version up to the version the application expects.
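A rough sketch of that version-table pattern, assuming an open ADO.NET connection; the SchemaInfo table, Version column, and MyTable names are illustrative, not from the original answer:
using System;
using System.Data.Common;

static class SchemaUpgrader
{
    public static void UpgradeSchema(DbConnection connection)
    {
        // Read the schema version currently stored in the database.
        int dbVersion;
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "SELECT Version FROM SchemaInfo";
            dbVersion = Convert.ToInt32(cmd.ExecuteScalar());
        }

        // Apply only the steps between the stored version and the application's version.
        if (dbVersion < 2)
        {
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "ALTER TABLE MyTable ADD NewColumn1 INT NULL";
                cmd.ExecuteNonQuery();
            }
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText = "UPDATE SchemaInfo SET Version = 2";
                cmd.ExecuteNonQuery();
            }
        }
    }
}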
You can extend or update a database schema safely by giving new columns default values or by allowing the new columns to accept NULLs.
I am using a database "abc" and an .edmx file to work with its tables. Now I want to add a new database, "efg". Is it better to add a new .edmx file for the second database "efg", or to use a single .edmx for both databases? How can I achieve this?
No, you can only have a single database in each edmx file. You will have to create a new edmx file for the second database.
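Once both models exist, you just use the two generated context classes side by side. A hypothetical sketch (AbcEntities, EfgEntities, and the Customers/Orders entity sets are illustrative names the designer would generate, not anything from the question):
using System.Linq;

// One context class per .edmx model, each with its own connection string.
using (var abc = new AbcEntities())   // model mapped to database "abc"
using (var efg = new EfgEntities())   // model mapped to database "efg"
{
    var customers = abc.Customers.ToList();   // query database "abc"
    var orders = efg.Orders.ToList();         // query database "efg"
}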
I'm using Fluent NHibernate (and I'm a newbie). I have mapped a read-only table that already exists in the database (it's actually a view). In addition, I have mapped new classes whose tables I want to create using SchemaExport.Create().
In my fluent mapping I have specified ReadOnly() to mark the view as immutable. However, when I execute SchemaExport.Create() it still tries to create the table, so I get the error "There is already an object named 'vw_Existing'".
Is there a way to prevent NHibernate from trying to create that specific table?
I suppose I could export and modify the SQL (SetOutputFile), but it would be nice to use SchemaExport.Create() directly.
Thanks.
You're looking for
SchemaAction.None();
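A minimal sketch of the view's mapping with that call in place (the ExistingView entity and its properties are placeholders for whatever vw_Existing actually exposes):
using FluentNHibernate.Mapping;

public class ExistingView
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class ExistingViewMap : ClassMap<ExistingView>
{
    public ExistingViewMap()
    {
        Table("vw_Existing");
        ReadOnly();
        // Excludes this mapping from schema generation, so SchemaExport.Create()
        // no longer tries to CREATE TABLE over the existing view.
        SchemaAction.None();

        Id(x => x.Id);
        Map(x => x.Name);
    }
}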
We're developing a Doctrine-backed website using YAML to define our schema. Our schema changes regularly (including FK relations), so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep the existing data and store it back in the re-created database, so I copy the data into a temporary database before the main database is dropped.
How do I get the data from the "old-schema DB copy" into the "new-schema DB"? (The new schema only contains NEW columns; NO COLUMNS ARE REMOVED.)
NOTE:
This obviously doesn't work, because the column counts don't match:
SELECT * FROM copy.Table INTO newscheme.Table
This obviously does work, but it takes too much time to write out for every table:
SELECT old.col, old.col2, old.col3,'somenewdefaultvalue' FROM copy.Table as old INTO newscheme.Table
Have you looked into Migrations? They allow you to alter your database schema programmatically, without losing data (unless you remove columns, of course).
How about writing a script (using the Doctrine classes, for example) that parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and shouldn't require that much work. The benefit of generating manual migration scripts is that you can easily store them in version control and replay the version steps later on. If that's not something you need, you can just gather up the changes in code and apply them directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance gets, e.g. column renames, null to not null, and so on.
I would like to know the best approach for migrating existing DB data to a new DB with an entirely different structure. I want to copy the data from my old DB and insert it into the new DB, whose table and column names are entirely different. I am using SQL Server 2008.
You should treat this as an ETL problem, not a migration, since the two schemas are entirely different. The proper tool for this is SSIS, which lets you create data flows that map columns from one table to another, add derived columns, perform splits and merges, and so on. If possible, create source queries that return results close to the schema of the target database, so that you need fewer transformations.
Here you will have to migrate most of it manually by running scripts; as far as I know it will not synchronize automatically. But using SSMS you can map the tables of the two different databases. Hope that helps.