Is adding a migration in the middle of previous migrations safe? - ruby-on-rails-3

I have a minor issue which forces me to introduce a new migration in between two previous migrations.
The short version of my question: is it safe to introduce a new migration between two previous ones?
What I did
I need a table which will be filled from a file.
I added the table and then imported data into it using two migrations:
A migration which creates the table with a named ID column, using self.primary_key = some_id
A migration to import text data into the table
The issue is that I forgot to add :id => false to the first migration. This caused an id column to be created but never set correctly. Since the primary key is some_id, this has not caused a problem until now.
Rails 3.2.4
Now I have upgraded to Rails 3.2.4. Due to a change in that release, it looks like I need to set a unique id before saving, which causes migration 2 above to fail.
The easiest fix is to remove the id column between the two migrations above, because my test suite needs to build the database from scratch from time to time. To make the import work, the fix has to run before the second migration.
Question
Now the question.
Since the migrations above are already deployed, the new migration will run after all of the other migrations in those environments, rather than in timestamp order.
In my case, it looks OK to create such a migration (with a timestamp between the two above).
Is it okay to add a migration this way, in this case?
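For reference, the only database-level change the intermediate migration needs to make is dropping the stray column before the import runs; a minimal sketch in raw SQL, where the table name is a placeholder:
-- drop the unused id column so the data import no longer has to set it
ALTER TABLE my_import_table DROP COLUMN id;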

Related

django main.app_table__old Error deleting item

I am facing this problem and have tried many solutions, but none of them works for me or fits my case.
First, I use Django==2.0 (I can't change the version due to other problems).
I have a model named table and my app is named app. During development it happened that I can't delete any element in that table (the others work fine), and I get this error:
OperationalError at /fieldsdetails/25/delete/
no such table: main.app_table__old
I tried deleting the whole migration history as well as db.sqlite3 and running:
python manage.py makemigrations
python manage.py migrate
Then I tried to delete the table like this:
python manage.py dbshell
SELECT * FROM sqlite_master WHERE type='table';
There I found a table named app_table__old and deleted it using this:
DROP TABLE app_table__old;
.exit
But nothing works.
Is there any solution? I don't want to upgrade the Django version or lose data.
I just discovered a tricky way to solve this without losing data, changing anything, or upgrading Django.
The problem happens when SQLite loses some information in the database: foreign keys in related tables are left pointing at the renamed app_table__old instead of the real table.
To solve the problem, follow these steps:
1- Go through your models and search for other tables that have a ForeignKey or any other relation to table, for example:
class table_Perimeter(models.Model):
    Perimeter = models.ForeignKey(table, on_delete=models.CASCADE)
Once you have identified all the tables that have a relation to table, you need to fix the problem in the database itself, db.sqlite3.
Install SQLiteStudio from https://sqlitestudio.pl/
Then open table_Perimeter and double-click it to re-configure the ForeignKey; the broken reference is marked with a red icon.
Open that entry to configure it again: select the foreign table and the foreign id, click apply, and then commit the changes.
After that everything will work perfectly; just run your server :)
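If you want to verify which foreign keys still point at the __old table, you can check from the dbshell before or after the GUI fix; a small sketch, assuming Django stored the related model under the table name app_table_perimeter:
python manage.py dbshell
-- list the foreign keys declared on the related table;
-- a broken entry will reference app_table__old instead of app_table
PRAGMA foreign_key_list(app_table_perimeter);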

Controlling the updates in my Database

I came here today to see if someone could give me a suggestion to improve the way I update my database.
Here is the problem: I have one file in which I store new scripts every time I need to change something. For instance, let's say I need to add a new column to a table. I would add the following lines to my file called script1.sql:
alter table CLIENTS
add AGE integer
After doing that, I am going to send it to a client with an updated application, and ask him to run script1.sql on his database. That works just fine for me.
The problem shows up when this file starts to get bigger, and the client needs to receive the new updates.
The client would run the script1.sql file again, but now with more updates. He will get errors indicating that a column named AGE already exists in the database.
The biggest problem is when I change the version of my application. If I update my application from Application1 to Application2, I also change the script from script1.sql to script2.sql.
Now, my client will need to run both to get to the correct version without conflicts. He will also get lots of errors, since almost everything from script1.sql was already processed in his database.
What I want is to eliminate the chance to face conflicts. This process has been working for me, but always causing some sort of trouble. Therefore, if anyone has any idea about how I could make it work better, please help me out.
Most SQL dialects provide something called IF EXISTS (and also IF NOT EXISTS), so you can write a statement such as:
CREATE TABLE IF NOT EXISTS users ...
This will only create the users table if it does not already exist.
There is usually a variant of this that can be added to all of your statements (including updates such as renaming columns, etc.).
Then if the table has already been added (or column updated etc) then it won't try to run that SQL command again - which means you can run the same file over and over as many times as you like.
(Note: this is called idempotency)
You will need to look up the details of how to write these EXISTS checks for SQL Server.
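As an example, a minimal T-SQL sketch of the AGE change from above, guarded so the script can be re-run safely (this assumes SQL Server and its sys.columns catalog view):
-- only add the column if it is not already there,
-- so running the same script twice causes no error
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID('CLIENTS') AND name = 'AGE'
)
BEGIN
    ALTER TABLE CLIENTS ADD AGE integer;
END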

Liferay ServiceBuilder doesn't alter tables

Short story
When I modify the column widths in the tables.sql (VARCHAR(4000)) generated by the service builder, redeploying the portlet does not cause Liferay to alter the db tables. How can I make sure that the column widths get expanded?
Long story
I have to make some changes to a Liferay 6.1.20 EE GA2 project developed by another contractor. The project uses maven as a build tool.
After adding some columns to service.xml and running mvn liferay:build-service, I noticed that the portlet-model-hints.xml got overridden (see https://issues.liferay.com/browse/MAVEN-37) and reset to the default column widths.
There's a lot of data in the tables (the system is running in production), so I cannot simply drop and recreate the tables.
So I manually modified the column widths in the generated tables.sql and redeployed the portlet. The new columns are now present in the db tables, but the column widths were not altered.
Does Liferay alter column width or do I have to fire some sql statements against the database manually?
(We are working with an Oracle 10g database.)
If you want to change the column widths, you need to declare them in portlet-model-hints.xml.
For instance, to increase a field to 255 characters you would add a max-length hint like the sketch below. (It's important to run the build service after that change.)
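A sketch of the hint; the model and field names here are placeholders for your own entity:
<model-hints>
    <model name="com.example.model.MyEntity">
        <field name="myField" type="String">
            <hint name="max-length">255</hint>
        </field>
    </model>
</model-hints>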
ServiceBuilder doesn't do ALTER TABLE by itself - you'll have to write an UpgradeProcess for this yourself. Check this blog post or the underlying documentation.
In short: the update that can always be done automatically is of the type "DROP TABLE - CREATE TABLE", but, as you say, this is typically not desirable. Anything fancier needs to be done manually, and that's exactly what this mechanism is for.
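If you do end up firing the statement manually (or from within an UpgradeProcess), widening a column on Oracle is a single in-place ALTER; the table and column names below are placeholders:
-- widen the existing column; the data already stored in it is preserved
ALTER TABLE MyPortlet_MyEntity MODIFY (myColumn VARCHAR2(4000));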

dotConnect Oracle - migrations - Initial migration in two identical branches says differing model

I have just started testing migrations in several different team scenarios to make sure migrations will work as expected with git / multiple users / multiple branches, but I have run into an issue right off the bat. On branch 1 I added my Initial migration (on an existing project with 165 entities), deleted the code in Up/Down (so it just uses the model snapshot), then ran update-database (which creates the __MigrationHistory table just fine). I merged this to branch 2 (EXACTLY the same model, an exact replica of branch 1) and ran update-database with my newly merged migrations, and it says Unable to update database to match the current model because there are pending changes. There aren't pending changes; both models are exactly the same. Is there something I am missing here? I thought I should only run into this issue once migrations are out of whack (merges, model changes from different users).
So why must I do add-migration Initial on both branch 1 and branch 2? They are merged and exactly the same.
Notes: EF 5 (technically 4.4) with .NET 4.0. DevArt dotConnect for Oracle v 8.1.55.0
EDIT: I have read this post but I am not on different platforms, I'm on the same computer - just different branches.
I figured it out. In my initial testing of moving off of EDMX to dotConnect code-first + migrations, I had added the schema name to the _Mapping files for my fluent mappings. I had to remove this schema name. Example:
Instead of:
this.ToTable("ADDRESS", "SCHEMA");
I had to use:
this.ToTable("ADDRESS");
Also I use these options in OnModelCreating:
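// workarounds that make the provider ignore schema names when it compares the model against the database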
var config = Devart.Data.Oracle.Entity.Configuration.OracleEntityProviderConfig.Instance;
config.Workarounds.IgnoreDboSchemaName = true;
config.Workarounds.IgnoreSchemaName = true;

Doctrine schema changes while keeping data?

We're developing a Doctrine backed website using YAML to define our schema. Our schema changes regularly (including fk relations) so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-scheme DB copy" to the "new-scheme DB"? (the new scheme only contains NEW columns, NO COLUMNS ARE REMOVED)
NOTE:
This obviously doesn't work, because the column count doesn't match:
INSERT INTO newscheme.Table SELECT * FROM copy.Table;
This obviously does work, but it is too time-consuming to write out for every table:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old;
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
How about writing a script (using the Doctrine classes, for example) which parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and would not require that much work. The benefit of generating manual migration scripts is that you can easily store them in the version control system and replay version steps later on. If that's not something you need, you can just gather up changes in the code and do it directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, e.g. column name changes, NULL to NOT NULL, etc.
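Since the new schema only ever adds columns, the generated statement for each table can simply list the shared columns and let the new ones fall back to their defaults (assuming the new columns have defaults); a MySQL-flavored sketch with placeholder names:
-- copy only the columns both schemas have in common;
-- columns that exist only in the new schema receive their DEFAULT values
INSERT INTO newscheme.Table (col, col2, col3)
SELECT col, col2, col3
FROM copy.Table;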