We're using Golang Migrate, which I generally love, but lately it seems like I can never actually run the migrations, so I end up applying them manually and "fixing" the schema_migrations table. There is always some error. Here's the example that's happening now, as I understand it.
Dev tries to roll forward and gets: no change. So dev tries to roll back N migrations and gets:
error: migration failed in line 0: ALTER TABLE `table_name` DROP `column`;
(details: Error 1091: Can't DROP 'column'; check that column/key exists)
I believe the issue is that there are two branches, one we'll call "slow" and one we'll call "quick". "Slow" gets started first but, as its name implies, it just takes a long time to get to production. Meanwhile "quick" has no such problem and gets merged weeks before "slow". Because the "quick" branch was started after "slow", its migration carries a later version number, so when the dev on "slow" finally goes to roll forward, the database is already recorded as being past that point and they get no change. To compensate, the dev attempts to roll back a few migrations so the new ones can be applied, but bumps into SQL errors, because the column being dropped was never created: that specific migration never rolled forward.
Is the "fix" for this to write the migrations more defensively (e.g. using IF EXISTS or an equivalent guard; rough sketch at the bottom of this post)? Are other people not bumping into these issues? TIA.
golang-migrate: v4.15.2
MySQL: 5.7.34-log
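For reference, this is roughly the kind of defensive down migration I'm imagining (untested sketch; as far as I know MySQL 5.7 has no ALTER TABLE ... DROP COLUMN IF EXISTS, so the guard has to go through information_schema, and a multi-statement file like this presumably needs multi-statements enabled on the connection):
-- <version>_drop_column.down.sql (hypothetical)
SET @col_exists := (
    SELECT COUNT(*)
    FROM information_schema.columns
    WHERE table_schema = DATABASE()
      AND table_name = 'table_name'
      AND column_name = 'column'
);
SET @ddl := IF(@col_exists > 0,
    'ALTER TABLE `table_name` DROP COLUMN `column`',
    'SELECT 1');
PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;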
Related
I am using Django as my web framework with Django REST API. Time and time again, when I try to migrate the table on production, I get a litany of errors. I believe my migrations on development are out of sync with production, and as a result, chaos. Thus each time I attempt major migrations on production I end up needing to use the nuclear option - delete all migrations, and if that fails, nuke the database. (Are migrations even supposed to be committed?)
This time however, I have too much data to lose. I would like to preserve the data. I would like to construct a new database with the new schema, and then manually transfer the old database to the new one. I am not exactly sure how to go about this. Does anyone have any suggestions? Additionally, how can I prevent this from occurring in the future?
From what you're saying, it sounds like your migration files are out of whack and you're constantly running into issues relating to database migrations. I would recommend you just remove all of your migration files and start with a new initial migration after you make all the necessary model changes and restructure the schema.
When it comes time to run the migration on your production server, it probably makes the most sense to use --fake-initial and make the database changes manually outside of Django so the tables match your schema.
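For example, assuming a stock Django project layout, that boils down to roughly these two commands:
python manage.py makemigrations            # on dev, after removing the old migration files
python manage.py migrate --fake-initial    # on production, where the tables already exist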
I might get a lot of backlash about this, and obviously use your best judgement, but in my experience it was much easier to approach the problem this way than to waste time writing custom migration files that try to fix all of your problems.
Addressing your other questions
Time and time again, when I try to migrate the table on production, I get a litany of errors.
I highly recommend you take the time to get acquainted with how to make migrations by reading the official Django docs; you will save yourself a LOT of headache.
... each time I attempt major migrations on production I end up needing to use the nuclear option - delete all migrations
You shouldn't be deleting your migration files every time there's an issue.
Are migrations even supposed to be committed?
You should definitely be committing your migrations. If you're working on a team, your teammates will use the migration files you created to make the necessary changes on their local DBs, as well as on any dev/prod server you may have.
I'm working on a migration using Sequelize. If the migration's up method throws an error, the migration is not logged in the database as having completed. So if I run db:migrate:undo, it instead runs down on the previous (and working) migration. As a result, I have a half-executed migration whose schema changes remain in the database, because the corresponding down method is never run by Sequelize. So I need to either somehow force a single down method to run (which I'm not seeing an option for), or manually clean up my database every time I run a failing migration, which can be a real pain for complicated migrations where I'm constantly going through trial and error. Is there an easier way to do this?
sequelize db:migrate:undo --name 20200409091823-example_table.js
Use this command to undo any particular migration
Manually insert the migration into your migrations table so that Sequelize will think it has completed.
To verify, check the status of your migrations before and after you edit the table:
db:migrate:status
Everything listed as "up" is something that can go "down" and vice versa.
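For example, with a default sequelize-cli setup the bookkeeping table is called SequelizeMeta (the name is configurable), so the manual insert would look roughly like:
INSERT INTO SequelizeMeta (name) VALUES ('20200409091823-example_table.js');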
There is no way to do it as of now... there is an open issue for this in the Sequelize CLI repo.
I tried something and it worked for me:
rename your migration file and make sure it comes first alphabetically (make it the first one in migration files)
comment out the code in the second migration file
run sequelize db:migrate
This will run only the first migration file.
Don't forget to uncomment the migration file you commented out earlier.
We have a small development team of 5 developers working on a large enterprise level web based asp.net/c# system.
We do a lot of database updates which include stored procedure creations and alters as well as new table creation, column creation, record inserts, record updates and so on and so forth.
Today all of the developers place all change scripts in one large SQL change script file that gets run on our Test and Production environments. So this single file contains stored proc alters, record inserts, updates, etc. The file can end up being quite lengthy, as we may only do a test or production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while there is a script error that may occur at any given location in this large "batch change script". Perhaps an insert fails, or an alter fails for a proc, for instance.
When this occurs, it is very difficult to tell what changes succeeded and what failed on the database.
Sometimes, even if one alter fails, the rest of the script continues to execute; other times it stops and nothing further gets run.
So today I end up manually checking procs and records to see what actually worked and what did not, and this is a bit painstaking.
I was hoping I could roll up this entire change script into one big transaction so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in SQL Server.
So then I tried to back up the databases before I ran the scripts, so that if an error occurred I could simply restore the DB, fix the problem and then re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is also not totally ideal.
So my question is, what is the safest way to run batch scripts on a production database?
Is there some way to wrap the entire script in a transaction that I can roll back, that I am not seeing?
Would it possibly be better for us to track and run separate script files, so that if one file fails we can just move it off to a "failed" directory to be looked at and continue running all the other files?
Looking for advice and expertise.
thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap for errors and report the name of the script along with the location of the error using print statements. Then, if an error occurs when applying to production, you at least have the name of the script and the location of the error within it.
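For example, something along these lines at the top level of each script (a rough sketch; the script name is a placeholder):
BEGIN TRY
    -- the actual ALTER / INSERT / UPDATE statements for this script go here
    PRINT '010_add_customer_column.sql: succeeded';
END TRY
BEGIN CATCH
    PRINT '010_add_customer_column.sql: failed at line '
        + CAST(ERROR_LINE() AS varchar(10)) + ': ' + ERROR_MESSAGE();
END CATCH;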
If your QC database is identical or very close, productions errors should be very rare.
Looking for some suggestions on my data/schema migration. Here is what I plan to do.
Using SQL Server 2008.
1. Back up the current databases.
2. Restore each as "_old" (to be used for the data transfer later; rough sketch of steps 1-2 after this list).
3. Run my scripting changes against the target DBs.
4. Run my data scripts, transferring data from the "_old" DBs to the new databases.
5. Verify everything is working (websites, applications, etc.).
6. Delete the "_old" databases.
7. Run backups on the new "changed" databases.
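For steps 1 and 2, I'm planning something along these lines (database, logical file, and path names are just placeholders):
BACKUP DATABASE MyAppDb TO DISK = 'D:\Backups\MyAppDb_pre_migration.bak';

RESTORE DATABASE MyAppDb_old
    FROM DISK = 'D:\Backups\MyAppDb_pre_migration.bak'
    WITH MOVE 'MyAppDb' TO 'D:\Data\MyAppDb_old.mdf',
         MOVE 'MyAppDb_log' TO 'D:\Data\MyAppDb_old_log.ldf';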
This is my first migration and I would like some guidance on whether I am missing anything or whether there is a better way to do this.
Thanks for the help.
You need to be very careful with step 4, and make sure you do it through transactions. Keep in mind each point where it could fail and plan for that.
And regarding step 6: do not delete your _old databases. Keep them in a safe place for future use if required.
I practised the migration I did on a development stack a number of times so that I could be sure how long it would take and work out any problems with the scripts.
Compare how long you have available to do the migration with how long it actually takes. Is there an adequate margin of error?
It would be a good idea to get some users or other staff to verify that the new application is 'working'. You are not the best person to test your own work.
I would not delete the _old database just to be sure. I have found issues with the migration months afterwards that required the old data to resolve.
Automate as much as possible by using master scripts that call other scripts.
Assume the worst case: your scripts will fail partway through the migration. Build logging and progress points into your scripts so that you can restart mid-process.
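For example, a small progress table lets a re-run skip the steps that already finished (a sketch; all of the names are made up):
CREATE TABLE dbo.MigrationProgress (
    StepName   varchar(100) NOT NULL PRIMARY KEY,
    FinishedAt datetime     NOT NULL DEFAULT GETDATE()
);

IF NOT EXISTS (SELECT 1 FROM dbo.MigrationProgress WHERE StepName = 'copy_customers')
BEGIN
    -- do the actual work for this step here
    INSERT INTO dbo.MigrationProgress (StepName) VALUES ('copy_customers');
END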
Take some performance measurements of the old database so you can show how the new database is, hopefully, improved.
I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite db since then; in the next release I will need to update the DB structure but retain the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's database file (it lives in the Documents folder, not in the bundle) but rather alter its structure in place using SQL queries.
This would involve tracking the changes made to the database since the previous release, scripting all of those changes as SQL queries, and running them to bring the DB to the latest revision. I will also need to keep a field in the database to track the schema version number (kept in line with the app version for simplicity).
Unless there are specific hooks or delegate methods that fire on the first run after an update, I will put the calls for this logic at the very beginning of the appDelegate, before anything else runs.
While doing this I will display "Updating app" or something to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date and the app won't function properly, as it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, so that if an error occurs on the user's side it's something else entirely, in which case I can't do anything about it other than discard the data?
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE... etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
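For example, a single "version 2" migration might amount to something like this (a sketch; the table and column names are made up, and the version could equally live in PRAGMA user_version rather than a table):
BEGIN TRANSACTION;
ALTER TABLE notes ADD COLUMN created_at TEXT;
UPDATE schema_version SET version = 2;
COMMIT;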
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better would be to display an error message and perhaps let the user contact you with a bug report. Then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course, it all depends on the complexity of the migration process.