We have a PowerShell script that runs every night and copies our production database into a test environment. To do this, it removes the test database and then makes a fresh copy of production. Last night the script died (I'm not exactly sure what happened), and now there is no test database listed in our Azure Portal.
When I try to manually run our script, it throws an error saying that a previous database create operation for our test database is still in progress.
Normally this takes about 5 minutes, but it has now been 4 hours.
Is there any way to stop this creation and start over?
I tried using PowerShell to remove the database, but it tells me the database doesn't exist. I've also tried finding it in the Azure Portal, but it isn't there.
We're using golang-migrate, which I generally love, but recently it seems like I can never actually run the migrations, so I end up manually applying them and "fixing" the schema_migrations table. There is always some error. Here's the example that's happening now, as I understand it.
A dev tries to roll forward and gets "no change". So the dev tries to roll back N migrations and gets:
error: migration failed in line 0: ALTER TABLE `table_name` DROP `column`;
(details: Error 1091: Can't DROP 'column'; check that column/key exists)
I believe the issue is that there are two branches, one we'll call "slow" and one we'll call "quick". "Slow" gets started first but, as its name implies, it just takes a long time to get to production. Meanwhile "quick" has no such problem and gets merged weeks before "slow". When the dev then goes to roll forward, they get "no change", because the "quick" migrations were created after "slow"'s, so the "slow" migration has a lower version number than the version already recorded and gets skipped. To compensate, the dev attempts to roll back a few migrations so the new migrations can be applied, but bumps into SQL errors, because the column was never created - that specific migration never rolled forward.
Is the "fix" for this that the migrations are written more defensively (e.g. using IF EXISTS)? Are other people not bumping into these issues? TIA.
golang-migrate: v4.15.2
MySQL: 5.7.34-log
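For reference, the most "defensive" version of that down migration I can think of on MySQL 5.7 (which has no ALTER TABLE ... DROP COLUMN IF EXISTS) is a sketch like the one below, checking information_schema and using a prepared statement. The table and column names are just the placeholders from the error above, and it assumes whatever runs the migration can execute a multi-statement file:

    -- Only drop the column if it actually exists (MySQL 5.7 has no
    -- ALTER TABLE ... DROP COLUMN IF EXISTS).
    SET @col_exists := (
        SELECT COUNT(*)
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = DATABASE()
          AND TABLE_NAME   = 'table_name'
          AND COLUMN_NAME  = 'column'
    );

    -- Run the real DDL only when the column is present; otherwise run a no-op.
    SET @ddl := IF(@col_exists > 0,
                   'ALTER TABLE `table_name` DROP COLUMN `column`',
                   'SELECT 1');

    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

It works, but it makes every migration a lot noisier, which is why I'm asking whether defensive migrations are really the usual fix.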
A few weeks ago my computer stopped working and I lost a lot of important data stored in a Postgres DB (I should have backed it up, but I didn't ** sigh **).
I was able to access the hard drive and extract the program files for Postgres. How can I try to restore the DB?
Here is what I have tried so far:
Download Postgres (same version as the lost DB) on a separate computer and swap out the program files with the ones I am trying to recover. I am on a Windows PC, so I stop the service -> swap out all the files -> restart the service. Restarting the service is never successful.
Use pgAdmin -> create a new DB (making sure the path for the binary files is correct) -> restore -> here I get stuck figuring out which files are the correct ones.
We have a small development team of 5 developers working on a large, enterprise-level, web-based ASP.NET/C# system.
We do a lot of database updates, which include stored procedure creates and alters as well as new tables, new columns, record inserts, record updates, and so on.
Today, all of the developers place their change scripts in one large SQL change script file that gets run on our Test and Production environments. This single file contains stored proc alters, record inserts, updates, etc. The file can end up being quite lengthy, as we may only do a test or production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while a script error occurs at some location in this large "batch change script". Perhaps an insert fails, or an alter of a proc fails, for instance.
When this occurs, it is very difficult to tell which changes succeeded and which failed on the database.
Sometimes, even if one alter fails, the script will continue to execute to the end, and sometimes it stops execution and nothing further gets run.
So today I end up manually checking procs and records to see what actually worked and what did not, and this is a bit painstaking.
I was hoping I could roll the entire change script up into one big transaction, so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in SQL Server.
So then I tried backing up the databases before running the scripts, so that if an error occurred I could simply restore the DB, fix the problem, and re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is also not ideal.
So my question is, what is the safest way to run batch scripts on a production database?
Is there some way that I am not seeing to wrap the entire script in a transaction that I can roll back?
Would it possibly be better for us to track and run separate script files, so that if one file fails we can just move it off to a "failed" directory to be looked at and keep running all the other files?
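To make the transaction question concrete: is something like this sketch a workable way to run the whole file as one unit of work? It assumes the file is run with sqlcmd (or SSMS in SQLCMD Mode), and the object names are just placeholders.

    -- Stop running the script at the first batch that raises an error
    -- (sqlcmd / SQLCMD Mode only).
    :on error exit

    -- Roll back the open transaction automatically on most runtime errors.
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    GO

    PRINT 'Step 1: altering dbo.SomeTable';
    ALTER TABLE dbo.SomeTable ADD SomeColumn int NULL;
    GO

    PRINT 'Step 2: seeding lookup data';
    INSERT INTO dbo.SomeLookup (Name) VALUES (N'NewValue');
    GO

    -- Only reached if every batch above succeeded.
    COMMIT TRANSACTION;
    PRINT 'All changes committed.';
    GO

The idea being that if any batch fails, the final COMMIT never runs, so the open transaction gets rolled back instead - but I don't know what gotchas I would hit with a script this large.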
Looking for advice and expertise.
thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap errors and report the name of the script along with the location of the error using PRINT statements (sketch below). Then, if an error occurs when applying to production, you at least have the name of the script and the location of the error within it.
If your QC database is identical, or very close to it, production errors should be very rare.
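As a rough sketch of that kind of trapping (the script name, table names and messages here are only placeholders):

    BEGIN TRY
        BEGIN TRANSACTION;

        PRINT 'ChangeScript_042.sql: altering dbo.Customer';
        ALTER TABLE dbo.Customer ADD PreferredName nvarchar(100) NULL;

        PRINT 'ChangeScript_042.sql: inserting lookup rows';
        INSERT INTO dbo.CustomerType (TypeName) VALUES (N'Partner');

        COMMIT TRANSACTION;
        PRINT 'ChangeScript_042.sql: completed';
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

        -- Report which script failed and where, then re-raise so the
        -- overall run is flagged as failed.
        PRINT 'ChangeScript_042.sql: FAILED at line '
              + CAST(ERROR_LINE() AS varchar(10))
              + ' - ' + ERROR_MESSAGE();
        RAISERROR('ChangeScript_042.sql failed.', 16, 1);
    END CATCH;

With something like that in place, the output of a test run on QC tells you exactly which script and which statement to look at.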
Looking for some suggestions on my data/schema migration. Here is what I plan to do.
Using SQL Server 2008:
1. Back up the current databases.
2. Restore them as "_old", to be used for data transfer later (a rough sketch of steps 1-2 is below).
3. Run my scripted changes against the target DBs.
4. Run my data scripts, transferring data from the "_old" DBs to the now-new databases.
5. Verify everything is working (websites, applications, etc.).
6. Delete the "_old" databases.
7. Run a backup of the new "changed" databases.
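For steps 1 and 2, here is roughly what I had in mind (database names, paths and logical file names below are just placeholders; I would check the real ones with RESTORE FILELISTONLY first):

    -- Step 1: back up the current database.
    BACKUP DATABASE MyAppDb
        TO DISK = N'D:\Backups\MyAppDb_premigration.bak'
        WITH INIT, CHECKSUM;

    -- Check the logical file names contained in the backup.
    RESTORE FILELISTONLY
        FROM DISK = N'D:\Backups\MyAppDb_premigration.bak';

    -- Step 2: restore a copy under a new name, leaving the original alone.
    RESTORE DATABASE MyAppDb_old
        FROM DISK = N'D:\Backups\MyAppDb_premigration.bak'
        WITH MOVE N'MyAppDb'     TO N'D:\Data\MyAppDb_old.mdf',
             MOVE N'MyAppDb_log' TO N'D:\Data\MyAppDb_old_log.ldf';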
This is my first migration, and I would like some guidance on whether I am missing anything or whether there is a better way to do this.
Thanks for the help.
You must be very careful with step 4, and make sure you do it through transactions. Keep each and every possible point of failure in mind and plan for it.
And regarding step 6: do not delete your "_old" databases. Keep them in a safe place in case they are needed later.
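For example, each transfer in step 4 could look something like this sketch (database, table and column names are placeholders; an IDENTITY column would also need SET IDENTITY_INSERT ON/OFF around the insert):

    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    INSERT INTO MyAppDb.dbo.Customer (CustomerId, Name, CreatedOn)
    SELECT CustomerId, Name, CreatedOn
    FROM MyAppDb_old.dbo.Customer;

    -- Sanity check before committing: the row counts should match
    -- (assuming the new table started out empty).
    IF (SELECT COUNT(*) FROM MyAppDb.dbo.Customer)
       <> (SELECT COUNT(*) FROM MyAppDb_old.dbo.Customer)
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Customer transfer failed - rolled back.', 16, 1);
    END
    ELSE
        COMMIT TRANSACTION;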
I practised the migration on a development stack a number of times, so that I could be sure how long it would take and work out any problems with the scripts.
Compare how long you have to do the migration with how long it actually takes. Is there an adequate margin of error?
It would be a good idea to get some users or other staff to verify that the new application is 'working'. You are not the best person to test your own work.
I would not delete the _old database, just to be safe. I have found issues with a migration months afterwards that required the old data to resolve.
Automate as much as possible by using master scripts that call the other scripts.
Assume the worst case: that your scripts will fail partway through the migration. Build logging and progress points into your scripts so you can restart mid-process (see the sketch below).
Take some performance measurements of the old database so you can show how the new database is, hopefully, an improvement.
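For the logging and progress points, something as simple as this sketch can work (names are placeholders): each step records that it finished, and a rerun skips the steps that already completed.

    -- One-off: a small progress table in the target database.
    IF OBJECT_ID('dbo.MigrationLog') IS NULL
        CREATE TABLE dbo.MigrationLog (
            StepName    varchar(200) NOT NULL PRIMARY KEY,
            CompletedOn datetime     NOT NULL DEFAULT GETDATE()
        );

    -- Each step: skip it if a previous run already finished it.
    IF NOT EXISTS (SELECT 1 FROM dbo.MigrationLog WHERE StepName = 'TransferCustomers')
    BEGIN
        INSERT INTO MyAppDb.dbo.Customer (CustomerId, Name)
        SELECT CustomerId, Name
        FROM MyAppDb_old.dbo.Customer;

        INSERT INTO dbo.MigrationLog (StepName) VALUES ('TransferCustomers');
    END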
I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the reports of this issue pre-SP2, but I am running Service Pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and checking the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I am translating the option names into English myself here.)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups nested deeper than just one level?
Another thought: do you have a single maintenance plan that you run to delete the backups of multiple databases? The reason I ask is that the only way I can see to do that would be to point the task at a folder one level higher, which would mean "include first-level subfolders" is not deep enough to reach the backup files.
The way I have mine set up, the Maintenance Cleanup Task is part of my backup process. Once the backup completes for a specific database, the Maintenance Cleanup Task runs against that same database's backup files. This lets me be more specific about the directory, so I don't run into the directory structure being too deep. And since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted either way.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plan, it is probably quitting before it gets to deleting the outdated backups.
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that means the cleanup task never runs. If you connect two tasks with an arrow, you can right-click the arrow and specify whether the second task runs always, only when the first one succeeds, or only when it fails (this changes the color of the arrow; the possible colors are red/green/blue). Maybe the backup works, and the cleanup never runs because it is set to run only when the backup fails?