Drop DB but don't delete *.mdf / *.ldf - sql

I am trying to automate a process of detaching and dropping a database (via a VBS objShell.Run). If I manually use SSMS to detach and drop, I can then copy the database files to another location... however, if I use:
sqlcmd -U sa -P MyPassword -S (local) -Q "ALTER DATABASE MyDB set single_user With rollback IMMEDIATE"
then
sqlcmd -U sa -P MyPassword -S (local) -Q "DROP DATABASE MyDB"
It detaches/drops and then deletes the files. How do I get the detach and drop without the delete?

The MSDN Documentation on DROP DATABASE has this to say about dropping the database without deleting the files (under General Remarks):
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not deleted. These files can be deleted manually by using Windows Explorer. To remove a database from the current server without deleting the files from the file system, use sp_detach_db.
So in order to drop the database with sqlcmd without having the files deleted, you can change it to this:
sqlcmd -U sa -P MyPassword -S (local) -Q "EXEC sp_detach_db 'MyDB', 'true'"
DISCLAIMER: I have honestly never used sqlcmd before, but judging from the syntax of how it's used, I believe this should help you with your problem.
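For completeness: sp_detach_db will fail if other connections are open, so you would likely keep the SINGLE_USER step from the question and only swap the DROP for the detach - an untested sketch, where 'true' skips the UPDATE STATISTICS step on detach:
sqlcmd -U sa -P MyPassword -S (local) -Q "ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -U sa -P MyPassword -S (local) -Q "EXEC sp_detach_db 'MyDB', 'true'"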

Use SET OFFLINE instead of SET SINGLE_USER
ALTER DATABASE [DonaldTrump] SET OFFLINE WITH ROLLBACK IMMEDIATE;
DROP DATABASE [DonaldTrump];

Might it be best to detach the database rather than drop it?
If you drop the database, that implies deleting the files.
Note, however, that detaching will leave your hard disk cluttered with database files you no longer want - in a couple of years' time your successor will be running out of space and wondering why the disk is full of MDF files they don't recognise.
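Should you later want one of those copied databases back, the files can be re-attached; a minimal T-SQL sketch, with example paths:
CREATE DATABASE MyDB
    ON (FILENAME = 'D:\Archive\MyDB.mdf'), (FILENAME = 'D:\Archive\MyDB_log.ldf')
    FOR ATTACH;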

Related

PostgreSQL Query To Create A Directory

Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However if the directory 2020-01-01 does not exist, we get the error
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend the directory creation be done?
Using Node.js 12.14.1 on Ubuntu 18.04 host. Using PostgreSQL 11.2 inside container, Docker 19.03.5
An easy way to solve it is to create the file directly on the client machine. Using STDOUT with COPY, you can redirect the query output to the client's standard output, which you can catch and save in a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating the output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it bad practice. Doing so means either writing files into the postgres system directories or giving the postgres user permission to write somewhere else, and that is something you shouldn't be comfortable with. Export the data directly to the client, either using COPY as I mentioned or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard-coding the database password, -w means pg_dump will not prompt for a password and will instead look for a password file (~/.pgpass). Or you can use any of the many other Postgres authentication options.
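As the documentation quote above says, pg_restore reads these archives back in. A minimal sketch reusing the variables from the script (note the file is a custom-format archive despite its .sql extension):
pg_restore -d $url -w --table=$table $file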

Unable to run .sql file in SQL Server

I have a 20 GB .sql dump file and I am trying to run it in MySQL Workbench using Run Script; after successful execution I'll use SSMA to migrate the data from MySQL to SQL Server. I have migrated the data this way many times successfully, but for a 20 GB file it seems very time-consuming. Please let me know if there is an alternate way to achieve this more quickly. I have followed the following link:
Steps to migrate mysql tables to sql server using SSMA!
From your title "unable to run .sql file in SSMS" and "I have a .sql dump file 20 gb": are you trying to open a 20 GB .sql file in SSMS? That's never going to work. SSMS is a 32-bit application, so the maximum addressable memory is 2 GB. If you want to run your .sql file, I suggest using sqlcmd.
Open up Powershell, and then run the command below replacing the appropriate parts:
sqlcmd -S {Server Name/ServerIP} -U {Your Login} -i {Your full path to your script}
You'll be prompted for your password, and then the file will be run. So, as an example, you might run:
sqlcmd -S svSQL2017 -U Larnu -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql
If you are using integrated security, then don't pass the -U parameter for the command.
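For example, a trusted-connection run might look like this; -E explicitly requests integrated security (it is also the default when -U is omitted):
sqlcmd -S svSQL2017 -E -i \\svFileServer\SQLShare\Scripts\BigBatchFile.sql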
Edit: This answer is not relevant to the OP's question, as they were using "SSMS" as a synonym for SQL Server, which it is not. I have left this here for the moment so the OP can review my comments, and I will likely remove this answer at a later point.

Restoring Firebird 2.5 with fbsvcmgr

I'm configuring live backup and restore scripts to have "replicated" Firebird DBs on the main and reserve servers.
The backup works fine:
"C:\Program Files\Firebird\Firebird_2_5\bin\nbackup" -B 0 "D:\testdb\LABORATORY_DB.FDB" D:\testdb\lab_FULL.fbk -user SYSDBA -pass masterkey -D OFF
Copying file to the remote server as well:
net use R: \\fbserv2\reserve
xcopy /Y D:\testdb\lab_FULL.fbk R:\
But restoring on the remote side
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB.FDB -nbk_file d:\reserve\lab_FULL.fbk
caused an error:
Error (80) creating database file: d:\reservedb\LABORATORY_DB.FDB via copying from: d:\reserve\lab_FULL.fbk
The only way to restore the database is to manually delete the old d:\reservedb\LABORATORY_DB.FDB before restoring. GBAK has an option to overwrite the database file it is restoring, while fbsvcmgr seems not to. Is there any other option? Did I miss something?
You can't restore over an existing database using nbackup. You either need to
delete the old database first and then restore,
or restore under a different name, delete the old database, and rename the new database to its final name.
See also the nbackup documentation, chapter Making and restoring backups:
If the specified database file already exists, the restore fails and you get an error message.
As far as I know it was a design decision to not allow overwriting an existing database. Gbak indeed has that option, but only for historic reasons; if it were built today, it would likely not have that option.
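A batch sketch of the second option, reusing the paths from the question (untested; the .TMP name is just an example):
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB.TMP.FDB -nbk_file d:\reserve\lab_FULL.fbk
del d:\reservedb\LABORATORY_DB.FDB
ren d:\reservedb\LABORATORY_DB.TMP.FDB LABORATORY_DB.FDB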

Bat File to Delete sql database

I am able to delete files/folders through the bat file fine; the problem comes when I need to delete old MDF and LDF files.
I get an "access denied" error message.
Is there a way to overcome this in the bat file, without having to open SQL Server Management Studio 2008 and delete them there?
Things to note:
At the start I do not specifically know what the database is called, just its location (c:\sql)
You can use sqlcmd in a batch file to drop the database. Something like this (note that sqlcmd switches are case-sensitive):
sqlcmd -S dbserver -U username -P password -Q "DROP DATABASE databasename"
Then you can delete the related mdf and ldf files.
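A sketch of that follow-up step, assuming the c:\sql location and hypothetical file names based on the question (/q is quiet mode). Note that if the database files were online, DROP DATABASE will already have deleted them, so this only matters for leftovers:
del /q "c:\sql\databasename.mdf" "c:\sql\databasename_log.ldf"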
This batch file drops a database even if it is being used.
It asks for the database name to drop.
@echo off
set /p dbName= "Enter your database name to drop: "
echo Setting to single-user mode
sqlcmd -Q "ALTER DATABASE [%dbName%] SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
echo Dropping...
sqlcmd -Q "drop database %dbName%"
echo Completed.
pause

Vacuum full and Reindex Heroku database

I want to perform a full vacuum and reindex on my database for my app hosted on Heroku.
I can't work out how to do it via the heroku command line remotely.
I can do it on my local Mac OS X machine via the commands below in Terminal...
psql database_name
>> vacuum full;
>> \q
reindex database database_name
How can I perform a full vacuum and reindex of all my tables for my app on Heroku?
If possible I would like to do it without exporting the database.
Okay, so it seems Heroku doesn't support this functionality unless you pay up. Looks like I'll have to pull the database, perform the actions, and push it back upstream! Fun times.
You can use the psql interactive terminal with Heroku. From Heroku PostgreSQL:
If you have PostgreSQL installed on your system, you can open a direct psql console to your remote db:
$ heroku pg:psql
Connecting to HEROKU_POSTGRESQL_RED... done
psql (9.1.3, server 9.1.3)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
rd2lk8ev3jt5j50=>
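Once connected, the maintenance commands from the question can be run directly in that session - a sketch, with the database name taken from the example prompt above (use your own):
VACUUM FULL;
REINDEX DATABASE rd2lk8ev3jt5j50;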
You can also pass the parameters in at the psql command line, or from a batch file. The script below first prompts for the details needed to connect to your database, offering a default for each:
@echo off
echo "Test for Passing Params to PGSQL"
SET server=localhost
SET /P server="Server [%server%]: "
SET database=amedatamodel
SET /P database="Database [%database%]: "
SET port=5432
SET /P port="Port [%port%]: "
SET username=postgres
SET /P username="Username [%username%]: "
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h %server% -U %username% -d %database% -p %port% -e -v -f cleanUp.sql
Now, in your SQL code file, add the clean-up SQL: VACUUM FULL (note the spelling). Save this as cleanUp.sql:
VACUUM FULL;
In Windows, save the whole file as a DOS batch file (.bat), save cleanUp.sql in the same directory, and launch the batch file. Thanks to Dave Page, of EnterpriseDB, for the original prompted script.
Also, Norto, check out my other posting if you want to add parameters to your script that can be evaluated in the SQL. Please vote it up.