When I set up my PostgreSQL server, one of the first things I will do is import a database from an external source. Which of the following is the right way to do it?
Create a database called "NEWDB" on the PostgreSQL server and then
import my external "BACKUPDB" database from my pg_dump into
"NEWDB".
Don't create a database on the PostgreSQL server, and instead import the
"NEWDB" database directly, thereby automatically creating "NEWDB" on the
PostgreSQL server.
I guess my question is, if I want to import an existing database to the PostgreSQL server, do I first need to create a database for it to go into?
You don't have to. It depends on what you want to achieve. If you dump a single database with pg_dump, no CREATE DATABASE or ALTER DATABASE commands are included; you are expected to connect to an existing database, so you have to create it first.
I quote advice from the manual:
If your database cluster has any local additions to the template1
database, be careful to restore the output of pg_dump into a truly
empty database; otherwise you are likely to get errors due to
duplicate definitions of the added objects. To make an empty database
without any local additions, copy from template0 not template1, for
example:
CREATE DATABASE foo WITH TEMPLATE template0;
And also:
The dump file also does not contain any ALTER DATABASE ... SET
commands; these settings are dumped by pg_dumpall, along with database
users and other installation-wide settings.
pg_dumpall, on the other hand, dumps the whole DB cluster, including cluster-wide objects like users. It includes CREATE DATABASE statements and connects to each DB when restoring. You can even include DROP DATABASE statements with the -c (--clean) option. Be careful with that.
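For example, a full-cluster dump and restore might look like this (role and file names are hypothetical):
pg_dumpall -U postgres -f cluster.sql        # dump every database plus roles and other globals
psql -U postgres -f cluster.sql postgres     # restore; the script reconnects to each DB itself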
Every instance of PostgreSQL has a default maintenance db named "postgres" that you can connect to, for instance to create databases or to start a full restore (from pg_dumpall). But a single-DB dump (from pg_dump) has to be run against its target database.
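So for the single-database case, a minimal sketch (assuming a plain-SQL dump file mydb.sql; all names are hypothetical) would be:
createdb -T template0 newdb      # empty target, copied from template0 as advised above
psql -d newdb -f mydb.sql        # replay the pg_dump output into it
# or, for a custom-format dump taken with pg_dump -Fc:
pg_restore -d newdb mydb.dump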
Finally:
Once restored, it is wise to run ANALYZE on each database so the
optimizer has useful statistics. You can also run vacuumdb -a -z to
analyze all databases.
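Following that advice, after the restore you could run something like this (database name is hypothetical):
psql -d newdb -c "ANALYZE;"      # refresh statistics for the restored database
vacuumdb -a -z                   # or analyze every database in the cluster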
Related
In MS SQL Server, after fully creating a database, we have a script.sql file, and anyone who wants to create our database just needs to run that script.sql file.
Is it possible to export that kind of file from MongoDB?
mongoDB is "schema less" database system meaning that you can insert data without the initial schema definition like in MS SQL Server. You may create your indexes at later stage and prapare a .js file to execute it every time if you want to have the same indexes every time in new deployments or you can make initial empty mongodump and do mongorestore every time you need same indices.
I did my homework on how to copy a database; some say you need new names for the mdf and log files, some say that REPLACE will take care of it.
I'm on the same server, SQL Server 2016. I don't have the freedom to create new files or to copy or rename them. I cannot use DBCC either (for CLONE).
SourceDB is an existing db: 10 tables.
SourceDB_Data   c:\SPath\SourceDB.mdf
SourceDB_Log    c:\SPath\SourceDB_log.ldf
NewDB is an existing db: 3 tables, 2 views.
NewDB_Data      c:\NewDBPath\NewDB.mdf
NewDB_Log       c:\NewDBPath\NewDB_log.ldf
The main thing is that the structures of those databases are totally different, but I need to copy the structure and all content of SourceDB into NewDB. I can NOT drop/recreate it: too many handles are connected to NewDB and they will start ringing if it disappears. Alas, those are the rules I have. I can NOT drop this NewDB in any case.
I can, though, run DDL on NewDB; maybe as a last resort I could recreate the SourceDB schema on it?
My plan is still to go with a backup, just for safety. Will this work and keep the same logical and physical file names on NewDB? (I don't want to produce any errors.) Thanks
Back up SourceDB to 'c:\Path\to\SourceDB.bak'
Set NewDB into single-user mode
Run script:
RESTORE DATABASE NewDB FROM DISK = 'c:\Path\to\SourceDB.bak' WITH REPLACE
Goal: to end up with NewDB (keeping the old name), with the NewDB.mdf file names and the content of SourceDB.
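Spelled out as sqlcmd calls, I think the plan looks like this (I'm assuming WITH MOVE is required to make the restore reuse NewDB's physical files instead of recreating SourceDB's paths, and that the logical names in the backup are SourceDB_Data/SourceDB_Log):
sqlcmd -Q "BACKUP DATABASE SourceDB TO DISK = 'c:\Path\to\SourceDB.bak'"
sqlcmd -Q "ALTER DATABASE NewDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -Q "RESTORE DATABASE NewDB FROM DISK = 'c:\Path\to\SourceDB.bak' WITH REPLACE, MOVE 'SourceDB_Data' TO 'c:\NewDBPath\NewDB.mdf', MOVE 'SourceDB_Log' TO 'c:\NewDBPath\NewDB_log.ldf'"
sqlcmd -Q "ALTER DATABASE NewDB SET MULTI_USER"
# Note: the logical file names will still be SourceDB_Data/SourceDB_Log afterwards; they can be
# renamed with ALTER DATABASE NewDB MODIFY FILE (NAME = 'SourceDB_Data', NEWNAME = 'NewDB_Data').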
I created a fresh database in phpMyAdmin which does not contain any tables yet, since it's fresh; however, I accidentally made a typo in the name. How can I rename the database?
If this happens to me I usually just execute the SQL command:
DROP DATABASE dbname;
and create another database. But is it possible to rename it? I have already searched SO but found nothing helpful.
I found two possible solutions.
Rename it via the phpMyAdmin backend UI (preferable): the Operations tab offers a "Rename database to" field.
Or just execute this SQL (only use it if the database is fresh and does not contain any data yet, otherwise that data will be lost!):
CREATE DATABASE newname;
DROP DATABASE oldname;
(Note that ALTER DATABASE oldName MODIFY NAME = newName is SQL Server syntax; MySQL has no statement that renames a database directly.)
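For a database that already contains tables, one approach (roughly what the phpMyAdmin rename does under the hood) is to create the new database, move each table across with RENAME TABLE, and drop the old one. A sketch with one hypothetical table:
mysql -u root -p -e "CREATE DATABASE newname"
mysql -u root -p -e "RENAME TABLE oldname.some_table TO newname.some_table"   # repeat per table
mysql -u root -p -e "DROP DATABASE oldname"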
I don't think you can do this. I think you'll need to dump that database, create the newly named one and then import the dump.
If this is a live system you'll need to take it down. If you cannot, then you will need to set up replication from this database to the new one.
If you want to see the commands try this link, Rename MySQL database
Try using an auxiliary temporary db (as a copy of the original):
$ mysqldump dbname > dbname_dump.sql   # create a backup
$ mysqladmin create dbname_new         # create your new db with the desired name
$ mysql dbname_new < dbname_dump.sql   # restore the backup into the new one
$ mysqladmin drop dbname               # drop the old one
I guess I just cannot formulate the search query appropriately, but I cannot find an answer to the following simple question: how to use extracted DDL pieces to recreate tables, views etc. in a different database or a different schema?
For example, when I extract table DDL with
SELECT dbms_metadata.get_dependent_ddl('TABLE', 'TABLE_NAME', 'SCHEMA_NAME') FROM dual;
I get output with the FOREIGN KEY constraints there. If I now naively issue the resulting CREATE TABLE statements on a different database in, e.g., alphabetical order of table names, I get "table or view does not exist" errors, because constraints reference not-yet-created tables.
What is the normal procedure for using this DDL? Is it (easily) possible to recreate the full schema structure (short of a full database dump) without using external tools?
You can use the Data Pump export CONTENT option to export only the metadata for a schema:
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
The import process will create the objects and constraints, taking the dependencies into account.
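For example, a metadata-only export and a restore into a different schema might look like this (credentials, directory object and file names are hypothetical):
expdp scott/tiger SCHEMAS=scott DIRECTORY=dump_dir DUMPFILE=scott_meta.dmp CONTENT=METADATA_ONLY
impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott_meta.dmp REMAP_SCHEMA=scott:scott_copy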
If you want to see the DDL, and optionally run it manually, you can use the datapump import SQLFILE option to put the DDL into a file instead of executing it:
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
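For example (again with hypothetical names), this writes the DDL into ddl.sql without executing any of it:
impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott_meta.dmp SQLFILE=ddl.sql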
You can do similar things through SQL Developer and other clients, but those are 'external tools', whereas Data Pump might not fall into that category, even if you have to run it from the command line. There is also a Data Pump API (the DBMS_DATAPUMP package) so you can even avoid the command line if you want to, though in some ways it's more complicated than using the expdp and impdp utilities.
Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart.
When you say "reload a schema file", I assume you're referring to a file that has all the SQL statements defining your database schema? i.e. creating tables, views, stored procecures, etc.?
The solution is fairly simple: keep all the SQL that creates the tables, etc. in one file, and before each CREATE statement add a DELETE/DROP statement to remove what's already there. Then when you want to do a reload, just do:
cat myschemafile.sql | mysql -u userid -p databasename
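A minimal example of such a schema file, with one hypothetical table:
cat > myschemafile.sql <<'EOF'
DROP TABLE IF EXISTS users;
CREATE TABLE users (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);
EOF
cat myschemafile.sql | mysql -u userid -p databasename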