Extract and recreate DDL of database schema elsewhere

I guess I just cannot formulate the search query appropriately, but I cannot find an answer to the following simple question: how to use extracted DDL pieces to recreate tables, views etc. in a different database or a different schema?
For example, when I extract table DDL with
SELECT dbms_metadata.get_ddl('TABLE', '<TABLE_NAME>', '<SCHEMA>') FROM dual;
the output includes FOREIGN KEY constraints. If I then naively run the resulting CREATE TABLE statements on a different database in, say, alphabetical order of table names, I get a "table or view does not exist" error, because the constraints reference tables that have not been created yet.
What is the normal procedure for using extracted DDL? Is it (easily) possible to recreate the full schema structure (short of a full database dump) without using external tools?

You can use datapump export CONTENT option to only export the metadata for a schema:
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
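For example, a metadata-only export of a single schema could look something like this (the schema, directory, and file names here are placeholders):
expdp username/password SCHEMAS=myschema CONTENT=METADATA_ONLY DIRECTORY=dir DUMPFILE=myschema_meta.dmp LOGFILE=myschema_exp.log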
The import process will create the objects and constraints, taking the dependencies into account.
If you want to see the DDL, and optionally run it manually, you can use the datapump import SQLFILE option to put the DDL into a file instead of executing it:
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
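For example (placeholder names again), something along these lines writes the DDL out to a file without creating any objects:
impdp username/password DIRECTORY=dir DUMPFILE=myschema_meta.dmp SQLFILE=myschema_ddl.sql LOGFILE=myschema_imp.log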
You can do similar things through SQL Developer and other clients, but those are 'external tools', whereas datapump might not fall into that category, even if you have to run it from the command line. There is a datapump API so you can even avoid the command line if you want to, though in some ways it's more complicated than using the expdp and impdp utilities.
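If you do want to try the API route, a rough sketch of a metadata-only schema export through DBMS_DATAPUMP might be structured like this (the directory object, file name, and schema name are assumptions, and error handling is omitted):
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- create a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- write the dump file to the DATA_PUMP_DIR directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'myschema_meta.dmp', 'DATA_PUMP_DIR');
  -- limit the job to one schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''MYSCHEMA'')');
  -- skip row data, the equivalent of CONTENT=METADATA_ONLY
  DBMS_DATAPUMP.DATA_FILTER(h, 'INCLUDE_ROWS', 0);
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/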

Related

How do I export the schema of a database so it can be recreated on another server?

I have a database with data, but I would like to export just the schema so I can create an empty database.
I create the script and select only tables and views, no users, because the idea is to install the database on many computers with different users. I will manage the permissions individually.
In the next step, in the advanced options, I select triggers, foreign keys and all the other options, and I create the script.
However, I have some problems:
When I delete my database from the server and then run the script, I get an error saying the database does not exist. Is it possible to add an option to the script to create the database?
If I create the database manually and then run the script, I get an error saying a column name is not valid.
At this point I am wondering what the correct way is to create a script of the schema so it can be deployed to other servers.
Thanks so much.

Import dump with SQLFILE parameter not returning the data inside the table

I am trying to turn a dump file into a .sql file using the SQLFILE parameter.
I used the command "impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp SQLFILE=sample.sql LOGFILE=sample.log"
I expected this to produce a SQL file containing the table contents, but it created a SQL file with only DDL statements.
For the export I used "expdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=sample.log FULL=y"
The dump file size is 130 GB, so I believe the export completed correctly.
Am I missing something in the import command? Is there any other parameter I should use to get the contents?
Thanks in advance!
Your expectation was wrong, I'm afraid. You're asking it to do something it isn't designed for.
The documentation for SQLFILE says:
Purpose
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
So it will only ever contain DDL.
There isn't a mechanism to turn a .dmp file into a .sql containing insert statements. If you need to put the data into a table, just use the native import.
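For example, dropping the SQLFILE parameter and running a normal import of the same dump loads the data directly (names reused from the question):
impdp username/password DIRECTORY=dir DUMPFILE=sample.dmp LOGFILE=import.log
and it can be narrowed down with parameters such as TABLES=... or SCHEMAS=... if you only need part of it.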
Individual insert statements - if you could generate them, which SQL Developer will do as a separate task unrelated to your Data Pump export - would be slower, would have problems with LOBs, and would need to be run in the right order unless integrity constraints were disabled. Data Pump takes care of all of that for you.

Create database explicitly before restoring to it?

When I set up my PostgreSQL server, one of the first things I do is import a database from an external source. Which of the following is the right way to do it?
1. Create a database called "NEWDB" on the PostgreSQL server and then import my external "BACKUPDB" database from my pg_dump into "NEWDB".
2. Don't create a database on the PostgreSQL server, and import the "NEWDB" database, thereby automatically creating "NEWDB" on the PostgreSQL server.
I guess my question is, if I want to import an existing database to the PostgreSQL server, do I first need to create a database for it to go into?
You don't have to. It depends on what you want to achieve. If you dump a single database with pg_dump, CREATE DATABASE and ALTER DATABASE commands are not included (unless you use the -C / --create option). You are expected to connect to an existing database, so you have to create it first.
I quote advice from the manual:
If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
And also:
The dump file also does not contain any ALTER DATABASE ... SET commands; these settings are dumped by pg_dumpall, along with database users and other installation-wide settings.
pg_dumpall, on the other hand, dumps the whole DB cluster including meta-objects like users. It includes CREATE DATABASE statements and connects to each DB when restoring. You can even include DROP DATABASE statements with the -c (--clean) option. Careful with that.
Every instance of PostgreSQL has a default maintenance db named "postgres" that you can connect to - to create databases for instance or start a full restore (from pg_dumpall). But a single-DB dump (from pg_dump) has to be run against its target database.
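As a short sketch of the two workflows (file and database names are placeholders, and the right restore command depends on the dump format):
# single-database dump: create the target database first, then restore into it
createdb -T template0 foo
psql -d foo -f db.sql             # plain-format dump made with pg_dump
pg_restore -d foo db.dump         # custom-format dump made with pg_dump -Fc
# cluster-wide dump: connect to the maintenance database and let the script do the rest
psql -d postgres -f cluster.sql   # made with pg_dumpall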
Finally:
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You can also run vacuumdb -a -z to analyze all databases.

Bteq Scripts to copy data between two Teradata servers

How do I copy data from multiple tables within one database to another database residing on a different server?
Is this possible through a BTEQ Script in Teradata?
If so, provide a sample.
If not, are there other options to do this other than using a flat-file?
This is not possible using BTEQ alone, since you have mentioned that the two databases reside on different servers.
There are two solutions for this.
Arcmain - You need to use an Arcmain backup first, which creates files containing the data from your tables. Then you use an Arcmain restore, which restores the data from those files.
TPT - Teradata Parallel Transporter. This is a very advanced tool. It does not create any files like Arcmain; it moves the data directly between two Teradata servers. (Wikipedia)
If I am understanding your question, you want to move a set of tables from one DB to another.
You can use the following syntax in a BTEQ Script to copy the tables and data:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH DATA AND STATS;
Or just the table structures:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH NO DATA AND NO STATS;
If you get really savvy, you can create a BTEQ script that dynamically builds the above statement in a SELECT, exports the results to a file, and then runs the newly exported file, all within a single BTEQ script.
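A minimal sketch of that idea, assuming both databases live on the same Teradata system (the logon details, database names, and WIDTH setting are placeholders):
.LOGON tdpid/username,password
.SET WIDTH 500
.SET TITLEDASHES OFF
.EXPORT REPORT FILE = copy_tables.btq
SELECT 'CREATE TABLE NewDB.' || TRIM(TableName) ||
       ' AS OldDB.' || TRIM(TableName) ||
       ' WITH DATA AND STATS;' (TITLE '')
FROM DBC.TablesV
WHERE DatabaseName = 'OldDB'
  AND TableKind = 'T';
.EXPORT RESET
.RUN FILE = copy_tables.btq
.LOGOFF
The SELECT writes one CREATE TABLE ... AS ... statement per table in OldDB into copy_tables.btq, and .RUN then executes that file.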
There are a bunch of other options that you can do with CREATE TABLE <...> AS <...>;. You would be best served reviewing the Teradata Manuals for more details.
There are a few more options which will allow you to copy from one table to another.
Possibly the simplest way would be to write a smallish program which uses one of their communication layers (ODBC, .NET Data Provider, JDBC, CLI, etc.) and uses it to run a SELECT against the source and an INSERT against the target. This would require some work, but it would have less overhead than trying to learn how to write TPT scripts. You would not need any 'DBA' permissions to write your own.
Teradata also sells other applications which hide the complexity of some of these tools. Teradata Data Mover provides an abstraction layer over tools like Arcmain and TPT. Access to this tool is most likely restricted to DBA types.
If you want to move data from one server to another, you can do it with a flat file. First, export the data from the source table to a flat file using a utility such as BTEQ or FastExport; then load that file into the target table using MultiLoad, FastLoad, or a BTEQ script.
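A rough sketch of that round trip in BTEQ (the server names, credentials, and table and column definitions are all assumptions about your data; FastExport/FastLoad or MultiLoad would follow the same pattern with their own syntax):
.LOGON source_tdp/user,password
.EXPORT DATA FILE = employee.dat
SELECT emp_id, emp_name FROM SourceDB.Employee;
.EXPORT RESET
.LOGOFF
.LOGON target_tdp/user,password
.IMPORT DATA FILE = employee.dat
.REPEAT *
USING (emp_id INTEGER, emp_name VARCHAR(50))
INSERT INTO TargetDB.Employee (emp_id, emp_name)
VALUES (:emp_id, :emp_name);
.LOGOFF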

Applying changes easily in Access Database

I have a backup of a live database (a copy of an ACCDB format Access database) in which I've been working: I've added new fields to existing tables and created whole new tables.
How do I capture these changes and apply them quickly to the running database?
In MS SQL Server I would right-click > Script Table As > Alter To, save the query, and run it wherever I want. Is there a way as easy as that to do it in an Access database?
Details:
It's an ACCDB MS Access database created in Access 2007, and copied and edited in Access 2007. I need some "alter" scripts to run on the other database so that it gets all the new columns and tables I've created in my copy.
For new tables, just import them from one database into the other. In the "External Data" section of the ribbon, choose the Access icon above "Import". That choice starts an import wizard to allow you to select which objects you want imported. You will have a choice to import just the table structure, or both structure and data.
Remou is right that you can use DDL ALTER TABLE statements to add new columns. However, DDL might not support every feature you want for your new columns. And if you want not just the empty columns added but also any data in those new columns, you will probably need to run UPDATE statements to bring it across.
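For example (the table, field, and key names here are only placeholders, and this assumes the changed table from your copy has been linked into the live database as Customers_backup):
ALTER TABLE Customers ADD COLUMN Notes TEXT(255);
UPDATE Customers INNER JOIN Customers_backup
    ON Customers.ID = Customers_backup.ID
SET Customers.Notes = Customers_backup.Notes;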
As far as "Script Table As", see if OmBelt's Export Table to SQL tool for MS Access can do what you want.
Edit: Allen Browne has sample ALTER TABLE statements. See CreateFieldDDL and the following one, CreateFieldDDL2.
You can run DDL in Access. I think it would be easiest to run the SQL with VBA, in this case.
There is a product called DbWeigher that can compare Access database schemas and synchronize them. You can get a free trial (30 days). DbWeigher will write a script of all schema differences and write it out as DDL. The script is thorough and includes relationships, indexes, validation rules, allow zero length, etc.
A free tool from the same developer, DBWConsole, will let you execute a DDL script against any Access database. If you wrote your own DDL scripts this would be an easy way to apply the changes to your live database. It even handles some DDL that I don't know how to process in VBA (so it must be magic). DBWConsole is included if you downloaded the trial version of DBWeigher. Be aware that you can't make schema changes to a table in a shared Access database if anyone has the table open.
DbWeigher creates a script of all differences between the two files. It can be a lot to manually parse through if you just want a few of the changes. I built a parser for DbWeigher script files so they could be filtered by table, to extract just the parts I wanted. I contacted the DbWeigher author about it but never heard back. It's safe to say that I have no affiliation with this developer.