Replicating tables from different databases postgresql - sql

I have tables in different databases, and I want to create a data warehouse database that contains replicas of tables from those different databases. I want the data in the warehouse to be synced with the source tables every day. I am using PostgreSQL.
I tried to do this using psql:
pg_dump -t table_to_copy source_db | psql target_db
However, it didn't work; it keeps reporting errors like "table does not exist".
It all worked when I dumped the whole database rather than a single table, but I want the data to be synced, and I want to copy tables from different databases, not a whole database.
How can I do this?
Thanks!

You probably need an FDW (Foreign Data Wrapper). You can create foreign tables for each external database in separate schemas on the local database; all of them are then accessible with local queries. For storing a snapshot you can use local tables, populated with just INSERT INTO local_table_YYYY_MM SELECT * FROM remote_table;
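A minimal sketch of that setup with postgres_fdw, assuming the source database is source_db on a host called src-host; the schema, table, and credential names below are placeholders:
CREATE EXTENSION postgres_fdw;

CREATE SERVER src_server
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'src-host', dbname 'source_db', port '5432');

CREATE USER MAPPING FOR current_user
  SERVER src_server
  OPTIONS (user 'remote_user', password 'secret');

-- expose the remote tables under a local schema
CREATE SCHEMA src;
IMPORT FOREIGN SCHEMA public FROM SERVER src_server INTO src;

-- daily snapshot into a local table
INSERT INTO local_table_2024_06 SELECT * FROM src.table_to_copy;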

pg_dump -t <table name> <source DB> | psql -d <target DB>
(Double-check the table name; the "table does not exist" error usually means the name is misspelled or the table is in a different schema.)
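If the table lives in a non-default schema, a schema-qualified pattern usually helps; for example (public.table_to_copy is a placeholder):
pg_dump -t 'public.table_to_copy' source_db | psql target_db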
pg_dump allows dumping only selected tables:
pg_dump -Fc -f output.dump -t tablename databasename
(dump 'tablename' from database 'databasename' into file 'output.dump' in pg_dump's binary custom format)
You can restore that with pg_restore:
pg_restore -d databasename output.dump
If the table itself already exists in your target database, you can import only the rows by adding the --data-only flag.
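For example, a data-only restore into an existing table would look roughly like this (using the same names as above):
pg_restore --data-only -d databasename output.dump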
Dblink
You cannot perform a cross-database query the way you can in SQL Server; PostgreSQL does not support this directly. The dblink extension of PostgreSQL is used to connect from one database to another. You have to install and configure dblink to execute cross-database queries.
Here is a step-by-step script and example for executing a cross-database query in PostgreSQL. Please visit this post:
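A minimal sketch of a dblink query, assuming the other database is named source_db on the same host and remote_table(id, name) is a placeholder table:
CREATE EXTENSION dblink;

SELECT *
FROM dblink('dbname=source_db host=localhost',
            'SELECT id, name FROM remote_table')
     AS t(id integer, name text);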

Related

How do I copy the table structure of a postgres table into a different postgres database without the data

I need to play around with a data syncing program I wrote, but I want to copy the structure of the production database into a new table on my localhost Postgres database, without copying the data to my localhost db.
I was thinking along the lines of
CREATE TABLE new_table AS
TABLE existing_table
WITH NO DATA;
But I am unsure how to modify it to work with 2 different databases.
Any help would be appreciated
This boils down to the question "how to create the DDL script for a table" which can easily be done using pg_dump on the command line.
pg_dump -d some_db -h production_server -t existing_table --schema-only -f create.sql
The file create.sql then contains the CREATE TABLE script that you can run on your local Postgres installation.
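You can then apply it locally with psql's -f option (local_db is a placeholder):
psql -d local_db -f create.sql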

Proper method to migrate DB2 materialized query tables (MQTs) using db2move and db2look

I'm migrating a database from DB2 10.1 for Windows x86_64 to DB2 10.1 for Linux x86_64 - this is a combination of operating systems and machine types that have incompatible backup file formats, which means I can't just do a backup and restore.
Instead, I'm using db2move to backup the database from Windows and restore it on Linux. However, db2move does not move the materialized query tables (MQTs). Instead I need to use db2look. This poses the challenge of finding a generic method to handle the process. Right now to dump the DDLs for the materialized queries I have to run the following commands:
db2 connect to MYDATABASE
db2 -x "select cast(tabschema || '.' || tabname as varchar(80)) as tablename from syscat.tables where type='S'"
This returns a list of MQTs such as:
MYSCHEMA.TABLE1
MYSCHEMA.TABLE2
MYOTHERSCHEMA.TABLE3
I can then take all those values and feed them into a db2look to generate the DDLs for each table and send the output to mqts.sql.
db2look -d MYDATABASE -e -t MYSCHEMA.TABLE1 MYSCHEMA.TABLE2 MYOTHERSCHEMA.TABLE3 -o mqts.sql
Then I copy the file mqts.sql to the target computer, to which I've previously restored all the non-MQTs, and run the following command to restore the MQTs:
db2 -tvf mqts.sql
Is this the standard way to migrate an MQT? There has got to be a simpler way that I'm missing here.
db2move mainly migrates data and things related to that data, such as the DDL of each table. db2move does not even migrate the relations between tables, so you have to recreate them from the DDL.
With that in mind, an MQT is just a DDL; it does not hold data of its own. The tool for dealing with DDLs is db2look, and it has many options to extract exactly what you want.
The process you described is a normal way to extract that DDL. I have seen far more involved processes dealing with DDLs and with db2move/db2look; yours is comparatively simple.
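If you want to script it, a rough sketch would be the following (assuming a Linux/UNIX shell with the db2 CLI configured; the database name and file names are placeholders from the question):
db2 connect to MYDATABASE
db2 -x "select cast(tabschema || '.' || tabname as varchar(80)) as tablename from syscat.tables where type='S'" > mqt_list.txt
db2look -d MYDATABASE -e -t $(cat mqt_list.txt) -o mqts.sql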
Another option is to use Data Studio; however, you cannot script that.
I believe what you are doing is right, because MQTs do not have data of their own and are populated from the base tables. So the process should be to migrate data into the base tables that the MQT refers to, and then simply create/refresh the MQTs.
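If the MQT is defined with REFRESH DEFERRED, that last step would be something along these lines (MYSCHEMA.TABLE1 is a placeholder name from the question):
db2 connect to MYDATABASE
db2 "REFRESH TABLE MYSCHEMA.TABLE1"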

Fastest way to copy table content from one server to another

I'm looking for the fastest way to copy some tables from one Sybase server (ASE 12.5) to another. Currently I'm using the bcp tool, but it takes time to create a proper bcp.fmt file.
The tables have the same structure. There are about 25K rows in every table, and I have to copy about 40 tables.
I tried to use the -c parameter for bcp, but I get errors while importing:
CSLIB Message: - L0/O0/S0/N24/1/0:
cs_convert: cslib user api layer: common library error: The conversion/operation
was stopped due to a syntax error in the source field.
My standard bcp in/out commands:
bcp.exe SPEPL..VSoftSent out VSoftSent.csv -U%user% -P%pass% -S%srv% -c
bcp.exe SPEPL..VSoftSent in VSoftSent.csv -U%user2% -P%pass2% -S%srv2% -e import.err -c
Since you are copying between different servers, BCP is the way to go!
If it were on the same server, it would be a different story.
Are you saying it's from one Sybase ASE host to another Sybase ASE host?
If you don't want to mess with BCP or I/O on the file system, you could create a CIS proxy table in your destination database that references either a stored procedure with a select statement or a physical table in your source database.
Then you could just
insert into destinationtable (col1, col2...)
select
col1, col2...
from proxytablename
CIS proxy is fairly resource intensive, so I'd be very careful about how much work you're doing here.
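For reference, a rough sketch of setting up such a proxy table (the server, login, and proxy table names are placeholders, and the exact sp_addserver options depend on your ASE/CIS configuration):
exec sp_addserver SRCSRV, ASEnterprise, 'srchost:5000'
exec sp_addexternlogin SRCSRV, localuser, remoteuser, remotepassword
create proxy_table proxy_VSoftSent at 'SRCSRV.SPEPL.dbo.VSoftSent'
insert into VSoftSent select * from proxy_VSoftSent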

How to read schema of a PostgreSQL database

I installed an application that uses a PostgreSQL server, but I don't know the name of the database and the tables it uses. Is there any command to see the name of the database and the tables this application uses?
If you are able to view the database using the psql terminal command:
> psql -h hostname -U username dbname
...then, in the psql shell, \d ("describe") will show you a list of all the relations in the database. You can use \d on specific relations as well, e.g.
db_name=# \d table_name
Table "public.table_name"
Column | Type | Modifiers
---------------+---------+-----------
id | integer | not null
... etc ...
Using psql on Linux, you can use the \l command to list databases, \c dbname to connect to that db, and the \d command to list the tables in the db.
Short answer: connect to the default database with psql, and list all databases with '\l'
Then, connect to you database of interest, and list tables with '\dt'
Slightly longer answer: a PostgreSQL server installation usually has a "data directory" (it can have more than one, if two server instances are running, but that's quite unusual), which defines what PostgreSQL calls "a cluster". Inside it, you can have several databases; you usually have at least the defaults 'template0' and 'template1', plus your own database(s).
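Put together, a typical session looks roughly like this (dbname and table_name are placeholders):
> psql -h hostname -U username postgres
postgres=# \l
postgres=# \c dbname
dbname=# \dt
dbname=# \d table_name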

PostgreSQL how to create a copy of a database or schema?

Is there a simple way to create a copy of a database or schema in PostgreSQL 8.1?
I'm testing some software which does a lot of updates to a particular schema within a database, and I'd like to make a copy of it so I can run some comparisons against the original.
If it's on the same server, you just use the CREATE DATABASE command with the TEMPLATE parameter. For example:
CREATE DATABASE newdb WITH TEMPLATE olddb;
pg_dump with the --schema-only option.
If you have to copy the schema from the local database to a remote database, you may use one of the following two options.
Option A
Copy the schema from the local database to a dump file.
pg_dump -U postgres -Cs database > dump_file
Copy the dump file from the local server to the remote server.
scp localuser@localhost:dump_file remoteuser@remotehost:dump_file
Connect to the remote server.
ssh remoteuser@remotehost
Copy the schema from the dump file to the remote database.
psql -U postgres database < dump_file
Option B
Copy the schema directly from the local database to the remote database without using an intermediate file.
pg_dump -h localhost -U postgres -Cs database | psql -h remotehost -U postgres database
This blog post might prove helpful for you if you want to learn more about options for copying the database using pg_dump.
This can be done by running the following command:
CREATE DATABASE [Database to create] WITH TEMPLATE [Database to copy] OWNER [Your username];
Once filled in with your database names and your username, this will create a copy of the specified database. This will work as long as there are no other active connections to the database you wish to copy. If there are other active connections you can temporarily terminate the connections by using this command first:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = '[Database to copy]'
AND pid <> pg_backend_pid();
A good article that I wrote for Chartio's Data School which goes a bit more in depth on how to do this can be found here:
https://dataschool.com/learn/how-to-create-a-copy-of-a-database-in-postgresql-using-psql