PostgreSQL replace table between databases - sql

On a website I have some scripts which work on a temporary database.
Before the scripts start, I drop and recreate the temporary database from the production database.
At the end of the process I would like to update the production database with the results of these scripts from the temporary database, while keeping the modifications that happened in production while the scripts were running (several hours).
I know REPLACE INTO is not implemented in PostgreSQL, but I found this solution which will work for me: How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
My problem is that this solution requires linking the two databases together, and as far as I can tell that is also not implemented in PostgreSQL, so I have to copy the tables from the temporary database into the live database, renaming them along the way to prevent duplicate values or unwanted changes.
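For context, on 8.3 (which has no INSERT ... ON CONFLICT) the kind of per-table merge I have in mind boils down to something like this; the table and column names here are just placeholders for illustration:

```sql
-- Sketch only: tmp_customers stands in for a copied/renamed staging table.
BEGIN;

-- Update rows that already exist in production.
UPDATE customers c
SET    name  = t.name,
       email = t.email
FROM   tmp_customers t
WHERE  c.id = t.id;

-- Insert the rows that are still missing.
INSERT INTO customers (id, name, email)
SELECT t.id, t.name, t.email
FROM   tmp_customers t
WHERE  NOT EXISTS (SELECT 1 FROM customers c WHERE c.id = t.id);

COMMIT;
```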
I have approximately 30 tables, but only around half of them are modified by the scripts, so it would be great if I could limit the tables to work on.
As far as I can tell, the only method for this is to pg_dump the tables from the temporary database into a file and load them into the production database, but that is not a very elegant solution and may be too slow as well.
How can I solve this problem in an efficient way?
I have PostgreSQL 8.3 on Linux (Ubuntu) and root access.

Forking the database this way will always generate synchronization issues. Copying databases should be the exception, not the rule.
Instead of creating a database, which is costly, requires elevated permissions and may cause other system problems, have your script open a transaction with START TRANSACTION (http://www.postgresql.org/docs/8.3/static/sql-start-transaction.html), do whatever it has to do, and then COMMIT at the end if successful, or ROLLBACK if the script fails.
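A minimal sketch of that pattern (the statements in the middle are placeholders for whatever your script actually does):

```sql
START TRANSACTION;

-- ... the script's work goes here, e.g. ...
UPDATE some_table SET some_column = 'new value' WHERE id = 1;
INSERT INTO some_log (message) VALUES ('script ran');

COMMIT;      -- publish everything at once if the script succeeded
-- ROLLBACK; -- or discard everything if any step failed
```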

Related

Refreshing Oracle database tables after initial copy is made

I have a production and development database (on different systems of course). Many months ago, I copied the production database to the development system. I used exp/imp at the time. Since then there have been quite a few changes in the production database that I would like to copy down to the development database. I'd rather not wipe out the development database and start over, because of data I've had to add to the development database.
My original thought was to use MERGE INTO to copy the new records. But this apparently requires me to do this for each table, and to list all fields of all tables. We're talking hundreds of tables and thousands of fields here. Not a pretty solution.
Is there an easier way?
Why not use the TABLE_EXISTS_ACTION parameter of impdp to append the new data to the existing tables? Duplicate keys will error off, but the rest of the data should still import. The results will be a bit messy. Prior to running the import, TRUNCATE any tables in test where you can just bring over the entire production table. Disable FKs. Re-enable them after the import.
Another option is to create a database link and generate INSERT/SELECT statements into all tables for data not already in the existing test tables. You will probably also want to disable FKs prior to running and re-enable them when done.
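A rough sketch of that approach, with invented link, table and constraint names (adjust to your own schema):

```sql
-- Create a link from test to production (credentials are placeholders).
CREATE DATABASE LINK prod_link
  CONNECT TO app_owner IDENTIFIED BY app_password
  USING 'PRODDB';

-- Disable the FK, copy over the missing rows, then re-enable it.
ALTER TABLE orders DISABLE CONSTRAINT fk_orders_customer;

INSERT INTO orders
SELECT p.*
FROM   orders@prod_link p
WHERE  NOT EXISTS (SELECT 1 FROM orders o WHERE o.order_id = p.order_id);

ALTER TABLE orders ENABLE CONSTRAINT fk_orders_customer;
```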

Can I use postgres_fdw without foreign tables defined?

I have a production database "PRODdb1", with a read-only user account. I need to query (SELECT statement) this database and insert the data into a secondary database named "RPTdb1". I originally planned to just create a temp table in PRODdb1 from my SELECT, but permissions are the issue.
I've read about dblink & postgres_fdw, but are either of these a solution for my issue? I wouldn't be creating foreign tables because my SELECT joins many tables from PRODdb1, so I'm not sure whether postgres_fdw would still be an option for my use case.
Another option would be any means of getting the results of the SELECT to a .CSV file or something. My main blocker here is that I only have a read-only user to work with, and there is no way around that issue.
The simple answer is no. You cannot use postgres_fdw without defining a foreign table in your RPTdb1. This should not be much of an issue, though, since it is quite easy to create the foreign tables.
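For what it's worth, a rough sketch of the setup on RPTdb1 (server, schema and table names are placeholders; IMPORT FOREIGN SCHEMA needs 9.5 or later, otherwise you write the CREATE FOREIGN TABLE statements by hand):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER prod_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'prod-host', dbname 'PRODdb1', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER prod_srv
  OPTIONS (user 'readonly_user', password 'secret');

CREATE SCHEMA IF NOT EXISTS prod;

-- Pull the remote table definitions in rather than declaring each one by hand.
IMPORT FOREIGN SCHEMA public
  LIMIT TO (orders, customers)
  FROM SERVER prod_srv
  INTO prod;
```

Your JOIN-heavy SELECT can then run against the foreign tables in the prod schema and INSERT its results into local tables in RPTdb1.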
I am in much the same boat as you. We use a 3rd party product (based on Postgres 9.3) for our production database and the user roles we have are very restrictive (i.e. read-only access, no replication, no ability to create triggers/functions/tables/etc).
I believe that postgres_fdw has the functionality you are looking for, with one caveat. Your local reporting server needs to be running PostgreSQL version 10 (or 9.6 at a minimum). We currently use 9.3 on our local server, and while simple queries work beautifully, anything more complicated takes forever, because the FDW in 9.3 tries to pull all of the data in the table before it is able to do JOINs or even apply the WHERE clause.
version 9.6: Pushes down JOIN to the remote database before returning results.
version 10: Pushes down aggregates such as COUNT and SUM to the remote database before returning results.
(I am not sure which version adds the ability to push down WHERE statements to the remote DB, but I know it was not possible in 9.5).
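If it helps, one way to see what is being pushed down on a given version is EXPLAIN VERBOSE on the reporting side; the "Remote SQL" lines in the plan show the query actually sent to the production server (table names below are just for illustration):

```sql
EXPLAIN (VERBOSE, COSTS OFF)
SELECT c.name, count(*)
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
GROUP  BY c.name;
```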
We are in the process of upgrading our local server to version 10 this week. I can try to keep you updated with our progress, feel free to do the same.

SQL Server 2005 - Copy differences in tables only

I've had an issue with a SQL Server database after an update from some horrible software. The software "updated" (in actuality, rolled-back) a bunch of encrypted stored procedures and user-defined functions in the database, which is now causing errors in other software.
Thankfully I took a backup just before the update; however, the error wasn't noticed until about an hour later, which means records have been updated/inserted/deleted etc. since the backup was taken.
Originally my idea was to simply copy the stored procedures and functions to a new database created from the backup, then backup and restore this database onto the broken database, but as they are encrypted I can't copy them.
So the next idea was to transfer the tables from the broken database to the restored database and proceed as above. However, I ran into several issues with the existing tables, such as the DBTimeStamp column not allowing the copy. Copying the tables to a new, clean database works fine, though.
So here are the questions:
What's the best way to effectively merge the tables from the backup with the tables from the broken db?
Would simply truncating or dropping the existing tables in the backup avoid these validation errors? I get error messages like "VS_ISBROKEN" etc. when trying to use the export data function to push the data across, even with the existing data set dropped and Identity_Insert turned on (when truncating).
I have yet to try dropping all the tables on the backup and going from there. Would there be an adverse effect with Metadata if I approached the problem this way?
I feel like this should be quite simple, and had the provider not locked down all the functions and stored procedures I wouldn't need to copy the tables out like this.
Thanks for reading :)

Temporary Tables Quick Guide

I have a structured database and software to handle it, and I wanted to set up a demo version based on a simple template version. I'm reading through some resources on temporary tables, but I have questions.
What is the best way to go about cloning a "temporary" database while keeping a clean list of databases?
From what I've seen, there are two ways to do this - temporary local versions that are terminated at the end of the session, and tables that are stored in the database until deleted by the client or me.
I think I would prefer the 2nd option, because I would like to be able to see what they do with it. However, I do not want to add a ton of throw-away databases and clutter up my system.
How can I a) schedule these for deletion after, say, 30 days, and b) if possible, keep these all under one umbrella? In other words, is there a way to keep them out of my main list of databases and grouped by themselves?
I've thought about having one database and then serving up the information by using a unique ID for the user and 'faux indexes' so that it appears as 1,2,3 instead of 556,557,558 to solve B. I'm unsure how I could solve A, other than adding a date and protected columns and having a script that runs daily and deletes if over 30 days and not protected.
I apologize for the open-ended question, but the resources I've found are a bit ambiguous.
These aren't true temp tables in the sense that your DBMS knows them. What you're looking for is a way to have a demo copy of your database, probably with a cut-down data set. It's really no different from having any other non-production copy of your database.
Don't do this on your production database server.
Do not do this on your production database server.
Script the creation of your database schema. Depending on the DBMS you're using, this may be pretty easy. If you've got a good development/deployment/maintenance process for your system, this should already exist.
Create your database on the non-production server using the script(s) generated in the previous step. Use an easily-identifiable naming convention, like starting the database name with demo.
Load any data required into the tables.
Point the demo version of your app (that's running on your non-production servers) at this new database.
Create a script/process/job which looks at your database server and drops any databases that match your demo DB naming convention and were created more than 30 days ago.
Without details about your actual environment, people can't give concrete examples/sample code/instructions.
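That said, purely as an illustrative sketch, assuming PostgreSQL: pg_database stores no creation time, so the creation date has to be recorded somewhere, here in an invented demo_registry table maintained by the provisioning script.

```sql
CREATE TABLE IF NOT EXISTS demo_registry (
    dbname     text PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- A nightly job can generate the DROP statements for expired demo databases.
SELECT format('DROP DATABASE %I;', r.dbname)
FROM   demo_registry r
JOIN   pg_database d ON d.datname = r.dbname
WHERE  r.created_at < now() - interval '30 days';
```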
If you cannot run a second, independent database server for these demos, then you will have to make do with your production server. This is still a bad idea because of potential security exposures and performance impact on your production database (constrained resources).
Create a complete copy of your database (or at least the schema, with a reduced data set) for each demo.
Create a unique set of credentials for each of these demo databases. This account should have access to only its demo database.
Configure the demo instance(s) of your application to connect to the demo database.
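For the credentials step, a sketch in PostgreSQL syntax (the role, database name and password are invented):

```sql
CREATE ROLE demo_acme_user LOGIN PASSWORD 'change-me';

-- Make sure only this account can connect to its demo database.
REVOKE CONNECT ON DATABASE demo_acme FROM PUBLIC;
GRANT  CONNECT ON DATABASE demo_acme TO demo_acme_user;

-- Run inside demo_acme itself:
GRANT USAGE ON SCHEMA public TO demo_acme_user;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON ALL TABLES IN SCHEMA public TO demo_acme_user;
```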
Here's why I'm pushing so hard for separate databases: If you keep copying your "demo" tables within the database, you will have to update your application code to point at those tables each time you do a new demo. Once you start doing this, you're taking a big risk with your demos - the code you keep changing isn't really the application you're running in production anymore. And if you miss one of those changes, you'll get unexpected results at best, and mangling of your production data at worst.

Best practices for writing SQL scripts for deployment

I was wondering what the best practices are for writing SQL scripts to set up databases for production and/or development. For instance:
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
Is it correct to disable FK checks before executing the body of the script?
May I include the whole script in a transaction?
Is it better to generate one script per database than one script for all of them?
Thanks!
The problem with your question is that it is hard to answer, as it depends on the way the scripts are used and what you are trying to achieve. You also don't say which DB server you are using, as there are tools provided that can make some tasks easier.
Taking your points in order, here are some suggestions, which will probably be very different to everyone else's :)
Should I include the CREATE DATABASE statement?
What alternative are you thinking of using? If your question is whether you should put the CREATE DATABASE statement in the same script as the table creation, it depends. When developing a DB I use a separate create-DB script, as I have a script to drop all objects, so I don't need to create the database again.
Should I create users for the database in the same script?
I wouldn't, simply because the users may well change but your schema has not. Might as well manage those changes in a smaller script.
Is it correct to disable FK checks before executing the body of the script?
If you are importing the data in an attempt to recover the database then you may well have to, if you are using auto-increment IDs and want to keep the same values. Also, you may end up importing the tables "out of order" and not want the checks performed.
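How you disable the checks depends on the DB server; two common forms, shown with invented table/constraint names:

```sql
-- MySQL: switch FK checks off for the session around the import.
SET FOREIGN_KEY_CHECKS = 0;
-- ... bulk INSERTs here ...
SET FOREIGN_KEY_CHECKS = 1;

-- SQL Server: disable a specific constraint, then re-validate it afterwards.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;
-- ... bulk INSERTs here ...
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```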
May I include the whole script in a transaction?
Yes, you can, but again it depends on the type of script you are running. If you are importing data after rebuilding a db then the whole import should work or fail. However, your transaction file is going to be huge during the import.
Is it better to generate one script per database than one script for all of them?
Again, for maintenance purposes it's probably better to keep them separate.
This probably depends on what kind of database it is and how it is used and deployed. I am developing an n-tier standard application that is deployed at many different customer sites.
I do not add a CREATE DATABASE statement in the script. Creating the database is part of the installation script, which allows the user to choose the server, database name and collation.
I have no knowledge about the users at my customers' sites, so I don't add create-user statements; the only user that needs access to the database is the user executing the middle tier application.
I do not disable FK checks. I need them to protect the consistency of the database, even if it is I who wrote the body scripts. I use FK to capture my errors.
I do not include the entire script in one transaction. I require the users to take a backup of the db before they run any db upgrade scripts. For creating a new database there is nothing to protect, so running in a transaction is unnecessary. For upgrades there are sometimes extensive changes to the db. A couple of years ago we switched from varchar to nvarchar in about 250 tables. Not something you would like to do in one transaction.
I would recommend you to generate one script per database and version control the scripts separately.
Direct answers; please ask if you need me to expand on any point.
* Should I include the CREATE DATABASE statement?
Normally I would include it since you are creating and owning the database.
* Should I create users for the database in the same script?
This is also a good idea, especially if your application uses specific users.
* Is it correct to disable FK checks before executing the body of the script?
If the script includes data population, then it helps to disable them so that the insert order is not too important; otherwise you can end up with complex scripts that insert (without the FK link), create the FK record, and then update the FK column.
* May I include the whole script in a transaction?
This is normally not a good idea, especially if data population is included, as the transaction can become quite large and unwieldy. Since you are creating the database, just drop it and start again if something goes awry.
* Is better to generate 1 script per database than one script for all of them?
One per database is my recommendation so that they are isolated and easier to troubleshoot if the need arises.
For development purposes it's a good idea to create one script per database object (one script for each table, stored procedure, etc). If you check them into your source control system that way then developers can check out individual objects and you can easily keep track of versions and know what changed and when.
When you deploy you may want to combine the changes for each release into one single script. Tools like Red Gate SQL compare or Visual Studio Team System will help you do that.
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
That depends on your DBMS and your customer.
In an Oracle environment you will probably never be allowed to do such a thing (mainly because in the Oracle world a "database" is something completely different than e.g. in the PostgreSQL or MySQL world).
Sometimes the customer will have a DBA that won't let you create databases (or schemas or users - depending on the DBMS in use). So you will need to supply that information to the DBA in order for him/her to prepare the environment for your script.
May I include the whole script in a transaction?
That totally depends on the DBMS that you are using.
Some DBMS don't support transactional DDL and will implicitly commit any transaction when you execute a DDL statement, so you need to consider the order of your installation script.
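A quick way to see which camp your DBMS is in (PostgreSQL shown, which does support transactional DDL; in Oracle or MySQL the CREATE TABLE would commit implicitly and the ROLLBACK would not undo it; the table name is just for illustration):

```sql
BEGIN;
CREATE TABLE deploy_smoke_test (id integer PRIMARY KEY);
ROLLBACK;   -- in PostgreSQL the table is gone again after this
```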
For populating the tables with data I would definitely try to do that in a single transaction, but again this depends on your DBMS.
Some DBMS are faster if you commit only once or very seldom (Oracle and PostgreSQL fall into this category) but will slow down if you commit more often.
Other DBMS handle smaller but more frequent transactions better and will slow down if the transactions get too big (SQL Server and MySQL tend to fall in that direction).
The best practices differ considerably depending on whether it is the first-time set-up or a new version being pushed. For the first-time set-up, yes, you need create database and create table scripts. For a new version, you need to script only the changes from the previous version, so no create database and no create table unless it is a new table. Now you need alter table statements because you don't want to lose the existing data. I usually write stored procs, functions and views with a drop and create statement, as dropping those objects doesn't generally affect the underlying data.
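As a hypothetical fragment of such an upgrade script (T-SQL flavour; the table, column and procedure names are invented):

```sql
ALTER TABLE dbo.Customer ADD PreferredName nvarchar(100) NULL;
GO

-- Stored procs and views can safely be dropped and recreated.
IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomer;
GO

CREATE PROCEDURE dbo.GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, Name, PreferredName
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
GO
```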
I find it best to create all database changes with scripts that are stored in source control under the version. So if a client is new, you run the create version 1.0 scripts, then apply all the other versions in order. If a client is just upgrading from version 1.2 to version 1.3, then you run just the scripts in version 1.3 source control repository. This would also include scripts to populate or add records to lookup tables.
For transactions you may want to break them up into several chunks so as not to leave a prod database locked in one transaction.
We also write reversal scripts to return to the old version if need be. This makes life easier if you have a part of a change that causes unanticipated problems on prod (usually performance issues).