Backing up data in unlogged tables - backup

I'm trying to understand whether it's possible to include unlogged tables in a database backup.
http://www.postgresql.org/docs/9.3/static/sql-createtable.html
The page explains that the table is truncated on a crash (as expected), but there's no mention of backups or of taking a daily snapshot.
Does anyone have experience with this?
I'm using PostgreSQL 9.2.

Unlogged tables are always included in dumps (pg_dump) unless you explicitly specify the --no-unlogged-table-data command line option.
Unlogged tables are also included in file-system-level backups taken after the database server has been cleanly shut down.
Unlogged tables are never included in pg_basebackup, streaming replication, or backups based on WAL archiving and PITR. There is no option to include them, because to include them they'd have to be logged, and then they wouldn't be unlogged tables anymore.
In general, if you wish to back up unlogged tables, they probably should not be unlogged, because an unlogged table is erased completely if PostgreSQL or the server crashes or shuts down unexpectedly.
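As a concrete sketch of the pg_dump behaviour described above (database and file names here are placeholders), a daily dump can either include the unlogged rows, which is the default, or skip them:

```shell
# Default: pg_dump includes the data of unlogged tables
pg_dump mydb > /backups/mydb_full.sql

# Keep the unlogged tables' schema but skip their rows
pg_dump --no-unlogged-table-data mydb > /backups/mydb_no_unlogged.sql
```

Either way, remember that after a crash the restored unlogged rows would have been lost on the live server anyway.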

Related

Refreshing Oracle database tables after initial copy is made

I have a production and development database (on different systems of course). Many months ago, I copied the production database to the development system. I used exp/imp at the time. Since then there has been quite a few changes in the production database I would like to copy down to the development database. I'd rather not wipe out the development database and start over because of data I've had to add to the development database.
My original thought was to use MERGE INTO to copy the new records. But this apparently requires me to do this for tables, and list all fields of all tables. We're talking hundreds of tables and thousands of fields here. Not a pretty solution.
Is there an easier way?
Why not use the TABLE_EXISTS_ACTION parameter of impdp to append the new data to the existing tables? Duplicate keys will error off, but the rest of the data should still import. The results will be a bit messy. Prior to running the import, TRUNCATE any tables in test where you can simply bring over the entire production table. Disable foreign keys, and re-enable them after the import.
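A sketch of such an append-only import (the connection string, directory object, dump file, and schema name are all hypothetical):

```shell
# Append rows to tables that already exist in dev;
# rows with duplicate keys will be rejected and logged
impdp system@dev \
  directory=DATA_PUMP_DIR \
  dumpfile=prod_export.dmp \
  schemas=APP_OWNER \
  table_exists_action=append
```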
Another option: create a database link and generate INSERT ... SELECT statements for all tables, inserting only the rows not already present in the existing test tables. You'll probably also want to disable foreign keys before running and re-enable them when done.
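A minimal sketch of the database-link approach for one table, assuming a table t1 with primary key id (all names and credentials are hypothetical):

```shell
sqlplus dev_user@DEV <<'SQL'
-- One-time setup: point the dev database at production
CREATE DATABASE LINK prod_link
  CONNECT TO app_owner IDENTIFIED BY secret USING 'PROD';

-- Copy only the rows dev doesn't already have
INSERT INTO t1
SELECT * FROM t1@prod_link p
 WHERE NOT EXISTS (SELECT 1 FROM t1 l WHERE l.id = p.id);
COMMIT;
SQL
```

A generator query over the data dictionary could emit one such INSERT per table.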

SQL Server 2005 - Copy differences in tables only

I've had an issue with a SQL Server database after an update from some horrible software. The software "updated" (in actuality, rolled-back) a bunch of encrypted stored procedures and user-defined functions in the database, which is now causing errors in other software.
Thankfully I took a backup from just before the update, however the error wasn't noticed until about an hour later which means records have been updated/inserted/deleted etc since the backup was taken.
Originally my idea was to simply copy the stored procedures and functions to a new database created from the backup, then backup and restore this database onto the broken database, but as they are encrypted I can't copy them.
So the next idea was to transfer the tables from the broken database to the restored database and proceed as above. However, I ran into several issues with the existing tables, such as the DBTimeStamp column not allowing the copy. Copying the tables to a new, clean database works fine, though.
So here are the questions:
What's the best way to effectively merge the tables from the backup with the tables from the broken db?
Would simply truncating or dropping the existing tables in the backup avoid these validation errors? I get error messages like "VS_ISBROKEN" when trying to use the export data function to push the data across, whether I drop the existing data set with IDENTITY_INSERT turned on, or truncate.
I have yet to try dropping all the tables in the backup and going from there. Would there be an adverse effect on metadata if I approached the problem this way?
I feel like this should be quite simple, and had the provider not locked down all the functions and stored procedures I wouldn't need to copy the tables out like this.
Thanks for reading :)

PostgreSQL replace table between databases

On a website I have some scripts which work on a temporary database.
Before the scripts are starting I drop and recreate the temporary database from the production database.
At the end of the process I would like to update the production database with the results of these scripts from the temporary database, while keeping the modifications that happened in production while the scripts were running (several hours).
I know REPLACE INTO is not implemented in PostgreSQL, but I found this solution which will work for me: How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
My problem is that this solution requires linking the two databases together, and as far as I can tell that is also not implemented in PostgreSQL. So I have to copy the tables from the temporary database to the live database, renaming them along the way to prevent duplicate values or unwanted changes.
I have approximately 30 tables, but only around half of them are modified by the scripts, so it would be great if I could limit which tables to work on.
As far as I can tell, the only method for this is to pg_dump the tables from the temporary database into a file and load them into the production database, but that's not a very elegant solution and it might be too slow as well.
How can I solve this problem in an effective way?
I have PostgreSQL 8.3 on Linux (Ubuntu) and root access.
Forking the database this way will always generate synchronization issues. Copying databases should be the exception, not the rule.
Instead of creating a database each time, which is costly, requires elevated privileges, and may cause other system problems, you should have your script open a transaction with START TRANSACTION (http://www.postgresql.org/docs/8.3/static/sql-start-transaction.html), do whatever it has to do, and then COMMIT at the end if successful, or ROLLBACK if the script fails.
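In other words, something along these lines (database name and statements are placeholders):

```shell
psql -d production <<'SQL'
START TRANSACTION;
-- ... all the work the scripts currently do against the temporary copy ...
-- UPDATE/INSERT/DELETE statements go here
COMMIT;   -- the changes become visible atomically at this point
-- on any error, issue ROLLBACK instead and production is untouched
SQL
```

Concurrent reads and writes in production proceed normally while the transaction is open, which is exactly the "keep the modifications which happened meanwhile" behaviour being asked for.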

Backup a mysql database with a large number of tables (>100,000)

I know, really bad database design, but here we are, I have some kind of forum platform (Based on PunBB) and for each forum, I've generated a new set of tables. Really bad idea.
Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great, though, as the tables do the job of indexes, and when you make a direct reference to one table it's ultra-fast.
The issue is that now I am trying to back everything up and move to another server. Of course, it takes forever. I've launched a mysqldump:
mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql
And it's still processing, a little more than 12 hours in! The backup is already 546 MB in size and MySQL is still alive and working.
I've tried to copy the MySQL files directly, but I ran into the issue that a lot of the tables were corrupted.
Any idea to speed this up?
If you are using AWS RDS take a snapshot.
If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
If your source database is already corrupt, that's an independent issue.
If you are copying the database and the copy is corrupt, that is because you are doing a "hot copy": you can't copy a database while it's running without a special snapshot tool. Even file systems have such tools. You need a consistent set of files.
I presume from the fact that your tables are corrupted when you copy the files that you are using InnoDB.
It says in the MySQL documentation here
Physical backup tools include the mysqlbackup of MySQL Enterprise Backup for InnoDB or any other tables, or file system-level commands (such as cp, scp, tar, rsync) for MyISAM tables.
You can use MySQL Enterprise Backup to perform fast, reliable, physical hot backups (i.e. while the database is running). I believe it is quite pricey though.
At my last job we ran MySQL instances with over 160,000 tables.
With that many tables, we had to disable innodb_file_per_table and store all the tables in the central tablespace file ibdata1. If we didn't, the server couldn't function efficiently because it had too many open files. This should be easier with MySQL 8.0 but in the old version of MySQL we were using, the data dictionary couldn't scale up to so many tables.
To do backups, we used Percona XtraBackup. This is an open-source tool that works very similarly to MySQL Enterprise Backup. It performs a physical backup of the data directory, but without the risk of file corruption you hit by copying files directly. Percona XtraBackup works by copying files, but it also copies the InnoDB transaction log continually, so the missing bits of the files can be restored. It's very reliable.
Backing up a database with Percona XtraBackup is a little bit faster, but the greater benefit comes when restoring the backup. Restoring a dump file from mysqldump is very slow. Restoring a physical backup like the one produced by Percona XtraBackup can be done as fast as you can copy the backup files into a new data directory, and then start up the MySQL Server.
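A typical XtraBackup run looks roughly like this (paths and credentials are placeholders, and flag names are from recent releases, so check them against the version you install):

```shell
# 1. Take the backup while MySQL is running
xtrabackup --backup --user=root --password=XXXX --target-dir=/backups/full

# 2. Apply the copied transaction log so the data files are consistent
xtrabackup --prepare --target-dir=/backups/full

# 3. Restore on the new server: stop mysqld, empty the data directory, then
xtrabackup --copy-back --target-dir=/backups/full
chown -R mysql:mysql /var/lib/mysql
```

Step 3 is where the speed advantage over mysqldump shows up: it is essentially a file copy rather than a replay of millions of INSERT statements.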
A recent blog from Percona shows the difference:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/

Database Filegroups - Only restore 1 filegroup on new server

Is there a way to backup certain tables in a SQL Database? I know I can move certain tables into different filegroups and preform a backup on these filegroup. The only issue with this is I believe you need a backup of all the filegroups and transaction logs to restore the database on a different server.
The reason why I need to restore the backup on a different server is that these are backups of customers' databases. For example, we may have a remote customer and need to get a copy of their 4 GB database. 90% of this space is taken up by two tables; we don't need these tables as they only store images. Currently we have to take a copy of the database and upload it to an FTP site. With larger databases this can take a lot of time, so we need to reduce the database size.
The other way I can think of doing this would be to take a full backup of the DB and restore it on the client's SQL Server. Then connect to the new temp DB and drop the two tables. Once this is done we could take a backup of the DB. The only issue with this solution is that it could use a lot of system resources while running, so it's less than ideal.
So my idea was to use two filegroups. The primary filegroup would host all of the tables except the two tables which would be in the second filegroup. Then when we need a copy of the database we just take a backup of the primary filegroup.
I have done some testing but have been unable to get it working. Any suggestions? Thanks
Basically your approach using 2 filegroups seems reasonable.
I suppose you're working with SQL Server on both ends, but you should clarify for each server whether that is truly the case, as well as which editions (Enterprise, Standard, Express, etc.) and which releases (2000, 2005, 2008, 2012?).
Table backup in SQL Server is a dead horse that still gets a good whippin' now and again: it's simply not part of the built-in backup feature-set. As you rightly point out, the partial backup feature can be used as a workaround. Also, if you just want to transfer a snapshot of a subset of tables to another server via FTP, you might try working with the bcp utility, as suggested by one of the answers in the linked post, or the export/import data wizards. To round out the list of table-backup solutions and workarounds for SQL Server, there is this (and possibly other?) third-party software that claims to allow individual recovery of table objects, but unfortunately doesn't seem to offer individual object backup: "Object Level Recovery Native" by Red Gate. (I have no affiliation with, or experience using, this particular tool.)
As per your more specific concern about restore from partial database backups :
I believe you need a backup of all the filegroups and transaction logs
to restore the database on a different server
1) You might have some difficulties your first time trying to get it to work, but you can perform restores from partial backups as far back as SQL Server 2000 (as a reference, see here).
2) From 2005 onward you have the option of restoring partially today and, if you need to, later restoring the remainder of your database. You don't need to include all filegroups: you always include the primary filegroup, and if your database is in simple recovery mode you need to add all read-write filegroups.
3) You need to apply log backups only if your db is in bulk-logged or full recovery mode and you are restoring changes to a read-only filegroup that has become read-write since the last restore. Since you are expecting changes to these tables, you will likely not be concerned about read-only filegroups, and so not concerned about shipping and applying log backups.
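For the two-filegroup layout you describe, the partial backup itself can be sketched like this (server, database, and path names are hypothetical):

```shell
# Back up only the primary filegroup (everything except the two image tables)
sqlcmd -S myserver -Q "BACKUP DATABASE CustomerDb FILEGROUP = 'PRIMARY' TO DISK = 'C:\backups\CustomerDb_primary.bak'"

# Or, in simple recovery mode, back up every read-write filegroup at once
sqlcmd -S myserver -Q "BACKUP DATABASE CustomerDb READ_WRITE_FILEGROUPS TO DISK = 'C:\backups\CustomerDb_partial.bak'"
```

Marking the image-table filegroup read-only beforehand lets the second form skip it automatically.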
You might also take some time to investigate whether any of the other SQL Server features (merge replication, or those mentioned above: bcp, the import/export wizards) might provide a solution that is simpler or more adequately meets your needs.