Rename or copy a whole Redis database to another one?

I have an application that gets all its data from a Redis database (DB1), which is updated every hour by an external process. During this update, all the data in Redis is replaced.
To avoid errors in the main application during the update, I thought about having the updater process write to a secondary Redis database (DB2) and, once it finishes, swap this database with the one the application is using.
I didn't find a way to rename or copy a whole Redis database, so the only way I can think of is to erase all keys from DB1 and then use MOVE to bring all the new keys from DB2 into DB1.
Is there a better way to accomplish this?

Why not simply have DB2 SLAVEOF DB1, poll it with INFO, and check for master_sync_in_progress:0?
When you're about to perform your updates to DB1, issue SLAVEOF NO ONE on DB2 (breaking the replication). Perform your updates on DB1 while clients access the static (old) data on DB2; then re-slave DB2 once the updates on DB1 are complete.
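In case it helps, here is a minimal sketch of that cycle using redis-py, assuming DB1 and DB2 are two separate Redis instances on hypothetical local ports 6379 and 6380 (adjust hosts, ports, and the update step to your setup):

import time
import redis

db1 = redis.Redis(host="localhost", port=6379)   # written by the hourly updater
db2 = redis.Redis(host="localhost", port=6380)   # read by the application

def wait_for_sync(replica, poll_interval=1.0):
    """Block until the replica reports the initial sync as finished."""
    while True:
        info = replica.info("replication")
        if info.get("master_sync_in_progress", 1) == 0 and info.get("master_link_status") == "up":
            return
        time.sleep(poll_interval)

# Before the hourly update: break replication so DB2 keeps serving the old data.
db2.slaveof()                      # no arguments means SLAVEOF NO ONE

# ... the external updater now rewrites everything in DB1 ...

# After the update: re-attach DB2 and wait until it holds the fresh data.
db2.slaveof("localhost", 6379)
wait_for_sync(db2)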

Related

Postgres transfer data from local to remote database

I have two servers (let's say x.x.x.x, the main one, and y.y.y.y, the secondary one).
On the main server I have a Django application running that stores its data in a Postgres database; the secondary one is completely empty.
Every minute it creates about 900+ rows in one table, so that table has grown to over 3M rows, and processing all those objects (filtering, sorting) has become really slow because of the sheer volume. However, I only need the rows written within the last 3 days, no more. Still, I cannot simply remove the older data because I will need it for analysis in the future, so I have to keep it somewhere.
What I have in mind is creating another database on the secondary server and keeping all the extra data there. So I need to transfer all the data that is older than 3 days from the local (primary) server to the remote (secondary) server.
The scheduling can be handled with cron, which is a trivial task.
What's not trivial is the command I need to execute in cron. I don't think there is a built-in SQL command to do this, so I'm wondering if this is possible at all.
I think the command should look something like this:
INSERT INTO remote_server:table
SELECT * FROM my_table;
It's also worth mentioning that the table I'm having trouble with is constantly being updated, as described above. So maybe these updates are what's causing the slowness of some filtering and sorting queries.
You have several options:
If you want to stick with the manual copy, you can set up a foreign server on the secondary that connects to the main server, then create a foreign table to access the table from the main server. Access through the foreign table might already be fast enough that you don't actually need to physically copy the data. But if you want to have a "disconnected" copy, you can simply run insert into local_table select * from foreign_table, or create a materialized view that is refreshed through cron.
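As a rough illustration only, the cron-driven "disconnected" copy could be a small script run on the secondary server with psycopg2; the foreign table my_table_remote (created beforehand via postgres_fdw against the main server), the local table my_table_archive, the created_at column, and the connection string are all hypothetical names, not taken from the question:

import psycopg2

# Assumes the one-time setup has already been done on the secondary:
#   CREATE EXTENSION postgres_fdw; CREATE SERVER ...; CREATE USER MAPPING ...;
#   CREATE FOREIGN TABLE my_table_remote (...) SERVER ...;
with psycopg2.connect("host=y.y.y.y dbname=archive user=archiver") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            INSERT INTO my_table_archive
            SELECT * FROM my_table_remote
            WHERE created_at < now() - interval '3 days'
        """)
    # psycopg2 commits the transaction when the connection block exits cleanly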
Another solution that is a bit easier to set up (but probably slower) is to use the dblink module to access the remote server.
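The dblink variant of the same copy might look roughly like this (again a sketch with hypothetical names; it assumes CREATE EXTENSION dblink has been run on the secondary, and the column list in the AS t(...) clause must match the real table definition):

import psycopg2

with psycopg2.connect("host=y.y.y.y dbname=archive user=archiver") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            INSERT INTO my_table_archive
            SELECT * FROM dblink(
                'host=x.x.x.x dbname=appdb user=archiver password=secret',
                'SELECT id, created_at, payload FROM my_table
                 WHERE created_at < now() - interval ''3 days'''
            ) AS t(id bigint, created_at timestamptz, payload text)
        """)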
And finally, you have the option to set up logical replication for that table from the main server to the secondary. Then you don't need any cron job, as any changes on the primary will automatically be applied to the table on the secondary server.
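For completeness, the one-time setup for the logical replication option could look roughly like this, assuming PostgreSQL 10 or later on both servers, wal_level = logical on the main server, an identically defined table on the secondary, and hypothetical names and credentials:

import psycopg2

# On the main (publishing) server: publish the table.
main = psycopg2.connect("host=x.x.x.x dbname=appdb user=repl_admin password=secret")
main.autocommit = True
main.cursor().execute("CREATE PUBLICATION my_table_pub FOR TABLE my_table")

# On the secondary (subscribing) server: subscribe to that publication.
# CREATE SUBSCRIPTION cannot run inside a transaction block, hence autocommit.
secondary = psycopg2.connect("host=y.y.y.y dbname=archive user=repl_admin password=secret")
secondary.autocommit = True
secondary.cursor().execute("""
    CREATE SUBSCRIPTION my_table_sub
    CONNECTION 'host=x.x.x.x dbname=appdb user=repl_admin password=secret'
    PUBLICATION my_table_pub
""")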

SQL Azure Database Copy status

Copying a database in the Azure Portal is never ending.
Usually, when I copy a 250GB database, it completes in just under an hour.
Today, the copy never seems to finish; it has been running for two to three hours now.
In the server activity logs, the last entry just says an update occurred.
Any idea how to see more progress, a percent complete, or any other way to see what might be locking it? Nothing of use can be seen in the activity log JSON.
You can use sys.dm_operation_status to track many operations, including database copies, in SQL Azure.
The documentation states:
To use this view, you must be connected to the master database. Use the sys.dm_operation_status view in the master database of the SQL Database server to track the status of the following operations performed on a SQL Database:
Below are the operations that can be tracked:
Create database
Copy database. Database Copy creates a record in this view on both the source and target servers.
Alter database
Change the performance level of a service tier
Change the service tier of a database, such as changing from Basic to Standard.
Setting up a Geo-Replication relationship
Terminating a Geo-Replication relationship
Restore database
Delete database
You can also try sys.dm_database_copies in the master database for information about copy status. This has a percent_complete field, and below is what the documentation has to say about it:
The percentage of bytes that have been copied. Values range from 0 to 100. SQL Database may automatically recover from some errors, such as failover, and restart the database copy. In this case, percent_complete would restart from 0.
Note: this view has information only for the duration of the copy operation.
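For reference, polling both views from the logical master database could look roughly like this with pyodbc (server name, credentials, and driver are placeholders):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=serveradmin;PWD=secret;Encrypt=yes"
)
cur = conn.cursor()

# High-level state of recent management operations, including database copies.
cur.execute("""
    SELECT major_resource_id, operation, state_desc, percent_complete, last_modify_time
    FROM sys.dm_operation_status
    ORDER BY start_time DESC
""")
for row in cur.fetchall():
    print(row)

# Byte-level copy progress; rows exist only while a copy is in flight.
cur.execute("SELECT database_id, start_date, percent_complete FROM sys.dm_database_copies")
for row in cur.fetchall():
    print(row)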

Triggers in a Postgresql slave, not affecting primary database

What I need is a replica of my database on which I can create some triggers that do things when certain events occur.
At the moment:
I have a primary database A in PostgreSQL.
I have just set up a slave database B using WAL.
The problem is that B is, of course, a read-only database, so I can't create my triggers on it.
Why do it on a slave? Because I would like to affect the main project, which uses the primary database, as little as possible.
I tried stopping the replication, creating the triggers on B, and then starting it again, but it never starts replicating again; I guess because it detects that the schemas differ... :(
Any idea or alternative to achieve the same goal? Does it make sense to use master-master replication? But in that case, would the triggers created on my "slave" be replicated to the "primary" database?
Note: I can't use BDR to replicate because I have to provide support for PostgreSQL 9.0

MySQL backup process slowing down inserts and updates

Currently I am using the mysqldump program to create backups; below is an example of how I run it:
mysqldump --opt --skip-lock-tables --single-transaction --add-drop-database
--no-autocommit -u user -ppassword --databases db > dbbackup.sql
I perform a lot of inserts and updates on my database throughout the day, but when this backup process starts it can really slow those inserts and updates down. Does anyone see any flaw in the way I am backing it up (e.g. tables being locked), or is there a way I can improve the backup process so it doesn't affect my inserts and updates as much?
Thanks.
The absolute best way of doing this without disturbing the production database is to have master/slave replication set up; you then do the dump from the slave database.
More on MySQL replication here http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
Even with --skip-lock-tables, mysqldump will issue a READ lock on every table to keep consistency.
Using mysqldump will always lock your database. To perform hot MySQL backups, you will need to either set up a slave (which implies some cost) or use a dedicated tool like Percona XtraBackup (http://www.percona.com/software/percona-xtrabackup), and that only applies if your database is InnoDB (we use XtraBackup on slaves for terabytes of data without an issue; if your database is not as big, having a slave and locking it to perform the backup shouldn't be that big of a deal :) )
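If you do go the replica route, the dump command itself stays the same and is simply pointed at the replica. A quick sketch (the replica host name and output path are placeholders):

import subprocess

with open("dbbackup.sql", "w") as out:
    subprocess.run(
        [
            "mysqldump",
            "--opt", "--skip-lock-tables", "--single-transaction",
            "--add-drop-database", "--no-autocommit",
            "--host=replica.internal", "--user=user", "--password=password",
            "--databases", "db",
        ],
        stdout=out,
        check=True,
    )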

SQL Server replication using triggers

I have two databases (db1, db2) that reside on different servers: db1 resides on dbserver1, db2 on dbserver2.
Now I want to replicate data from db1 (old schema) to the new schema in db2 in REAL TIME. What is the best/most efficient approach here?
The first thing that comes to my mind is triggers: is it possible to have a trigger in db1 that inserts/updates records in db2? Is there any other approach? Thanks.
[db1.OldSchema] => [db2.NewSchema]
ADDITIONAL: this is one-way sync only, because db2 will be used only for reporting.
This question is probably best suited for Database Administrators, but the short answer is that there are a variety of methods you can use:
Scheduled backup/restores (if you're happy to blow away your 2nd DB on each restore)
Log shipping (passes over changes since last updated)
SSIS package (if you need to change the structure of the database, i.e. transform it, then this is a good method; if the structure is the same, use one of the other methods)
Replication (since you seem to want one-way sync, I'd suggest transactional replication; this is the closest you'll probably get to real time, but it should not be entered into lightly, as it will have an impact on how you work with both databases)