What I need is a replica of my database on which I can create some triggers that react to certain events.
At the moment:
I have a primary database A in PostgreSQL.
I have just set up a slave database B using WAL-based replication.
So B is, of course, a read-only database, which means I can't create my triggers on it.
Why do it on a slave? Because I want to affect the main project, which uses the primary database, as little as possible.
I tried stopping the replication, creating the triggers on B and then starting it again, but it never starts replicating again; I guess it detects that the schemas differ... :(
Any idea or alternative that achieves the same goal? Does it make sense to use master-master replication? And in that case, would the triggers created on my "slave" also be replicated to the "primary" database?
Note: I can't use BDR for the replication because I have to support PostgreSQL 9.0.
Related
I have two servers (let's say x.x.x.x - main one and y.y.y.y - secondary one)
On the main server I have a Django application running which stores its data in a Postgres database; the secondary one is completely empty.
Every minute the application writes 900+ rows into one table, so it eventually grew to over 3M rows, and processing all those objects (filtering, sorting) became really slow because of the sheer volume. However, I only need the rows written within the last 3 days, no more. Still, I cannot simply delete the older data because I need it for analysis in the future, so I have to keep it somewhere.
What I'm thinking about is creating another database on the secondary server and keeping all the extra data there. So I need to transfer all data older than 3 days from the local (primary) server to the remote (secondary) server.
The scheduling can be handled with cron, which is a trivial task.
What's not trivial is the command I need to execute in cron. I don't think there is a built-in SQL command to do this, so I'm wondering if this is possible at all.
I think the command should look something like this
INSERT INTO remote_server:table
SELECT * FROM my_table;
Also, it's worth mentioning that the table I'm having trouble with is constantly being updated, as I wrote above, so maybe those updates are what causes the speed problems when running filter or sorting queries.
You have several options:
If you want to stick with the manual copy, you can set up a foreign server that connects from the secondary to the main server, then create a foreign table to access the table on the main server. Maybe access through the foreign table is already fast enough so that you don't actually need to physically copy the data. But if you want a "disconnected" copy, you can simply run insert into local_table select * from foreign_table or create a materialized view that is refreshed through cron.
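A minimal sketch of that setup, run on the secondary server; the connection details, column list and the local archive table name (my_table_archive) are placeholders, not taken from the question:

-- On the secondary server: make the main server's table visible via postgres_fdw.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER main_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'x.x.x.x', dbname 'django_db', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER main_server
    OPTIONS (user 'archiver', password 'secret');

-- One foreign table mirroring the busy table on the main server.
CREATE FOREIGN TABLE my_table_remote (
    id         bigint,
    created_at timestamptz,
    payload    text
) SERVER main_server OPTIONS (schema_name 'public', table_name 'my_table');

-- "Disconnected" copy of everything older than 3 days, suitable for a cron job.
INSERT INTO my_table_archive
SELECT * FROM my_table_remote
WHERE created_at < now() - interval '3 days';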
Another solution that is a bit easier to set up (but probably slower) is to use the dblink module to access the remote server.
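A comparable dblink sketch, again with placeholder connection details, columns and table names:

-- On the secondary server: pull rows older than 3 days through dblink.
CREATE EXTENSION IF NOT EXISTS dblink;

INSERT INTO my_table_archive (id, created_at, payload)
SELECT id, created_at, payload
FROM dblink('host=x.x.x.x dbname=django_db user=archiver password=secret',
            'SELECT id, created_at, payload
               FROM my_table
              WHERE created_at < now() - interval ''3 days''')
     AS t(id bigint, created_at timestamptz, payload text);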
And finally you have the option to set up logical replication for that table from the main server to the secondary. Then you don't need any cron job, as any changes on the primary will automatically be applied to the table on the secondary server.
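For completeness, a sketch of built-in logical replication (requires PostgreSQL 10 or later on both servers, plus a table with the same definition already created on the secondary; names and the connection string are placeholders):

-- On the main server (publisher):
CREATE PUBLICATION my_table_pub FOR TABLE my_table;

-- On the secondary server (subscriber):
CREATE SUBSCRIPTION my_table_sub
    CONNECTION 'host=x.x.x.x dbname=django_db user=replicator password=secret'
    PUBLICATION my_table_pub;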
I have a production database "PRODdb1" with a read-only user account. I need to query (SELECT statement) this database and insert the data into a secondary database named "RPTdb1". I originally planned to just create a temp table in PRODdb1 from my SELECT, but permissions are the issue.
I've read about dblink and postgres_fdw, but is either of these a solution for my issue? I wasn't planning on creating foreign tables because my SELECT joins many tables from PRODdb1, so I'm not sure whether postgres_fdw would still be an option for my use case.
Another option would be any means of getting the results of the SELECT into a .CSV file or something similar. My main blocker here is that I only have a read-only user to work with, and there is no way around that.
The simple answer is no: you cannot use postgres_fdw without defining a foreign table in your RPTdb1. This should not be much of an issue though, since it is quite easy to create the foreign tables.
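For example, assuming RPTdb1 is recent enough to support IMPORT FOREIGN SCHEMA, all the PRODdb1 tables involved in the join can be exposed in one go (the server, schema, table and column names below are purely illustrative):

-- On RPTdb1: expose the PRODdb1 tables needed by the report query.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER proddb1
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'prod-host', dbname 'PRODdb1');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER proddb1
    OPTIONS (user 'readonly_user', password 'secret');

CREATE SCHEMA prod;

-- Creates a local foreign table for every listed table in the remote schema.
IMPORT FOREIGN SCHEMA public
    LIMIT TO (orders, customers, products)
    FROM SERVER proddb1
    INTO prod;

-- The multi-table SELECT then runs on RPTdb1 and writes into a real local table.
INSERT INTO report_data
SELECT o.id, c.name, p.sku
FROM prod.orders o
JOIN prod.customers c ON c.id = o.customer_id
JOIN prod.products  p ON p.id = o.product_id;

Because only SELECT statements are ever issued against PRODdb1 for this, the read-only account is sufficient on the production side.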
I am in much the same boat as you. We use a 3rd party product (based on Postgres 9.3) for our production database and the user roles we have are very restrictive (i.e. read-only access, no replication, no ability to create triggers/functions/tables/etc).
I believe that postgres_fdw has the functionality you are looking for, with one caveat: your local reporting server needs to be running PostgreSQL version 10 (or 9.6 at a minimum). We currently use 9.3 on our local server, and while simple queries work beautifully, anything more complicated takes forever, because the FDW in 9.3 pulls all the data in the table before it is able to do JOINs or even apply the WHERE clause.
version 9.6: Pushes down JOIN to the remote database before returning results.
version 10: Pushes down aggregates such as COUNT and SUM to the remote database before returning results.
(I am not sure which version adds the ability to push down WHERE clauses to the remote DB, but I know it was not possible in 9.5.)
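One way to see what your local version actually pushes down is EXPLAIN (VERBOSE) on a query against the foreign tables; the "Remote SQL" line in the plan shows exactly what is sent to the remote server (table names below are placeholders):

EXPLAIN (VERBOSE, COSTS OFF)
SELECT c.name, count(*)
FROM prod.orders o
JOIN prod.customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '7 days'
GROUP BY c.name;
-- On 9.6+ the plan can show a single Foreign Scan whose Remote SQL contains the JOIN
-- (and on 10+ the aggregate); on older versions the tables are fetched separately
-- and joined locally.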
We are in the process of upgrading our local server to version 10 this week. I can try to keep you updated with our progress, feel free to do the same.
I have two databases (db1, db2) that reside on different servers: db1 resides on dbserver1, db2 on dbserver2.
Now I want to replicate data from db1 (old schema) to the new schema in db2 in REAL TIME. What is the best/most efficient approach here?
The first thing that comes to mind is triggers. Is it possible to have a trigger in db1 that inserts/updates records in db2? Is there any other approach? Thanks.
[db1.OldSchema] => [db2.NewSchema]
ADDITIONAL: this is only a one-way sync, because db2 will be used only for reporting.
This question is probably best for Database Administrators, but short answer is that there's a variety of methods you can use:
Scheduled backup/restores (if you're happy to blow away your 2nd DB on each restore)
Log shipping (passes over changes since last updated)
SSIS package (if you need to change the structure of the database, i.e. transform the data, then this is a good method; if the structure is the same, use one of the other methods)
Replication (since you seem to want one-way sync, I'd suggest transactional replication; this is probably the closest you'll get to real time, but it should not be entered into lightly as it will have an impact on how you work with both databases)
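If you do want to experiment with the trigger idea from the question, here is a minimal sketch assuming both databases are PostgreSQL with the dblink extension available (the question doesn't name the RDBMS, and every connection, schema, table and column name below is hypothetical):

-- In db1: push each new row to db2 as part of the writing transaction.
CREATE EXTENSION IF NOT EXISTS dblink;

CREATE OR REPLACE FUNCTION push_to_db2() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec(
        'host=dbserver2 dbname=db2 user=sync_user password=secret',
        format('INSERT INTO new_schema.orders (id, amount) VALUES (%L, %L)',
               NEW.id, NEW.amount));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_push
AFTER INSERT ON old_schema.orders
FOR EACH ROW EXECUTE PROCEDURE push_to_db2();

Keep in mind that this makes every write in db1 depend on db2 being reachable, which is exactly why the built-in replication options above are usually preferred.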
I have two applications using two nearly identical MySQL databases within the same cluster. Some tables must contain separate data, but others should hold identical contents (i.e. all writes and rows in db1.tbl should be accessible in db2.tbl and vice versa).
What's the proper way to go about this? Note that the applications use hardcoded table (but not database) names, so simply telling application 2 to access db1.tbl is not an option.
What you need to do is set up replication for the tables that you need. See http://dev.mysql.com/doc/refman/5.0/en/replication.html for the documentation on setting up replication in MySQL.
For databases on different mysqld processes
You should check the official manual for replicating individual tables:
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-do-table
You can set up a master-master relation between the two mysqld processes; just keep in mind to be careful and ensure uniqueness of your primary keys.
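As a rough sketch of what that looks like in my.cnf (the option names come from the manual page linked above; the database/table names and values are only examples):

# my.cnf on one of the two mysqld processes
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
# Replicate only the shared table coming from the other master:
replicate-do-table       = db1.tbl
# If the database name differs on the other server, map it with
#   replicate-rewrite-db = db2->db1
# Keep auto-increment primary keys unique across the two masters:
auto_increment_increment = 2
auto_increment_offset    = 1
# The second server would use server-id = 2 and auto_increment_offset = 2.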
For databases residing on the same server & mysqld service
IMHO, design-wise you should consider moving all your shared tables into a separate database.
This way you avoid all the overkill of triggers for keeping them updated.
I know SQL Server 2008 can do this, but essentially I need a way to log all changes made to a database. I don't need to log SELECTs, and I don't need to log the user; the only important data is what has been added or changed, both in terms of data and structural changes like columns, tables, and indexes.
What are my options?
I've used AutoAudit quite a bit; you simply apply it to whatever tables you wish to audit.
The main drawback is that it requires a single-column PK, but most of my tables have surrogate identity PKs, so it fits that design philosophy.
Event Notifications can be deployed to monitor all schema changes at the database and even the entire instance level.
Global data changes are not possible to monitor out of the box. You can select specific tables to monitor and deploy trigger-based monitoring. There are also low-impact, log-based solutions, but not out of the box; they all need third-party tools.
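As a minimal sketch of the trigger-based monitoring mentioned above (T-SQL; the dbo.Orders table, its OrderId key and the audit table are hypothetical examples, not part of any particular product):

-- Hypothetical audit table plus a trigger that records every data change on one table.
CREATE TABLE dbo.OrdersAudit (
    AuditId   INT IDENTITY(1,1) PRIMARY KEY,
    OrderId   INT       NOT NULL,
    Operation CHAR(1)   NOT NULL,              -- 'I', 'U' or 'D'
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- New or updated rows appear in the "inserted" pseudo-table.
    INSERT INTO dbo.OrdersAudit (OrderId, Operation)
    SELECT i.OrderId,
           CASE WHEN EXISTS (SELECT 1 FROM deleted d WHERE d.OrderId = i.OrderId)
                THEN 'U' ELSE 'I' END
    FROM inserted i;

    -- Deleted rows appear only in the "deleted" pseudo-table.
    INSERT INTO dbo.OrdersAudit (OrderId, Operation)
    SELECT d.OrderId, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.OrderId = d.OrderId);
END;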