I have two databases that I have connected using linked servers. I have DB1, and DB2 to which I only have read access. I'm using DB1 for my application and have linked DB2 so I can combine queries. Is it possible to have foreign keys in DB1 that reference tables in DB2?
No, it is not possible to create foreign keys between objects in different databases (even if they are on the same server). The official documentation is pretty clear about that:
FOREIGN KEY constraints can reference only tables within the same database on the same server. Cross-database referential integrity must be implemented through triggers. For more information, see CREATE TRIGGER (Transact-SQL).
It even points you to the possible workaround, i.e. to implement some kind of referential integrity check using triggers. You can add AFTER INSERT/UPDATE triggers on both sides to validate the data changes, and AFTER DELETE triggers on the primary table to check whether there are child records. If the validation fails, you raise an error. You can also use INSTEAD OF triggers.
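As a rough sketch of the insert/update side, assuming a hypothetical child table dbo.Orders in DB1 whose CustomerID should exist in a Customers table reached through a linked server named DB2SRV (all object names here are made up):

```sql
-- Hypothetical cross-database check: every inserted/updated CustomerID in DB1
-- must exist in the remote Customers table reached through the linked server.
CREATE TRIGGER trg_Orders_CheckCustomer
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        WHERE NOT EXISTS (
            SELECT 1
            FROM DB2SRV.DB2.dbo.Customers AS c   -- four-part linked-server name
            WHERE c.CustomerID = i.CustomerID
        )
    )
    BEGIN
        RAISERROR('CustomerID does not exist in DB2.dbo.Customers.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```

A corresponding AFTER DELETE trigger on the parent side would be needed to reject (or clean up after) deletes of rows that are still referenced.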
But the solution with triggers will not guarantee referential integrity anyway. You can lose connectivity between the databases. You can restore one of the databases from an older backup. All kinds of things can go wrong. You'd better try to reconsider your database design. Is it possible to combine these two databases into one? Is it possible to maintain copies of both tables in each of the databases and replicate them?
What are some ways that these two databases can INSERT INTO each other, or otherwise run commands against and display the data in each other's tables?
Not directly, but there are ways...
What you're asking about is called "cross database queries".
Each database in PostgreSQL has its own system catalogs and its own way of keeping itself organized, so a single query cannot span two databases, even when they are hosted by the same database server.
But there are ways to achieve what you want:
Single database, multiple schemas
Instead of two databases, you can run one database with two schemas. This keeps the tables, views, etc. separated and easier to maintain, and allows queries between the two. It also allows security and data isolation for users who are only allowed to access one of the schemas.
You're actually already using a schema in PostgreSQL called "public"; adding more schemas simply extends this.
See the documentation.
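As a rough sketch of the idea (schema, table, and role names are just placeholders):

```sql
-- Two schemas inside one database instead of two databases
CREATE SCHEMA app1;
CREATE SCHEMA app2;

CREATE TABLE app1.customers (id serial PRIMARY KEY, name text);
CREATE TABLE app2.orders (
    id          serial PRIMARY KEY,
    customer_id integer REFERENCES app1.customers (id)  -- cross-schema FK works
);

-- Cross-schema queries are just ordinary queries
SELECT o.id, c.name
FROM app2.orders o
JOIN app1.customers c ON c.id = o.customer_id;

-- Optional: restrict an existing role to a single schema
GRANT USAGE ON SCHEMA app1 TO app1_user;
```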
Foreign Data Wrappers
Foreign Data Wrappers (fdw) allow you to "link" a schema (or just tables, if you prefer) in another database. See the documentation for CREATE SERVER, and this seems like a pretty clear blog post on the subject.
Note that Foreign Data Wrappers also let you link to databases other than PostgreSQL, e.g. Oracle, SQL Server, MySQL, and many more. See here.
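A minimal postgres_fdw sketch, assuming a second PostgreSQL database called otherdb on the same host (server, schema, table, and credential names are all placeholders):

```sql
-- Link a second PostgreSQL database ("otherdb") into the current one
CREATE EXTENSION postgres_fdw;

CREATE SERVER otherdb_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'otherdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER otherdb_srv
    OPTIONS (user 'remote_user', password 'secret');

CREATE SCHEMA linked;

-- Either pull in a whole remote schema at once (PostgreSQL 9.5+)...
IMPORT FOREIGN SCHEMA public FROM SERVER otherdb_srv INTO linked;

-- ...or declare a single foreign table by hand
CREATE FOREIGN TABLE linked.remote_orders (
    id          integer,
    customer_id integer
)
SERVER otherdb_srv
OPTIONS (schema_name 'public', table_name 'orders');
```

Once linked, the foreign tables can be queried and joined like local ones.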
What I need to do is have a replica of my database in which I can create some triggers that do things on certain events.
At the moment:
I have a primary database A in PostgreSQL.
I have just set up a slave database B using WAL.
So now B is, of course, a read-only database, which means I can't create my triggers on it.
Why do it on a slave? Because I would like to affect the main project using the primary database as little as possible.
I tried stopping the replication, creating the triggers in B, and then starting it again. But it never starts replicating again, I guess because it detects that the schemas differ... :(
Any idea or alternative to reach the same goal? Does it make sense to use master-master replication? But in that case, would the triggers created in my "slave" be replicated to the "primary" database?
Note: I can't use BDR to replicate because I have to provide support for PostgreSQL 9.0.
I have a separate script that creates the database and tables for each database that we are supporting. I am using JPA to manipulate the data in the database, but JPA does not create the database or the tables.
I want to add a foreign key to a new table with a cascade property so that when a row is deleted in the parent table, the corresponding rows in the child table are also deleted.
I am aware of the annotations necessary to do this in JPA; however, I can also create the foreign keys and the cascade clauses in the script I am using to create the databases.
My question is: since I am using a separate script to create the database tables, can I just add the foreign key / cascade statements in the script and then ignore all of the JPA relationship annotations? Are there advantages/disadvantages to adding this information in both the database script and the JPA code?
You should always have a two-level check. If you do not use the features of JPA, then it's a big waste of the functionality JPA provides. You should actually make sure that your JPA relations match your DB relations as closely as possible. It will help you a lot, as JPA can cache data and even prevent unnecessary calls to the DB.
E.g. if you have a NOT NULL constraint in the DB and you persist with no corresponding JPA constraint, your DB has to do all the work and throw the exception back.
Normally in an application, the network and the DB are the slowest factors, so you should try mirroring the constraints in JPA to avoid unnecessary overhead.
Also, using such constraints you can form bidirectional relationships, have collections of associated entities, and gain many more such advantages.
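For reference, the DDL side of such a relationship in the creation script might look like this (hypothetical parent/child tables; the JPA mappings should then mirror the same relationship):

```sql
CREATE TABLE parent (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(100)
);

CREATE TABLE child (
    id        BIGINT PRIMARY KEY,
    parent_id BIGINT NOT NULL,
    CONSTRAINT fk_child_parent
        FOREIGN KEY (parent_id)
        REFERENCES parent (id)
        ON DELETE CASCADE   -- deleting a parent row removes its child rows
);
```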
I am writing an SSIS package where, in a SQL task, I have to delete a record from a table. This record is linked to some tables and these related tables may be related to some other tables. So when I attempt to delete a record, I should remove all the references of it in other tables first.
I know that setting Cascaded delete is the best option to achieve this. However, it’s a legacy database where this change is not allowed. Moreover, it’s a transactional database where any accidental deletes from the application should be avoided.
Is there any way that SQL Server offers to frame such cascaded delete queries? Or writing the list of deletes manually is the only option?
The way that SQL Server offers to frame cascaded deletes is to use ON DELETE CASCADE which you have said you can't use.
It's possible to query the metadata to get a list of affected records in other tables, but it would be complicated since you want to remove the constraint (and therefore the metadata reference) before the delete.
You would need to, in a single transaction:
1. Query the metadata to get a list of affected tables (a sketch of such a query follows this list). This would need to be recursive, so you can get the tables affected by the first tier, then those affected by those, and so on.
2. Drop (or disable) the constraints. This will also need to be recursive, for the same reasons as above.
3. Delete the record(s) in all affected tables.
4. Re-enable (or re-create) the constraints.
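As a starting point for step 1, something like the following lists the tables that reference a given table, one level deep (you would repeat it, or make it recursive, for further tiers; 'dbo.MyTable' is a placeholder):

```sql
-- One level of foreign keys that reference the table being deleted from
SELECT
    fk.name                                 AS constraint_name,
    OBJECT_SCHEMA_NAME(fk.parent_object_id) AS referencing_schema,
    OBJECT_NAME(fk.parent_object_id)        AS referencing_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable');
```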
Someone else may have a more elegant solution but I think this is probably it.
It could be easier to do in .NET with SQL Management Objects as well, if that's an option.
I should clarify, too, that I'm not endorsing this, as the potential for issues is very, very high.
I think your safest course of action is to manually write out the deletes.
I have two applications using two nearly identical MySQL databases within the same cluster. Some tables must contain separate data, but others should hold identical contents (i.e. all writes and rows in db1.tbl should be accessible in db2.tbl and vice versa).
What's the proper way to go about this? Note that the applications use hardcoded table (but not database) names, so simply telling application 2 to access db1.tbl is not an option.
What you need to do is set up replication for the tables that you need. See http://dev.mysql.com/doc/refman/5.0/en/replication.html for the documentation on setting up replication in MySQL.
For databases on different mysqld processes
You should check the official manual for replicating individual tables:
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-do-table
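For example, the replica might be restricted to replicating only the shared tables; a sketch of the relevant my.cnf fragment (database and table names are placeholders):

```ini
# my.cnf fragment on the replica (slave) mysqld
[mysqld]
replicate-do-table = db1.tbl
replicate-do-table = db1.other_shared_tbl
```

If the database names differ between the two servers, the replicate-rewrite-db option documented on the same manual page may also be relevant.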
You can set up a master-master relation between the two MySQL processes; just keep in mind to be careful and maintain uniqueness of your primary keys.
For databases residing on the same server & mysqld service
IMHO, design-wise you should consider moving all your shared tables into a separate DB.
This way you will avoid all the overkill of triggers for keeping them in sync.
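As a sketch of that idea (names are placeholders), the shared tables can live in their own database, with each application's database getting a view under the original, hardcoded name:

```sql
-- Keep a single real copy of each shared table in its own database
CREATE DATABASE shared;
RENAME TABLE db1.tbl TO shared.tbl;   -- assumes db2.tbl was merged in and dropped

-- Each application keeps using its hardcoded, unqualified table name
CREATE VIEW db1.tbl AS SELECT * FROM shared.tbl;
CREATE VIEW db2.tbl AS SELECT * FROM shared.tbl;
```

Simple single-table views like these are updatable and insertable in MySQL, so writes from either application end up in the one shared copy.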