Cross Database Stored Procedure performance considerations - sql

In my project I have two separate DBs on the same server. The DBs are self-sufficient except for three columns in DB "B" that need to be accessed in DB "A".
Are there performance considerations if I were to have a stored proc in A that accessed three columns from B directly?
Currently, we run a nightly job to import the needed data from table B to table A, so that the stored proc isn't going out of the scope of A.
Is that the best method?
Are cross DB stored procs within best practices?

There should be no problem since the databases are on the same server. Problems usually occur when you do this across linked servers, where you can run into network latency.

To clarify the other posters' comments:
There is no "direct" negative performance impact when using cross database access via stored procedures. Performance will be determined by the underlying architecture of the individual databases, i.e. indexes available, physical storage locations etc.
This is actually quite a common practice, and as long as you follow standard query-tuning principles you will be just fine.
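As a rough sketch of what that direct access can look like - all database, table, and column names here are invented - a stored proc in A can reference B with three-part names:

CREATE PROCEDURE dbo.usp_GetOrdersWithExternalColumns
AS
BEGIN
    SET NOCOUNT ON;
    -- Three-part naming (database.schema.object) reaches into database B on the same server.
    SELECT a.OrderId,
           b.ColumnOne, b.ColumnTwo, b.ColumnThree
    FROM dbo.Orders AS a
    JOIN B.dbo.SourceTable AS b
        ON b.OrderId = a.OrderId;
END;

The usual indexing rules apply on both sides of the join, exactly as they would within a single database.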

Yes, what you're currently doing - i.e. replication - is the "correct" thing to do.
When referencing data in another database, you can't use referential integrity, data constraints and lots of the other good stuff that make an RDBMS a good tool to use.
Accessing the other database directly ties the databases together - they MUST exist on the same server, for all time. You may also run into bandwidth issues if you use linked servers and execute the proc on demand.
Replication gives you far more flexibility.

It all depends on the data you are referencing and whether or not indexes are set up to support this direct access. The best thing I can tell you is to create the query, run it, and see if the performance is good enough.
If the performance is not satisfactory, run the query again and have your management tool generate an execution plan so you can identify the bottleneck.
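For instance, in SQL Server you can get a quick read on I/O and timing before digging into the graphical plan (table and column names below are placeholders):

SET STATISTICS IO ON;    -- logical/physical reads per table
SET STATISTICS TIME ON;  -- CPU and elapsed time
SELECT a.KeyId, b.ColumnOne, b.ColumnTwo, b.ColumnThree
FROM dbo.TableA AS a
JOIN B.dbo.TableB AS b ON b.KeyId = a.KeyId;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- In Management Studio, "Include Actual Execution Plan" (Ctrl+M) shows where the cost is.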

We do this all the time. As long as they are on the same server there is no problem. If you have a requirement that the data must exist in table A in database A before it can go into table B in database B, you will need to write a trigger to check for it, since foreign key relationships can't be created across databases.
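A minimal sketch of such a trigger, assuming made-up table and column names (it would live in database B and check the parent table in A):

CREATE TRIGGER dbo.trg_TableB_CheckParentInA
ON dbo.TableB
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Reject the change if any new row has no matching parent in A.dbo.TableA.
    IF EXISTS (SELECT 1
               FROM inserted AS i
               WHERE NOT EXISTS (SELECT 1
                                 FROM A.dbo.TableA AS a
                                 WHERE a.Id = i.ParentId))
    BEGIN
        RAISERROR('Parent row not found in A.dbo.TableA.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;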

Related

Connecting to a remote Oracle database in SQL

I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire tables over to the database I am migrating to, but they are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can select from the other database simply by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
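Since you asked how to supply the hostname/port/SID/credentials, the link can embed the full connect descriptor; everything below (link name, host, SID, user) is just an example:

CREATE DATABASE LINK remote_db
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING '(DESCRIPTION=
            (ADDRESS=(PROTOCOL=TCP)(HOST=remotehost.example.com)(PORT=1521))
            (CONNECT_DATA=(SID=ORCL)))';

-- Then query the remote table as if it were local:
SELECT col1, col2 FROM some_table@remote_db;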
It depends on whether it's a one-time thing or a regular process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, you are trying to copy a couple of tables from one DB to another. If the databases can reach one another, it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then run an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
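For example, assuming a link called remote_db and made-up table names, copying rows across looks like this:

-- Pull rows from the remote database into a local table over the link.
INSERT INTO local_copy (id, name, created_at)
SELECT id, name, created_at
FROM   source_table@remote_db;
COMMIT;

-- Or create the local table directly from the remote one:
CREATE TABLE local_copy AS
SELECT * FROM source_table@remote_db;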
But if it's a one-time thing, I would just move the table with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
If it's something you need to maintain and keep updated, why not just add the DBLINK and use that on both sides? This will be dependent on network performance, though.
If this is a bit out of your depth, or you can't create DBLINKs due to restrictions, SQL Developer has had a database copy option for a while and you can go as far as copying individual tables, but it's very heavy on the system where it's being run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).

SQL Server and Oracle terminology

In SQL Server, if I have two applications and want to keep their databases completely separate, I can simply create one database for each application, so I end up with two databases.
If I wanted to do the same thing in Oracle, what do I need to create?
- A new "Database", an "Instance", a "Schema", or a "Tablespace" per application?
(Note: these two applications are the same application used by two different companies that do not share data!)
Reference: http://www.codeproject.com/Tips/492342/Concept-mapping-between-SQL-Server-and-Oracle
Having worked with SQL Server a lot in the past, I have sympathy with trying to figure out how Oracle organizes things as I struggled with the same thing. My comments below are from SQL Server 2000 and 2003 so forgive me if things have changed since then.
Previous responders have been helpful. I think one problematic assumption here is that there is an exact "level" equivalency between SQL Server and Oracle. What I mean by "level" is something that occupies the same space in the hierarchies you have diagrammed above (which, by the way, I think are a good place to start, though they might need a bit of editing in a couple of places - for example, I might put "user" and "schema" side by side in the Oracle hierarchy). I do not think these concept "levels" match exactly between the two platforms.
A schema in Oracle is somewhat equivalent to a separate database in SQL Server but not entirely.
I would say that the "walls" -- not an exact technical term but oh well -- between databases in SQL server are a bit higher than the "walls" between schemas in Oracle. Others might disagree but here is my reasoning:
a. A schema in Oracle is a purely logical construct. It denotes who has ownership of objects. It has nothing to do with the physical location or layout of the objects. A tablespace (an orthogonal concept, as noted by a previous poster) indicates the physical location of objects. A tablespace can hold objects from multiple schemas and vice versa. In SQL Server these two concepts are more or less merged into one - a database is both tablespace and schema - although within a DB in SQL Server you can still have multiple owners with various object ownership. This can get a bit confusing because, as I remember (it's been a couple of years), if you are not using NT Authentication the logins are defined at the server level and then have to be mapped to the users in the individual DBs.
b. I remember finding it easier, or at least a bit simpler, to assure myself that the users of two separate DBs in SQL Server had no access to each other's DB than I have found it in Oracle.
c. Because a DB in SQL Server represents both physical storage and logical ownership, you can detach the DB, move it to another SQL Server instance, and attach it there. You can't do this with a schema in Oracle. You can Data Pump the data out, or back it up and restore it to another server and another schema, but that takes at least some scripting, or at least a fair amount of clicking in Enterprise Manager. It doesn't give you the one-click "Detach DB" option you have in SQL Server, which reinforces the idea that SQL Server DBs are units you can more-or-less move back and forth between servers.
To sum things up, I think either option would work. That is, 1) Create two separate instances of Oracle with one schema in each instance for each application, or 2) Create two separate schemas in one Oracle instance.
There are pros and cons for each option. Option 1 is probably going to be more work to set up and configure but will also give you more separation, independence, ability to have separate hardware, etc., for each DB. Option 2 will be quite a bit simpler but gives you less separation between the data and greater risk of configuration screw-ups or other things allowing users of one schema to access the other. It also means you have to be a bit more careful that someone writing a query accessing data in one schema doesn't use all the CPU and IO resources and starve a user on the other schema.
Also, yes, you could use pluggable databases in 12c. However, the fact that you need to ask these questions (no shame, just pointing out where you're at) makes me hesitant to recommend what can easily be a more complex setup.
TL;DR -- SQL Server isn't Oracle and Oracle isn't SQL Server. Either option works and there are pros and cons to each.
If you're using 12.1 or later with the multitenant option, you could create separate pluggable databases in a single container database. The other option, which works in any version of Oracle, would be to create a separate schema. It would be possible, as well, to create a separate database, though that is generally not the preferred approach unless you have a particular need to do things like upgrade the database that one application is using without affecting the other.
Creating a Database
If you create a separate database, you'd end up with complete separate memory structures (i.e. the SGA and PGA for each database would be separate) as well as a completely separate set of background processes (each database would have its own log writer process(es) for example). That is a very heavyweight option-- you can't have too many databases on a single server before you start having a lot of contention for RAM, for scheduling all the background processes, etc. It does provide for the maximum separation between different applications-- each database can be running a different version of Oracle with a different set of initialization parameters-- but this also tends to increase the complexity of managing the environment. This generally only makes sense when you have third party applications that require a specific version of the database or a specific set of initialization parameters.
Creating a Schema
If you create a separate schema, you still have a single database, so the two schemas share the same memory structures (competing with each other for space in the SGA's buffer cache, for example), initialization parameters, etc. You have to exercise a modicum of planning to ensure that the two don't interfere with each other - you'd probably want to make sure that neither application creates public synonyms, or at least that they won't want to create the same public synonym as the other application - but this is generally pretty trivial.
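A schema in Oracle is created by creating a user; a minimal sketch, with example names, tablespace, and privileges only:

-- One schema (user) per application.
CREATE USER app1 IDENTIFIED BY app1_password
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE SEQUENCE TO app1;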
Creating a Pluggable Database
This only works in 12.1 and only if you have the multitenant option. This is the most similar to the SQL Server concept of creating a new database for each application.
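A sketch of what that looks like in 12c, assuming a container database and file paths that are purely illustrative:

CREATE PLUGGABLE DATABASE app1_pdb
  ADMIN USER pdb_admin IDENTIFIED BY pdb_admin_password
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/CDB1/pdbseed/',
                       '/u01/app/oracle/oradata/CDB1/app1_pdb/');
ALTER PLUGGABLE DATABASE app1_pdb OPEN;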
You should create a new schema in the same database; a schema in Oracle is roughly the equivalent of a SQL Server database.

Writing procedures for multi-tenant database, schema as variable

I am designing a multi-tenant database. Many actions that are needed on a per-tenant basis have been written as stored procedures. Almost all of them use dynamic SQL execution since I have not found a way to differentiate between the schemas other than with dynamic statements.
EXEC( 'SELECT * FROM ' + @SchemaName + '.Contacts' )
Is there any way to create a variable representing the schema so I can call the select statements without building them dynamically?
SELECT * FROM @TheSchema.Contacts
SQL Server 2008
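For what it's worth, schema names can't be passed as parameters, so some dynamic SQL is unavoidable; it can at least be hardened with QUOTENAME and sp_executesql (the tenant schema name below is made up):

DECLARE @SchemaName sysname = N'Tenant1';  -- hypothetical tenant schema
DECLARE @Sql nvarchar(max) =
    N'SELECT * FROM ' + QUOTENAME(@SchemaName) + N'.Contacts;';
EXEC sys.sp_executesql @Sql;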
FWIW I have found it much more useful to separate customers by database than by schema. Reasons?
All the tenant databases can have identical procedures, derived from the model database, instead of you having to create anything per tenant. Deployment to multiple databases is no more complex than deployment to multiple schemas - actually less so, I'd argue - and the only difference is that the application has to know which database to reference instead of which schema.
Different customers can have different recovery models, be backed up on different schedules, etc. This can improve your backup/restore plan especially as the data set gets larger.
Having each customer in a separate database makes it very easy to move a tenant to a different server if they get too big for the current one. Extracting the data and objects for their schema only will be quite convoluted.
Some customers may legally and/or contractually require that you not store their data in the same database as other customers.
A simpler solution is to keep the stored procedure in each of the tenant databases and execute it inside them.
This question is a bit old, but I am looking into this option as a cost-saving measure, since many cloud providers charge per database and the associated storage costs are lower - especially useful for lots of small multi-tenant databases.
From what I understand, a default schema can be set for the login, so setting a schema per login would automatically use that schema for all queries and stored procedures. Something to consider for those who come across this question.
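A sketch of that idea, with made-up login/user/schema names:

-- Map each tenant's login to a database user whose default schema is the tenant schema,
-- so unqualified object names resolve to that schema automatically.
CREATE USER Tenant1User FOR LOGIN Tenant1Login WITH DEFAULT_SCHEMA = Tenant1;
-- Or change it later:
ALTER USER Tenant1User WITH DEFAULT_SCHEMA = Tenant1;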

Is there a downside to creating and dropping too many tables on SQL Server

I would like to know if there is an inherent flaw with the following way of using a database...
I want to create a reporting system with a web front end, whereby I query a database for the relevant data, and send the results of the query to a new data table using "SELECT INTO". Then the program would make a query from that table to show a "page" of the report. This has the advantage that if there is a lot of data, this can be presented a little at a time to the user as pages. The same data table can be accessed over and over while the user requests different pages of the report. When the web session ends, the tables can be dropped.
I am prepared to program around issues such as tracking the tables and ensuring they are dropped when needed.
I have a vague concern that over a long period of time the database itself might develop some form of maintenance problem from having created and dropped so many tables. Even day by day, let's say perhaps 1000 such tables are created and dropped.
Does anyone see any cause for concern?
Thanks for any suggestions/concerns.
Before you start implementing your solution consider using SSAS or simply SQL Server with a good model and properly indexed tables. SQL Server, IIS and the OS all perform caching operations that will be hard to beat.
The cause for concern is that you're trying to write code that will try and outperform SQL Server and IIS... This is a classic example of premature optimization. Thousands and thousands of programmer hours have been spent on making sure that SQL Server and IIS are as fast and efficient as possible and it's not likely that your strategy will get better performance.
First of all: +1 to @Paul Sasik's answer.
Now, to answer your question (if you still want to go with your approach).
Possible cause for concern if you use VARBINARY(MAX) columns (from the MSDN): "If you drop a table that contains a VARBINARY(MAX) column with the FILESTREAM attribute, any data stored in the file system will not be removed."
If you do decide to go with your approach, I would use global temporary tables. They should get DROPped automatically when there are no more connections using them, but you can still DROP them explicitly.
In your query you can check if they exist or not and create them if they don't exist (any longer).
-- Global temporary tables live in tempdb, so check for them there
IF OBJECT_ID('tempdb..##temp') IS NULL
-- create temp table and perform your query
This way, you keep most of the logic for performing your queries and managing the temporary tables together, which should make it more maintainable. Plus, temp tables are built to be created and dropped, so it's quite safe to assume SQL Server would not be impacted in any way by creating and dropping a lot of them.
1000 per day should not be a concern if you talk about small tables.
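A fuller sketch of the global temporary table approach described above - the table, columns, and page size are all invented, and OFFSET/FETCH needs SQL Server 2012 or later:

IF OBJECT_ID('tempdb..##Report_Session42') IS NULL
BEGIN
    -- Materialize the report result once per session.
    SELECT o.OrderId, o.CustomerId, o.Total
    INTO   ##Report_Session42
    FROM   dbo.Orders AS o
    WHERE  o.OrderDate >= '20240101';
END;

-- Serve one "page" of the report at a time.
SELECT OrderId, CustomerId, Total
FROM   ##Report_Session42
ORDER BY OrderId
OFFSET 0 ROWS FETCH NEXT 50 ROWS ONLY;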
I don't know SQL Server, but in Oracle you have the concept of a global temporary table. The data inserted into this type of table is available only in the current session; when the session ends, the data "disappears". In this case you don't need to drop anything. Every user inserts into the same table, and their data is not visible to others. Advantage: less maintenance.
You may want to check whether you have something similar in SQL Server.
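For reference, an Oracle global temporary table looks like this (column names are invented); the table definition is permanent but the rows are private to each session:

CREATE GLOBAL TEMPORARY TABLE report_scratch (
  order_id    NUMBER,
  customer_id NUMBER,
  total       NUMBER
) ON COMMIT PRESERVE ROWS;  -- rows survive commits but disappear when the session ends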

SQL Server replication using triggers

I have two databases (db1,db2) that reside on different servers, db1 resides on dbserver1, db2 on dbserver2.
Now I want to replicate data from db1 (old schema) to the new schema in db2 in REAL TIME. What is the best/most efficient approach here?
The first thing that comes to my mind is triggers - is it possible to have a trigger in db1 that inserts/updates records in db2? Is there any other approach? Thanks.
[db1.OldSchema] => [db2.NewSchema]
ADDITIONAL: this only one way sync, because db2 will be used only in reporting..
This question is probably best for Database Administrators, but short answer is that there's a variety of methods you can use:
Scheduled backup/restores (if you're happy to blow away your 2nd DB on each restore)
Log shipping (passes over changes since last updated)
SSIS package (if you need to change the structure of the database, i.e. transform, then this is a good method; if the structure is the same, use one of the other methods)
Replication (as you seem to want 1 way, I'd suggest transactional replication, this is the closest you'll probably get to real time, but should not be entered into lightly as it will have impacts on how you work with both databases)
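To illustrate the trigger idea from the question - this is only a sketch with invented names, it assumes a linked server called dbserver2 is already configured, and the cross-server insert runs inside the original transaction (so it typically requires distributed transactions via MSDTC and makes every write on db1 depend on db2 being reachable):

CREATE TRIGGER dbo.trg_Orders_SyncToDb2
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Four-part name: linked server, database, schema, table.
    INSERT INTO [dbserver2].[db2].[NewSchema].[Orders] (OrderId, CustomerId, Total)
    SELECT i.OrderId, i.CustomerId, i.Total
    FROM   inserted AS i;
END;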