Add Indexes to columns in remote table - Oracle - sql

I am querying a remote database using a DB link. To speed up the query, I am wondering how I can add indexes to a few columns in the remote table.
I would appreciate any recommendations around this.

You could use the DBMS_JOB or DBMS_SCHEDULER package on the remote database to schedule a job that executes the DDL.
But consider this: if Oracle throws an exception for DDL over database links, there must be a good reason for it, right? You don't want anyone messing with your schema remotely over a database link. So instead, talk to the remote DBA and try to work out a solution with him/her.
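As a sketch of the DBMS_JOB approach (the link name remote_db, the table emp, and the index name are hypothetical; run this on the local database):

```sql
-- The submitted job executes on the remote instance, so the DDL is
-- local from its point of view and ORA-02021 does not apply.
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT@remote_db(
    job       => l_job,
    what      => 'EXECUTE IMMEDIATE ''CREATE INDEX emp_deptno_ix ON emp(deptno)'';',
    next_date => SYSDATE
  );
  COMMIT;  -- DBMS_JOB entries only run after the transaction commits
END;
/
```

Note the job runs with the privileges of the remote schema the link connects as, which is exactly why the remote DBA should be in the loop.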

It can't be done over the dblink (even if your dblink connects as the owning schema); you will see:
ORA-02021: DDL operations are not allowed on a remote database

You could create a materialized view based on your query over the remote table, add your preferred indexes to it, and then, if you need it, create a synonym for that materialized view.
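A minimal sketch, assuming the materialized view lives on the local side and the remote table is emp reachable through a link named remote_db (all names hypothetical):

```sql
-- Local snapshot of the remote data.
CREATE MATERIALIZED VIEW emp_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM emp@remote_db;

-- The index lives on the local snapshot, so no remote DDL is needed.
CREATE INDEX emp_mv_deptno_ix ON emp_mv (deptno);

-- Optional: hide the materialized view behind a synonym.
CREATE SYNONYM emp FOR emp_mv;
```

The trade-off is freshness: queries see data as of the last refresh, not the live remote table.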

John,
A good place to start would be the following Oracle documentation on "Tuning Distributed Queries".
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/ds_appdev004.htm

You could create the indexes in the remote database and wrap your query in a view (in the remote database, of course).
That way the remote database will execute the query using all the access paths it has (such as indexes) and send back only the wanted results.
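A sketch of this split, with hypothetical names (view active_emp_v, table emp, link remote_db):

```sql
-- On the REMOTE database: a view that does the filtering there,
-- where the indexes are.
CREATE OR REPLACE VIEW active_emp_v AS
  SELECT empno, ename, deptno
  FROM   emp
  WHERE  status = 'ACTIVE';

-- On the LOCAL database: only the final rows travel over the link.
SELECT * FROM active_emp_v@remote_db;
```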

Related

reading data from another db

Let's say I have a main database called db1. There is also another database called db2. Sometimes I need to get data from db2. Is it better to get data directly from db2 or to make a view in db1?
If you're getting data from db2, you should create views in db2 for each query. Why? To create interfaces.
If someone makes changes in db2, he doesn't know about your queries executed from db1, and your queries can stop working. If you create views for your queries in db2 and query view@db2 from db1, anyone who changes the structure in db2 will see an invalid view if his changes break your queries.
Of course, I mean the situation where your queries are embedded in packages or views. If you just query for analytic purposes, it makes no difference whether you do it directly, with a view on db1, or with a view on db2; just do whatever suits you. But good practice is to define interfaces, so I would recommend creating views on db2 for the datasets you later query from db1. It can also make sense to create an additional view or synonym on the db1 side so you have an interface on both sides.
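The interface idea might look like this, with hypothetical names (view orders_export_v, table orders, link db2):

```sql
-- On db2: publish a stable interface view over the base table.
CREATE OR REPLACE VIEW orders_export_v AS
  SELECT order_id, customer_id, total
  FROM   orders;

-- On db1: depend only on the interface, never on db2's base tables.
-- An optional local synonym gives the "both sides" interface.
CREATE SYNONYM orders_export FOR orders_export_v@db2;

SELECT * FROM orders_export;
```

If db2's orders table is later restructured, only orders_export_v needs fixing, and it will show up as invalid there rather than silently breaking db1.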
You will first need to set up a driver for connecting to DB2, a TNS connection entry for Oracle to connect with, and a database link in Oracle to point to the connection.
The important thing is that you try, as far as possible, to insulate changes in one DB from the other.
I've done this different ways, but this has worked for me:
For each table you are querying in DB2, create a DB2 view of JUST the columns you want from that table.
Create a view in Oracle that queries DB2_VIEW@DB2_database. Although not strictly necessary, query just the columns you want; it's good practice.
Create a synonym for the view and query through that. If the source of the data changes and the view is replaced by a different one, you can switch the synonym to point at the new view instead of changing your code.
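The three steps above can be sketched as follows; the names (sales_iface, sales_remote_v, the link db2_link) are all hypothetical:

```sql
-- 1. On DB2: a view of just the columns you need.
--    CREATE VIEW sales_iface AS SELECT id, amount FROM sales;

-- 2. On Oracle: a view over the database link.
CREATE OR REPLACE VIEW sales_remote_v AS
  SELECT id, amount FROM sales_iface@db2_link;

-- 3. A synonym so callers never see the link directly.
CREATE SYNONYM sales FOR sales_remote_v;
```

If the source moves, only step 2's view and step 3's synonym need repointing; calling code is untouched.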
Summary:
Unless I've misunderstood, you seem to be asking: should I query the table directly in DB2, or should I go through views? I suggest that going through views insulates you from changes at either end to some extent, so use the views.

Which DB to connect to for higher level application managing other database

I have an application that will be creating and dropping postgres databases. The application itself has its own sql server database. Kind of a bizarre architecture but it's not by choice.
I'm a little confused on how I should connect to the postgres server to execute these create table and drop table commands. Normally in an app.config or web.config, the connection string would specify the database. In this case, I just want to specify the server.
Can queries be run directly to a postgres server, without a particular database?
Should I use the postgres database that was created by the server? I tried this: select * from pg_database, and then drop database DBNAME using a name returned by the first query, but it gave an error saying the database does not exist.
Or I could create an empty database to connect to and submit the queries to it, despite it not being used for anything.
Can queries be run directly to a postgres server, without a particular database?
No. PostgreSQL requires that you connect to a specific database.
It's possible that restriction could actually be relaxed eventually, so you could do things that only work on the shared catalogs from a connection to no particular database. It'd require changes to how authentication works and all sorts of things, though, and I don't think having an "admin database" like the usually-empty postgres database is really a problem.
Should I use the postgres database that was created by the server?
Generally, yes. It's possible to DROP the postgres database, but you should usually just leave it alone and use it as an admin database.
You could connect to the postgres database and then run drop database <DBNAME> from there, yes. Another option would be, say, template1. (I would avoid template0, since that's essentially the root template from which template1 was created, and you can always recreate template1 quickly from template0 if something happens to it, assuming you haven't modified template1 without also modifying template0.)
I usually connect to postgres, myself, for server-level commands.
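From the command line this might look like the following (host, user, and database names are hypothetical; note that DROP DATABASE cannot target the database you are connected to, which is why the maintenance database is used):

```shell
# List existing databases from the "postgres" maintenance database.
psql -h db.example.com -U admin -d postgres \
     -c "SELECT datname FROM pg_database;"

# Then drop a target database from that same maintenance connection.
psql -h db.example.com -U admin -d postgres \
     -c "DROP DATABASE IF EXISTS myapp_db;"
```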
I ran DROP DATABASE droptest; via psql after creating an empty database and seeing it returned from a pg_database query, so that definitely works in general.
Perhaps it was somehow deleted by some other process in the interim between when you queried things and when you did the DROP.
Another option would be to shell out to the command-line tool dropdb instead. This is a wrapper around DROP DATABASE and is what I generally use for both manual and automated database drops.
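A dropdb invocation might look like this (host, user, and database names are hypothetical):

```shell
# dropdb connects to the "postgres" maintenance database by default
# and issues DROP DATABASE on your behalf.
dropdb --host=db.example.com --username=admin --if-exists myapp_db
```

The --if-exists flag makes the command a no-op instead of an error when the database is already gone, which is convenient for automation.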

connecting to remote oracle database in SQL

I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire table over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying it the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can just select from a different database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
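A sketch with hypothetical credentials and connection details:

```sql
-- Create the link on the local database, embedding the TNS descriptor
-- so no tnsnames.ora entry is required.
CREATE DATABASE LINK remote_db
  CONNECT TO scott IDENTIFIED BY tiger
  USING '(DESCRIPTION=
            (ADDRESS=(PROTOCOL=TCP)(HOST=remotehost)(PORT=1521))
            (CONNECT_DATA=(SERVICE_NAME=orcl)))';

-- Then query remote tables as if they were local.
SELECT * FROM emp@remote_db;
```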
It depends on whether it's a one-time thing or a regular process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you're attempting to accomplish is to copy a couple of tables from one DB to another. If they can reach one another, it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
But if it's a one-time thing, I would just move the table with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
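A one-time Data Pump transfer might look like this (credentials, connect strings, and the directory object are hypothetical; DATA_PUMP_DIR must exist on each server):

```shell
# On the source server: export the tables to a dump file.
expdp scott/tiger@srcdb tables=EMP,DEPT \
      directory=DATA_PUMP_DIR dumpfile=emp.dmp logfile=emp_exp.log

# Copy emp.dmp to the target server's DATA_PUMP_DIR, then import:
impdp scott/tiger@destdb tables=EMP,DEPT \
      directory=DATA_PUMP_DIR dumpfile=emp.dmp logfile=emp_imp.log
```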
If it's something you need to maintain and keep updated, why not just add the DBLINK and use that on both sides? This will be dependent on network performance, though.
If this is a bit out of your depth or you can't create dblinks due to restrictions, SQL Developer has had a database copy option for a while, and you can go as far as copying individual tables, but it's very heavy on the system where it's being run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).

authentication when creating table synonym in remote server

I just came across the concept of a SYNONYM in a database. By reading this: http://msdn.microsoft.com/en-us/library/ms187552.aspx
and this: What is the use of SYNONYM in SQL Server 2008?, I figured out the purpose of a synonym.
However, I still don't understand one small step in the real process of creating a synonym for a remote table. I have searched the web, but the instructions generally focus on SQL syntax (for example this one: http://www.oninit.com/manual/informix/english/docs/dbdk/is40/sqls/02cr_prc8.html), and none of the guides mention the authentication part of creating a synonym for a remote table. I guess a database can't just let anyone make a synonym and then get access to its tables?
So I am curious how the target remote table's database can know whether a synonym reference accessing its table is legal.
The answer to your question is going to depend a lot on what database platform you are using to contain the synonym; in your question, you referenced documentation from at least two (SQL Server and Informix). I don't know much about Informix, but I'm going to assume that its security model is different from SQL Server's.
For SQL Server, the remote server must be set up as a linked server first (assuming that you are using a remote object). See http://technet.microsoft.com/en-us/library/ms188279.aspx for details on how to do that.
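The setup might look like the following T-SQL sketch; the server name, catalog, login, and table are all hypothetical:

```sql
-- Register the remote server as a linked server.
EXEC sp_addlinkedserver
     @server     = N'REMOTESRV',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'remotehost\SQLEXPRESS';

-- Map local logins to a remote login; this is where the
-- authentication the question asks about actually happens.
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'REMOTESRV',
     @useself     = N'FALSE',
     @locallogin  = NULL,
     @rmtuser     = N'remote_user',
     @rmtpassword = N'secret';

-- Now a synonym can point at the four-part name.
CREATE SYNONYM dbo.RemoteOrders
  FOR REMOTESRV.SalesDb.dbo.Orders;
```

So the remote side never trusts the synonym itself; it trusts the linked-server login mapping that each reference travels through.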
From CREATE SYNONYM:
You do not need permission on the base object to successfully compile the CREATE SYNONYM statement, because all permission checking on the base object is deferred until run time.
That is, there's no security issues around synonyms, because the permissions checks take place when the synonym is used, and the permission checks are based on the real object, not the synonym.

Can I create view in my database server from another database server

Is it possible to create a view in my database server of another server's database table?
Let's say you have a database called Testing on server1 and another database called Testing2 on server2. Is it possible to create a view of Testing2's table Table2 in server1's database Testing?
Also, I am using SQL Server 2008.
Please let me know if you have any questions.
Thanks,
Yes, you can. First, you need to link to the other server, using something like sp_addlinkedserver.
Then you can access the data using 4-part naming. Here is an example:
create view v_server1_master_tables as
select *
from server1.master.information_schema.tables;
It is possible through linked servers. However, I wouldn't encourage you to create views based on tables from another server, as it's likely that the entire table will be selected from the linked server every time you use the view; the optimizer may not know enough about the remote table's structure to push down any filters.
I've seen it at work, where nobody knew where the select * from queries on a large table that were slowing down the database came from, and it turned out the table was being queried somewhere on another server, in a simple query.
At the very least, you should check whether your solution causes the above problem. Maybe someone else could elaborate on how the optimizer behaves when dealing with linked servers?
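One common workaround for the filter-pushdown problem is OPENQUERY, which ships the whole query text to the linked server so the filtering happens remotely; server and object names here are hypothetical:

```sql
-- The inner query executes entirely on REMOTESRV, so only the
-- matching rows cross the wire instead of the whole table.
SELECT *
FROM OPENQUERY(REMOTESRV,
     'SELECT id, name FROM SalesDb.dbo.Orders WHERE status = ''OPEN''');
```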