MSSQL: avoid multiple prefixes by using an alias - sql

I am using a lot of SQL queries and am tired of typing the complete prefix
[LINKED_SERVER_ALIAS].[LINKED_SERVER_ON_LINKED_SERVER].[DATABASEPATH].[SCHEMA].TABLE
There is no way to change the server structure or to log in directly to the linked server on the other linked server.
Question: Is there some Transact-SQL command to create a global alias, like
create alias my_linked_connection
for [LINKED_SERVER_ALIAS].[LINKED_SERVER_ON_LINKED_SERVER].[DATABASEPATH].[SCHEMA].TABLE
so that it is possible to use:
select * from my_linked_connection.TABLE
An additional problem is that these are too many prefixes, so a normal select query is only possible via openquery or declare @cmd ... exec @cmd.
Thanks
Combine part of the prefixes inside the linked server alias created with sp_addlinkedserver.

Synonym is what you are looking for here
CREATE SYNONYM schema.tablename for linkedservername.remotedatabasename.schema.tablename
This has the advantage (which I expect is what you are looking for) that you can move views, functions and procedures through your development environments without having to modify the object code; the only thing that changes each time is the target database for the synonym.
Note that SYNONYM is an MSSQL feature and may not be supported by your ODBC/JDBC drivers, so please test fully before deployment.
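For example, a hedged sketch (the synonym and remote object names are made up). Note that a synonym target can have at most four parts, so the extra linked-server hop would have to be folded into the linked server definition itself, as the other answer suggests:
CREATE SYNONYM dbo.ProductsRemote FOR [LINKED_SERVER_ALIAS].[RemoteDb].[dbo].[Products]
GO
-- queries then reference only the synonym
SELECT TOP 10 * FROM dbo.ProductsRemote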

Finding stored procedures with errors in SQL Server 2008?

I have a database which consists of almost 200 tables and 3000 stored procedures.
I have deleted some fields from some tables, how can I now find stored procedures in which those deleted fields are referred?
Have a look at the FREE Red-Gate tool called SQL Search which does this - it searches your entire database for any kind of string(s).
It's a great must-have tool for any DBA or database developer - did I already mention it's absolutely FREE to use?
So in your case, you could type in the column name you deleted, and select to search only your stored procedures - and within a second or so, you'll have a list of all stored procs that contain that particular column name. Absolutely great stuff!
You can use sys.sql_modules
SELECT OBJECT_NAME(object_id)
FROM sys.sql_modules
WHERE definition LIKE '%MyDeletedColumn%'
Or OBJECT_DEFINITION
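For example, OBJECT_DEFINITION can be combined with sys.procedures for the same kind of search (the column name is a placeholder):
SELECT name
FROM sys.procedures
WHERE OBJECT_DEFINITION(object_id) LIKE '%MyDeletedColumn%'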
The INFORMATION_SCHEMA views are unreliable for this because the definition is split over several nvarchar(4000) rows. The 2 methods above return nvarchar(max)
Edit: Given that SQL Search is free, as noted by marc_s, that will be a better solution.
select object_name(object_id), *
from sys.sql_modules
where definition like '%ColName%'
One possible approach is to call each stored procedure with dummy parameters with SET SHOWPLAN_XML ON active. This won't run the procedure, but will generate an XML representation of the plan - and will fail if referenced columns are missing. If you make use of #temp tables, however, this'll fail regardless. :(
You'd most likely want to automate this process, rather than writing out 3000 procedure calls.
DISCLAIMER: This isn't a bulletproof approach to picking up on missing columns, but good luck finding anything better!
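A minimal sketch of that idea (the procedure name and parameter are placeholders):
SET SHOWPLAN_XML ON
GO
-- compiles and returns the plan without executing; errors out if a referenced column no longer exists
EXEC dbo.MyProc @SomeParam = NULL
GO
SET SHOWPLAN_XML OFF
GO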

What does EXEC master.. do?

I have seen this like:
EXEC master.dbo.xp_cmdshell
What does master refer to?
Update
And why is it sometimes followed by two dots:
master..
Generally we would use master.dbo. Am I correct? So why do some people write master..?
master is one of the default SQL Server system databases. You can tell because what you posted:
EXEC master.dbo.xp_cmdshell
...uses the three-part name notation. "master" is in the database position, "dbo" is the schema, and "xp_cmdshell" is the function/stored procedure in this case. You also use this notation to refer to tables and views in other contexts.
This:
EXEC master..xp_cmdshell
...just omits the schema, which isn't a good idea if more than one schema is in use in the database.
It refers to a stored procedure that is created (by default) in the master database.
The name master refers to the database that contains the system-level metadata of your server instance.
In Transact-SQL, the fully qualified path to any object is: server_name.db_name.owner.object parm1, ...
The dots separate the four components
the first three components have defaults:
the current server
the current database
dbo (database owner) (which should be the owner of the shared tables)
master is the system database that defines the server.
For MS, since the system stored procs are still located in master, the following is completely redundant: EXEC master.dbo.stored_proc_name parm1, ...
and can be replaced with: EXEC stored_proc_name parm1, ...
and since EXEC is the default command: stored_proc_name parm1, ...
There is no difference between master.. and master.dbo.. They are both unnecessary when addressing system stored procs.
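For example, with the system procedure sp_who, the following calls all do the same thing (the bare form only works as the first statement of a batch):
EXEC master.dbo.sp_who
EXEC master..sp_who
EXEC sp_who
sp_who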
master refers to the master database (it is a System database).
In SSMS: Databases -> System Databases -> master

SQL Server Synonyms and Concurrency Safety With Dynamic Table Names

I am working with some commercial schemas, which have a set of similar tables that differ only in their language suffix, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures which I am using to access these to perform some administrative functions, and I have opted to use synonyms since there is a lot of code, and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'
if object_id('dbo.ProductsTable', 'sn') is not null drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is how does SQL Server treat synonyms when it comes to concurrent access? My fear is that a procedure could start, then a second come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in a BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym; however, the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note, although this system is concurrent, it is not high traffic, so the performance hits of using synonyms/transactions are not really an issue here.
Thanks for any suggestions.
Your fear is correct. Synonyms are not intended to be used in this way. Wrapping it in a transaction (not sure what isolation level would be required) might solve the issue, but only by making the system effectively single-user.
If I were dealing with this I would probably have gone with dynamic SQL because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language and then had a table called Products in each schema, your stored proc could reference an unqualified table name and SQL should resolve the reference to the table in the default schema of the current user. You'd then need to either change which account your application authenticates as to determine which schema it uses, or use EXECUTE AS in a stored proc to decide which schema is the default.
I haven't tested this schema idea; I may not have thought of everything, and I don't know enough about your application to know whether it is actually workable in your case. Let us know if you decide to try it.
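For what it's worth, an untested sketch of that idea (all names here are made up, and the name resolution under EXECUTE AS is exactly the part that needs verifying):
CREATE SCHEMA lang_en
GO
CREATE SCHEMA lang_fr
GO
-- one Products table per language schema (columns are placeholders)
CREATE TABLE lang_en.Products (Id int PRIMARY KEY, Name nvarchar(100))
CREATE TABLE lang_fr.Products (Id int PRIMARY KEY, Name nvarchar(100))
GO
-- a loginless user per language whose default schema points at that language's tables
CREATE USER lang_en_user WITHOUT LOGIN WITH DEFAULT_SCHEMA = lang_en
GO
-- the proc references Products unqualified; test which table this resolves to before relying on it
CREATE PROCEDURE dbo.GetTopProducts
WITH EXECUTE AS 'lang_en_user'
AS
    SELECT TOP 10 * FROM Products
GO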

Is it possible to create a temp table on a linked server?

I'm doing some fairly complex queries against a remote linked server, and it would be useful to be able to store some information in temp tables and then perform joins against it - all with the remote data. Creating the temp tables locally and joining against them over the wire is prohibitively slow.
Is it possible to force the temp table to be created on the remote server? Assume I don't have sufficient privileges to create my own real (permanent) tables.
This works from SQL 2005 SP3 linked to SQL 2005 SP3 in my environment. However, if you inspect tempdb you will find that the table is actually on the local instance and not the remote instance. I have seen this offered as a resolution on other forums and wanted to steer you away from it.
create table SecondServer.#doll
(
name varchar(128)
)
GO
insert SecondServer.#Doll
select name from sys.objects where type = 'u'
select * from SecondServer.#Doll
I am 2 years late to the party, but you can accomplish this by using sp_executesql and feeding it a dynamic query to create the table remotely.
Exec RemoteServer.RemoteDatabase.RemoteSchema.SP_ExecuteSQL N'Create Table here'
This will execute the temp table creation at the remote location.
It's not possible to directly create temporary tables on a linked remote server. In fact you can't use any DDL against a linked server.
For more info on the guidelines and limitations of using linked servers see:
Guidelines for Using Distributed Queries (SQL 2008 Books Online)
One workaround (off the top of my head, and this would only work if you had permissions on the remote server) is that you could:
on the remote server have a stored procedure that would create a persistent table, with a name based on an IN parameter
the remote stored procedure would run a query then insert the results into this table
You then query locally against that table and perform any joins to any local tables required
Call another stored procedure on the remote server to drop the remote table when you're done
Not ideal, but a possible workaround.
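A very rough sketch of that workaround, with hypothetical names throughout (BuildWorkTable, the Work_ table prefix, RemoteServer and RemoteDb are all made up); the linked server would also need RPC Out enabled for the remote procedure call:
-- on the remote server: create a persistent table whose name comes from the parameter
CREATE PROCEDURE dbo.BuildWorkTable
    @Suffix sysname
AS
BEGIN
    DECLARE @table nvarchar(300)
    DECLARE @sql nvarchar(max)
    -- QUOTENAME guards the dynamically built table name against injection
    SET @table = N'dbo.' + QUOTENAME(N'Work_' + @Suffix)
    SET @sql = N'SELECT name, object_id INTO ' + @table +
               N' FROM sys.objects WHERE type = ''U'''
    EXEC sys.sp_executesql @sql
END
GO
-- on the local server: build the remote table, join against it, then clean up
EXEC RemoteServer.RemoteDb.dbo.BuildWorkTable @Suffix = N'session42'
SELECT r.name
FROM RemoteServer.RemoteDb.dbo.Work_session42 AS r
JOIN dbo.LocalTable AS l ON l.name = r.name   -- LocalTable is illustrative
-- ...then call a matching drop procedure on the remote server when done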
Yes you can, but it only lasts for the duration of the connection.
You need to use the EXECUTE ... AT syntax:
EXECUTE('SELECT * INTO ##example FROM sys.objects; WAITFOR DELAY ''00:01:00''') AT [SERVER2]
On SERVER2 the following will work (for 1 minute):
SELECT * FROM ##example
but it will not work on the local server.
Incidentally, if you open a transaction on the second server that uses ##example, the object remains until the transaction is closed. It also stops the creating statement on the first server from completing, i.e. run the following on SERVER2 and the statement on SERVER1 will continue indefinitely.
BEGIN TRAN
SELECT * FROM ##example WITH (TABLOCKX)
This is more academic than of practical use!
If memory is not much of an issue, you could also use table variables as an alternative to temporary tables. This worked for me when running a stored procedure that needed temporary data storage against a linked server.
More info: e.g. this comparison of table variables and temporary tables, including drawbacks of using table variables.
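A minimal sketch of the table-variable approach (the column list and the linked server / database names are illustrative):
-- table variables are scoped to the batch/procedure and need no CREATE TABLE permission
DECLARE @Results TABLE (id int, name sysname)

INSERT INTO @Results (id, name)
SELECT object_id, name
FROM LinkedServer.RemoteDb.sys.objects   -- hypothetical linked server and database
WHERE type = 'U'

SELECT r.name
FROM @Results AS r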

Cross-database queries with different DB names in different environments?

How would you handle cross database queries in different environments. For example, db1-development and db2-development, db1-production and db2-production.
If I want to do a cross-database query in development from db2 to db1 I could use the fully qualified name, [db1-development].[schema].[table]. But how do I maintain the queries and stored procedures between the different environments? [db1-development].[schema].[table] will not work in production because the database names are different.
I can see search and replace as a possible solution but I am hoping there is a more elegant way to solve this problem. If there are db specific solutions, I am using SQL Server 2005.
Why are the database names different between dev and prod? It'd, obviously, be easiest if they were the same.
If it's a single shared table, then you could create a view over it - which only requires that you change that view when moving to production.
Otherwise, you'll want to create a SYNONYM for the objects, and make sure to always reference that. You'll still need to change the SYNONYM creation scripts, but that can be done in a build script fairly easily, I think.
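For example, a hedged sketch (the synonym and table names are made up; only the synonym target changes per environment):
-- development build script:
CREATE SYNONYM dbo.Db1Customers FOR [db1-development].dbo.Customers
-- production build script (the rest of the object code stays identical):
-- CREATE SYNONYM dbo.Db1Customers FOR [db1-production].dbo.Customers
-- all views/procs then reference only the synonym:
SELECT TOP 10 * FROM dbo.Db1Customers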
For this reason, it's not practical to use different names for development and production databases. Using the same db name in development, production, and optionally acceptance/QA environments makes your SQL code much easier to maintain.
However, if you really have to, you could get creative with views and dynamic SQL. For example, you put the actual data retrieval query inside a view, and then you select like this:
declare @environment varchar(10)
set @environment = 'db-dev' -- input parameter, comes from app layer
declare @sql varchar(8000)
set @sql = 'select * from ' + quotename(@environment) + '.dbo.[view]'
execute(@sql)
But it's far from pretty...