How would you handle cross-database queries across different environments? For example, db1-development and db2-development versus db1-production and db2-production.
If I want to do a cross-database query in development from db2 to db1, I could use the fully qualified name, [db1-development].[schema].[table]. But how do I maintain the queries and stored procedures between the different environments? [db1-development].[schema].[table] will not work in production because the database names are different.
I can see search-and-replace as a possible solution, but I am hoping there is a more elegant way to solve this problem. If there are db-specific solutions, I am using SQL Server 2005.
Why are the database names different between dev and prod? It would obviously be easiest if they were the same.
If it's a single shared table, then you could create a view over it, which only requires changing that view when moving to production.
Otherwise, you'll want to create a SYNONYM for the objects and make sure to always reference that. You'll still need to change the SYNONYM creation scripts, but that can be done in a build script fairly easily, I think.
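For illustration, a minimal sketch of the synonym approach, with invented database, schema, and table names; each environment's build script creates the synonym against that environment's database:

-- Run in development (the production build script would point the
-- synonym at [db1-production] instead):
CREATE SYNONYM dbo.Db1Customers FOR [db1-development].[dbo].[Customers];

-- Queries and stored procedures reference only the synonym, so they are
-- identical in every environment:
SELECT * FROM dbo.Db1Customers;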
For this reason, it's not practical to use different names for development and production databases. Using the same db name across development, production, and optionally acceptance/QA environments makes your SQL code much easier to maintain.
However, if you really have to, you could get creative with views and dynamic SQL. For example, you put the actual data retrieval query inside a view, and then select from it like this:
declare @environment varchar(10)
set @environment = 'db-dev' -- input parameter, comes from app layer
declare @sql varchar(8000)
-- quotename brackets the database name, which is needed here because it contains a hyphen
set @sql = 'select * from ' + quotename(@environment) + '.dbo.view'
execute(@sql)
But it's far from pretty...
Related
I am using a lot of SQL queries and am tired of typing the complete prefixes of
[LINKED_SERVER_ALIAS].[LINKED_SERVER_ON_LINKED_SERVER].[DATABASEPATH].[SCHEMA].TABLE
There is no way to change the server structure or to log in directly to the linked server on the other linked server.
Question: Is there some Transact-SQL command to create a global alias like
create alias my_linked_connection
for [LINKED_SERVER_ALIAS].[LINKED_SERVER_ON_LINKED_SERVER].[DATABASEPATH].[SCHEMA].TABLE
so that it is possible to use:
select * from my_linked_connection.TABLE
An additional problem is that these are too many prefixes, so a normal select query is only possible via openquery or declare @cmd ... exec @cmd.
Thanks
Combine part of the prefixes inside the linked server alias with sp_addlinkedserver.
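For illustration, a hedged sketch with invented server and alias names; the alias absorbs the server part of the prefix, so queries start one level further down:

-- Create a local alias that points directly at the remote server:
EXEC sp_addlinkedserver
    @server = N'MYALIAS',                    -- the alias used in queries
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remoteserver.example.com';  -- the actual server behind the alias

-- Queries can then use the shorter four-part name:
SELECT * FROM MYALIAS.SomeDatabase.dbo.SomeTable;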
A synonym is what you are looking for here:
CREATE SYNONYM schema.tablename FOR linkedservername.remotedatabasename.schema.tablename
This has the advantage (which I expect is what you are looking for) that you can move views, functions and procedures through your development environments without having to modify the object code; the only thing that differs is the target database each synonym points to.
Note that synonyms are a SQL Server feature and may not be supported by your ODBC/JDBC drivers, so please test fully before deployment.
We have a scenario in which we wish to use Azure Elastic Query to run aggregate queries over multiple geographically distributed databases, which might be added to over time. However, we can't yet find useful docs or advice on how to design and run Azure Elastic Queries that can operate reliably without being modified (by hand) while data sources are added or removed.
Any advice from someone with experience with this db tech would be very welcome.
As a further, specific constraint, the disparate source databases are all SQL Express DBs - we are considering mapping these to online Azure SQL instances (PaaS).
UPDATE: I've seen something similar being asked/answered here, but am seeking a better answer.
You can create an external data source with a fixed name that your queries reference, while programmatically changing the location and database name behind it using dynamic SQL:
ALTER PROCEDURE CETFromNewLocation AS
BEGIN
    DECLARE @location nvarchar(100)
    SET @location = 'myserver.database.windows.net'
    DECLARE @CreateExternalTableString nvarchar(max)
    -- Quotes inside the literal are doubled so that the generated statement
    -- contains properly quoted location and database names
    SET @CreateExternalTableString = N'CREATE EXTERNAL DATA SOURCE MyExtSrc
    WITH
    (
        TYPE = SHARD_MAP_MANAGER,
        LOCATION = ''' + @location + N''',
        DATABASE_NAME = ''ShardMapDatabase'',
        CREDENTIAL = SMMUser,
        SHARD_MAP_NAME = ''ShardMap''
    );'
    EXEC sp_executesql @CreateExternalTableString
END
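As a hedged follow-up sketch (table and column names invented): external tables then reference the data source by its fixed name, so only the data source definition changes when a location moves:

-- Declares a remote table reachable through MyExtSrc; consult the elastic
-- query docs for the DISTRIBUTION options that apply to SHARD_MAP_MANAGER sources.
CREATE EXTERNAL TABLE dbo.MyRemoteOrders
(
    OrderId int NOT NULL,
    Amount decimal(10, 2) NULL
)
WITH (DATA_SOURCE = MyExtSrc);

-- Aggregate queries run against the external table as if it were local:
SELECT COUNT(*) FROM dbo.MyRemoteOrders;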
I have a database which consists of almost 200 tables and 3000 stored procedures.
I have deleted some fields from some tables; how can I now find the stored procedures in which those deleted fields are referenced?
Have a look at the FREE Red-Gate tool called SQL Search which does this - it searches your entire database for any kind of string(s).
It's a great must-have tool for any DBA or database developer - did I already mention it's absolutely FREE to use for any kind of use??
So in your case, you could type in the column name you deleted, and select to search only your stored procedures - and within a second or so, you'll have a list of all stored procs that contain that particular column name. Absolutely great stuff!
You can use sys.sql_modules
SELECT
    OBJECT_NAME(object_id)
FROM
    sys.sql_modules
WHERE
    definition LIKE '%MyDeletedColumn%'
Or OBJECT_DEFINITION
The INFORMATION_SCHEMA views are unreliable for this because the definition is split over several nvarchar(4000) rows. The two methods above return nvarchar(max).
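For illustration, a minimal sketch of the OBJECT_DEFINITION variant (the column name is invented):

SELECT name
FROM sys.procedures
WHERE OBJECT_DEFINITION(object_id) LIKE '%MyDeletedColumn%'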
Edit: Given that SQL Search is free, as noted by marc_s, that will be a better solution.
select object_name(object_id), *
from sys.sql_modules
where definition like '%ColName%'
One possible approach is to call each stored procedure with dummy parameters with SET SHOWPLAN_XML ON active. This won't run the procedure, but will generate an XML representation of the plan, and will fail if referenced columns are missing. If you make use of #temp tables, however, this'll fail regardless. :(
You'd most likely want to automate this process, rather than writing out 3000 procedure calls.
DISCLAIMER: This isn't a bulletproof approach to picking up on missing columns, but good luck finding anything better!
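A minimal sketch of the idea (procedure and parameter names invented); SHOWPLAN_XML must be the only statement in its batch, hence the GO separators:

SET SHOWPLAN_XML ON;
GO
-- Compiled but not executed; a missing column raises an error here
EXEC dbo.SomeProcedure @Param1 = NULL;
GO
SET SHOWPLAN_XML OFF;
GO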
I am working with some commercial schemas, which have a set of similar tables that differ only in a language suffix, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures which I am using to access these to perform some administrative functions, and I have opted to use synonyms since there is a lot of code, and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'
if object_id('dbo.ProductsTable', 'SN') is not null drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is how does SQL Server treat synonyms when it comes to concurrent access? My fear is that a procedure could start, then a second could come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in a BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym, but the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note, although this system is concurrent, it is not high traffic, so the performance hits of using synonyms/transactions are not really an issue here.
Thanks for any suggestions.
Your fear is correct. Synonyms are not intended to be used in this way. Wrapping it in a transaction (I'm not sure what isolation level would be required) might solve the issue, but only by making the system single-user.
If I were dealing with this, I would probably have gone with dynamic SQL, because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language, each containing a table called Products, your stored proc could then reference an unqualified table name, and SQL should resolve the reference to the table in the default schema of the current user. You'd then need to either change which account your application authenticates as to determine which schema it uses, or use EXECUTE AS in a stored proc to decide which schema is the default.
I haven't tested this schema idea, I may not have thought of everything and I don't know enough about your application to know if it is actually workable in your case. Let us know if you decide to try it.
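For what it's worth, a hedged sketch of the setup (names invented and, as above, untested; in particular, check how unqualified names resolve inside static SQL in stored procedures, since ad hoc batches and dynamic SQL follow the user's default schema but module-bound statements may not):

CREATE SCHEMA lang_en;
GO
CREATE SCHEMA lang_fr;
GO
-- One Products table per language schema instead of Products_en / Products_fr:
CREATE TABLE lang_en.Products (Id int PRIMARY KEY, Name nvarchar(100));
CREATE TABLE lang_fr.Products (Id int PRIMARY KEY, Name nvarchar(100));
GO
-- A user whose default schema selects the language:
CREATE USER FrReader WITHOUT LOGIN WITH DEFAULT_SCHEMA = lang_fr;
GRANT SELECT ON SCHEMA::lang_fr TO FrReader;
GO
-- An ad hoc unqualified reference resolves via the current user's
-- default schema, so this reads lang_fr.Products:
EXECUTE AS USER = 'FrReader';
SELECT TOP 10 * FROM Products;
REVERT;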
I have about half a dozen generic, but fairly complex stored procedures and functions that I would like to use in a more generic fashion.
Ideally I'd like to be able to pass the table name as a parameter to the procedure, as currently it is hard coded.
The research I have done suggests I need to convert all existing SQL within my procedures to dynamic SQL in order to splice in the dynamic table name from the parameter, but I was wondering if there is an easier way, by referencing the table in another way?
For example:
SELECT * FROM @MyTable WHERE...
If so, how do I set the @MyTable variable from the table name?
I am using SQL Server 2005.
Dynamic SQL is the only way to do this, but I'd reconsider the architecture of your application if it requires this. SQL isn't very good at "generalized" code. It works best when it's designed and coded to do individual tasks.
Selecting from TableA is not the same as selecting from TableB, even if the select statements look the same. There may be different indexes, different table sizes, data distribution, etc.
You could generate your individual stored procedures, which is a common approach. Have a code generator that creates the various select stored procedures for the tables that you need. Each table would have its own SP(s), which you could then link into your application.
I've written these kinds of generators in T-SQL, but you could easily do it with most programming languages. It's pretty basic stuff.
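For illustration, a rough sketch of such a generator in T-SQL (the proc naming convention is invented); it prints CREATE PROCEDURE statements that you would review and then run:

-- Emit one "select all" proc per user table:
SELECT 'CREATE PROCEDURE dbo.Select_' + t.name +
       ' AS SELECT * FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;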
Just to add one more thing since Scott E brought up ORMs... you should also be able to use these stored procedures with most sophisticated ORMs.
You'd have to use dynamic SQL. But don't do that! You're better off using an ORM.
EXEC(N'SELECT * FROM ' + @MyTable + N' WHERE ... ')
You can use dynamic SQL, but check that the object exists first unless you can 100% trust the source of that parameter. It's likely that there will be a performance hit, as SQL Server won't be able to re-use the same execution plan for different parameters.
IF OBJECT_ID(@tablename, N'U') IS NOT NULL
BEGIN
    --dynamic sql
END
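Fleshed out a little, a hedged sketch (procedure name invented; it assumes an unqualified table name, since QUOTENAME would bracket a schema-qualified name incorrectly):

CREATE PROCEDURE dbo.SelectAllFrom (@tablename sysname)
AS
BEGIN
    IF OBJECT_ID(@tablename, N'U') IS NOT NULL
    BEGIN
        -- QUOTENAME brackets the name, reducing the injection risk
        DECLARE @sql nvarchar(max)
        SET @sql = N'SELECT * FROM ' + QUOTENAME(@tablename) + N';'
        EXEC sp_executesql @sql
    END
END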
ALTER PROCEDURE [dbo].[test](@table_name varchar(max))
AS
BEGIN
    declare @tablename varchar(max) = @table_name;
    declare @statement varchar(max);
    set @statement = 'Select * from ' + @tablename;
    execute (@statement);
END
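Usage, with an invented table name (note that, as mentioned above, the parameter should be validated or quoted before being concatenated into the statement):

EXEC [dbo].[test] 'Products';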