Is it possible to create a stored procedure as
CREATE PROCEDURE Dummy
@ID INT NOT NULL
AS
BEGIN
END
Why is it not possible to do something like this?
You could check for its NULL-ness in the sproc and RAISERROR to report the state back to the calling location.
CREATE proc dbo.CheckForNull @i int
as
begin
if @i is null
raiserror('The value for @i should not be null', 15, 1) -- with log
end
GO
Then call:
exec dbo.CheckForNull @i = 1
or
exec dbo.CheckForNull @i = null
Your code is correct, sensible and even good practice. You just need to wait for SQL Server 2014, which supports this kind of syntax in natively compiled stored procedures.
After all, why catch at runtime when you can at compile time?
See also this Microsoft document and search for Natively Compiled in there.
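For illustration, a minimal sketch of what that 2014 syntax looks like. This is hedged: natively compiled procedures carry extra requirements beyond the NOT NULL parameter (a database with a memory-optimized filegroup, SCHEMABINDING, an ATOMIC block), and the body here is a trivial placeholder.
CREATE PROCEDURE dbo.Dummy
    @ID int NOT NULL
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT @ID AS ID; -- trivial body; an empty block is not allowed
END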
As dkrez says, nullability is not considered part of the data type definition. I still wonder why not.
Oh well, it seems I cannot edit @Unsliced's post because "This edit deviates from the original intent of the post. Even edits that must make drastic changes should strive to preserve the goals of the post's owner.".
So (@crokusek and everyone interested) this is my proposed solution:
You could check for its NULL-ness in the sproc and RAISERROR to report the state back to the calling location.
CREATE proc dbo.CheckForNull
@name sysname = 'parameter',
@value sql_variant
as
begin
if @value is null
raiserror('The value for %s should not be null', 16, 1, @name) -- with log
end
GO
Then call:
exec dbo.CheckForNull @name = 'whateverParamName', @value = 1
or
exec dbo.CheckForNull @value = null
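A hypothetical caller could then guard its own parameters like so. Note that severity 16 does not abort the calling batch by itself, hence the @@ERROR check; the caller name and parameter are made up for illustration.
CREATE PROC dbo.DoSomething -- hypothetical caller
    @CustomerId int
AS
BEGIN
    EXEC dbo.CheckForNull @name = '@CustomerId', @value = @CustomerId;
    IF @@ERROR <> 0 RETURN; -- stop here if the check raised an error
    -- ... rest of the procedure
END
GO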
One reason you might want such syntax: when you point the C# DataSet GUI wizard at a stored procedure, it generates a method with nullable parameters whenever there is no NOT NULL restriction. A null check inside the procedure body doesn't help the wizard.
Parameter validation is not currently a feature of procedural logic in SQL Server, and NOT NULL is only one possible type of data validation. The CHAR datatype in a table has a length specification. Should that be implemented as well? And how do you handle exceptions?
There is an extensive, highly developed and somewhat standards-based methodology for exception handling in table schemas, but not for procedural logic, probably because procedural logic is defined out of relational systems. On the other hand, stored procedures already have an existing mechanism for raising error events, tied into numerous APIs and languages. There is no such support for declarative data type constraints on parameters.
The implications of adding it are extensive; especially since it's well-supported, and extensible, to simply add the code:
IF @param IS NULL
    RAISERROR('@param should not be null', 16, 1);
The concept of NULL in the context of a stored procedure isn't even well-defined, especially compared to the context of a table or an SQL expression. And it's not Microsoft's definition. The SQL standards groups have spent a lot of years generating a lot of literature establishing the behavior of NULL and the bounds of the definitions for that behavior. And stored procedures aren't one of them.
A stored procedure is designed to be as lightweight as possible to make database performance as efficient as possible. The datatypes of parameters are there not for validation, but to give the compiler better information for the query optimizer to compile the best possible query plan. A NOT NULL constraint on a parameter heads down an entirely different path by making the compiler more complex for the new purpose of validating arguments, and hence less efficient and heavier.
There's a reason stored procedures aren't written as C# functions.
Related
I'm currently working on a .NET application and want to make it as modular as possible. I've already created a basic SELECT procedure, which returns data by checking input parameters on the SQL Server side.
I want to create a procedure that parses structured data passed as a string and inserts its contents into the corresponding table in the database.
For example, I have a table as
CREATE TABLE ExampleTable (
id_exampleTable int IDENTITY (1, 1) NOT NULL,
exampleColumn1 nvarchar(200) NOT NULL,
exampleColumn2 int NULL,
exampleColumn3 int NOT NULL,
CONSTRAINT pk_exampleTable PRIMARY KEY ( id_exampleTable )
)
And my procedure starts as
CREATE PROCEDURE InsertDataIntoCorrespondingTable
@dataTable nvarchar(max), --name of Table in my DB
@data nvarchar(max) --normalized string parameter as 'column1, column2, column3, etc.'
AS
BEGIN
IF @dataTable = 'table'
BEGIN
/**Parse this string and execute insert command**/
END
ELSE IF /**Other statements**/
END
TL;DR
So basically, I'm looking for a solution that can help me achieve something like this
EXEC InsertDataIntoCorrespondingTableByID
    @dataTable = 'ExampleTable',
    @data = '''exampleColumn1'', 2, 3'
Which should be equal to just
INSERT INTO ExampleTable SELECT 'exampleColumn1', 2, 3
Sure, I can push data as INSERT statements (for each and every 14 tables inside DB...), generated inside an app, but I want to conquer T-SQL :)
This might be reasonable (to some degree) on an RDBMS that supports structured data like JSON or XML natively, but doing this the way you are planning is going to cause some real pain-in-the-rear support issues and, more importantly, open a SQL injection attack vector. I would leave this to the realm of the web backend server where it belongs.
You are likely going to end up inventing your own structured data markup language and parser to solve this in SQL Server. That's a wheel that doesn't need to be reinvented. If you do end up building this, strongly consider going with JSON to avoid all the issues that structured data inherently brings with it, assuming your version of SQL Server supports JSON parsing/packaging.
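For example, assuming a version with JSON support (SQL Server 2016+), a sketch of the insert using the question's ExampleTable and a made-up JSON payload:
DECLARE @data nvarchar(max) = N'{"exampleColumn1":"abc","exampleColumn2":2,"exampleColumn3":3}';

INSERT INTO ExampleTable (exampleColumn1, exampleColumn2, exampleColumn3)
SELECT exampleColumn1, exampleColumn2, exampleColumn3
FROM OPENJSON(@data)
WITH (
    exampleColumn1 nvarchar(200), -- column types mirror the table definition
    exampleColumn2 int,
    exampleColumn3 int
);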
Your front end that packages your data into your SDML is going to have to assume column ordinals, but column ordinal is not something that one should rely on in a database. SQL amateurs often do; I know this from years in the industry, dealing with end users who get upset when a new column is introduced in a position they don't want. Adding a column to a table shouldn't break an application. If it does, that application has bad code.
Regarding the SQL injection attack vector, your SP code is going to get ugly. You'll need to parse out each item in @data into a variable of its own in order to properly parameterize the dynamic SQL that is being built. See here under the "working with parameters" section for what that will look like. Failure to add this to your SP code means that values passed in that @data SDML could become executable SQL instead of literals, and that would be very bad. This is not easy to solve in SP language. Where it IS easy to solve, though, is in the backend server code. Every database library on the planet supports parameterized query building/execution natively.
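For reference, a minimal sketch of what proper parameterization looks like with sp_executesql, using the question's ExampleTable and hardcoded sample values:
DECLARE @sql nvarchar(max) =
    N'INSERT INTO ExampleTable (exampleColumn1, exampleColumn2, exampleColumn3)
      VALUES (@c1, @c2, @c3);';

EXEC sp_executesql @sql,
    N'@c1 nvarchar(200), @c2 int, @c3 int',
    @c1 = N'abc', @c2 = 2, @c3 = 3; -- values stay literals, never executable SQL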
Once you have this built you will be dynamically generating an INSERT statement and dynamically generating variables or an array or some data structure to pass in parameters to the INSERT statement to avoid SQL injection attacks. It's going to be dynamic, on top of dynamic, on top of dynamic, which leads to:
From a support context, imagine that your application just totally throws up one day. You have to dive in to investigate. You track down the SDML that your front end created that caused the failure, and you open up your SP code to troubleshoot. Imagine what this code ends up looking like:
It has to determine if the table exists
It has to parse the SDML to get each literal
It has to read DB metadata to get the column list
It has to dynamically write the insert statement, listing the columns from metadata and dynamically creating sql parameters for the VALUES() list.
It has to execute sending a dynamic number of variables into the dynamically generated sql.
My support staff would hang me out to dry if they had to deal with that, and I'm the one paying them.
All of this is solved by using a proper backend to handle communication, deeper validation, sql parameter binding, error catching and handling, and all the other things that backend servers are meant to do.
I believe that your back end web server should be VERY aware of the underlying data model. It should be the connection between your view, your data, and your model. Leave the database to the things it's good at (reading and writing data). Leave your front end to the things that it's good at (presenting a UI for the end user).
I suppose you could do something like this (may need a little extra work)
declare @columns varchar(max);

select @columns = string_agg(name, ', ') within group (order by column_id)
from sys.all_columns
where object_id = object_id(@dataTable);

declare @sql nvarchar(max) = concat('INSERT INTO ', @dataTable, ' (', @columns, ') VALUES (', @data, ')');

exec sp_executesql @sql;
But please don't. If this were a good idea, there would be tons of examples of how to do it. There aren't so it's probably not a good idea.
There are, however, tons of examples of using ORMs or auto-generated code instead - because that way your code is maintainable, debuggable and performant.
I want to insert the results of a stored procedure into a temp table using OPENROWSET. However, the issue I run into is I'm not able to pass parameters to my stored procedure.
This is my stored procedure:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[N_spRetrieveStatement]
@PeopleCodeId nvarchar(10),
@StatementNumber int
AS
SET NOCOUNT ON
DECLARE @PersonId int
SELECT @PersonId = [dbo].[fnGetPersonId](@PeopleCodeId)
SELECT *
INTO #tempSpRetrieveStatement
FROM OPENROWSET('SQLNCLI', 'Server=PCPRODDB01;Trusted_Connection=yes;',
'EXEC Campus.dbo.spRetrieveStatement @StatementNumber, @PersonId');
--2577, 15084
SELECT *
FROM #tempSpRetrieveStatement;
OPENROWSET will not allow you to execute a procedure with input parameters. You have to use INSERT ... EXEC instead.
INSERT INTO #tempSpRetrieveStatement(Col1, Col2,...)
EXEC PCPRODDB01.Campus.dbo.spRetrieveStatement @StatementNumber, @PersonId
Create and test a LinkedServer for PCPRODDB01 before running the above command.
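If you go the linked-server route, a minimal sketch of the setup; the security configuration will vary by environment, so treat this as a starting point only:
EXEC sp_addlinkedserver
    @server     = N'PCPRODDB01',
    @srvproduct = N'SQL Server'; -- for another SQL Server instance, the name alone suffices

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'PCPRODDB01',
    @useself    = N'TRUE';       -- pass through the caller's own credentials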
The root of your problem is that you don't actually have parameters inside your statement that you're transmitting to the remote server you're connecting to, given the code sample you provided. Even if it was the very same machine you were connecting to, they'd be in different processes, and the other process doesn't have access to your session variables.
LinkedServer was mentioned as an option, and my understanding is that's the preferred option. However in practice that's not always available due to local quirks in tech or organizational constraints. It happens.
But there is a way to do this.
It's hiding in plain sight.
You need to pass literals into the string that will be executed on the other server, right?
So, you start by building the string that will do that.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[N_spRetrieveStatement]
@PeopleCodeId nvarchar(10),
@StatementNumber int
AS
SET NOCOUNT ON
DECLARE
@PersonId INT,
@TempSQL VARCHAR(4000) = '';
SELECT @PersonId = [dbo].[fnGetPersonId](@PeopleCodeId);
SET @TempSQL =
    'EXEC Campus.dbo.spRetrieveStatement ''''' +
    FORMAT(@StatementNumber, 'D') + ''''', ''''' +
    FORMAT(@PersonId, 'D') + '''''';
--2577, 15084
Note the seemingly excessive number of quotes. That's not a mistake -- that's foreshadowing. Because, yes, OPENROWSET hates taking variables as parameters. It, too, only wants literals. So, how do we give OPENROWSET what it needs?
We create a string that is the entire statement, no variables of any kind. And we execute that.
SET @TempSQL =
    'SELECT * INTO #tempSpRetrieveStatement ' +
    'FROM OPENROWSET(''SQLNCLI'', ''Server=PCPRODDB01;Trusted_Connection=yes;'', ''' +
    @TempSQL + ''')';
EXEC (@TempSQL);
SELECT *
FROM #tempSpRetrieveStatement;
And that's it! Pretty simple except for counting your escaped quotes, right?
Now... This is almost beyond the scope of the question you asked, but it is a 'gotcha' I've experienced in executing stored procedures in another machine via OPENROWSET. You're obviously used to using temp tables. This will fail if the stored procedure you're calling is creating temp tables or doing a few other things that -- in a nutshell -- inspire the terror of ambiguity into your SQL server. It doesn't like ambiguity. If that's the case, you'll see a message like this:
"Msg 11514, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement '…your remote EXEC statement here…' in procedure '…name of your local stored procedure here…' contains dynamic SQL. Consider using the WITH RESULT SETS clause to explicitly describe the result set."
So, what's up with that?
You don't just get data back with OPENROWSET. The local and remote servers have a short conversation about what exactly the local server is going to expect from the remote server (so it can optimize receiving and processing it as it comes in -- something that's extremely important for large rowsets). Starting with SQL Server 2012, sp_describe_first_result_set is the newer procedure for this, and normally it executes quickly without you noticing it. It's just that its powers of divination aren't unlimited. Namely, it doesn't know how to get the type and name information regarding temp tables (and there are probably a few other things it can't do -- PIVOT in a select statement is probably right out).
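For completeness, the workaround the message suggests looks roughly like this; the column list below is purely hypothetical, and yours would have to match the remote procedure's actual output exactly:
EXEC ('EXEC Campus.dbo.spRetrieveStatement 2577, 15084')
WITH RESULT SETS
(
    (StatementNumber int, PersonId int, Amount money) -- hypothetical columns
);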
I specifically wanted to be sure to point this out because of your reply regarding your hesitation about using LinkedServer. In fact, the very same reasons you're hesitant are likely to render that error message's suggestion completely useless -- you can't even predict what columns you're getting and in what order until you've got them.
I think what you're doing will work if, say, you're just branching upstream based on conditional statements and are executing one of several potential SELECT statements. I think it will work if you're just not confident that you can depend on the upstream component being fixed and are trying to ensure that even if it varies, this procedure doesn't have to because it's very generic.
But if, on the other hand, you're facing a situation in which you literally cannot guarantee that SQL Server can predict the columns, you're likely going to have to force some changes in the stored procedure you're calling to insist that it's stable. You might, for instance, work out how to ensure all possible fields are always present by using CASE expressions rather than any PIVOT. You might create a session table that's dedicated to housing what you need to SELECT just long enough to do that, then DELETE the contents back out of there. You might change the way in which you transmit your data such that it's basically gone through the equivalent of UNPIVOT. And after all that extra work, maybe it'll be just a matter of preference whether you use LinkedServer or OPENROWSET to port the data across.
So that's the answer to the literal question you asked, and one of the limits on what you can do with the answer.
Is it possible to create a stored procedure that has parameters which are all optional individually, but at least one of the parameters must be provided?
For example, if I have a procedure that updates a record, I must pass in a record id, and then at least one column to update. The procedure should check to ensure at least one additional parameter/column is provided.
I would put an if statement as the first action.
IF @param1 is null and @param2 is null and @param3 is null
Begin
--steps to raise an error or exit the proc
End
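Putting it together, a sketch of an update procedure using this guard; the table and column names here are hypothetical:
CREATE PROCEDURE dbo.UpdatePerson -- hypothetical
    @id   int,
    @name nvarchar(50) = NULL,
    @age  int          = NULL
AS
BEGIN
    IF @name IS NULL AND @age IS NULL
    BEGIN
        RAISERROR('At least one column parameter must be provided.', 16, 1);
        RETURN;
    END;

    UPDATE dbo.Person
    SET name = COALESCE(@name, name), -- keep the old value when no new one is given
        age  = COALESCE(@age, age)
    WHERE id = @id;
END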
I would do this in my programming language of choice instead of a stored proc. Why? This is the type of logic checking that TSQL is not very good at; the syntax for checking this will be "icky" and it will be slow.
Also, the biggest performance advantage a stored proc gives you is running compiled SQL. In this case, because the SQL needs to be dynamically built, you lose that advantage. Thus, why do it as a stored procedure?
Use the following in the WHERE clause:
where
(isnull(@param1,0)=0 or id=@param1)
and
(isnull(@param2,'')='' or name=@param2)
Is there a way to validate programmability objects in SQL Server 2008?
I have a database with ~500 programmability objects which depend on other programmability objects (not only tables).
If I do some refactoring, it is very hard to find other objects which are broken by the changes. For example, if I change the parameter count...
Original state of database:
CREATE FUNCTION [dbo].[GetSomeText]() RETURNS nvarchar(max) AS BEGIN RETURN 'asdf' END
/* uses "GetSomeText()" function */
CREATE FUNCTION [dbo].[GetOtherText]() RETURNS nvarchar(max) AS BEGIN RETURN [dbo].[GetSomeText]() + '-qwer' END
Now I do some refactoring (add parameter #Num to GetSomeText() function):
ALTER FUNCTION [dbo].[GetSomeText](@Num int) RETURNS nvarchar(max) AS BEGIN RETURN 'asdf' + CAST(@Num as nvarchar(max)) END
Now the function GetOtherText() is broken, because it is calling GetSomeText() function without a required parameter.
Is there a way to get information about this error?
Currently I script every programmability object as ALTER, run the ALTER script, and check for errors. This approach seems too complex (and is hard to use in a T-SQL-only environment).
EDIT:
Thanks for the answers! I know how to get dependencies or a list of all objects.
The problem is in checking the body of an object. If I get the dependency, is there any other way to check validity than running the ALTER script?
I don't think there's a way to find the dependency. You can, however, find everything that references the name of the object you're changing like this:
select OBJECT_DEFINITION(o.object_id) as objectDefinition, *
from sys.objects o
where o.type in ('P', 'FN')
and OBJECT_DEFINITION(o.object_id) like '%GetSomeText%'
o.type in ('P', 'FN') limits the search to P - Procedures and FN - Scalar Functions. Check out more info about OBJECT_DEFINITION: http://msdn.microsoft.com/en-us/library/ms176090.aspx
Perhaps you could try introducing some automated database developer/unit testing.
With 500 SQL objects it would be onerous to go back and retrofit tests for them all. The best approach might be to incrementally create these tests as the need to refactor/change existing APIs/create new SQL objects arises.
These automated tests could then be included as part of your overall continuous integration approach. Note that for the example given you would still have the issue of finding existing dependencies. But once there was sufficient test coverage, the tests should highlight any breaking changes introduced.
I have created a test tool that might be of use - but there are a number of others out there:
http://dbtestunit.wordpress.com/
One of the easiest ways to get dependency is to use sp_depends. This does work with functions, but you need to be sure you are in the right DB context:
USE MyDatabase
EXEC sp_depends @objname = N'dbo.FunctionName'
This will show you any object whether it be a function, stored proc, table, or view that has a dependency for the listed object.
This is not always accurate with cross-database dependencies, though, so be aware.
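Note that sp_depends is deprecated; on SQL Server 2008 and later you can get the same information from its documented replacement, e.g.:
USE MyDatabase
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.FunctionName', N'OBJECT');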
According to this forum discussion, SQL Server (I'm using 2005 but I gather this also applies to 2000 and 2008) silently truncates any varchars you specify as stored procedure parameters to the length of the varchar, even if inserting that string directly using an INSERT would actually cause an error. E.g., if I create this table:
CREATE TABLE testTable(
[testStringField] [nvarchar](5) NOT NULL
)
then when I execute the following:
INSERT INTO testTable(testStringField) VALUES(N'string which is too long')
I get an error:
String or binary data would be truncated.
The statement has been terminated.
Great. Data integrity preserved, and the caller knows about it. Now let's define a stored procedure to insert that:
CREATE PROCEDURE spTestTableInsert
@testStringField [nvarchar](5)
AS
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
and execute it:
EXEC spTestTableInsert @testStringField = N'string which is too long'
No errors, 1 row affected. A row is inserted into the table, with testStringField as 'strin'. SQL Server silently truncated the stored procedure's varchar parameter.
Now, this behaviour might be convenient at times but I gather there is NO WAY to turn it off. This is extremely annoying, as I want the thing to error if I pass too long a string to the stored procedure. There seem to be 2 ways to deal with this.
First, declare the stored proc's @testStringField parameter as size 6, and check whether its length is over 5. This seems like a bit of a hack and involves irritating amounts of boilerplate code.
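A sketch of that first option, against the procedure defined above; the extra character exists only so the length check can detect overflow:
ALTER PROCEDURE spTestTableInsert
    @testStringField nvarchar(6) -- one longer than the column
AS
BEGIN
    IF LEN(@testStringField) > 5
    BEGIN
        RAISERROR('@testStringField must be 5 characters or fewer.', 16, 1);
        RETURN;
    END;
    INSERT INTO testTable(testStringField) VALUES(@testStringField);
END
GO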
Second, just declare ALL stored procedure varchar parameters to be varchar(max), and then let the INSERT statement within the stored procedure fail.
The latter seems to work fine, so my question is: is it a good idea to use varchar(max) ALWAYS for strings in SQL Server stored procedures, if I actually want the stored proc to fail when too long a string is passed? Could it even be best practice? The silent truncation that can't be disabled seems stupid to me.
It just is.
I've never noticed a problem though because one of my checks would be to ensure my parameters match my table column lengths. In the client code too. Personally, I'd expect SQL to never see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
If you do feel the need for varchar(max), beware of a massive performance issue due to datatype precedence. varchar(max) has higher precedence than varchar(n) (longest is highest). So in this type of query you'll get a scan, not a seek, and every varchar(100) value is CAST to varchar(max):
UPDATE ...WHERE varchar100column = @varcharmaxvalue
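One sketch of a workaround, if a varchar(max) parameter is unavoidable, is an explicit cast back to the column's type so the comparison stays seekable (table name hypothetical; note the cast itself silently truncates longer values):
SELECT *
FROM dbo.SomeTable -- hypothetical indexed table
WHERE varchar100column = CAST(@varcharmaxvalue AS varchar(100));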
Edit:
There is an open Microsoft Connect item regarding this issue.
And it's probably worthy of inclusion in Erland Sommarskog's Strict settings (and the matching Connect item).
Edit 2, after Martin's comment:
DECLARE @sql VARCHAR(MAX), @nsql NVARCHAR(MAX);
SELECT @sql = 'B', @nsql = 'B';
SELECT
LEN(@sql),
LEN(@nsql),
DATALENGTH(@sql),
DATALENGTH(@nsql)
;
DECLARE @t table(c varchar(8000));
INSERT INTO @t values (replicate('A', 7500));
SELECT LEN(c) from @t;
SELECT
LEN(@sql + c),
LEN(@nsql + c),
DATALENGTH(@sql + c),
DATALENGTH(@nsql + c)
FROM @t;
Thanks, as always, to StackOverflow for eliciting this kind of in-depth discussion. I have recently been scouring through my stored procedures to make them more robust using a standard approach to transactions and try/catch blocks.
I disagree with Joe Stefanelli that "My suggestion would be to make the application side responsible", and fully agree with Jez: "Having SQL Server verify the string length would be much preferable". The whole point for me of using stored procedures is that they are written in a language native to the database and should act as a last line of defence. On the application side the difference between 255 and 256 is just a meaningless number, but within the database environment, a field with a maximum size of 255 will simply not accept 256 characters. The application validation mechanisms should reflect the backend db as best they can, but maintenance is hard, so I want the database to give me good feedback if the application mistakenly allows unsuitable data. That's why I'm using a database instead of a bunch of text files with CSV or JSON or whatever.
I was puzzled why one of my SPs threw the 8152 error and another silently truncated. I finally twigged: The SP which threw the 8152 error had a parameter which allowed one character more than the related table column. The table column was set to nvarchar(255) but the parameter was nvarchar(256). So, wouldn't my "mistake" address gbn's concern: "massive performance issue"? Instead of using max, perhaps we could consistently set the table column size to, say, 255 and the SP parameter to just one character longer, say 256. This solves the silent truncation problem and doesn't incur any performance penalty.
Presumably there is some other disadvantage that I haven't thought of, but it seems a good compromise to me.
Update:
I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
Update 2:
Please see Pyitoechito's answer on this page.
The same behavior can be seen here:
declare @testStringField [nvarchar](5)
set @testStringField = N'string which is too long'
select @testStringField
My suggestion would be to make the application side responsible for validating the input before calling the stored procedure.
Update: I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
This is probably occurring because the 256th character in the string is white-space. VARCHARs will truncate trailing white-space on insertion and just generate a warning. So your stored procedure is silently truncating your strings to 256 characters, and your insertion is truncating the trailing white-space (with a warning). It will produce an error when said character is not white-space.
Perhaps a solution would be to make the stored procedure's VARCHAR a suitable length to catch a non-white-space character. VARCHAR(512) would probably be safe enough.
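A quick sketch that demonstrates the trailing-whitespace behavior described above:
CREATE TABLE #demo (c varchar(5));
INSERT INTO #demo VALUES ('abcde   '); -- overflow is all trailing spaces: silently trimmed
INSERT INTO #demo VALUES ('abcdef');   -- overflow includes a real character: truncation error
DROP TABLE #demo;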
One solution would be to:
Change all incoming parameters to be varchar(max)
Declare internal variables of the correct data length inside the SP (simply copy and paste all incoming parameters and append "Int" to each name)
Declare a table variable with the column names the same as variable names
Insert into the table a row where each variable goes into the column with the same name
Select from the table into internal variables
This way your modifications to the existing code are going to be very minimal like in the sample below.
This is the original code:
create procedure spTest
(
@p1 varchar(2),
@p2 varchar(3)
)
This is the new code:
create procedure spTest
(
@p1 varchar(max),
@p2 varchar(max)
)
declare @p1Int varchar(2), @p2Int varchar(3)
declare @test table (p1 varchar(2), p2 varchar(3))
insert into @test (p1, p2) values (@p1, @p2)
select @p1Int = p1, @p2Int = p2 from @test
Note that if the length of an incoming parameter exceeds the limit, SQL Server will now throw an error instead of silently chopping off the string.
You could always add an IF statement to your SPs that checks the lengths of the parameters and throws an error if they exceed the specified length. This is rather time-consuming, though, and would be a pain to update if you change the data size.
This isn't the Answer that'll solve your problem today, but it includes a Feature Suggestion for MSSQL to consider adding, that would resolve this issue.
It is important to call this out as a shortcoming of MSSQL, so we may help them resolve it by raising awareness of it.
Here's the formal Suggestion if you'd like to vote on it:
https://feedback.azure.com/forums/908035-sql-server/suggestions/38394241-request-for-new-rule-string-truncation-error-for
I share your frustration. The whole point of setting Character-Size on Parameters is so other Developers will instantly know what the Size Limits are (via Intellisense) when passing in Data. This is like having your documentation baked right into the Sproc's Signature.
Look, I get it, Implicit-Conversion during Variable Assignments is the culprit. Still, there is no good reason to expend this amount of energy battling scenarios where you are forced to work around this feature. If you ask me, Sprocs and Functions should have the same engine-rules in place for Assigning Parameters that are used when Populating Tables. Is this really too much to ask?
All these suggestions to use Larger Character-Limits and then add Validation for EACH Parameter in EVERY Sproc are ridiculous. I know it's the only way to ensure Truncation is avoided, but really MSSQL? I don't care if it's ANSI/ISO Standard or whatever, it's dumb! When Values are too long - I want my code to break - every time. It should be: do not pass go, and fix your code. You could have multiple truncation bugs festering for years and never catch them. What happened to ensuring your Data-Integrity?
It's dangerous to assume your SQL Code will only ever be called after all Parameters are Validated. I try to add the same Validation to both my Website and the Sproc it calls, and I still catch Errors in my Sproc that slipped past the website. It's a great sanity-check! What if you want to re-use your Sproc for a WebSite/WebService and also have it called from other Sprocs/Jobs/Deployment/Ad-Hoc Scripts (where there is no front-end to Validate Parameters)?
MSSQL needs a "NO_TRUNC" Option to enforce this on any Non-Max String Variable (even those used as Parameters for Sprocs and Functions). It could be Connection/Session-Scoped (like how the "TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" Option affects all Queries), or focused on a Single Variable (like how "NOLOCK" is a Table Hint for just 1 Table), or a Trace-Flag or Database Property you turn on to apply it to All Sproc/Function Parameters in the Database.
I'm not asking to upend decades of Legacy Code. Just asking MS for the option to better manage our Databases.